From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Mosberger
Date: Thu, 25 Oct 2001 04:27:42 +0000
Subject: [Linux-ia64] kernel update (relative to 2.4.13)
Message-Id:
List-Id:
References:
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
To: linux-ia64@vger.kernel.org

An updated ia64 patch for 2.4.13 is now available at

  ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/

in file:

  linux-2.4.11-ia64-011010.diff*

change log:

- support for the readahead() syscall added by 2.4.13 (both for ia64 and ia32)
- console log level fix by Jesper Juhl
- half-hearted attempt at supporting reading of the "default LDT entry" in the
  ia32 modify_ldt() syscall; someone who understands what this is supposed to
  do should take a look at this...
- palinfo update by Stephane Eranian
- die() fix by Keith Owens
- unaligned handler fix for rotating fp regs by Tony Luck
- ACPI fix to get the AGP bus scanned again by Chris Ahna
- implemented the ia64 version of wbinvd() for ACPI; this hasn't been tested
  and may not work; that shouldn't be an issue at the moment, as it is needed
  only for ACPI functionality that is not supported on Itanium; still, someone
  who knows ACPI better may want to take a look at this
- updated the PCI DMA interface to support page-based mapping/unmapping and
  the optional DAC interface

This kernel has been tested with gcc-3.0 on Big Sur, Lion, and the HP
simulator.  Both UP and MP configurations seem to compile fine.  As usual,
your mileage may vary.

Enjoy,

	--david

diff -urN linux-2.4.13/Documentation/Configure.help linux-2.4.13-lia/Documentation/Configure.help
--- linux-2.4.13/Documentation/Configure.help	Wed Oct 24 10:17:38 2001
+++ linux-2.4.13-lia/Documentation/Configure.help	Wed Oct 24 10:21:05 2001
@@ -2632,6 +2632,16 @@
   the GLX component for XFree86 3.3.6, which can be downloaded from
   http://utah-glx.sourceforge.net/ .
 
+Intel 460GX support
+CONFIG_AGP_I460
+  This option gives you AGP support for the Intel 460GX chipset.  This
+  chipset, the first to support Intel Itanium processors, is new and
+  this option is correspondingly a little experimental.
+
+  If you don't have a 460GX based machine (such as BigSur) with an AGP
+  slot then this option isn't going to do you much good.  If you're
+  dying to do Direct Rendering on IA-64, this is what you're looking for.
+
 Intel I810/I810 DC100/I810e support
 CONFIG_AGP_I810
   This option gives you AGP support for the Xserver on the Intel 810,
@@ -12846,6 +12856,18 @@
   Say Y here if you would like to be able to read the hard disk
   partition table format used by SGI machines.
 
+Intel EFI GUID partition support
+CONFIG_EFI_PARTITION
+  Say Y here if you would like to use hard disks under Linux which
+  were partitioned using EFI GPT.  Presently only useful on the
+  IA-64 platform.
+
+/dev/guid support (EXPERIMENTAL)
+CONFIG_DEVFS_GUID
+  Say Y here if you would like to access disks and partitions by
+  their Globally Unique Identifiers (GUIDs) which will appear as
+  symbolic links in /dev/guid.
+
 Ultrix partition support
 CONFIG_ULTRIX_PARTITION
   Say Y here if you would like to be able to read the hard disk
@@ -18964,11 +18986,22 @@
   so the "DIG-compliant" option is usually the right choice.
 
 	HP-simulator	For the HP simulator (http://software.hp.com/ia64linux/).
-	SN1-simulator	For the SGI SN1 simulator.
+	SGI-SN1		For SGI SN1 Platforms.
+	SGI-SN2		For SGI SN2 Platforms.
 	DIG-compliant	For DIG ("Developer's Interface Guide") compliant system.
 
   If you don't know what to do, choose "generic".
 
+CONFIG_IA64_SGI_SN_SIM
+  Build a kernel that runs on both the SGI simulator AND on hardware.
+  There is a very slight performance penalty on hardware for including
+  this option.
+
+CONFIG_IA64_SGI_SN_DEBUG
+  This enables additional debug code that helps isolate platform/kernel
+  bugs.  There is a small but measurable performance degradation when
+  this option is enabled.
+ Kernel page size CONFIG_IA64_PAGE_SIZE_4KB =20 @@ -18986,56 +19019,13 @@ =20 If you don't know what to do, choose 8KB. =20 -Enable Itanium A-step specific code -CONFIG_ITANIUM_ASTEP_SPECIFIC - Select this option to build a kernel for an Itanium prototype system - with an A-step CPU. You have an A-step CPU if the "revision" field in - /proc/cpuinfo is 0. - Enable Itanium B-step specific code CONFIG_ITANIUM_BSTEP_SPECIFIC Select this option to build a kernel for an Itanium prototype system - with a B-step CPU. You have a B-step CPU if the "revision" field in - /proc/cpuinfo has a value in the range from 1 to 4. - -Enable Itanium B0-step specific code -CONFIG_ITANIUM_B0_SPECIFIC - Select this option to bild a kernel for an Itanium prototype system - with a B0-step CPU. You have a B0-step CPU if the "revision" field in - /proc/cpuinfo is 1. - -Force interrupt redirection -CONFIG_IA64_HAVE_IRQREDIR - Select this option if you know that your system has the ability to - redirect interrupts to different CPUs. Select N here if you're - unsure. - -Enable use of global TLB purge instruction (ptc.g) -CONFIG_ITANIUM_PTCG - Say Y here if you want the kernel to use the IA-64 "ptc.g" - instruction to flush the TLB on all CPUs. Select N here if - you're unsure. - -Enable SoftSDV hacks -CONFIG_IA64_SOFTSDV_HACKS - Say Y here to enable hacks to make the kernel work on the Intel - SoftSDV simulator. Select N here if you're unsure. - -Enable AzusA hacks -CONFIG_IA64_AZUSA_HACKS - Say Y here to enable hacks to make the kernel work on the NEC - AzusA platform. Select N here if you're unsure. - -Force socket buffers below 4GB? -CONFIG_SKB_BELOW_4GB - Most of today's network interface cards (NICs) support DMA to - the low 32 bits of the address space only. On machines with - more then 4GB of memory, this can cause the system to slow - down if there is no I/O TLB hardware. Turning this option on - avoids the slow-down by forcing socket buffers to be allocated - from memory below 4GB. 
The downside is that your system could - run out of memory below 4GB before all memory has been used up. - If you're unsure how to answer this question, answer Y. + with a B-step CPU. Only B3 step CPUs are supported. You have a B3-step + CPU if the "revision" field in /proc/cpuinfo is equal to 4. If the + "revision" field shows a number bigger than 4, you do not have to turn + on this option. =20 Enable IA-64 Machine Check Abort CONFIG_IA64_MCA @@ -19055,6 +19045,15 @@ Layer) information in /proc/pal. This contains useful information about the processors in your systems, such as cache and TLB sizes and the PAL firmware version in use. + + To use this option, you have to check that the "/proc file system + support" (CONFIG_PROC_FS) is enabled, too. + +/proc/efi/vars support +CONFIG_EFI_VARS + If you say Y here, you are able to get EFI (Extensible Firmware + Interface) variable information in /proc/efi/vars. You may read, + write, create, and destroy EFI variables through this interface. =20 To use this option, you have to check that the "/proc file system support" (CONFIG_PROC_FS) is enabled, too. diff -urN linux-2.4.13/Documentation/kernel-parameters.txt linux-2.4.13-lia= /Documentation/kernel-parameters.txt --- linux-2.4.13/Documentation/kernel-parameters.txt Wed Jun 20 11:21:33 20= 01 +++ linux-2.4.13-lia/Documentation/kernel-parameters.txt Wed Oct 10 17:33:2= 6 2001 @@ -17,6 +17,7 @@ CD Appropriate CD support is enabled. DEVFS devfs support is enabled.=20 DRM Direct Rendering Management support is enabled.=20 + EFI EFI Partitioning (GPT) is enabled EIDE EIDE/ATAPI support is enabled. FB The frame buffer device is enabled. HW Appropriate hardware is enabled. @@ -211,6 +212,9 @@ gc_3=3D [HW,JOY] =20 gdth=3D [HW,SCSI] + + gpt [EFI] Forces disk with valid GPT signature but + invalid Protective MBR to be treated as GPT. 
=20 gscd=3D [HW,CD] =20 diff -urN linux-2.4.13/Makefile linux-2.4.13-lia/Makefile --- linux-2.4.13/Makefile Wed Oct 24 10:17:41 2001 +++ linux-2.4.13-lia/Makefile Wed Oct 24 10:21:05 2001 @@ -88,7 +88,7 @@ =20 CPPFLAGS :=3D -D__KERNEL__ -I$(HPATH) =20 -CFLAGS :=3D $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -O2 \ +CFLAGS :=3D $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs -g -O2 \ -fomit-frame-pointer -fno-strict-aliasing -fno-common AFLAGS :=3D -D__ASSEMBLY__ $(CPPFLAGS) =20 @@ -137,7 +137,8 @@ drivers/net/net.o \ drivers/media/media.o DRIVERS-$(CONFIG_AGP) +=3D drivers/char/agp/agp.o -DRIVERS-$(CONFIG_DRM) +=3D drivers/char/drm/drm.o +DRIVERS-$(CONFIG_DRM_NEW) +=3D drivers/char/drm/drm.o +DRIVERS-$(CONFIG_DRM_OLD) +=3D drivers/char/drm-4.0/drm.o DRIVERS-$(CONFIG_NUBUS) +=3D drivers/nubus/nubus.a DRIVERS-$(CONFIG_ISDN) +=3D drivers/isdn/isdn.a DRIVERS-$(CONFIG_NET_FC) +=3D drivers/net/fc/fc.o @@ -241,14 +242,14 @@ =20 include arch/$(ARCH)/Makefile =20 -export CPPFLAGS CFLAGS AFLAGS +export CPPFLAGS CFLAGS CFLAGS_KERNEL AFLAGS AFLAGS_KERNEL =20 export NETWORKS DRIVERS LIBS HEAD LDFLAGS LINKFLAGS MAKEBOOT ASFLAGS =20 .S.s: - $(CPP) $(AFLAGS) -traditional -o $*.s $< + $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -traditional -o $*.s $< .S.o: - $(CC) $(AFLAGS) -traditional -c -o $*.o $< + $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -traditional -c -o $*.o $< =20 Version: dummy @rm -f include/linux/compile.h diff -urN linux-2.4.13/arch/i386/lib/usercopy.c linux-2.4.13-lia/arch/i386/= lib/usercopy.c --- linux-2.4.13/arch/i386/lib/usercopy.c Mon Sep 24 15:06:13 2001 +++ linux-2.4.13-lia/arch/i386/lib/usercopy.c Thu Oct 4 00:21:39 2001 @@ -14,6 +14,7 @@ unsigned long __generic_copy_to_user(void *to, const void *from, unsigned long n) { + prefetch(from); if (access_ok(VERIFY_WRITE, to, n)) { if(n<512) @@ -27,6 +28,7 @@ unsigned long __generic_copy_from_user(void *to, const void *from, unsigned long n) { + prefetchw(to); if (access_ok(VERIFY_READ, from, n)) { if(n<512) diff -urN 
linux-2.4.13/arch/i386/mm/fault.c linux-2.4.13-lia/arch/i386/mm/f= ault.c --- linux-2.4.13/arch/i386/mm/fault.c Wed Oct 10 16:31:44 2001 +++ linux-2.4.13-lia/arch/i386/mm/fault.c Wed Oct 24 18:11:25 2001 @@ -27,8 +27,6 @@ =20 extern void die(const char *,struct pt_regs *,long); =20 -extern int console_loglevel; - /* * Ugly, ugly, but the goto's result in better assembly.. */ diff -urN linux-2.4.13/arch/ia64/Makefile linux-2.4.13-lia/arch/ia64/Makefi= le --- linux-2.4.13/arch/ia64/Makefile Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/Makefile Thu Oct 4 00:21:52 2001 @@ -17,13 +17,15 @@ AFLAGS_KERNEL :=3D -mconstant-gp EXTRA =20 -CFLAGS :=3D $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=F10-f15,f32= -f127 -falign-functions2 +CFLAGS :=3D $(CFLAGS) -pipe $(EXTRA) -ffixed-r13 -mfixed-range=F10-f15,f32= -f127 \ + -falign-functions2 +# -ffunction-sections CFLAGS_KERNEL :=3D -mconstant-gp =20 GCC_VERSION=3D$(shell $(CROSS_COMPILE)$(HOSTCC) -v 2>&1 | fgrep 'gcc versi= on' | cut -f3 -d' ' | cut -f1 -d'.') =20 ifneq ($(GCC_VERSION),2) - CFLAGS +=3D -frename-registers + CFLAGS +=3D -frename-registers --param max-inline-insns@0 endif =20 ifeq ($(CONFIG_ITANIUM_BSTEP_SPECIFIC),y) @@ -32,7 +34,7 @@ =20 ifdef CONFIG_IA64_GENERIC CORE_FILES :=3D arch/$(ARCH)/hp/hp.a \ - arch/$(ARCH)/sn/sn.a \ + arch/$(ARCH)/sn/sn.o \ arch/$(ARCH)/dig/dig.a \ arch/$(ARCH)/sn/io/sgiio.o \ $(CORE_FILES) @@ -52,15 +54,14 @@ $(CORE_FILES) endif =20 -ifdef CONFIG_IA64_SGI_SN1 +ifdef CONFIG_IA64_SGI_SN CFLAGS +=3D -DBRINGUP - SUBDIRS :=3D arch/$(ARCH)/sn/sn1 \ - arch/$(ARCH)/sn \ + SUBDIRS :=3D arch/$(ARCH)/sn/kernel \ arch/$(ARCH)/sn/io \ arch/$(ARCH)/sn/fprom \ $(SUBDIRS) - CORE_FILES :=3D arch/$(ARCH)/sn/sn.a \ - arch/$(ARCH)/sn/io/sgiio.o\ + CORE_FILES :=3D arch/$(ARCH)/sn/kernel/sn.o \ + arch/$(ARCH)/sn/io/sgiio.o \ $(CORE_FILES) endif =20 @@ -105,7 +106,7 @@ =20 compressed: vmlinux $(OBJCOPY) --strip-all vmlinux vmlinux-tmp - gzip -9 vmlinux-tmp + gzip vmlinux-tmp mv vmlinux-tmp.gz 
vmlinux.gz =20 rawboot: diff -urN linux-2.4.13/arch/ia64/config.in linux-2.4.13-lia/arch/ia64/confi= g.in --- linux-2.4.13/arch/ia64/config.in Wed Oct 24 10:17:42 2001 +++ linux-2.4.13-lia/arch/ia64/config.in Wed Oct 24 10:21:06 2001 @@ -28,6 +28,7 @@ =20 if [ "$CONFIG_IA64_HP_SIM" =3D "n" ]; then define_bool CONFIG_ACPI y + define_bool CONFIG_ACPI_EFI y define_bool CONFIG_ACPI_INTERPRETER y define_bool CONFIG_ACPI_KERNEL_CONFIG y fi @@ -40,7 +41,8 @@ "generic CONFIG_IA64_GENERIC \ DIG-compliant CONFIG_IA64_DIG \ HP-simulator CONFIG_IA64_HP_SIM \ - SGI-SN1 CONFIG_IA64_SGI_SN1" generic + SGI-SN1 CONFIG_IA64_SGI_SN1 \ + SGI-SN2 CONFIG_IA64_SGI_SN2" generic =20 choice 'Kernel page size' \ "4KB CONFIG_IA64_PAGE_SIZE_4KB \ @@ -51,25 +53,6 @@ if [ "$CONFIG_ITANIUM" =3D "y" ]; then define_bool CONFIG_IA64_BRL_EMU y bool ' Enable Itanium B-step specific code' CONFIG_ITANIUM_BSTEP_SPECIFIC - if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" =3D "y" ]; then - bool ' Enable Itanium B0-step specific code' CONFIG_ITANIUM_B0_SPECIF= IC - fi - if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" =3D "y" ]; then - bool ' Enable Itanium B1-step specific code' CONFIG_ITANIUM_B1_SPECIF= IC - fi - if [ "$CONFIG_ITANIUM_BSTEP_SPECIFIC" =3D "y" ]; then - bool ' Enable Itanium B2-step specific code' CONFIG_ITANIUM_B2_SPECIF= IC - fi - bool ' Enable Itanium C-step specific code' CONFIG_ITANIUM_CSTEP_SPECIFIC - if [ "$CONFIG_ITANIUM_CSTEP_SPECIFIC" =3D "y" ]; then - bool ' Enable Itanium C0-step specific code' CONFIG_ITANIUM_C0_SPECIF= IC - fi - if [ "$CONFIG_ITANIUM_B0_SPECIFIC" =3D "y" \ - -o "$CONFIG_ITANIUM_B1_SPECIFIC" =3D "y" -o "$CONFIG_ITANIUM_B2_SPEC= IFIC" =3D "y" ]; then - define_bool CONFIG_ITANIUM_PTCG n - else - define_bool CONFIG_ITANIUM_PTCG y - fi if [ "$CONFIG_IA64_SGI_SN1" =3D "y" ]; then define_int CONFIG_IA64_L1_CACHE_SHIFT 7 # align cache-sensitive data to= 128 bytes else @@ -78,7 +61,6 @@ fi =20 if [ "$CONFIG_MCKINLEY" =3D "y" ]; then - define_bool CONFIG_ITANIUM_PTCG y define_int 
CONFIG_IA64_L1_CACHE_SHIFT 7 bool ' Enable McKinley A-step specific code' CONFIG_MCKINLEY_ASTEP_SPECI= FIC if [ "$CONFIG_MCKINLEY_ASTEP_SPECIFIC" =3D "y" ]; then @@ -87,28 +69,32 @@ fi =20 if [ "$CONFIG_IA64_DIG" =3D "y" ]; then - bool ' Force interrupt redirection' CONFIG_IA64_HAVE_IRQREDIR bool ' Enable IA-64 Machine Check Abort' CONFIG_IA64_MCA define_bool CONFIG_PM y fi =20 -if [ "$CONFIG_IA64_SGI_SN1" =3D "y" ]; then - bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN1_SIM - define_bool CONFIG_DEVFS_DEBUG y +if [ "$CONFIG_IA64_SGI_SN1" =3D "y" ] || [ "$CONFIG_IA64_SGI_SN2" =3D "y" = ]; then + define_bool CONFIG_IA64_SGI_SN y + bool ' Enable extra debugging code' CONFIG_IA64_SGI_SN_DEBUG n + bool ' Enable SGI Medusa Simulator Support' CONFIG_IA64_SGI_SN_SIM + bool ' Enable autotest (llsc). Option to run cache test instead of booti= ng' \ + CONFIG_IA64_SGI_AUTOTEST n define_bool CONFIG_DEVFS_FS y - define_bool CONFIG_IA64_BRL_EMU y + if [ "$CONFIG_DEVFS_FS" =3D "y" ]; then + bool ' Enable DEVFS Debug Code' CONFIG_DEVFS_DEBUG n + fi + bool ' Enable protocol mode for the L1 console' CONFIG_SERIAL_SGI_L1_PRO= TOCOL y + define_bool CONFIG_DISCONTIGMEM y define_bool CONFIG_IA64_MCA y - define_bool CONFIG_ITANIUM y - define_bool CONFIG_SGI_IOC3_ETH y + define_bool CONFIG_NUMA y define_bool CONFIG_PERCPU_IRQ y - define_int CONFIG_CACHE_LINE_SHIFT 7 - bool ' Enable DISCONTIGMEM support' CONFIG_DISCONTIGMEM - bool ' Enable NUMA support' CONFIG_NUMA + tristate ' PCIBA support' CONFIG_PCIBA fi =20 define_bool CONFIG_KCORE_ELF y # On IA-64, we always want an ELF /proc/kco= re. 
=20 bool 'SMP support' CONFIG_SMP +tristate 'Support running of Linux/x86 binaries' CONFIG_IA32_SUPPORT bool 'Performance monitor support' CONFIG_PERFMON tristate '/proc/pal support' CONFIG_IA64_PALINFO tristate '/proc/efi/vars support' CONFIG_EFI_VARS @@ -270,19 +256,19 @@ mainmenu_option next_comment comment 'Kernel hacking' =20 -#bool 'Debug kmalloc/kfree' CONFIG_DEBUG_MALLOC -if [ "$CONFIG_EXPERIMENTAL" =3D "y" ]; then - tristate 'Kernel support for IA-32 emulation' CONFIG_IA32_SUPPORT - tristate 'Kernel FP software completion' CONFIG_MATHEMU -else - define_bool CONFIG_MATHEMU y +bool 'Kernel debugging' CONFIG_DEBUG_KERNEL +if [ "$CONFIG_DEBUG_KERNEL" !=3D "n" ]; then + bool ' Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZAR= DS + bool ' Disable VHPT' CONFIG_DISABLE_VHPT + bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ + +# early printk is currently broken for SMP: the secondary processors get s= tuck... +# bool ' Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK + + bool ' Debug memory allocations' CONFIG_DEBUG_SLAB + bool ' Spinlock debugging' CONFIG_DEBUG_SPINLOCK + bool ' Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_= DEBUG_CMPXCHG + bool ' Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ fi - -bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ -bool 'Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK -bool 'Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG= _CMPXCHG -bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ -bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS -bool 'Disable VHPT' CONFIG_DISABLE_VHPT =20 endmenu diff -urN linux-2.4.13/arch/ia64/defconfig linux-2.4.13-lia/arch/ia64/defco= nfig --- linux-2.4.13/arch/ia64/defconfig Thu Jun 22 07:09:44 2000 +++ linux-2.4.13-lia/arch/ia64/defconfig Thu Oct 4 00:21:39 2001 @@ -3,53 +3,131 @@ # =20 # +# Code maturity level options +# +CONFIG_EXPERIMENTAL=3Dy + +# +# Loadable module support +# 
+CONFIG_MODULES=3Dy +CONFIG_MODVERSIONS=3Dy +# CONFIG_KMOD is not set + +# # General setup # CONFIG_IA64=3Dy # CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set # CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=3Dy +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=3Dy +CONFIG_ACPI_EFI=3Dy +CONFIG_ACPI_INTERPRETER=3Dy +CONFIG_ACPI_KERNEL_CONFIG=3Dy +CONFIG_ITANIUM=3Dy +# CONFIG_MCKINLEY is not set # CONFIG_IA64_GENERIC is not set -CONFIG_IA64_HP_SIM=3Dy -# CONFIG_IA64_SGI_SN1_SIM is not set -# CONFIG_IA64_DIG is not set +CONFIG_IA64_DIG=3Dy +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set # CONFIG_IA64_PAGE_SIZE_4KB is not set # CONFIG_IA64_PAGE_SIZE_8KB is not set CONFIG_IA64_PAGE_SIZE_16KB=3Dy # CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=3Dy +CONFIG_ITANIUM_BSTEP_SPECIFIC=3Dy +CONFIG_IA64_L1_CACHE_SHIFT=3D6 +CONFIG_IA64_MCA=3Dy +CONFIG_PM=3Dy CONFIG_KCORE_ELF=3Dy -# CONFIG_SMP is not set -# CONFIG_PERFMON is not set -# CONFIG_NET is not set -# CONFIG_SYSVIPC is not set +CONFIG_SMP=3Dy +CONFIG_IA32_SUPPORT=3Dy +CONFIG_PERFMON=3Dy +CONFIG_IA64_PALINFO=3Dy +CONFIG_EFI_VARS=3Dy +CONFIG_NET=3Dy +CONFIG_SYSVIPC=3Dy # CONFIG_BSD_PROCESS_ACCT is not set -# CONFIG_SYSCTL is not set -# CONFIG_BINFMT_ELF is not set +CONFIG_SYSCTL=3Dy +CONFIG_BINFMT_ELF=3Dy # CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set CONFIG_PCI=3Dy CONFIG_PCI_NAMES=3Dy # CONFIG_HOTPLUG is not set # CONFIG_PCMCIA is not set =20 # -# Code maturity level options +# Parallel port support # -CONFIG_EXPERIMENTAL=3Dy +# CONFIG_PARPORT is not set =20 # -# Loadable module support +# Networking options # -# CONFIG_MODULES is not set 
+CONFIG_PACKET=3Dy +CONFIG_PACKET_MMAP=3Dy +# CONFIG_NETLINK is not set +# CONFIG_NETFILTER is not set +CONFIG_FILTER=3Dy +CONFIG_UNIX=3Dy +CONFIG_INET=3Dy +# CONFIG_IP_MULTICAST is not set +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_INET_ECN is not set +# CONFIG_SYN_COOKIES is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# =20 +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# CONFIG_NET_HW_FLOWCONTROL is not set =20 # -# Parallel port support +# QoS and/or fair queueing # -# CONFIG_PARPORT is not set +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set =20 # # Plug and Play configuration # # CONFIG_PNP is not set # CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set =20 # # Block devices @@ -58,14 +136,12 @@ # CONFIG_BLK_DEV_XD is not set # CONFIG_PARIDE is not set # CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set # CONFIG_BLK_DEV_DAC960 is not set - -# -# Additional Block Devices -# -# CONFIG_BLK_DEV_LOOP is not set -# CONFIG_BLK_DEV_MD is not set +CONFIG_BLK_DEV_LOOP=3Dy +# CONFIG_BLK_DEV_NBD is not set # CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set =20 # # I2O device support @@ -73,10 +149,23 @@ # CONFIG_I2O is not set # CONFIG_I2O_PCI is not set # CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set # CONFIG_I2O_SCSI is not set # CONFIG_I2O_PROC is not set =20 # +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is 
not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# # ATA/IDE/MFM/RLL support # CONFIG_IDE=3Dy @@ -92,12 +181,21 @@ # CONFIG_BLK_DEV_HD_IDE is not set # CONFIG_BLK_DEV_HD is not set CONFIG_BLK_DEV_IDEDISK=3Dy -# CONFIG_IDEDISK_MULTI_MODE is not set +CONFIG_IDEDISK_MULTI_MODE=3Dy +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set # CONFIG_BLK_DEV_IDECS is not set CONFIG_BLK_DEV_IDECD=3Dy # CONFIG_BLK_DEV_IDETAPE is not set -# CONFIG_BLK_DEV_IDEFLOPPY is not set -# CONFIG_BLK_DEV_IDESCSI is not set +CONFIG_BLK_DEV_IDEFLOPPY=3Dy +CONFIG_BLK_DEV_IDESCSI=3Dy =20 # # IDE chipset support/bugfixes @@ -109,45 +207,209 @@ CONFIG_BLK_DEV_IDEPCI=3Dy CONFIG_IDEPCI_SHARE_IRQ=3Dy CONFIG_BLK_DEV_IDEDMA_PCI=3Dy +CONFIG_BLK_DEV_ADMA=3Dy # CONFIG_BLK_DEV_OFFBOARD is not set -CONFIG_IDEDMA_PCI_AUTO=3Dy +# CONFIG_IDEDMA_PCI_AUTO is not set CONFIG_BLK_DEV_IDEDMA=3Dy -CONFIG_IDEDMA_PCI_EXPERIMENTAL=3Dy # CONFIG_IDEDMA_PCI_WIP is not set # CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set -# CONFIG_BLK_DEV_AEC6210 is not set -# CONFIG_AEC6210_TUNING is not set +# CONFIG_BLK_DEV_AEC62XX is not set +# CONFIG_AEC62XX_TUNING is not set # CONFIG_BLK_DEV_ALI15X3 is not set # CONFIG_WDC_ALI15X3 is not set -# CONFIG_BLK_DEV_AMD7409 is not set -# CONFIG_AMD7409_OVERRIDE is not set +# CONFIG_BLK_DEV_AMD74XX is not set +# CONFIG_AMD74XX_OVERRIDE is not set # CONFIG_BLK_DEV_CMD64X is not set -# CONFIG_CMD64X_RAID is not set # CONFIG_BLK_DEV_CY82C693 is not set # CONFIG_BLK_DEV_CS5530 is not set # CONFIG_BLK_DEV_HPT34X is not set # CONFIG_HPT34X_AUTODMA is not set # CONFIG_BLK_DEV_HPT366 is not set -# CONFIG_HPT366_FIP is not set -# CONFIG_HPT366_MODE3 is not 
set CONFIG_BLK_DEV_PIIX=3Dy -CONFIG_PIIX_TUNING=3Dy +# CONFIG_PIIX_TUNING is not set # CONFIG_BLK_DEV_NS87415 is not set # CONFIG_BLK_DEV_OPTI621 is not set # CONFIG_BLK_DEV_PDC202XX is not set # CONFIG_PDC202XX_BURST is not set -# CONFIG_PDC202XX_MASTER is not set +# CONFIG_PDC202XX_FORCE is not set +# CONFIG_BLK_DEV_SVWKS is not set # CONFIG_BLK_DEV_SIS5513 is not set +# CONFIG_BLK_DEV_SLC90E66 is not set # CONFIG_BLK_DEV_TRM290 is not set # CONFIG_BLK_DEV_VIA82CXXX is not set # CONFIG_IDE_CHIPSETS is not set -CONFIG_IDEDMA_AUTO=3Dy +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_IDEDMA_IVB is not set +# CONFIG_DMA_NONPCI is not set CONFIG_BLK_DEV_IDE_MODES=3Dy +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set =20 # # SCSI support # -# CONFIG_SCSI is not set +CONFIG_SCSI=3Dy + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=3Dy +CONFIG_SD_EXTRA_DEVS@ +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +CONFIG_SCSI_DEBUG_QUEUES=3Dy +# CONFIG_SCSI_MULTI_LUN is not set +CONFIG_SCSI_CONSTANTS=3Dy +CONFIG_SCSI_LOGGING=3Dy + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR_D700 is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +# CONFIG_SCSI_QLOGIC_FC is not set +CONFIG_SCSI_QLOGIC_1280=3Dy +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=3Dy + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +CONFIG_DUMMY=3Dy +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set + +# +# Ethernet (10 or 
100Mbit) +# +CONFIG_NET_ETHERNET=3Dy +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +CONFIG_NET_PCI=3Dy +# CONFIG_PCNET32 is not set +# CONFIG_ADAPTEC_STARFIRE is not set +# CONFIG_APRICOT is not set +# CONFIG_CS89x0 is not set +# CONFIG_TULIP is not set +# CONFIG_DE4X5 is not set +# CONFIG_DGRS is not set +# CONFIG_DM9102 is not set +CONFIG_EEPRO100=3Dy +# CONFIG_LNE390 is not set +# CONFIG_FEALNX is not set +# CONFIG_NATSEMI is not set +# CONFIG_NE2K_PCI is not set +# CONFIG_NE3210 is not set +# CONFIG_ES3210 is not set +# CONFIG_8139TOO is not set +# CONFIG_8139TOO_PIO is not set +# CONFIG_8139TOO_TUNE_TWISTER is not set +# CONFIG_8139TOO_8129 is not set +# CONFIG_SIS900 is not set +# CONFIG_EPIC100 is not set +# CONFIG_SUNDANCE is not set +# CONFIG_TLAN is not set +# CONFIG_VIA_RHINE is not set +# CONFIG_WINBOND_840 is not set +# CONFIG_LAN_SAA9730 is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set =20 # # Amateur Radio support @@ -165,13 +427,27 @@ # CONFIG_CD_NO_IDESCSI is not set =20 # +# Input core support +# +CONFIG_INPUT=3Dy 
+CONFIG_INPUT_KEYBDEV=3Dy +CONFIG_INPUT_MOUSEDEV=3Dy +CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024 +CONFIG_INPUT_MOUSEDEV_SCREEN_Yv8 +# CONFIG_INPUT_JOYDEV is not set +CONFIG_INPUT_EVDEV=3Dy + +# # Character devices # -# CONFIG_VT is not set -# CONFIG_SERIAL is not set +CONFIG_VT=3Dy +CONFIG_VT_CONSOLE=3Dy +CONFIG_SERIAL=3Dy +CONFIG_SERIAL_CONSOLE=3Dy # CONFIG_SERIAL_EXTENDED is not set # CONFIG_SERIAL_NONSTANDARD is not set -# CONFIG_UNIX98_PTYS is not set +CONFIG_UNIX98_PTYS=3Dy +CONFIG_UNIX98_PTY_COUNT%6 =20 # # I2C support @@ -182,97 +458,382 @@ # Mice # # CONFIG_BUSMOUSE is not set -# CONFIG_MOUSE is not set +CONFIG_MOUSE=3Dy +CONFIG_PSMOUSE=3Dy +# CONFIG_82C710_MOUSE is not set +# CONFIG_PC110_PAD is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set +# CONFIG_INPUT_NS558 is not set +# CONFIG_INPUT_LIGHTNING is not set +# CONFIG_INPUT_PCIGAME is not set +# CONFIG_INPUT_CS461X is not set +# CONFIG_INPUT_EMU10K1 is not set +CONFIG_INPUT_SERIO=3Dy +CONFIG_INPUT_SERPORT=3Dy =20 # # Joysticks # -# CONFIG_JOYSTICK is not set +# CONFIG_INPUT_ANALOG is not set +# CONFIG_INPUT_A3D is not set +# CONFIG_INPUT_ADI is not set +# CONFIG_INPUT_COBRA is not set +# CONFIG_INPUT_GF2K is not set +# CONFIG_INPUT_GRIP is not set +# CONFIG_INPUT_INTERACT is not set +# CONFIG_INPUT_TMDC is not set +# CONFIG_INPUT_SIDEWINDER is not set +# CONFIG_INPUT_IFORCE_USB is not set +# CONFIG_INPUT_IFORCE_232 is not set +# CONFIG_INPUT_WARRIOR is not set +# CONFIG_INPUT_MAGELLAN is not set +# CONFIG_INPUT_SPACEORB is not set +# CONFIG_INPUT_SPACEBALL is not set +# CONFIG_INPUT_STINGER is not set +# CONFIG_INPUT_DB9 is not set +# CONFIG_INPUT_GAMECON is not set +# CONFIG_INPUT_TURBOGRAFX is not set # CONFIG_QIC02_TAPE is not set =20 # # Watchdog Cards # # CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set # CONFIG_NVRAM is not set # CONFIG_RTC is not set CONFIG_EFI_RTC=3Dy - -# -# Video For Linux -# -# CONFIG_VIDEO_DEV is not set # CONFIG_DTLK is not set # CONFIG_R3964 is not set # 
CONFIG_APPLICOM is not set +# CONFIG_SONYPI is not set =20 # # Ftape, the floppy tape device driver # # CONFIG_FTAPE is not set -# CONFIG_DRM is not set -# CONFIG_DRM_TDFX is not set -# CONFIG_AGP is not set +CONFIG_AGP=3Dy +# CONFIG_AGP_INTEL is not set +CONFIG_AGP_I460=3Dy +# CONFIG_AGP_I810 is not set +# CONFIG_AGP_VIA is not set +# CONFIG_AGP_AMD is not set +# CONFIG_AGP_SIS is not set +# CONFIG_AGP_ALI is not set +# CONFIG_AGP_SWORKS is not set +CONFIG_DRM=3Dy +# CONFIG_DRM_NEW is not set +CONFIG_DRM_OLD=3Dy +CONFIG_DRM40_TDFX=3Dy +# CONFIG_DRM40_GAMMA is not set +# CONFIG_DRM40_R128 is not set +# CONFIG_DRM40_RADEON is not set +# CONFIG_DRM40_I810 is not set +# CONFIG_DRM40_MGA is not set =20 # -# USB support +# Multimedia devices +# +CONFIG_VIDEO_DEV=3Dy + +# +# Video For Linux # -# CONFIG_USB is not set +CONFIG_VIDEO_PROC_FS=3Dy +# CONFIG_I2C_PARPORT is not set + +# +# Video Adapters +# +# CONFIG_VIDEO_PMS is not set +# CONFIG_VIDEO_CPIA is not set +# CONFIG_VIDEO_SAA5249 is not set +# CONFIG_TUNER_3036 is not set +# CONFIG_VIDEO_STRADIS is not set +# CONFIG_VIDEO_ZORAN is not set +# CONFIG_VIDEO_ZR36120 is not set +# CONFIG_VIDEO_MEYE is not set + +# +# Radio Adapters +# +# CONFIG_RADIO_CADET is not set +# CONFIG_RADIO_RTRACK is not set +# CONFIG_RADIO_RTRACK2 is not set +# CONFIG_RADIO_AZTECH is not set +# CONFIG_RADIO_GEMTEK is not set +# CONFIG_RADIO_GEMTEK_PCI is not set +# CONFIG_RADIO_MAXIRADIO is not set +# CONFIG_RADIO_MAESTRO is not set +# CONFIG_RADIO_MIROPCM20 is not set +# CONFIG_RADIO_MIROPCM20_RDS is not set +# CONFIG_RADIO_SF16FMI is not set +# CONFIG_RADIO_TERRATEC is not set +# CONFIG_RADIO_TRUST is not set +# CONFIG_RADIO_TYPHOON is not set +# CONFIG_RADIO_ZOLTRIX is not set =20 # # File systems # # CONFIG_QUOTA is not set -# CONFIG_AUTOFS_FS is not set +CONFIG_AUTOFS_FS=3Dy # CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set # CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set # 
CONFIG_AFFS_FS is not set # CONFIG_HFS_FS is not set # CONFIG_BFS_FS is not set -# CONFIG_FAT_FS is not set -# CONFIG_MSDOS_FS is not set +CONFIG_FAT_FS=3Dy +CONFIG_MSDOS_FS=3Dy # CONFIG_UMSDOS_FS is not set -# CONFIG_VFAT_FS is not set +CONFIG_VFAT_FS=3Dy # CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set # CONFIG_CRAMFS is not set -# CONFIG_ISO9660_FS is not set +# CONFIG_TMPFS is not set +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=3Dy # CONFIG_JOLIET is not set # CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set # CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set # CONFIG_HPFS_FS is not set -# CONFIG_PROC_FS is not set +CONFIG_PROC_FS=3Dy # CONFIG_DEVFS_FS is not set # CONFIG_DEVFS_MOUNT is not set # CONFIG_DEVFS_DEBUG is not set -# CONFIG_DEVPTS_FS is not set +CONFIG_DEVPTS_FS=3Dy # CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set # CONFIG_ROMFS_FS is not set -# CONFIG_EXT2_FS is not set +CONFIG_EXT2_FS=3Dy # CONFIG_SYSV_FS is not set # CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set # CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=3Dy +CONFIG_NFS_V3=3Dy +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=3Dy +CONFIG_NFSD_V3=3Dy +CONFIG_SUNRPC=3Dy +CONFIG_LOCKD=3Dy +CONFIG_LOCKD_V4=3Dy +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set =20 # # Partition Types # -# CONFIG_PARTITION_ADVANCED is not set +CONFIG_PARTITION_ADVANCED=3Dy +# CONFIG_ACORN_PARTITION is not set +# CONFIG_OSF_PARTITION is not set +# CONFIG_AMIGA_PARTITION is not set +# CONFIG_ATARI_PARTITION is not set +# CONFIG_MAC_PARTITION is not set CONFIG_MSDOS_PARTITION=3Dy -# CONFIG_NLS is not set -# 
CONFIG_NLS is not set +# CONFIG_BSD_DISKLABEL is not set +# CONFIG_MINIX_SUBPARTITION is not set +# CONFIG_SOLARIS_X86_PARTITION is not set +# CONFIG_UNIXWARE_DISKLABEL is not set +CONFIG_EFI_PARTITION=3Dy +# CONFIG_DEVFS_GUID is not set +# CONFIG_LDM_PARTITION is not set +# CONFIG_SGI_PARTITION is not set +# CONFIG_ULTRIX_PARTITION is not set +# CONFIG_SUN_PARTITION is not set +# CONFIG_SMB_NLS is not set +CONFIG_NLS=3Dy + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT=3D"iso8859-1" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=3Dy + +# +# Frame-buffer support +# +# CONFIG_FB is not set =20 # # Sound # -# CONFIG_SOUND is not set +CONFIG_SOUND=3Dy +# 
CONFIG_SOUND_BT878 is not set +# CONFIG_SOUND_CMPCI is not set +# CONFIG_SOUND_EMU10K1 is not set +# CONFIG_MIDI_EMU10K1 is not set +# CONFIG_SOUND_FUSION is not set +CONFIG_SOUND_CS4281=3Dy +# CONFIG_SOUND_ES1370 is not set +# CONFIG_SOUND_ES1371 is not set +# CONFIG_SOUND_ESSSOLO1 is not set +# CONFIG_SOUND_MAESTRO is not set +# CONFIG_SOUND_MAESTRO3 is not set +# CONFIG_SOUND_ICH is not set +# CONFIG_SOUND_RME96XX is not set +# CONFIG_SOUND_SONICVIBES is not set +# CONFIG_SOUND_TRIDENT is not set +# CONFIG_SOUND_MSNDCLAS is not set +# CONFIG_SOUND_MSNDPIN is not set +# CONFIG_SOUND_VIA82CXXX is not set +# CONFIG_MIDI_VIA82CXXX is not set +# CONFIG_SOUND_OSS is not set +# CONFIG_SOUND_TVMIXER is not set + +# +# USB support +# +CONFIG_USB=3Dy +# CONFIG_USB_DEBUG is not set + +# +# Miscellaneous USB options +# +CONFIG_USB_DEVICEFS=3Dy +# CONFIG_USB_BANDWIDTH is not set + +# +# USB Controllers +# +CONFIG_USB_UHCI_ALT=3Dy +CONFIG_USB_OHCI=3Dy + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# +# CONFIG_USB_HID is not set +CONFIG_USB_KBD=3Dy +CONFIG_USB_MOUSE=3Dy +# CONFIG_USB_WACOM is not set + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set + +# +# USB Multimedia devices +# +CONFIG_USB_IBMCAM=3Dy +# CONFIG_USB_OV511 is not set +# CONFIG_USB_PWC is not set +# CONFIG_USB_SE401 is not set +# CONFIG_USB_DSBR is not set +# CONFIG_USB_DABUSB is not set + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set + +# 
+# USB misc drivers +# +# CONFIG_USB_RIO500 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set =20 # # Kernel hacking # -# CONFIG_IA32_SUPPORT is not set -# CONFIG_MATHEMU is not set -# CONFIG_MAGIC_SYSRQ is not set -# CONFIG_IA64_EARLY_PRINTK is not set +CONFIG_DEBUG_KERNEL=3Dy +CONFIG_IA64_PRINT_HAZARDS=3Dy +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=3Dy +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set # CONFIG_IA64_DEBUG_CMPXCHG is not set # CONFIG_IA64_DEBUG_IRQ is not set -# CONFIG_IA64_PRINT_HAZARDS is not set -# CONFIG_KDB is not set diff -urN linux-2.4.13/arch/ia64/ia32/binfmt_elf32.c linux-2.4.13-lia/arch/= ia64/ia32/binfmt_elf32.c --- linux-2.4.13/arch/ia64/ia32/binfmt_elf32.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/ia32/binfmt_elf32.c Thu Oct 4 00:21:52 2001 @@ -3,10 +3,11 @@ * * Copyright (C) 1999 Arun Sharma * Copyright (C) 2001 Hewlett-Packard Co - * Copyright (C) 2001 David Mosberger-Tang + * David Mosberger-Tang * * 06/16/00 A. Mallick initialize csd/ssd/tssd/cflg for ia32_load_state * 04/13/01 D. Mosberger dropped saving tssd in ar.k1---it's not needed + * 09/14/01 D. 
Mosberger fixed memory management for gdt/tss page
 */
 #include
=20
@@ -41,65 +42,59 @@
 extern void ia64_elf32_init (struct pt_regs *regs);
 extern void put_dirty_page (struct task_struct * tsk, struct page *page, unsigned long address);
=20
+static void elf32_set_personality (void);
+
 #define ELF_PLAT_INIT(_r) ia64_elf32_init(_r)
 #define setup_arg_pages(bprm) ia32_setup_arg_pages(bprm)
-#define elf_map elf_map32
+#define elf_map elf32_map
+#define SET_PERSONALITY(ex, ibcs2) elf32_set_personality()
=20
 /* Ugly but avoids duplication */
 #include "../../../fs/binfmt_elf.c"
=20
-/* Global descriptor table */
-unsigned long *ia32_gdt_table, *ia32_tss;
+extern struct page *ia32_shared_page[];
+extern unsigned long *ia32_gdt;
=20
 struct page *
-put_shared_page (struct task_struct * tsk, struct page *page, unsigned long address)
+ia32_install_shared_page (struct vm_area_struct *vma, unsigned long address, int no_share)
 {
- pgd_t * pgd;
- pmd_t * pmd;
- pte_t * pte;
-
- if (page_count(page) !=3D 1)
- printk("mem_map disagrees with %p at %08lx\n", (void *) page, address);
+ struct page *pg =3D ia32_shared_page[(address - vma->vm_start)/PAGE_SIZE];
=20
- pgd =3D pgd_offset(tsk->mm, address);
-
- spin_lock(&tsk->mm->page_table_lock);
- {
- pmd =3D pmd_alloc(tsk->mm, pgd, address);
- if (!pmd)
- goto out;
- pte =3D pte_alloc(tsk->mm, pmd, address);
- if (!pte)
- goto out;
- if (!pte_none(*pte))
- goto out;
- flush_page_to_ram(page);
- set_pte(pte, pte_mkwrite(mk_pte(page, PAGE_SHARED)));
- }
- spin_unlock(&tsk->mm->page_table_lock);
- /* no need for flush_tlb */
- return page;
-
- out:
- spin_unlock(&tsk->mm->page_table_lock);
- __free_page(page);
- return 0;
+ get_page(pg);
+ return pg;
 }
=20
+static struct vm_operations_struct ia32_shared_page_vm_ops =3D {
+ nopage: ia32_install_shared_page
+};
+
 void
 ia64_elf32_init (struct pt_regs *regs)
 {
 struct vm_area_struct *vma;
- int nr;
=20
 /*
 * Map GDT and TSS below 4GB, where the processor can find them.  We need to map
 * it with privilege level 3 because the IVE uses non-privileged accesses to these
 * tables.  IA-32 segmentation is used to protect against IA-32 accesses to them.
 */
- put_shared_page(current, virt_to_page(ia32_gdt_table), IA32_GDT_OFFSET);
- if (PAGE_SHIFT <=3D IA32_PAGE_SHIFT)
- put_shared_page(current, virt_to_page(ia32_tss), IA32_TSS_OFFSET);
+ vma =3D kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (vma) {
+ vma->vm_mm =3D current->mm;
+ vma->vm_start =3D IA32_GDT_OFFSET;
+ vma->vm_end =3D vma->vm_start + max(PAGE_SIZE, 2*IA32_PAGE_SIZE);
+ vma->vm_page_prot =3D PAGE_SHARED;
+ vma->vm_flags =3D VM_READ|VM_MAYREAD;
+ vma->vm_ops =3D &ia32_shared_page_vm_ops;
+ vma->vm_pgoff =3D 0;
+ vma->vm_file =3D NULL;
+ vma->vm_private_data =3D NULL;
+ down_write(&current->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(&current->mm->mmap_sem);
+ }
=20
 /*
 * Install LDT as anonymous memory.  This gives us all-zero segment descriptors
@@ -116,34 +111,13 @@
 vma->vm_pgoff =3D 0;
 vma->vm_file =3D NULL;
 vma->vm_private_data =3D NULL;
- insert_vm_struct(current->mm, vma);
+ down_write(&current->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(&current->mm->mmap_sem);
 }
=20
- nr =3D smp_processor_id();
-
- current->thread.map_base =3D IA32_PAGE_OFFSET/3;
- current->thread.task_size =3D IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */
- set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
-
- /* Setup the segment selectors */
- regs->r16 =3D (__USER_DS << 16) | __USER_DS; /* ES =3D DS, GS, FS are zero */
- regs->r17 =3D (__USER_DS << 16) | __USER_CS; /* SS, CS; ia32_load_state() sets TSS and LDT */
-
- /* Setup the segment descriptors */
- regs->r24 =3D IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]); /* ESD */
- regs->r27 =3D IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]); /* DSD */
- regs->r28 =3D 0; /* FSD (null) */
- regs->r29 =3D 0; /* GSD (null) */
- regs->r30 =3D IA32_SEG_UNSCRAMBLE(ia32_gdt_table[_LDT(nr)]); /* LDTD */
-
- /*
- * Setup GDTD.  Note: GDTD is the descrambled version of the pseudo-descriptor
- * format defined by Figure 3-11 "Pseudo-Descriptor Format" in the IA-32
- * architecture manual.
- */
- regs->r31 =3D IA32_SEG_UNSCRAMBLE(IA32_SEG_DESCRIPTOR(IA32_GDT_OFFSET, IA32_PAGE_SIZE - 1, 0,
- 0, 0, 0, 0, 0, 0));
-
 ia64_psr(regs)->ac =3D 0; /* turn off alignment checking */
 regs->loadrs =3D 0;
 /*
@@ -164,10 +138,19 @@
 current->thread.fcr =3D IA32_FCR_DEFAULT;
 current->thread.fir =3D 0;
 current->thread.fdr =3D 0;
- current->thread.csd =3D IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_CS >> 3]);
- current->thread.ssd =3D IA32_SEG_UNSCRAMBLE(ia32_gdt_table[__USER_DS >> 3]);
- current->thread.tssd =3D IA32_SEG_UNSCRAMBLE(ia32_gdt_table[_TSS(nr)]);
=20
+ /*
+ * Setup GDTD.  Note: GDTD is the descrambled version of the pseudo-descriptor
+ * format defined by Figure 3-11 "Pseudo-Descriptor Format" in the IA-32
+ * architecture manual.
+ */
+ regs->r31 =3D IA32_SEG_UNSCRAMBLE(IA32_SEG_DESCRIPTOR(IA32_GDT_OFFSET, IA32_PAGE_SIZE - 1, 0,
+ 0, 0, 0, 0, 0, 0));
+ /* Setup the segment selectors */
+ regs->r16 =3D (__USER_DS << 16) | __USER_DS; /* ES =3D DS, GS, FS are zero */
+ regs->r17 =3D (__USER_DS << 16) | __USER_CS; /* SS, CS; ia32_load_state() sets TSS and LDT */
+
+ ia32_load_segment_descriptors(current);
 ia32_load_state(current);
 }
=20
@@ -189,6 +172,7 @@
 if (!mpnt)
 return -ENOMEM;
=20
+ down_write(&current->mm->mmap_sem);
 {
 mpnt->vm_mm =3D current->mm;
 mpnt->vm_start =3D PAGE_MASK & (unsigned long) bprm->p;
@@ -204,54 +188,32 @@
 }
=20
 for (i =3D 0 ; i < MAX_ARG_PAGES ; i++) {
- if (bprm->page[i]) {
- put_dirty_page(current,bprm->page[i],stack_base);
+ struct page *page =3D bprm->page[i];
+ if (page) {
+ bprm->page[i] =3D NULL;
+ put_dirty_page(current, page, stack_base);
 }
 stack_base +=3D PAGE_SIZE;
 }
+ up_write(&current->mm->mmap_sem);
=20
 return 0;
 }
=20
-static unsigned long
-ia32_mm_addr (unsigned long addr)
+static void
+elf32_set_personality (void)
 {
- struct vm_area_struct *vma;
-
- if ((vma =3D find_vma(current->mm, addr)) =3D=3D NULL)
- return ELF_PAGESTART(addr);
- if (vma->vm_start > addr)
- return ELF_PAGESTART(addr);
- return ELF_PAGEALIGN(addr);
+ set_personality(PER_LINUX32);
+ current->thread.map_base =3D IA32_PAGE_OFFSET/3;
+ current->thread.task_size =3D IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */
+ set_fs(USER_DS); /* set addr limit for new TASK_SIZE */
 }
=20
-/*
- * Normally we would do an `mmap' to map in the process's text section.
- * This doesn't work with IA32 processes as the ELF file might specify
- * a non page size aligned address.  Instead we will just allocate
- * memory and read the data in from the file.  Slightly less efficient
- * but it works.
- */
-extern long ia32_do_mmap (struct file *filep, unsigned int len, unsigned int prot,
- unsigned int flags, unsigned int fd, unsigned int offset);
-
 static unsigned long
-elf_map32 (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
+elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
 {
- unsigned long retval;
+ unsigned long pgoff =3D (eppnt->p_vaddr) & ~IA32_PAGE_MASK;
=20
- if (eppnt->p_memsz >=3D (1UL<<32) || addr > (1UL<<32) - eppnt->p_memsz)
- return -EINVAL;
-
- /*
- * Make sure the elf interpreter doesn't get loaded at location 0
- * so that NULL pointers correctly cause segfaults.
- */
- if (addr =3D=3D 0)
- addr +=3D PAGE_SIZE;
- set_brk(ia32_mm_addr(addr), addr + eppnt->p_memsz);
- memset((char *) addr + eppnt->p_filesz, 0, eppnt->p_memsz - eppnt->p_filesz);
- kernel_read(filep, eppnt->p_offset, (char *) addr, eppnt->p_filesz);
- retval =3D (unsigned long) addr;
- return retval;
+ return ia32_do_mmap(filep, (addr & IA32_PAGE_MASK), eppnt->p_filesz + pgoff, prot, type,
+ eppnt->p_offset - pgoff);
 }
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_entry.S linux-2.4.13-lia/arch/ia64/ia32/ia32_entry.S
--- linux-2.4.13/arch/ia64/ia32/ia32_entry.S Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_entry.S Wed Oct 24 18:11:48 2001
@@ -2,7 +2,7 @@
 #include
 #include
=20
-#include "../kernel/entry.h"
+#include "../kernel/minstate.h"
=20
 /*
 * execve() is special because in case of success, we need to
@@ -14,13 +14,13 @@
 alloc loc1=3Dar.pfs,3,2,4,0
 mov loc0=3Drp
 .body
- mov out0=3Din0 // filename
+ zxt4 out0=3Din0 // filename
 ;; // stop bit between alloc and call
- mov out1=3Din1 // argv
- mov out2=3Din2 // envp
+ zxt4 out1=3Din1 // argv
+ zxt4 out2=3Din2 // envp
 add out3=3D16,sp // regs
 br.call.sptk.few rp=3Dsys32_execve
-1: cmp4.ge p6,p0=3Dr8,r0
+1: cmp.ge p6,p0=3Dr8,r0
 mov ar.pfs=3Dloc1 // restore ar.pfs
 ;;
 (p6) mov ar.pfs=3Dr0 // clear ar.pfs in case of success
@@ -29,31 +29,80 @@
 br.ret.sptk.few rp
 END(ia32_execve)
=20
- //
- // Get possibly unaligned sigmask argument into an aligned
- // kernel buffer
-GLOBAL_ENTRY(ia32_rt_sigsuspend)
- // We'll cheat and not do an alloc here since we are ultimately
- // going to do a simple branch to the IA64 sys_rt_sigsuspend.
- // r32 is still the first argument which is the signal mask.
- // We copy this 4-byte aligned value to an 8-byte aligned buffer
- // in the task structure and then jump to the IA64 code.
+ENTRY(ia32_clone)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2)
+ alloc r16=3Dar.pfs,2,2,4,0
+ DO_SAVE_SWITCH_STACK
+ mov loc0=3Drp
+ mov loc1=3Dr16 // save ar.pfs across do_fork
+ .body
+ zxt4 out1=3Din1 // newsp
+ mov out3=3D0 // stacksize
+ adds out2=3DIA64_SWITCH_STACK_SIZE+16,sp // out2 =3D &regs
+ zxt4 out0=3Din0 // out0 =3D clone_flags
+ br.call.sptk.many rp=3Ddo_fork
+.ret0: .restore sp
+ adds sp=3DIA64_SWITCH_STACK_SIZE,sp // pop the switch stack
+ mov ar.pfs=3Dloc1
+ mov rp=3Dloc0
+ br.ret.sptk.many rp
+END(ia32_clone)
=20
- EX(.Lfail, ld4 r2=3D[r32],4) // load low part of sigmask
- ;;
- EX(.Lfail, ld4 r3=3D[r32]) // load high part of sigmask
- adds r32=3DIA64_TASK_THREAD_SIGMASK_OFFSET,r13
- ;;
- st8 [r32]=3Dr2
- adds r10=3DIA64_TASK_THREAD_SIGMASK_OFFSET+4,r13
+ENTRY(sys32_rt_sigsuspend)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
+ alloc loc1=3Dar.pfs,8,2,3,0 // preserve all eight input regs
+ mov loc0=3Drp
+ mov out0=3Din0 // mask
+ mov out1=3Din1 // sigsetsize
+ mov out2=3Dsp // out2 =3D &sigscratch
+ .fframe 16
+ adds sp=3D-16,sp // allocate dummy "sigscratch"
 ;;
+ .body
+ br.call.sptk.many rp=3Dia32_rt_sigsuspend
+1: .restore sp
+ adds sp=3D16,sp
+ mov rp=3Dloc0
+ mov ar.pfs=3Dloc1
+ br.ret.sptk.many rp
+END(sys32_rt_sigsuspend)
=20
- st4 [r10]=3Dr3
- br.cond.sptk.many sys_rt_sigsuspend
-
-.Lfail: br.ret.sptk.many rp // failed to read sigmask
-END(ia32_rt_sigsuspend)
+ENTRY(sys32_sigsuspend)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
+ alloc loc1=3Dar.pfs,8,2,3,0 // preserve all eight input regs
+ mov loc0=3Drp
+ mov out0=3Din2 // mask (first two args are ignored)
+ ;;
+ mov out1=3Dsp // out1 =3D &sigscratch
+ .fframe 16
+ adds sp=3D-16,sp // allocate dummy "sigscratch"
+ .body
+ br.call.sptk.many rp=3Dia32_sigsuspend
+1: .restore sp
+ adds sp=3D16,sp
+ mov rp=3Dloc0
+ mov ar.pfs=3Dloc1
+ br.ret.sptk.many rp
+END(sys32_sigsuspend)
=20
+GLOBAL_ENTRY(ia32_ret_from_clone)
+ PT_REGS_UNWIND_INFO(0)
+ /*
+ * We need to call schedule_tail() to complete the scheduling process.
+ * Called by ia64_switch_to after do_fork()->copy_thread().  r8 contains the
+ * address of the previously executing task.
+ */
+ br.call.sptk.many rp=3Dia64_invoke_schedule_tail
+.ret1: adds r2=3DIA64_TASK_PTRACE_OFFSET,r13
+ ;;
+ ld8 r2=3D[r2]
+ ;;
+ mov r8=3D0
+ tbit.nz p6,p0=3Dr2,PT_TRACESYS_BIT
+(p6) br.cond.spnt .ia32_strace_check_retval
+ ;; // prevent RAW on r8
+END(ia32_ret_from_clone)
+ // fall through
 GLOBAL_ENTRY(ia32_ret_from_syscall)
 PT_REGS_UNWIND_INFO(0)
=20
@@ -72,20 +121,25 @@
 // manipulate ar.pfs.
 //
 // Input:
- // r15 =3D syscall number
- // b6 =3D syscall entry point
+ // r8 =3D syscall number
+ // b6 =3D syscall entry point
 //
 GLOBAL_ENTRY(ia32_trace_syscall)
 PT_REGS_UNWIND_INFO(0)
+ mov r3=3D-38
+ adds r2=3DIA64_PT_REGS_R8_OFFSET+16,sp
+ ;;
+ st8 [r2]=3Dr3 // initialize return code to -ENOSYS
 br.call.sptk.few rp=3Dinvoke_syscall_trace // give parent a chance to catch syscall args
-.ret0: br.call.sptk.few rp=3Db6 // do the syscall
-.ret1: cmp.lt p6,p0=3Dr8,r0 // syscall failed?
+.ret2: br.call.sptk.few rp=3Db6 // do the syscall
+.ia32_strace_check_retval:
+ cmp.lt p6,p0=3Dr8,r0 // syscall failed?
 adds r2=3DIA64_PT_REGS_R8_OFFSET+16,sp // r2 =3D &pt_regs.r8
 ;;
 st8.spill [r2]=3Dr8 // store return value in slot for r8
 br.call.sptk.few rp=3Dinvoke_syscall_trace // give parent a chance to catch return value
-.ret2: alloc r2=3Dar.pfs,0,0,0,0 // drop the syscall argument frame
- br.cond.sptk.many ia64_leave_kernel // rp MUST be !=3D ia64_leave_kernel!
+.ret4: alloc r2=3Dar.pfs,0,0,0,0 // drop the syscall argument frame
+ br.cond.sptk.many ia64_leave_kernel
 END(ia32_trace_syscall)
=20
 GLOBAL_ENTRY(sys32_vfork)
@@ -110,7 +164,7 @@
 mov out3=3D0
 adds out2=3DIA64_SWITCH_STACK_SIZE+16,sp // out2 =3D &regs
 br.call.sptk.few rp=3Ddo_fork
-.ret3: mov ar.pfs=3Dloc1
+.ret5: mov ar.pfs=3Dloc1
 .restore sp
 adds sp=3DIA64_SWITCH_STACK_SIZE,sp // pop the switch stack
 mov rp=3Dloc0
@@ -137,21 +191,21 @@
 data8 sys32_time
 data8 sys_mknod
 data8 sys_chmod /* 15 */
- data8 sys_lchown
+ data8 sys_lchown /* 16-bit version */
 data8 sys32_ni_syscall /* old break syscall holder */
 data8 sys32_ni_syscall
 data8 sys32_lseek
 data8 sys_getpid /* 20 */
 data8 sys_mount
 data8 sys_oldumount
- data8 sys_setuid
- data8 sys_getuid
+ data8 sys_setuid /* 16-bit version */
+ data8 sys_getuid /* 16-bit version */
 data8 sys32_ni_syscall /* sys_stime is not supported on IA64 */ /* 25 */
 data8 sys32_ptrace
 data8 sys32_alarm
 data8 sys32_ni_syscall
- data8 sys_pause
- data8 ia32_utime /* 30 */
+ data8 sys32_pause
+ data8 sys32_utime /* 30 */
 data8 sys32_ni_syscall /* old stty syscall holder */
 data8 sys32_ni_syscall /* old gtty syscall holder */
 data8 sys_access
@@ -167,15 +221,15 @@
 data8 sys32_times
 data8 sys32_ni_syscall /* old prof syscall holder */
 data8 sys_brk /* 45 */
- data8 sys_setgid
- data8 sys_getgid
+ data8 sys_setgid /* 16-bit version */
+ data8 sys_getgid /* 16-bit version */
 data8 sys32_signal
- data8 sys_geteuid
- data8 sys_getegid /* 50 */
+ data8 sys_geteuid /* 16-bit version */
+ data8 sys_getegid /* 16-bit version */ /* 50 */
 data8 sys_acct
 data8 sys_umount /* recycled never used phys() */
data8 sys32_ni_syscall /* old lock syscall holder */ - data8 ia32_ioctl + data8 sys32_ioctl data8 sys32_fcntl /* 55 */ data8 sys32_ni_syscall /* old mpx syscall holder */ data8 sys_setpgid @@ -191,19 +245,19 @@ data8 sys32_sigaction data8 sys32_ni_syscall data8 sys32_ni_syscall - data8 sys_setreuid /* 70 */ - data8 sys_setregid - data8 sys32_ni_syscall - data8 sys_sigpending + data8 sys_setreuid /* 16-bit version */ /* 70 */ + data8 sys_setregid /* 16-bit version */ + data8 sys32_sigsuspend + data8 sys32_sigpending data8 sys_sethostname data8 sys32_setrlimit /* 75 */ - data8 sys32_getrlimit + data8 sys32_old_getrlimit data8 sys32_getrusage data8 sys32_gettimeofday data8 sys32_settimeofday - data8 sys_getgroups /* 80 */ - data8 sys_setgroups - data8 old_select + data8 sys32_getgroups16 /* 80 */ + data8 sys32_setgroups16 + data8 sys32_old_select data8 sys_symlink data8 sys32_ni_syscall data8 sys_readlink /* 85 */ @@ -212,17 +266,17 @@ data8 sys_reboot data8 sys32_readdir data8 sys32_mmap /* 90 */ - data8 sys_munmap + data8 sys32_munmap data8 sys_truncate data8 sys_ftruncate data8 sys_fchmod - data8 sys_fchown /* 95 */ + data8 sys_fchown /* 16-bit version */ /* 95 */ data8 sys_getpriority data8 sys_setpriority data8 sys32_ni_syscall /* old profil syscall holder */ data8 sys32_statfs data8 sys32_fstatfs /* 100 */ - data8 sys_ioperm + data8 sys32_ioperm data8 sys32_socketcall data8 sys_syslog data8 sys32_setitimer @@ -231,36 +285,36 @@ data8 sys32_newlstat data8 sys32_newfstat data8 sys32_ni_syscall - data8 sys_iopl /* 110 */ + data8 sys32_iopl /* 110 */ data8 sys_vhangup data8 sys32_ni_syscall /* used to be sys_idle */ data8 sys32_ni_syscall data8 sys32_wait4 data8 sys_swapoff /* 115 */ - data8 sys_sysinfo + data8 sys32_sysinfo data8 sys32_ipc data8 sys_fsync data8 sys32_sigreturn - data8 sys_clone /* 120 */ + data8 ia32_clone /* 120 */ data8 sys_setdomainname data8 sys32_newuname data8 sys32_modify_ldt - data8 sys_adjtimex + data8 sys32_ni_syscall /* adjtimex */ data8 
sys32_mprotect /* 125 */ - data8 sys_sigprocmask - data8 sys_create_module - data8 sys_init_module - data8 sys_delete_module - data8 sys_get_kernel_syms /* 130 */ - data8 sys_quotactl + data8 sys32_sigprocmask + data8 sys32_ni_syscall /* create_module */ + data8 sys32_ni_syscall /* init_module */ + data8 sys32_ni_syscall /* delete_module */ + data8 sys32_ni_syscall /* get_kernel_syms */ /* 130 */ + data8 sys32_quotactl data8 sys_getpgid data8 sys_fchdir - data8 sys_bdflush - data8 sys_sysfs /* 135 */ - data8 sys_personality + data8 sys32_ni_syscall /* sys_bdflush */ + data8 sys_sysfs /* 135 */ + data8 sys32_personality data8 sys32_ni_syscall /* for afs_syscall */ - data8 sys_setfsuid - data8 sys_setfsgid + data8 sys_setfsuid /* 16-bit version */ + data8 sys_setfsgid /* 16-bit version */ data8 sys_llseek /* 140 */ data8 sys32_getdents data8 sys32_select @@ -282,66 +336,73 @@ data8 sys_sched_yield data8 sys_sched_get_priority_max data8 sys_sched_get_priority_min /* 160 */ - data8 sys_sched_rr_get_interval + data8 sys32_sched_rr_get_interval data8 sys32_nanosleep data8 sys_mremap - data8 sys_setresuid - data8 sys32_getresuid /* 165 */ - data8 sys_vm86 - data8 sys_query_module + data8 sys_setresuid /* 16-bit version */ + data8 sys32_getresuid16 /* 16-bit version */ /* 165 */ + data8 sys32_ni_syscall /* vm86 */ + data8 sys32_ni_syscall /* sys_query_module */ data8 sys_poll - data8 sys_nfsservctl + data8 sys32_ni_syscall /* nfsservctl */ data8 sys_setresgid /* 170 */ - data8 sys32_getresgid + data8 sys32_getresgid16 data8 sys_prctl data8 sys32_rt_sigreturn data8 sys32_rt_sigaction data8 sys32_rt_sigprocmask /* 175 */ data8 sys_rt_sigpending - data8 sys_rt_sigtimedwait - data8 sys_rt_sigqueueinfo - data8 ia32_rt_sigsuspend - data8 sys_pread /* 180 */ - data8 sys_pwrite - data8 sys_chown + data8 sys32_rt_sigtimedwait + data8 sys32_rt_sigqueueinfo + data8 sys32_rt_sigsuspend + data8 sys32_pread /* 180 */ + data8 sys32_pwrite + data8 sys_chown /* 16-bit version */ data8 
sys_getcwd data8 sys_capget data8 sys_capset /* 185 */ data8 sys32_sigaltstack - data8 sys_sendfile + data8 sys32_sendfile data8 sys32_ni_syscall /* streams1 */ data8 sys32_ni_syscall /* streams2 */ data8 sys32_vfork /* 190 */ + data8 sys32_getrlimit + data8 sys32_mmap2 + data8 sys32_truncate64 + data8 sys32_ftruncate64 + data8 sys32_stat64 /* 195 */ + data8 sys32_lstat64 + data8 sys32_fstat64 + data8 sys_lchown + data8 sys_getuid + data8 sys_getgid /* 200 */ + data8 sys_geteuid + data8 sys_getegid + data8 sys_setreuid + data8 sys_setregid + data8 sys_getgroups /* 205 */ + data8 sys_setgroups + data8 sys_fchown + data8 sys_setresuid + data8 sys_getresuid + data8 sys_setresgid /* 210 */ + data8 sys_getresgid + data8 sys_chown + data8 sys_setuid + data8 sys_setgid + data8 sys_setfsuid /* 215 */ + data8 sys_setfsgid + data8 sys_pivot_root + data8 sys_mincore + data8 sys_madvise + data8 sys_getdents64 /* 220 */ + data8 sys32_fcntl64 + data8 sys_ni_syscall /* reserved for TUX */ + data8 sys_ni_syscall /* reserved for Security */ + data8 sys_gettid + data8 sys_readahead /* 225 */ data8 sys_ni_syscall data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall /* 195 */ - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall /* 200 */ - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall /* 205 */ - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall /* 210 */ - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall /* 215 */ - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall - data8 sys_ni_syscall /* 220 */ data8 sys_ni_syscall data8 sys_ni_syscall /* diff -urN linux-2.4.13/arch/ia64/ia32/ia32_ioctl.c linux-2.4.13-lia/arch/ia= 64/ia32/ia32_ioctl.c --- 
linux-2.4.13/arch/ia64/ia32/ia32_ioctl.c Thu Jan 4 12:50:17 2001 +++ linux-2.4.13-lia/arch/ia64/ia32/ia32_ioctl.c Thu Oct 4 00:21:52 2001 @@ -3,6 +3,8 @@ * * Copyright (C) 2000 VA Linux Co * Copyright (C) 2000 Don Dugger + * Copyright (C) 2001 Hewlett-Packard Co + * David Mosberger-Tang */ =20 #include @@ -22,8 +24,12 @@ #include #include #include + +#include + #include <../drivers/char/drm/drm.h> =20 + #define IOCTL_NR(a) ((a) & ~(_IOC_SIZEMASK << _IOC_SIZESHIFT)) =20 #define DO_IOCTL(fd, cmd, arg) ({ \ @@ -36,179 +42,200 @@ _ret; \ }) =20 -#define P(i) ((void *)(long)(i)) - +#define P(i) ((void *)(unsigned long)(i)) =20 asmlinkage long sys_ioctl(unsigned int fd, unsigned int cmd, unsigned long= arg); =20 -asmlinkage long ia32_ioctl(unsigned int fd, unsigned int cmd, unsigned int= arg) +static long +put_dirent32 (struct dirent *d, struct linux32_dirent *d32) +{ + size_t namelen =3D strlen(d->d_name); + + return (put_user(d->d_ino, &d32->d_ino) + || put_user(d->d_off, &d32->d_off) + || put_user(d->d_reclen, &d32->d_reclen) + || copy_to_user(d32->d_name, d->d_name, namelen + 1)); +} + +asmlinkage long +sys32_ioctl (unsigned int fd, unsigned int cmd, unsigned int arg) { long ret; =20 switch (IOCTL_NR(cmd)) { - - case IOCTL_NR(DRM_IOCTL_VERSION): - { - drm_version_t ver; - struct { - int version_major; - int version_minor; - int version_patchlevel; - unsigned int name_len; - unsigned int name; /* pointer */ - unsigned int date_len; - unsigned int date; /* pointer */ - unsigned int desc_len; - unsigned int desc; /* pointer */ - } ver32; - - if (copy_from_user(&ver32, P(arg), sizeof(ver32))) - return -EFAULT; - ver.name_len =3D ver32.name_len; - ver.name =3D P(ver32.name); - ver.date_len =3D ver32.date_len; - ver.date =3D P(ver32.date); - ver.desc_len =3D ver32.desc_len; - ver.desc =3D P(ver32.desc); - ret =3D DO_IOCTL(fd, cmd, &ver); - if (ret >=3D 0) { - ver32.version_major =3D ver.version_major; - ver32.version_minor =3D ver.version_minor; - ver32.version_patchlevel 
= ver.version_patchlevel;
-		ver32.name_len = ver.name_len;
-		ver32.date_len = ver.date_len;
-		ver32.desc_len = ver.desc_len;
-		if (copy_to_user(P(arg), &ver32, sizeof(ver32)))
-			return -EFAULT;
-	}
-	return(ret);
-	}
-
-	case IOCTL_NR(DRM_IOCTL_GET_UNIQUE):
-	{
-		drm_unique_t un;
-		struct {
-			unsigned int unique_len;
-			unsigned int unique;
-		} un32;
-
-		if (copy_from_user(&un32, P(arg), sizeof(un32)))
-			return -EFAULT;
-		un.unique_len = un32.unique_len;
-		un.unique = P(un32.unique);
-		ret = DO_IOCTL(fd, cmd, &un);
-		if (ret >= 0) {
-			un32.unique_len = un.unique_len;
-			if (copy_to_user(P(arg), &un32, sizeof(un32)))
-				return -EFAULT;
-		}
-		return(ret);
-	}
-	case IOCTL_NR(DRM_IOCTL_SET_UNIQUE):
-	case IOCTL_NR(DRM_IOCTL_ADD_MAP):
-	case IOCTL_NR(DRM_IOCTL_ADD_BUFS):
-	case IOCTL_NR(DRM_IOCTL_MARK_BUFS):
-	case IOCTL_NR(DRM_IOCTL_INFO_BUFS):
-	case IOCTL_NR(DRM_IOCTL_MAP_BUFS):
-	case IOCTL_NR(DRM_IOCTL_FREE_BUFS):
-	case IOCTL_NR(DRM_IOCTL_ADD_CTX):
-	case IOCTL_NR(DRM_IOCTL_RM_CTX):
-	case IOCTL_NR(DRM_IOCTL_MOD_CTX):
-	case IOCTL_NR(DRM_IOCTL_GET_CTX):
-	case IOCTL_NR(DRM_IOCTL_SWITCH_CTX):
-	case IOCTL_NR(DRM_IOCTL_NEW_CTX):
-	case IOCTL_NR(DRM_IOCTL_RES_CTX):
-
-	case IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE):
-	case IOCTL_NR(DRM_IOCTL_AGP_RELEASE):
-	case IOCTL_NR(DRM_IOCTL_AGP_ENABLE):
-	case IOCTL_NR(DRM_IOCTL_AGP_INFO):
-	case IOCTL_NR(DRM_IOCTL_AGP_ALLOC):
-	case IOCTL_NR(DRM_IOCTL_AGP_FREE):
-	case IOCTL_NR(DRM_IOCTL_AGP_BIND):
-	case IOCTL_NR(DRM_IOCTL_AGP_UNBIND):
-
-	/* Mga specific ioctls */
-
-	case IOCTL_NR(DRM_IOCTL_MGA_INIT):
-
-	/* I810 specific ioctls */
-
-	case IOCTL_NR(DRM_IOCTL_I810_GETBUF):
-	case IOCTL_NR(DRM_IOCTL_I810_COPY):
-
-	/* Rage 128 specific ioctls */
-
-	case IOCTL_NR(DRM_IOCTL_R128_PACKET):
-
-	case IOCTL_NR(VFAT_IOCTL_READDIR_BOTH):
-	case IOCTL_NR(VFAT_IOCTL_READDIR_SHORT):
-	case IOCTL_NR(MTIOCGET):
-	case IOCTL_NR(MTIOCPOS):
-	case IOCTL_NR(MTIOCGETCONFIG):
-	case IOCTL_NR(MTIOCSETCONFIG):
-	case IOCTL_NR(PPPIOCSCOMPRESS):
-	case IOCTL_NR(PPPIOCGIDLE):
-	case IOCTL_NR(NCP_IOC_GET_FS_INFO_V2):
-	case IOCTL_NR(NCP_IOC_GETOBJECTNAME):
-	case IOCTL_NR(NCP_IOC_SETOBJECTNAME):
-	case IOCTL_NR(NCP_IOC_GETPRIVATEDATA):
-	case IOCTL_NR(NCP_IOC_SETPRIVATEDATA):
-	case IOCTL_NR(NCP_IOC_GETMOUNTUID2):
-	case IOCTL_NR(CAPI_MANUFACTURER_CMD):
-	case IOCTL_NR(VIDIOCGTUNER):
-	case IOCTL_NR(VIDIOCSTUNER):
-	case IOCTL_NR(VIDIOCGWIN):
-	case IOCTL_NR(VIDIOCSWIN):
-	case IOCTL_NR(VIDIOCGFBUF):
-	case IOCTL_NR(VIDIOCSFBUF):
-	case IOCTL_NR(MGSL_IOCSPARAMS):
-	case IOCTL_NR(MGSL_IOCGPARAMS):
-	case IOCTL_NR(ATM_GETNAMES):
-	case IOCTL_NR(ATM_GETLINKRATE):
-	case IOCTL_NR(ATM_GETTYPE):
-	case IOCTL_NR(ATM_GETESI):
-	case IOCTL_NR(ATM_GETADDR):
-	case IOCTL_NR(ATM_RSTADDR):
-	case IOCTL_NR(ATM_ADDADDR):
-	case IOCTL_NR(ATM_DELADDR):
-	case IOCTL_NR(ATM_GETCIRANGE):
-	case IOCTL_NR(ATM_SETCIRANGE):
-	case IOCTL_NR(ATM_SETESI):
-	case IOCTL_NR(ATM_SETESIF):
-	case IOCTL_NR(ATM_GETSTAT):
-	case IOCTL_NR(ATM_GETSTATZ):
-	case IOCTL_NR(ATM_GETLOOP):
-	case IOCTL_NR(ATM_SETLOOP):
-	case IOCTL_NR(ATM_QUERYLOOP):
-	case IOCTL_NR(ENI_SETMULT):
-	case IOCTL_NR(NS_GETPSTAT):
-	/* case IOCTL_NR(NS_SETBUFLEV): This is a duplicate case with ZATM_GETPOOLZ */
-	case IOCTL_NR(ZATM_GETPOOLZ):
-	case IOCTL_NR(ZATM_GETPOOL):
-	case IOCTL_NR(ZATM_SETPOOL):
-	case IOCTL_NR(ZATM_GETTHIST):
-	case IOCTL_NR(IDT77105_GETSTAT):
-	case IOCTL_NR(IDT77105_GETSTATZ):
-	case IOCTL_NR(IXJCTL_TONE_CADENCE):
-	case IOCTL_NR(IXJCTL_FRAMES_READ):
-	case IOCTL_NR(IXJCTL_FRAMES_WRITTEN):
-	case IOCTL_NR(IXJCTL_READ_WAIT):
-	case IOCTL_NR(IXJCTL_WRITE_WAIT):
-	case IOCTL_NR(IXJCTL_DRYBUFFER_READ):
-	case IOCTL_NR(I2OHRTGET):
-	case IOCTL_NR(I2OLCTGET):
-	case IOCTL_NR(I2OPARMSET):
-	case IOCTL_NR(I2OPARMGET):
-	case IOCTL_NR(I2OSWDL):
-	case IOCTL_NR(I2OSWUL):
-	case IOCTL_NR(I2OSWDEL):
-	case IOCTL_NR(I2OHTML):
+	case IOCTL_NR(VFAT_IOCTL_READDIR_SHORT):
+	case IOCTL_NR(VFAT_IOCTL_READDIR_BOTH): {
+		struct linux32_dirent *d32 = P(arg);
+		struct dirent d[2];
+
+		ret = DO_IOCTL(fd, _IOR('r', _IOC_NR(cmd),
+					struct dirent [2]),
+			       (unsigned long) d);
+		if (ret < 0)
+			return ret;
+
+		if (put_dirent32(d, d32) || put_dirent32(d + 1, d32 + 1))
+			return -EFAULT;
+
+		return ret;
+	}
+
+	case IOCTL_NR(DRM_IOCTL_VERSION):
+	{
+		drm_version_t ver;
+		struct {
+			int version_major;
+			int version_minor;
+			int version_patchlevel;
+			unsigned int name_len;
+			unsigned int name;	/* pointer */
+			unsigned int date_len;
+			unsigned int date;	/* pointer */
+			unsigned int desc_len;
+			unsigned int desc;	/* pointer */
+		} ver32;
+
+		if (copy_from_user(&ver32, P(arg), sizeof(ver32)))
+			return -EFAULT;
+		ver.name_len = ver32.name_len;
+		ver.name = P(ver32.name);
+		ver.date_len = ver32.date_len;
+		ver.date = P(ver32.date);
+		ver.desc_len = ver32.desc_len;
+		ver.desc = P(ver32.desc);
+		ret = DO_IOCTL(fd, DRM_IOCTL_VERSION, &ver);
+		if (ret >= 0) {
+			ver32.version_major = ver.version_major;
+			ver32.version_minor = ver.version_minor;
+			ver32.version_patchlevel = ver.version_patchlevel;
+			ver32.name_len = ver.name_len;
+			ver32.date_len = ver.date_len;
+			ver32.desc_len = ver.desc_len;
+			if (copy_to_user(P(arg), &ver32, sizeof(ver32)))
+				return -EFAULT;
+		}
+		return ret;
+	}
+
+	case IOCTL_NR(DRM_IOCTL_GET_UNIQUE):
+	{
+		drm_unique_t un;
+		struct {
+			unsigned int unique_len;
+			unsigned int unique;
+		} un32;
+
+		if (copy_from_user(&un32, P(arg), sizeof(un32)))
+			return -EFAULT;
+		un.unique_len = un32.unique_len;
+		un.unique = P(un32.unique);
+		ret = DO_IOCTL(fd, DRM_IOCTL_GET_UNIQUE, &un);
+		if (ret >= 0) {
+			un32.unique_len = un.unique_len;
+			if (copy_to_user(P(arg), &un32, sizeof(un32)))
+				return -EFAULT;
+		}
+		return ret;
+	}
+	case IOCTL_NR(DRM_IOCTL_SET_UNIQUE):
+	case IOCTL_NR(DRM_IOCTL_ADD_MAP):
+	case IOCTL_NR(DRM_IOCTL_ADD_BUFS):
+	case IOCTL_NR(DRM_IOCTL_MARK_BUFS):
+	case IOCTL_NR(DRM_IOCTL_INFO_BUFS):
+	case IOCTL_NR(DRM_IOCTL_MAP_BUFS):
+	case IOCTL_NR(DRM_IOCTL_FREE_BUFS):
+	case IOCTL_NR(DRM_IOCTL_ADD_CTX):
+	case IOCTL_NR(DRM_IOCTL_RM_CTX):
+	case IOCTL_NR(DRM_IOCTL_MOD_CTX):
+	case IOCTL_NR(DRM_IOCTL_GET_CTX):
+	case IOCTL_NR(DRM_IOCTL_SWITCH_CTX):
+	case IOCTL_NR(DRM_IOCTL_NEW_CTX):
+	case IOCTL_NR(DRM_IOCTL_RES_CTX):
+
+	case IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE):
+	case IOCTL_NR(DRM_IOCTL_AGP_RELEASE):
+	case IOCTL_NR(DRM_IOCTL_AGP_ENABLE):
+	case IOCTL_NR(DRM_IOCTL_AGP_INFO):
+	case IOCTL_NR(DRM_IOCTL_AGP_ALLOC):
+	case IOCTL_NR(DRM_IOCTL_AGP_FREE):
+	case IOCTL_NR(DRM_IOCTL_AGP_BIND):
+	case IOCTL_NR(DRM_IOCTL_AGP_UNBIND):
+
+	/* Mga specific ioctls */
+
+	case IOCTL_NR(DRM_IOCTL_MGA_INIT):
+
+	/* I810 specific ioctls */
+
+	case IOCTL_NR(DRM_IOCTL_I810_GETBUF):
+	case IOCTL_NR(DRM_IOCTL_I810_COPY):
+
+	case IOCTL_NR(MTIOCGET):
+	case IOCTL_NR(MTIOCPOS):
+	case IOCTL_NR(MTIOCGETCONFIG):
+	case IOCTL_NR(MTIOCSETCONFIG):
+	case IOCTL_NR(PPPIOCSCOMPRESS):
+	case IOCTL_NR(PPPIOCGIDLE):
+	case IOCTL_NR(NCP_IOC_GET_FS_INFO_V2):
+	case IOCTL_NR(NCP_IOC_GETOBJECTNAME):
+	case IOCTL_NR(NCP_IOC_SETOBJECTNAME):
+	case IOCTL_NR(NCP_IOC_GETPRIVATEDATA):
+	case IOCTL_NR(NCP_IOC_SETPRIVATEDATA):
+	case IOCTL_NR(NCP_IOC_GETMOUNTUID2):
+	case IOCTL_NR(CAPI_MANUFACTURER_CMD):
+	case IOCTL_NR(VIDIOCGTUNER):
+	case IOCTL_NR(VIDIOCSTUNER):
+	case IOCTL_NR(VIDIOCGWIN):
+	case IOCTL_NR(VIDIOCSWIN):
+	case IOCTL_NR(VIDIOCGFBUF):
+	case IOCTL_NR(VIDIOCSFBUF):
+	case IOCTL_NR(MGSL_IOCSPARAMS):
+	case IOCTL_NR(MGSL_IOCGPARAMS):
+	case IOCTL_NR(ATM_GETNAMES):
+	case IOCTL_NR(ATM_GETLINKRATE):
+	case IOCTL_NR(ATM_GETTYPE):
+	case IOCTL_NR(ATM_GETESI):
+	case IOCTL_NR(ATM_GETADDR):
+	case IOCTL_NR(ATM_RSTADDR):
+	case IOCTL_NR(ATM_ADDADDR):
+	case IOCTL_NR(ATM_DELADDR):
+	case IOCTL_NR(ATM_GETCIRANGE):
+	case IOCTL_NR(ATM_SETCIRANGE):
+	case IOCTL_NR(ATM_SETESI):
+	case IOCTL_NR(ATM_SETESIF):
+	case IOCTL_NR(ATM_GETSTAT):
+	case IOCTL_NR(ATM_GETSTATZ):
+	case IOCTL_NR(ATM_GETLOOP):
+	case IOCTL_NR(ATM_SETLOOP):
+	case IOCTL_NR(ATM_QUERYLOOP):
+	case IOCTL_NR(ENI_SETMULT):
+	case IOCTL_NR(NS_GETPSTAT):
+	/* case IOCTL_NR(NS_SETBUFLEV): This is a duplicate case with ZATM_GETPOOLZ */
+	case IOCTL_NR(ZATM_GETPOOLZ):
+	case IOCTL_NR(ZATM_GETPOOL):
+	case IOCTL_NR(ZATM_SETPOOL):
+	case IOCTL_NR(ZATM_GETTHIST):
+	case IOCTL_NR(IDT77105_GETSTAT):
+	case IOCTL_NR(IDT77105_GETSTATZ):
+	case IOCTL_NR(IXJCTL_TONE_CADENCE):
+	case IOCTL_NR(IXJCTL_FRAMES_READ):
+	case IOCTL_NR(IXJCTL_FRAMES_WRITTEN):
+	case IOCTL_NR(IXJCTL_READ_WAIT):
+	case IOCTL_NR(IXJCTL_WRITE_WAIT):
+	case IOCTL_NR(IXJCTL_DRYBUFFER_READ):
+	case IOCTL_NR(I2OHRTGET):
+	case IOCTL_NR(I2OLCTGET):
+	case IOCTL_NR(I2OPARMSET):
+	case IOCTL_NR(I2OPARMGET):
+	case IOCTL_NR(I2OSWDL):
+	case IOCTL_NR(I2OSWUL):
+	case IOCTL_NR(I2OSWDEL):
+	case IOCTL_NR(I2OHTML):
 		break;
-	default:
-		return(sys_ioctl(fd, cmd, (unsigned long)arg));
+	default:
+		return sys_ioctl(fd, cmd, (unsigned long)arg);

 	}
 	printk("%x:unimplemented IA32 ioctl system call\n", cmd);
-	return(-EINVAL);
+	return -EINVAL;
 }
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_ldt.c linux-2.4.13-lia/arch/ia64/ia32/ia32_ldt.c
--- linux-2.4.13/arch/ia64/ia32/ia32_ldt.c	Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_ldt.c	Wed Oct 24 18:12:38 2001
@@ -1,6 +1,6 @@
 /*
  * Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang
+ *	David Mosberger-Tang
  *
  * Adapted from arch/i386/kernel/ldt.c
  */
@@ -16,6 +16,8 @@
 #include
 #include

+#define P(p) ((void *) (unsigned long) (p))
+
 /*
  * read_ldt() is not really atomic - this is not a problem since synchronization of reads
  * and writes done to the LDT has to be assured by user-space anyway.  Writes are atomic,
@@ -58,10 +60,30 @@
 }

 static int
+read_default_ldt (void * ptr, unsigned long bytecount)
+{
+	unsigned long size;
+	int err;
+
+	/* XXX fix me: should return equivalent of default_ldt[0] */
+	err = 0;
+	size = 8;
+	if (size > bytecount)
+		size = bytecount;
+
+	err = size;
+	if (clear_user(ptr, size))
+		err = -EFAULT;
+
+	return err;
+}
+
+static int
 write_ldt (void * ptr, unsigned long bytecount, int oldmode)
 {
 	struct ia32_modify_ldt_ldt_s ldt_info;
 	__u64 entry;
+	int ret;

 	if (bytecount != sizeof(ldt_info))
 		return -EINVAL;
@@ -97,23 +119,28 @@
 	 * memory, but we still need to guard against out-of-memory, hence we must use
 	 * put_user().
 	 */
-	return __put_user(entry, (__u64 *) IA32_LDT_OFFSET + ldt_info.entry_number);
+	ret = __put_user(entry, (__u64 *) IA32_LDT_OFFSET + ldt_info.entry_number);
+	ia32_load_segment_descriptors(current);
+	return ret;
 }

 asmlinkage int
-sys32_modify_ldt (int func, void *ptr, unsigned int bytecount)
+sys32_modify_ldt (int func, unsigned int ptr, unsigned int bytecount)
 {
 	int ret = -ENOSYS;

 	switch (func) {
 	      case 0:
-		ret = read_ldt(ptr, bytecount);
+		ret = read_ldt(P(ptr), bytecount);
 		break;
 	      case 1:
-		ret = write_ldt(ptr, bytecount, 1);
+		ret = write_ldt(P(ptr), bytecount, 1);
+		break;
+	      case 2:
+		ret = read_default_ldt(P(ptr), bytecount);
 		break;
 	      case 0x11:
-		ret = write_ldt(ptr, bytecount, 0);
+		ret = write_ldt(P(ptr), bytecount, 0);
 		break;
 	}
 	return ret;
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_signal.c linux-2.4.13-lia/arch/ia64/ia32/ia32_signal.c
--- linux-2.4.13/arch/ia64/ia32/ia32_signal.c	Mon Oct  9 17:54:53 2000
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_signal.c	Wed Oct 10 17:38:49 2001
@@ -1,8 +1,8 @@
 /*
  * IA32 Architecture-specific signal handling support.
 *
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang
+ * Copyright (C) 1999, 2001 Hewlett-Packard Co
+ *	David Mosberger-Tang
  * Copyright (C) 1999 Arun Sharma
  * Copyright (C) 2000 VA Linux Co
  * Copyright (C) 2000 Don Dugger
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -28,9 +29,15 @@
 #include
 #include

+#include "../kernel/sigframe.h"
+
+#define A(__x) ((unsigned long)(__x))
+
 #define DEBUG_SIG 0
 #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))

+#define __IA32_NR_sigreturn 119
+#define __IA32_NR_rt_sigreturn 173

 struct sigframe_ia32
 {
@@ -54,12 +61,51 @@
 	char retcode[8];
 };

-static int
+int
+copy_siginfo_from_user32 (siginfo_t *to, siginfo_t32 *from)
+{
+	unsigned long tmp;
+	int err;
+
+	if (!access_ok(VERIFY_READ, from, sizeof(siginfo_t32)))
+		return -EFAULT;
+
+	err = __get_user(to->si_signo, &from->si_signo);
+	err |= __get_user(to->si_errno, &from->si_errno);
+	err |= __get_user(to->si_code, &from->si_code);
+
+	if (from->si_code < 0)
+		err |= __copy_from_user(&to->_sifields._pad, &from->_sifields._pad, SI_PAD_SIZE);
+	else {
+		switch (from->si_code >> 16) {
+		      case __SI_CHLD >> 16:
+			err |= __get_user(to->si_utime, &from->si_utime);
+			err |= __get_user(to->si_stime, &from->si_stime);
+			err |= __get_user(to->si_status, &from->si_status);
+		      default:
+			err |= __get_user(to->si_pid, &from->si_pid);
+			err |= __get_user(to->si_uid, &from->si_uid);
+			break;
+		      case __SI_FAULT >> 16:
+			err |= __get_user(tmp, &from->si_addr);
+			to->si_addr = (void *) tmp;
+			break;
+		      case __SI_POLL >> 16:
+			err |= __get_user(to->si_band, &from->si_band);
+			err |= __get_user(to->si_fd, &from->si_fd);
+			break;
+		      /* case __SI_RT: This is not generated by the kernel as of now. */
+		}
+	}
+	return err;
+}
+
+int
 copy_siginfo_to_user32 (siginfo_t32 *to, siginfo_t *from)
 {
 	int err;

-	if (!access_ok (VERIFY_WRITE, to, sizeof(siginfo_t32)))
+	if (!access_ok(VERIFY_WRITE, to, sizeof(siginfo_t32)))
 		return -EFAULT;

 	/* If you change siginfo_t structure, please be sure
@@ -97,110 +143,329 @@
 	return err;
 }

+static inline void
+sigact_set_handler (struct k_sigaction *sa, unsigned int handler, unsigned int restorer)
+{
+	if (handler + 1 <= 2)
+		/* SIG_DFL, SIG_IGN, or SIG_ERR: must sign-extend to 64-bits */
+		sa->sa.sa_handler = (__sighandler_t) A((int) handler);
+	else
+		sa->sa.sa_handler = (__sighandler_t) (((unsigned long) restorer << 32) | handler);
+}

+asmlinkage long
+ia32_rt_sigsuspend (sigset32_t *uset, unsigned int sigsetsize, struct sigscratch *scr)
+{
+	extern long ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall);
+	sigset_t oldset, set;

-static int
-setup_sigcontext_ia32(struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
-		      struct pt_regs *regs, unsigned long mask)
+	scr->scratch_unat = 0;	/* avoid leaking kernel bits to user level */
+	memset(&set, 0, sizeof(set));
+
+	if (sigsetsize > sizeof(sigset_t))
+		return -EINVAL;
+
+	if (copy_from_user(&set.sig, &uset->sig, sigsetsize))
+		return -EFAULT;
+
+	sigdelsetmask(&set, ~_BLOCKABLE);
+
+	spin_lock_irq(&current->sigmask_lock);
+	{
+		oldset = current->blocked;
+		current->blocked = set;
+		recalc_sigpending(current);
+	}
+	spin_unlock_irq(&current->sigmask_lock);
+
+	/*
+	 * The return below usually returns to the signal handler.  We need to pre-set the
+	 * correct error code here to ensure that the right values get saved in sigcontext
+	 * by ia64_do_signal.
+	 */
+	scr->pt.r8 = -EINTR;
+	while (1) {
+		current->state = TASK_INTERRUPTIBLE;
+		schedule();
+		if (ia64_do_signal(&oldset, scr, 1))
+			return -EINTR;
+	}
+}
+
+asmlinkage long
+ia32_sigsuspend (unsigned int mask, struct sigscratch *scr)
+{
+	return ia32_rt_sigsuspend((sigset32_t *)&mask, sizeof(mask), scr);
+}
+
+asmlinkage long
+sys32_signal (int sig, unsigned int handler)
+{
+	struct k_sigaction new_sa, old_sa;
+	int ret;
+
+	sigact_set_handler(&new_sa, handler, 0);
+	new_sa.sa.sa_flags = SA_ONESHOT | SA_NOMASK;
+
+	ret = do_sigaction(sig, &new_sa, &old_sa);
+
+	return ret ? ret : IA32_SA_HANDLER(&old_sa);
+}
+
+asmlinkage long
+sys32_rt_sigaction (int sig, struct sigaction32 *act,
+		    struct sigaction32 *oact, unsigned int sigsetsize)
+{
+	struct k_sigaction new_ka, old_ka;
+	unsigned int handler, restorer;
+	int ret;
+
+	/* XXX: Don't preclude handling different sized sigset_t's. */
+	if (sigsetsize != sizeof(sigset32_t))
+		return -EINVAL;
+
+	if (act) {
+		ret = get_user(handler, &act->sa_handler);
+		ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
+		ret |= get_user(restorer, &act->sa_restorer);
+		ret |= copy_from_user(&new_ka.sa.sa_mask, &act->sa_mask, sizeof(sigset32_t));
+		if (ret)
+			return -EFAULT;
+
+		sigact_set_handler(&new_ka, handler, restorer);
+	}
+
+	ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+	if (!ret && oact) {
+		ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
+		ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+		ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
+		ret |= copy_to_user(&oact->sa_mask, &old_ka.sa.sa_mask, sizeof(sigset32_t));
+	}
+	return ret;
+}
+
+
+extern asmlinkage long sys_rt_sigprocmask (int how, sigset_t *set, sigset_t *oset,
+					   size_t sigsetsize);
+
+asmlinkage long
+sys32_rt_sigprocmask (int how, sigset32_t *set, sigset32_t *oset, unsigned int sigsetsize)
+{
+	mm_segment_t old_fs = get_fs();
+	sigset_t s;
+	long ret;
+
+	if (sigsetsize > sizeof(s))
+		return -EINVAL;
+
+	if (set) {
+		memset(&s, 0, sizeof(s));
+		if (copy_from_user(&s.sig, set, sigsetsize))
+			return -EFAULT;
+	}
+	set_fs(KERNEL_DS);
+	ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL, sizeof(s));
+	set_fs(old_fs);
+	if (ret)
+		return ret;
+	if (oset) {
+		if (copy_to_user(oset, &s.sig, sigsetsize))
+			return -EFAULT;
+	}
+	return 0;
+}
+
+asmlinkage long
+sys32_sigprocmask (int how, unsigned int *set, unsigned int *oset)
 {
-	int err = 0;
-	unsigned long flag;
+	return sys32_rt_sigprocmask(how, (sigset32_t *) set, (sigset32_t *) oset, sizeof(*set));
+}

-	err |= __put_user((regs->r16 >> 32) & 0xffff , (unsigned int *)&sc->fs);
-	err |= __put_user((regs->r16 >> 48) & 0xffff , (unsigned int *)&sc->gs);
+asmlinkage long
+sys32_rt_sigtimedwait (sigset32_t *uthese, siginfo_t32 *uinfo, struct timespec32 *uts,
+		       unsigned int sigsetsize)
+{
+	extern asmlinkage long sys_rt_sigtimedwait (const sigset_t *, siginfo_t *,
+						    const struct timespec *, size_t);
+	extern int copy_siginfo_to_user32 (siginfo_t32 *, siginfo_t *);
+	mm_segment_t old_fs = get_fs();
+	struct timespec t;
+	siginfo_t info;
+	sigset_t s;
+	int ret;

-	err |= __put_user((regs->r16 >> 56) & 0xffff, (unsigned int *)&sc->es);
-	err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
-	err |= __put_user(regs->r15, &sc->edi);
-	err |= __put_user(regs->r14, &sc->esi);
-	err |= __put_user(regs->r13, &sc->ebp);
-	err |= __put_user(regs->r12, &sc->esp);
-	err |= __put_user(regs->r11, &sc->ebx);
-	err |= __put_user(regs->r10, &sc->edx);
-	err |= __put_user(regs->r9, &sc->ecx);
-	err |= __put_user(regs->r8, &sc->eax);
+	if (copy_from_user(&s.sig, uthese, sizeof(sigset32_t)))
+		return -EFAULT;
+	if (uts) {
+		ret = get_user(t.tv_sec, &uts->tv_sec);
+		ret |= get_user(t.tv_nsec, &uts->tv_nsec);
+		if (ret)
+			return -EFAULT;
+	}
+	set_fs(KERNEL_DS);
+	ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
+	set_fs(old_fs);
+	if (ret >= 0 && uinfo) {
+		if (copy_siginfo_to_user32(uinfo, &info))
+			return -EFAULT;
+	}
+	return ret;
+}
+
+asmlinkage long
+sys32_rt_sigqueueinfo (int pid, int sig, siginfo_t32 *uinfo)
+{
+	extern asmlinkage long sys_rt_sigqueueinfo (int, int, siginfo_t *);
+	extern int copy_siginfo_from_user32 (siginfo_t *to, siginfo_t32 *from);
+	mm_segment_t old_fs = get_fs();
+	siginfo_t info;
+	int ret;
+
+	if (copy_siginfo_from_user32(&info, uinfo))
+		return -EFAULT;
+	set_fs(KERNEL_DS);
+	ret = sys_rt_sigqueueinfo(pid, sig, &info);
+	set_fs(old_fs);
+	return ret;
+}
+
+asmlinkage long
+sys32_sigaction (int sig, struct old_sigaction32 *act, struct old_sigaction32 *oact)
+{
+	struct k_sigaction new_ka, old_ka;
+	unsigned int handler, restorer;
+	int ret;
+
+	if (act) {
+		old_sigset32_t mask;
+
+		ret = get_user(handler, &act->sa_handler);
+		ret |= get_user(new_ka.sa.sa_flags, &act->sa_flags);
+		ret |= get_user(restorer, &act->sa_restorer);
+		ret |= get_user(mask, &act->sa_mask);
+		if (ret)
+			return ret;
+
+		sigact_set_handler(&new_ka, handler, restorer);
+		siginitset(&new_ka.sa.sa_mask, mask);
+	}
+
+	ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+	if (!ret && oact) {
+		ret = put_user(IA32_SA_HANDLER(&old_ka), &oact->sa_handler);
+		ret |= put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+		ret |= put_user(IA32_SA_RESTORER(&old_ka), &oact->sa_restorer);
+		ret |= put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+	}
+
+	return ret;
+}
+
+static int
+setup_sigcontext_ia32 (struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
+		       struct pt_regs *regs, unsigned long mask)
+{
+	int err = 0;
+	unsigned long flag;
+
+	err |= __put_user((regs->r16 >> 32) & 0xffff, (unsigned int *)&sc->fs);
+	err |= __put_user((regs->r16 >> 48) & 0xffff, (unsigned int *)&sc->gs);
+	err |= __put_user((regs->r16 >> 16) & 0xffff, (unsigned int *)&sc->es);
+	err |= __put_user(regs->r16 & 0xffff, (unsigned int *)&sc->ds);
+	err |= __put_user(regs->r15, &sc->edi);
+	err |= __put_user(regs->r14, &sc->esi);
+	err |= __put_user(regs->r13, &sc->ebp);
+	err |= __put_user(regs->r12, &sc->esp);
+	err |= __put_user(regs->r11, &sc->ebx);
+	err |= __put_user(regs->r10, &sc->edx);
+	err |= __put_user(regs->r9, &sc->ecx);
+	err |= __put_user(regs->r8, &sc->eax);
 #if 0
-	err |= __put_user(current->tss.trap_no, &sc->trapno);
-	err |= __put_user(current->tss.error_code, &sc->err);
+	err |= __put_user(current->tss.trap_no, &sc->trapno);
+	err |= __put_user(current->tss.error_code, &sc->err);
 #endif
-	err |= __put_user(regs->cr_iip, &sc->eip);
-	err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
-	/*
-	 * `eflags' is in an ar register for this context
-	 */
-	asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
-	err |= __put_user((unsigned int)flag, &sc->eflags);
-
-	err |= __put_user(regs->r12, &sc->esp_at_signal);
-	err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);
+	err |= __put_user(regs->cr_iip, &sc->eip);
+	err |= __put_user(regs->r17 & 0xffff, (unsigned int *)&sc->cs);
+	/*
+	 * `eflags' is in an ar register for this context
+	 */
+	asm volatile ("mov %0=ar.eflag ;;" : "=r"(flag));
+	err |= __put_user((unsigned int)flag, &sc->eflags);
+	err |= __put_user(regs->r12, &sc->esp_at_signal);
+	err |= __put_user((regs->r17 >> 16) & 0xffff, (unsigned int *)&sc->ss);

 #if 0
-	tmp = save_i387(fpstate);
-	if (tmp < 0)
-		err = 1;
-	else
-		err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
+	tmp = save_i387(fpstate);
+	if (tmp < 0)
+		err = 1;
+	else
+		err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);

-	/* non-iBCS2 extensions.. */
+	/* non-iBCS2 extensions.. */
 #endif
-	err |= __put_user(mask, &sc->oldmask);
+	err |= __put_user(mask, &sc->oldmask);
 #if 0
-	err |= __put_user(current->tss.cr2, &sc->cr2);
+	err |= __put_user(current->tss.cr2, &sc->cr2);
 #endif
-
-	return err;
+	return err;
 }

 static int
-restore_sigcontext_ia32(struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
+restore_sigcontext_ia32 (struct pt_regs *regs, struct sigcontext_ia32 *sc, int *peax)
 {
-	unsigned int err = 0;
+	unsigned int err = 0;
+
+#define COPY(ia64x, ia32x)	err |= __get_user(regs->ia64x, &sc->ia32x)

-#define COPY(ia64x, ia32x)	err |= __get_user(regs->ia64x, &sc->ia32x)
+#define copyseg_gs(tmp)	(regs->r16 |= (unsigned long) tmp << 48)
+#define copyseg_fs(tmp)	(regs->r16 |= (unsigned long) tmp << 32)
+#define copyseg_cs(tmp)	(regs->r17 |= tmp)
+#define copyseg_ss(tmp)	(regs->r17 |= (unsigned long) tmp << 16)
+#define copyseg_es(tmp)	(regs->r16 |= (unsigned long) tmp << 16)
+#define copyseg_ds(tmp)	(regs->r16 |= tmp)
+
+#define COPY_SEG(seg)					\
+	{						\
+		unsigned short tmp;			\
+		err |= __get_user(tmp, &sc->seg);	\
+		copyseg_##seg(tmp);			\
+	}
+#define COPY_SEG_STRICT(seg)				\
+	{						\
+		unsigned short tmp;			\
+		err |= __get_user(tmp, &sc->seg);	\
+		copyseg_##seg(tmp|3);			\
+	}

-#define copyseg_gs(tmp)	(regs->r16 |= (unsigned long) tmp << 48)
-#define copyseg_fs(tmp)	(regs->r16 |= (unsigned long) tmp << 32)
-#define copyseg_cs(tmp)	(regs->r17 |= tmp)
-#define copyseg_ss(tmp)	(regs->r17 |= (unsigned long) tmp << 16)
-#define copyseg_es(tmp)	(regs->r16 |= (unsigned long) tmp << 16)
-#define copyseg_ds(tmp)	(regs->r16 |= tmp)
-
-#define COPY_SEG(seg) \
-	{ unsigned short tmp; \
-	  err |= __get_user(tmp, &sc->seg); \
-	  copyseg_##seg(tmp); }
-
-#define COPY_SEG_STRICT(seg) \
-	{ unsigned short tmp; \
-	  err |= __get_user(tmp, &sc->seg); \
-	  copyseg_##seg(tmp|3); }
-
-	/* To make COPY_SEGs easier, we zero r16, r17 */
-	regs->r16 = 0;
-	regs->r17 = 0;
-
-	COPY_SEG(gs);
-	COPY_SEG(fs);
-	COPY_SEG(es);
-	COPY_SEG(ds);
-	COPY(r15, edi);
-	COPY(r14, esi);
-	COPY(r13, ebp);
-	COPY(r12, esp);
-	COPY(r11, ebx);
-	COPY(r10, edx);
-	COPY(r9, ecx);
-	COPY(cr_iip, eip);
-	COPY_SEG_STRICT(cs);
-	COPY_SEG_STRICT(ss);
-	{
+	/* To make COPY_SEGs easier, we zero r16, r17 */
+	regs->r16 = 0;
+	regs->r17 = 0;
+
+	COPY_SEG(gs);
+	COPY_SEG(fs);
+	COPY_SEG(es);
+	COPY_SEG(ds);
+	COPY(r15, edi);
+	COPY(r14, esi);
+	COPY(r13, ebp);
+	COPY(r12, esp);
+	COPY(r11, ebx);
+	COPY(r10, edx);
+	COPY(r9, ecx);
+	COPY(cr_iip, eip);
+	COPY_SEG_STRICT(cs);
+	COPY_SEG_STRICT(ss);
+	ia32_load_segment_descriptors(current);
+	{
 		unsigned int tmpflags;
 		unsigned long flag;

 		/*
-		 * IA32 `eflags' is not part of `pt_regs', it's
-		 * in an ar register which is part of the thread
-		 * context.  Fortunately, we are executing in the
+		 * IA32 `eflags' is not part of `pt_regs', it's in an ar register which
+		 * is part of the thread context.  Fortunately, we are executing in the
 		 * IA32 process's context.
 		 */
 		err |= __get_user(tmpflags, &sc->eflags);
@@ -210,186 +475,191 @@
 		asm volatile ("mov ar.eflag=%0 ;;" :: "r"(flag));

 		regs->r1 = -1;	/* disable syscall checks, r1 is orig_eax */
-	}
+	}

 #if 0
-	{
-		struct _fpstate * buf;
-		err |= __get_user(buf, &sc->fpstate);
-		if (buf) {
-			if (verify_area(VERIFY_READ, buf, sizeof(*buf)))
-				goto badframe;
-			err |= restore_i387(buf);
-		}
-	}
+	{
+		struct _fpstate * buf;
+		err |= __get_user(buf, &sc->fpstate);
+		if (buf) {
+			if (verify_area(VERIFY_READ, buf, sizeof(*buf)))
+				goto badframe;
+			err |= restore_i387(buf);
+		}
+	}
 #endif

-	err |= __get_user(*peax, &sc->eax);
-	return err;
+	err |= __get_user(*peax, &sc->eax);
+	return err;

-#if 0

-badframe:
-	return 1;
+#if 0
+  badframe:
+	return 1;
 #endif
-
 }

 /*
 * Determine which stack to use..
 */
 static inline void *
-get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
+get_sigframe (struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
 {
-	unsigned long esp;
-	unsigned int xss;
+	unsigned long esp;

-	/* Default to using normal stack */
-	esp = regs->r12;
-	xss = regs->r16 >> 16;
-
-	/* This is the X/Open sanctioned signal stack switching. */
-	if (ka->sa.sa_flags & SA_ONSTACK) {
-		if (! on_sig_stack(esp))
-			esp = current->sas_ss_sp + current->sas_ss_size;
-	}
-	/* Legacy stack switching not supported */
-
-	return (void *)((esp - frame_size) & -8ul);
+	/* Default to using normal stack (truncate off sign-extension of bit 31) */
+	esp = (unsigned int) regs->r12;
+
+	/* This is the X/Open sanctioned signal stack switching. */
+	if (ka->sa.sa_flags & SA_ONSTACK) {
+		if (!on_sig_stack(esp))
+			esp = current->sas_ss_sp + current->sas_ss_size;
+	}
+	/* Legacy stack switching not supported */
+
+	return (void *)((esp - frame_size) & -8ul);
 }

 static int
-setup_frame_ia32(int sig, struct k_sigaction *ka, sigset_t *set,
-		 struct pt_regs * regs)
-{
-	struct sigframe_ia32 *frame;
-	int err = 0;
-
-	frame = get_sigframe(ka, regs, sizeof(*frame));
-
-	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
-		goto give_sigsegv;
-
-	err |= __put_user((current->exec_domain
-			   && current->exec_domain->signal_invmap
-			   && sig < 32
-			   ? (int)(current->exec_domain->signal_invmap[sig])
-			   : sig),
-			  &frame->sig);
-
-	err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
-
-	if (_IA32_NSIG_WORDS > 1) {
-		err |= __copy_to_user(frame->extramask, &set->sig[1],
-				      sizeof(frame->extramask));
-	}
-
-	/* Set up to return from userspace.  If provided, use a stub
-	   already in userspace.  */
-	err |= __put_user((long)frame->retcode, &frame->pretcode);
-	/* This is popl %eax ; movl $,%eax ; int $0x80 */
-	err |= __put_user(0xb858, (short *)(frame->retcode+0));
-#define __IA32_NR_sigreturn 119
-	err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
-	err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
-	err |= __put_user(0x80cd, (short *)(frame->retcode+6));
-
-	if (err)
-		goto give_sigsegv;
-
-	/* Set up registers for signal handler */
-	regs->r12 = (unsigned long) frame;
-	regs->cr_iip = (unsigned long) ka->sa.sa_handler;
-
-	set_fs(USER_DS);
-	regs->r16 = (__USER_DS << 16) | (__USER_DS);	/* ES = DS, GS, FS are zero */
-	regs->r17 = (__USER_DS << 16) | __USER_CS;
+setup_frame_ia32 (int sig, struct k_sigaction *ka, sigset_t *set, struct pt_regs * regs)
+{
+	struct sigframe_ia32 *frame;
+	int err = 0;
+
+	frame = get_sigframe(ka, regs, sizeof(*frame));
+
+	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+		goto give_sigsegv;
+
+	err |= __put_user((current->exec_domain
+			   && current->exec_domain->signal_invmap
+			   && sig < 32
+			   ? (int)(current->exec_domain->signal_invmap[sig])
+			   : sig),
+			  &frame->sig);
+
+	err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]);
+
+	if (_IA32_NSIG_WORDS > 1)
+		err |= __copy_to_user(frame->extramask, (char *) &set->sig + 4,
+				      sizeof(frame->extramask));
+
+	/* Set up to return from userspace.  If provided, use a stub
+	   already in userspace.  */
+	if (ka->sa.sa_flags & SA_RESTORER) {
+		unsigned int restorer = IA32_SA_RESTORER(ka);
+		err |= __put_user(restorer, &frame->pretcode);
+	} else {
+		err |= __put_user((long)frame->retcode, &frame->pretcode);
+		/* This is popl %eax ; movl $,%eax ; int $0x80 */
+		err |= __put_user(0xb858, (short *)(frame->retcode+0));
+		err |= __put_user(__IA32_NR_sigreturn & 0xffff, (short *)(frame->retcode+2));
+		err |= __put_user(__IA32_NR_sigreturn >> 16, (short *)(frame->retcode+4));
+		err |= __put_user(0x80cd, (short *)(frame->retcode+6));
+	}
+
+	if (err)
+		goto give_sigsegv;
+
+	/* Set up registers for signal handler */
+	regs->r12 = (unsigned long) frame;
+	regs->cr_iip = IA32_SA_HANDLER(ka);
+
+	set_fs(USER_DS);
+	regs->r16 = (__USER_DS << 16) | (__USER_DS);	/* ES = DS, GS, FS are zero */
+	regs->r17 = (__USER_DS << 16) | __USER_CS;

 #if 0
-	regs->eflags &= ~TF_MASK;
+	regs->eflags &= ~TF_MASK;
 #endif

 #if 0
-	printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
+	printk("SIG deliver (%s:%d): sig=%d sp=%p pc=%lx ra=%x\n",
 	       current->comm, current->pid, sig, (void *) frame, regs->cr_iip, frame->pretcode);
 #endif

-	return 1;
+	return 1;

-give_sigsegv:
-	if (sig == SIGSEGV)
-		ka->sa.sa_handler = SIG_DFL;
-	force_sig(SIGSEGV, current);
-	return 0;
+  give_sigsegv:
+	if (sig == SIGSEGV)
+		ka->sa.sa_handler = SIG_DFL;
+	force_sig(SIGSEGV, current);
+	return 0;
 }

 static int
-setup_rt_frame_ia32(int sig, struct k_sigaction *ka, siginfo_t *info,
-		    sigset_t *set, struct pt_regs * regs)
+setup_rt_frame_ia32 (int sig, struct k_sigaction *ka, siginfo_t *info,
+		     sigset_t *set, struct pt_regs * regs)
 {
-	struct rt_sigframe_ia32 *frame;
-	int err = 0;
+	struct rt_sigframe_ia32 *frame;
+	int err = 0;

-	frame = get_sigframe(ka, regs, sizeof(*frame));
+	frame = get_sigframe(ka, regs, sizeof(*frame));

-	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
-		goto give_sigsegv;
-
-	err |= __put_user((current->exec_domain
-			   && current->exec_domain->signal_invmap
-			   && sig < 32
-			   ? current->exec_domain->signal_invmap[sig]
-			   : sig),
-			  &frame->sig);
-	err |= __put_user((long)&frame->info, &frame->pinfo);
-	err |= __put_user((long)&frame->uc, &frame->puc);
-	err |= copy_siginfo_to_user32(&frame->info, info);
-
-	/* Create the ucontext.  */
-	err |= __put_user(0, &frame->uc.uc_flags);
-	err |= __put_user(0, &frame->uc.uc_link);
-	err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
-	err |= __put_user(sas_ss_flags(regs->r12),
-			  &frame->uc.uc_stack.ss_flags);
-	err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
-	err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate,
-				     regs, set->sig[0]);
-	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
-
-	err |= __put_user((long)frame->retcode, &frame->pretcode);
-	/* This is movl $,%eax ; int $0x80 */
-	err |= __put_user(0xb8, (char *)(frame->retcode+0));
-#define __IA32_NR_rt_sigreturn 173
-	err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
-	err |= __put_user(0x80cd, (short *)(frame->retcode+5));
+	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+		goto give_sigsegv;
+
+	err |= __put_user((current->exec_domain
+			   && current->exec_domain->signal_invmap
+			   && sig < 32
+			   ? current->exec_domain->signal_invmap[sig]
+			   : sig),
+			  &frame->sig);
+	err |= __put_user((long)&frame->info, &frame->pinfo);
+	err |= __put_user((long)&frame->uc, &frame->puc);
+	err |= copy_siginfo_to_user32(&frame->info, info);
+
+	/* Create the ucontext.  */
+	err |= __put_user(0, &frame->uc.uc_flags);
+	err |= __put_user(0, &frame->uc.uc_link);
+	err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+	err |= __put_user(sas_ss_flags(regs->r12), &frame->uc.uc_stack.ss_flags);
+	err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+	err |= setup_sigcontext_ia32(&frame->uc.uc_mcontext, &frame->fpstate, regs, set->sig[0]);
+	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+	if (err)
+		goto give_sigsegv;
+
+	/* Set up to return from userspace.  If provided, use a stub
+	   already in userspace.  */
+	if (ka->sa.sa_flags & SA_RESTORER) {
+		unsigned int restorer = IA32_SA_RESTORER(ka);
+		err |= __put_user(restorer, &frame->pretcode);
+	} else {
+		err |= __put_user((long)frame->retcode, &frame->pretcode);
+		/* This is movl $,%eax ; int $0x80 */
+		err |= __put_user(0xb8, (char *)(frame->retcode+0));
+		err |= __put_user(__IA32_NR_rt_sigreturn, (int *)(frame->retcode+1));
+		err |= __put_user(0x80cd, (short *)(frame->retcode+5));
	}

-	if (err)
-		goto give_sigsegv;
+	if (err)
+		goto give_sigsegv;

-	/* Set up registers for signal handler */
-	regs->r12 = (unsigned long) frame;
-	regs->cr_iip = (unsigned long) ka->sa.sa_handler;
+	/* Set up registers for signal handler */
+	regs->r12 = (unsigned long) frame;
+	regs->cr_iip = IA32_SA_HANDLER(ka);

-	set_fs(USER_DS);
+	set_fs(USER_DS);

-	regs->r16 = (__USER_DS << 16) | (__USER_DS);	/* ES = DS, GS, FS are zero */
-	regs->r17 = (__USER_DS << 16) | __USER_CS;
+	regs->r16 = (__USER_DS << 16) | (__USER_DS);	/* ES = DS, GS, FS are zero */
+	regs->r17 = (__USER_DS << 16) | __USER_CS;

 #if 0
-	regs->eflags &= ~TF_MASK;
+	regs->eflags &= ~TF_MASK;
 #endif

 #if 0
-	printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
+	printk("SIG deliver (%s:%d): sp=%p pc=%lx ra=%x\n",
 	       current->comm, current->pid, (void *) frame, regs->cr_iip, frame->pretcode);
 #endif

-	return 1;
+	return 1;

 give_sigsegv:
-	if (sig == SIGSEGV)
-		ka->sa.sa_handler = SIG_DFL;
-	force_sig(SIGSEGV, current);
-	return 0;
+	if (sig == SIGSEGV)
+		ka->sa.sa_handler = SIG_DFL;
+	force_sig(SIGSEGV, current);
+	return 0;
 }

 int
@@ -398,95 +668,78 @@
 {
 	/* Set up the stack frame */
 	if (ka->sa.sa_flags & SA_SIGINFO)
-		return(setup_rt_frame_ia32(sig, ka, info, set, regs));
+		return setup_rt_frame_ia32(sig, ka, info, set, regs);
 	else
-		return(setup_frame_ia32(sig, ka, set, regs));
+		return setup_frame_ia32(sig, ka, set, regs);
 }

-asmlinkage int
-sys32_sigreturn(
-int arg0,
-int arg1,
-int arg2,
-int arg3,
-int arg4,
-int arg5,
-int arg6,
-int arg7,
-unsigned long stack)
-{
-	struct pt_regs *regs = (struct pt_regs *) &stack;
-	struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(regs->r12-8);
-	sigset_t set;
-	int eax;
-
-	if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
-		goto badframe;
-
-	if (__get_user(set.sig[0], &frame->sc.oldmask)
-	    || (_IA32_NSIG_WORDS > 1
-		&& __copy_from_user((((char *) &set.sig) + 4),
-				    &frame->extramask,
-				    sizeof(frame->extramask))))
-		goto badframe;
-
-	sigdelsetmask(&set, ~_BLOCKABLE);
-	spin_lock_irq(&current->sigmask_lock);
-	current->blocked = (sigset_t) set;
-	recalc_sigpending(current);
-	spin_unlock_irq(&current->sigmask_lock);
-
-	if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
-		goto badframe;
-	return eax;
-
-badframe:
-	force_sig(SIGSEGV, current);
-	return 0;
-}

-
-asmlinkage int
-sys32_rt_sigreturn(
-int arg0,
-int arg1,
-int arg2,
-int arg3,
-int arg4,
-int arg5,
-int arg6,
-int arg7,
-unsigned long stack)
-{
-	struct pt_regs *regs = (struct pt_regs *) &stack;
-	struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(regs->r12 - 4);
-	sigset_t set;
-	stack_t st;
-	int eax;
-
-	if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
-		goto badframe;
-	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
-		goto badframe;
-
-	sigdelsetmask(&set, ~_BLOCKABLE);
-	spin_lock_irq(&current->sigmask_lock);
-	current->blocked = set;
-	recalc_sigpending(current);
-	spin_unlock_irq(&current->sigmask_lock);
-
-	if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
-		goto badframe;
-
-	if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
-		goto badframe;
-	/* It is more difficult to avoid calling this function than to
-	   call it and ignore errors.  */
-	do_sigaltstack(&st, NULL, regs->r12);
-
-	return eax;
-
-badframe:
-	force_sig(SIGSEGV, current);
-	return 0;
-}

+asmlinkage long
+sys32_sigreturn (int arg0, int arg1, int arg2, int arg3, int arg4, int arg5, int arg6, int arg7,
+		 unsigned long stack)
+{
+	struct pt_regs *regs = (struct pt_regs *) &stack;
+	unsigned long esp = (unsigned int) regs->r12;
+	struct sigframe_ia32 *frame = (struct sigframe_ia32 *)(esp - 8);
+	sigset_t set;
+	int eax;
+
+	if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+		goto badframe;
+
+	if (__get_user(set.sig[0], &frame->sc.oldmask)
+	    || (_IA32_NSIG_WORDS > 1 && __copy_from_user((char *) &set.sig + 4, &frame->extramask,
+							 sizeof(frame->extramask))))
+		goto badframe;
+
+	sigdelsetmask(&set, ~_BLOCKABLE);
+	spin_lock_irq(&current->sigmask_lock);
+	current->blocked = (sigset_t) set;
+	recalc_sigpending(current);
+	spin_unlock_irq(&current->sigmask_lock);
+
+	if (restore_sigcontext_ia32(regs, &frame->sc, &eax))
+		goto badframe;
+	return eax;
+
+  badframe:
+	force_sig(SIGSEGV, current);
+	return 0;
+}

+asmlinkage long
+sys32_rt_sigreturn (int arg0, int arg1, int arg2, int arg3, int arg4, int arg5, int arg6, int arg7,
+		    unsigned long stack)
+{
+	struct pt_regs *regs = (struct pt_regs *) &stack;
+	unsigned long esp = (unsigned int) regs->r12;
+	struct rt_sigframe_ia32 *frame = (struct rt_sigframe_ia32 *)(esp - 4);
+	sigset_t set;
+	stack_t st;
+	int eax;
+
+	if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+		goto badframe;
+	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+		goto badframe;
+
+	sigdelsetmask(&set, ~_BLOCKABLE);
+	spin_lock_irq(&current->sigmask_lock);
+	current->blocked = set;
+	recalc_sigpending(current);
+	spin_unlock_irq(&current->sigmask_lock);
+
+	if (restore_sigcontext_ia32(regs, &frame->uc.uc_mcontext, &eax))
+		goto badframe;
+
+	if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
+		goto badframe;
+	/* It is more difficult to avoid calling this function than to
+	   call it and ignore errors.  */
+	do_sigaltstack(&st, NULL, esp);
+
+	return eax;
+
+  badframe:
+	force_sig(SIGSEGV, current);
+	return 0;
+}
diff -urN linux-2.4.13/arch/ia64/ia32/ia32_support.c linux-2.4.13-lia/arch/ia64/ia32/ia32_support.c
--- linux-2.4.13/arch/ia64/ia32/ia32_support.c	Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/ia32/ia32_support.c	Wed Oct 10 17:39:02 2001
@@ -4,15 +4,18 @@
  * Copyright (C) 1999 Arun Sharma
  * Copyright (C) 2000 Asit K. Mallick
  * Copyright (C) 2001 Hewlett-Packard Co
- * Copyright (C) 2001 David Mosberger-Tang
+ *	David Mosberger-Tang
 *
 * 06/16/00	A. Mallick	added csd/ssd/tssd for ia32 thread context
 * 02/19/01	D. Mosberger	dropped tssd; it's not needed
+ * 09/14/01	D. Mosberger	fixed memory management for gdt/tss page
+ * 09/29/01	D.
Mosberger added ia32_load_segment_descriptors() */ =20 #include #include #include +#include #include =20 #include @@ -21,10 +24,46 @@ #include #include =20 -extern unsigned long *ia32_gdt_table, *ia32_tss; - extern void die_if_kernel (char *str, struct pt_regs *regs, long err); =20 +struct exec_domain ia32_exec_domain; +struct page *ia32_shared_page[(2*IA32_PAGE_SIZE + PAGE_SIZE - 1)/PAGE_SIZE= ]; +unsigned long *ia32_gdt; + +static unsigned long +load_desc (u16 selector) +{ + unsigned long *table, limit, index; + + if (!selector) + return 0; + if (selector & IA32_SEGSEL_TI) { + table =3D (unsigned long *) IA32_LDT_OFFSET; + limit =3D IA32_LDT_ENTRIES; + } else { + table =3D ia32_gdt; + limit =3D IA32_PAGE_SIZE / sizeof(ia32_gdt[0]); + } + index =3D selector >> IA32_SEGSEL_INDEX_SHIFT; + if (index >=3D limit) + return 0; + return IA32_SEG_UNSCRAMBLE(table[index]); +} + +void +ia32_load_segment_descriptors (struct task_struct *task) +{ + struct pt_regs *regs =3D ia64_task_regs(task); + + /* Setup the segment descriptors */ + regs->r24 =3D load_desc(regs->r16 >> 16); /* ESD */ + regs->r27 =3D load_desc(regs->r16 >> 0); /* DSD */ + regs->r28 =3D load_desc(regs->r16 >> 32); /* FSD */ + regs->r29 =3D load_desc(regs->r16 >> 48); /* GSD */ + task->thread.csd =3D load_desc(regs->r17 >> 0); /* CSD */ + task->thread.ssd =3D load_desc(regs->r17 >> 16); /* SSD */ +} + void ia32_save_state (struct task_struct *t) { @@ -46,14 +85,17 @@ t->thread.csd =3D csd; t->thread.ssd =3D ssd; ia64_set_kr(IA64_KR_IO_BASE, t->thread.old_iob); + ia64_set_kr(IA64_KR_TSSD, t->thread.old_k1); } =20 void ia32_load_state (struct task_struct *t) { - unsigned long eflag, fsr, fcr, fir, fdr, csd, ssd; + unsigned long eflag, fsr, fcr, fir, fdr, csd, ssd, tssd; struct pt_regs *regs =3D ia64_task_regs(t); - int nr; + int nr =3D smp_processor_id(); /* LDT and TSS depend on CPU number: */ + + nr =3D smp_processor_id(); =20 eflag =3D t->thread.eflag; fsr =3D t->thread.fsr; @@ -62,6 +104,7 @@ fdr =3D 
t->thread.fdr; csd =3D t->thread.csd; ssd =3D t->thread.ssd; + tssd =3D load_desc(_TSS(nr)); /* TSSD */ =20 asm volatile ("mov ar.eflag=3D%0;" "mov ar.fsr=3D%1;" @@ -72,11 +115,12 @@ "mov ar.ssd=3D%6;" :: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr), "r"(csd), "= r"(ssd)); current->thread.old_iob =3D ia64_get_kr(IA64_KR_IO_BASE); + current->thread.old_k1 =3D ia64_get_kr(IA64_KR_TSSD); ia64_set_kr(IA64_KR_IO_BASE, IA32_IOBASE); + ia64_set_kr(IA64_KR_TSSD, tssd); =20 - /* load TSS and LDT while preserving SS and CS: */ - nr =3D smp_processor_id(); regs->r17 =3D (_TSS(nr) << 48) | (_LDT(nr) << 32) | (__u32) regs->r17; + regs->r30 =3D load_desc(_LDT(nr)); /* LDTD */ } =20 /* @@ -85,36 +129,34 @@ void ia32_gdt_init (void) { - unsigned long gdt_and_tss_page, ldt_size; + unsigned long *tss; + unsigned long ldt_size; int nr; =20 - /* allocate two IA-32 pages of memory: */ - gdt_and_tss_page =3D __get_free_pages(GFP_KERNEL, - (IA32_PAGE_SHIFT < PAGE_SHIFT) - ? 0 : (IA32_PAGE_SHIFT + 1) - PAGE_SHIFT); - ia32_gdt_table =3D (unsigned long *) gdt_and_tss_page; - ia32_tss =3D (unsigned long *) (gdt_and_tss_page + IA32_PAGE_SIZE); - - /* Zero the gdt and tss */ - memset((void *) gdt_and_tss_page, 0, 2*IA32_PAGE_SIZE); + ia32_shared_page[0] =3D alloc_page(GFP_KERNEL); + ia32_gdt =3D page_address(ia32_shared_page[0]); + tss =3D ia32_gdt + IA32_PAGE_SIZE/sizeof(ia32_gdt[0]); + + if (IA32_PAGE_SIZE =3D PAGE_SIZE) { + ia32_shared_page[1] =3D alloc_page(GFP_KERNEL); + tss =3D page_address(ia32_shared_page[1]); + } =20 /* CS descriptor in IA-32 (scrambled) format */ - ia32_gdt_table[__USER_CS >> 3] - IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSE= T - 1) >> IA32_PAGE_SHIFT, - 0xb, 1, 3, 1, 1, 1, 1); + ia32_gdt[__USER_CS >> 3] =3D IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET-1) = >> IA32_PAGE_SHIFT, + 0xb, 1, 3, 1, 1, 1, 1); =20 /* DS descriptor in IA-32 (scrambled) format */ - ia32_gdt_table[__USER_DS >> 3] - IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSE= T - 1) >> IA32_PAGE_SHIFT, - 0x3, 1, 3, 1, 
1, 1, 1); + ia32_gdt[__USER_DS >> 3] =3D IA32_SEG_DESCRIPTOR(0, (IA32_PAGE_OFFSET-1) = >> IA32_PAGE_SHIFT, + 0x3, 1, 3, 1, 1, 1, 1); =20 /* We never change the TSS and LDT descriptors, so we can share them acro= ss all CPUs. */ ldt_size =3D PAGE_ALIGN(IA32_LDT_ENTRIES*IA32_LDT_ENTRY_SIZE); for (nr =3D 0; nr < NR_CPUS; ++nr) { - ia32_gdt_table[_TSS(nr)] =3D IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235, - 0xb, 0, 3, 1, 1, 1, 0); - ia32_gdt_table[_LDT(nr)] =3D IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_si= ze - 1, - 0x2, 0, 3, 1, 1, 1, 0); + ia32_gdt[_TSS(nr)] =3D IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235, + 0xb, 0, 3, 1, 1, 1, 0); + ia32_gdt[_LDT(nr)] =3D IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1, + 0x2, 0, 3, 1, 1, 1, 0); } } =20 @@ -133,3 +175,18 @@ siginfo.si_code =3D TRAP_BRKPT; force_sig_info(SIGTRAP, &siginfo, current); } + +static int __init +ia32_init (void) +{ + ia32_exec_domain.name =3D "Linux/x86"; + ia32_exec_domain.handler =3D NULL; + ia32_exec_domain.pers_low =3D PER_LINUX32; + ia32_exec_domain.pers_high =3D PER_LINUX32; + ia32_exec_domain.signal_map =3D default_exec_domain.signal_map; + ia32_exec_domain.signal_invmap =3D default_exec_domain.signal_invmap; + register_exec_domain(&ia32_exec_domain); + return 0; +} + +__initcall(ia32_init); diff -urN linux-2.4.13/arch/ia64/ia32/ia32_traps.c linux-2.4.13-lia/arch/ia= 64/ia32/ia32_traps.c --- linux-2.4.13/arch/ia64/ia32/ia32_traps.c Thu Jan 4 12:50:17 2001 +++ linux-2.4.13-lia/arch/ia64/ia32/ia32_traps.c Thu Oct 4 00:21:52 2001 @@ -1,7 +1,12 @@ /* - * IA32 exceptions handler + * IA-32 exception handlers * + * Copyright (C) 2000 Asit K. Mallick + * Copyright (C) 2001 Hewlett-Packard Co + * David Mosberger-Tang +/* * 06/16/00 A. Mallick added siginfo for most cases (close to IA32) + * 09/29/00 D. 
Mosberger added ia32_intercept() */ =20 #include @@ -9,6 +14,26 @@ =20 #include #include + +int +ia32_intercept (struct pt_regs *regs, unsigned long isr) +{ + switch ((isr >> 16) & 0xff) { + case 0: /* Instruction intercept fault */ + case 3: /* Locked Data reference fault */ + case 1: /* Gate intercept trap */ + return -1; + + case 2: /* System flag trap */ + if (((isr >> 14) & 0x3) >=3D 2) { + /* MOV SS, POP SS instructions */ + ia64_psr(regs)->id =3D 1; + return 0; + } else + return -1; + } + return -1; +} =20 int ia32_exception (struct pt_regs *regs, unsigned long isr) diff -urN linux-2.4.13/arch/ia64/ia32/sys_ia32.c linux-2.4.13-lia/arch/ia64= /ia32/sys_ia32.c --- linux-2.4.13/arch/ia64/ia32/sys_ia32.c Mon Aug 20 10:18:26 2001 +++ linux-2.4.13-lia/arch/ia64/ia32/sys_ia32.c Wed Oct 10 17:39:17 2001 @@ -1,14 +1,13 @@ /* - * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Based on - * sys_sparc32 + * sys_ia32.c: Conversion between 32bit and 64bit native syscalls. Derived= from sys_sparc32.c. * * Copyright (C) 2000 VA Linux Co * Copyright (C) 2000 Don Dugger * Copyright (C) 1999 Arun Sharma * Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz) * Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu) - * Copyright (C) 2000 Hewlett-Packard Co. - * Copyright (C) 2000 David Mosberger-Tang + * Copyright (C) 2000-2001 Hewlett-Packard Co + * David Mosberger-Tang * * These routines maintain argument size conversion between 32bit and 64bit * environment. @@ -53,31 +52,56 @@ #include #include #include -#include =20 #include #include #include =20 +#define DEBUG 0 + +#if DEBUG +# define DBG(fmt...) printk(KERN_DEBUG fmt) +#else +# define DBG(fmt...) 
+#endif + #define A(__x) ((unsigned long)(__x)) #define AA(__x) ((unsigned long)(__x)) #define ROUND_UP(x,a) ((__typeof__(x))(((unsigned long)(x) + ((a) - 1)) & = ~((a) - 1))) #define NAME_OFFSET(de) ((int) ((de)->d_name - (char *) (de))) =20 +#define OFFSET4K(a) ((a) & 0xfff) +#define PAGE_START(addr) ((addr) & PAGE_MASK) +#define PAGE_OFF(addr) ((addr) & ~PAGE_MASK) + extern asmlinkage long sys_execve (char *, char **, char **, struct pt_reg= s *); extern asmlinkage long sys_mprotect (unsigned long, size_t, unsigned long); +extern asmlinkage long sys_munmap (unsigned long, size_t); +extern unsigned long arch_get_unmapped_area (struct file *, unsigned long,= unsigned long, + unsigned long, unsigned long); + +/* forward declaration: */ +asmlinkage long sys32_mprotect (unsigned int, unsigned int, int); + +/* + * Anything that modifies or inspects ia32 user virtual memory must hold t= his semaphore + * while doing so. + */ +/* XXX make per-mm: */ +static DECLARE_MUTEX(ia32_mmap_sem); =20 static int nargs (unsigned int arg, char **ap) { - int n, err, addr; + unsigned int addr; + int n, err; =20 if (!arg) return 0; =20 n =3D 0; do { - err =3D get_user(addr, (int *)A(arg)); + err =3D get_user(addr, (unsigned int *)A(arg)); if (err) return err; if (ap) @@ -94,7 +118,7 @@ int stack) { struct pt_regs *regs =3D (struct pt_regs *)&stack; - unsigned long old_map_base, old_task_size; + unsigned long old_map_base, old_task_size, tssd; char **av, **ae; int na, ne, len; long r; @@ -123,15 +147,20 @@ =20 old_map_base =3D current->thread.map_base; old_task_size =3D current->thread.task_size; + tssd =3D ia64_get_kr(IA64_KR_TSSD); =20 - /* we may be exec'ing a 64-bit process: reset map base & task-size: */ + /* we may be exec'ing a 64-bit process: reset map base, task-size, and io= -base: */ current->thread.map_base =3D DEFAULT_MAP_BASE; current->thread.task_size =3D DEFAULT_TASK_SIZE; + ia64_set_kr(IA64_KR_IO_BASE, current->thread.old_iob); + ia64_set_kr(IA64_KR_TSSD, 
current->thread.old_k1); =20 set_fs(KERNEL_DS); r =3D sys_execve(filename, av, ae, regs); if (r < 0) { - /* oops, execve failed, switch back to old map base & task-size: */ + /* oops, execve failed, switch back to old values... */ + ia64_set_kr(IA64_KR_IO_BASE, IA32_IOBASE); + ia64_set_kr(IA64_KR_TSSD, tssd); current->thread.map_base =3D old_map_base; current->thread.task_size =3D old_task_size; set_fs(USER_DS); /* establish new task-size as the address-limit */ @@ -142,30 +171,33 @@ } =20 static inline int -putstat(struct stat32 *ubuf, struct stat *kbuf) +putstat (struct stat32 *ubuf, struct stat *kbuf) { int err; =20 - err =3D put_user (kbuf->st_dev, &ubuf->st_dev); - err |=3D __put_user (kbuf->st_ino, &ubuf->st_ino); - err |=3D __put_user (kbuf->st_mode, &ubuf->st_mode); - err |=3D __put_user (kbuf->st_nlink, &ubuf->st_nlink); - err |=3D __put_user (kbuf->st_uid, &ubuf->st_uid); - err |=3D __put_user (kbuf->st_gid, &ubuf->st_gid); - err |=3D __put_user (kbuf->st_rdev, &ubuf->st_rdev); - err |=3D __put_user (kbuf->st_size, &ubuf->st_size); - err |=3D __put_user (kbuf->st_atime, &ubuf->st_atime); - err |=3D __put_user (kbuf->st_mtime, &ubuf->st_mtime); - err |=3D __put_user (kbuf->st_ctime, &ubuf->st_ctime); - err |=3D __put_user (kbuf->st_blksize, &ubuf->st_blksize); - err |=3D __put_user (kbuf->st_blocks, &ubuf->st_blocks); + if (clear_user(ubuf, sizeof(*ubuf))) + return 1; + + err =3D __put_user(kbuf->st_dev, &ubuf->st_dev); + err |=3D __put_user(kbuf->st_ino, &ubuf->st_ino); + err |=3D __put_user(kbuf->st_mode, &ubuf->st_mode); + err |=3D __put_user(kbuf->st_nlink, &ubuf->st_nlink); + err |=3D __put_user(kbuf->st_uid, &ubuf->st_uid); + err |=3D __put_user(kbuf->st_gid, &ubuf->st_gid); + err |=3D __put_user(kbuf->st_rdev, &ubuf->st_rdev); + err |=3D __put_user(kbuf->st_size, &ubuf->st_size); + err |=3D __put_user(kbuf->st_atime, &ubuf->st_atime); + err |=3D __put_user(kbuf->st_mtime, &ubuf->st_mtime); + err |=3D __put_user(kbuf->st_ctime, &ubuf->st_ctime); + 
err |=3D __put_user(kbuf->st_blksize, &ubuf->st_blksize); + err |=3D __put_user(kbuf->st_blocks, &ubuf->st_blocks); return err; } =20 -extern asmlinkage long sys_newstat(char * filename, struct stat * statbuf); +extern asmlinkage long sys_newstat (char * filename, struct stat * statbuf= ); =20 asmlinkage long -sys32_newstat(char * filename, struct stat32 *statbuf) +sys32_newstat (char *filename, struct stat32 *statbuf) { int ret; struct stat s; @@ -173,8 +205,8 @@ =20 set_fs(KERNEL_DS); ret =3D sys_newstat(filename, &s); - set_fs (old_fs); - if (putstat (statbuf, &s)) + set_fs(old_fs); + if (putstat(statbuf, &s)) return -EFAULT; return ret; } @@ -182,16 +214,16 @@ extern asmlinkage long sys_newlstat(char * filename, struct stat * statbuf= ); =20 asmlinkage long -sys32_newlstat(char * filename, struct stat32 *statbuf) +sys32_newlstat (char *filename, struct stat32 *statbuf) { - int ret; - struct stat s; mm_segment_t old_fs =3D get_fs(); + struct stat s; + int ret; =20 - set_fs (KERNEL_DS); + set_fs(KERNEL_DS); ret =3D sys_newlstat(filename, &s); - set_fs (old_fs); - if (putstat (statbuf, &s)) + set_fs(old_fs); + if (putstat(statbuf, &s)) return -EFAULT; return ret; } @@ -199,112 +231,249 @@ extern asmlinkage long sys_newfstat(unsigned int fd, struct stat * statbuf= ); =20 asmlinkage long -sys32_newfstat(unsigned int fd, struct stat32 *statbuf) +sys32_newfstat (unsigned int fd, struct stat32 *statbuf) { - int ret; - struct stat s; mm_segment_t old_fs =3D get_fs(); + struct stat s; + int ret; =20 - set_fs (KERNEL_DS); + set_fs(KERNEL_DS); ret =3D sys_newfstat(fd, &s); - set_fs (old_fs); - if (putstat (statbuf, &s)) + set_fs(old_fs); + if (putstat(statbuf, &s)) return -EFAULT; return ret; } =20 -#define OFFSET4K(a) ((a) & 0xfff) +#if PAGE_SHIFT > IA32_PAGE_SHIFT =20 -unsigned long -do_mmap_fake(struct file *file, unsigned long addr, unsigned long len, - unsigned long prot, unsigned long flags, loff_t off) + +static int +get_page_prot (unsigned long addr) +{ + struct 
vm_area_struct *vma =3D find_vma(current->mm, addr); + int prot =3D 0; + + if (!vma || vma->vm_start > addr) + return 0; + + if (vma->vm_flags & VM_READ) + prot |=3D PROT_READ; + if (vma->vm_flags & VM_WRITE) + prot |=3D PROT_WRITE; + if (vma->vm_flags & VM_EXEC) + prot |=3D PROT_EXEC; + return prot; +} + +/* + * Map a subpage by creating an anonymous page that contains the union of = the old page and + * the subpage. + */ +static unsigned long +mmap_subpage (struct file *file, unsigned long start, unsigned long end, i= nt prot, int flags, + loff_t off) { + void *page =3D (void *) get_zeroed_page(GFP_KERNEL); struct inode *inode; - void *front, *back; - unsigned long baddr; - int r; - char c; + unsigned long ret; + int old_prot =3D get_page_prot(start); =20 - if (OFFSET4K(addr) || OFFSET4K(off)) - return -EINVAL; - prot |=3D PROT_WRITE; - front =3D NULL; - back =3D NULL; - if ((baddr =3D (addr & PAGE_MASK)) !=3D addr && get_user(c, (char *)baddr= ) =3D 0) { - front =3D kmalloc(addr - baddr, GFP_KERNEL); - if (!front) - return -ENOMEM; - __copy_user(front, (void *)baddr, addr - baddr); + DBG("mmap_subpage(file=3D%p,start=3D0x%lx,end=3D0x%lx,prot=3D%x,flags=3D%= x,off=3D0x%llx)\n", + file, start, end, prot, flags, off); + + if (!page) + return -ENOMEM; + + if (old_prot) + copy_from_user(page, (void *) PAGE_START(start), PAGE_SIZE); + + down_write(¤t->mm->mmap_sem); + { + ret =3D do_mmap(0, PAGE_START(start), PAGE_SIZE, prot | PROT_WRITE, + flags | MAP_FIXED | MAP_ANONYMOUS, 0); } - if (addr && ((addr + len) & ~PAGE_MASK) && get_user(c, (char *)(addr + le= n)) =3D 0) { - back =3D kmalloc(PAGE_SIZE - ((addr + len) & ~PAGE_MASK), GFP_KERNEL); - if (!back) { - if (front) - kfree(front); - return -ENOMEM; + up_write(¤t->mm->mmap_sem); + + if (IS_ERR((void *) ret)) + goto out; + + if (old_prot) { + /* copy back the old page contents. 
*/ + if (PAGE_OFF(start)) + copy_to_user((void *) PAGE_START(start), page, PAGE_OFF(start)); + if (PAGE_OFF(end)) + copy_to_user((void *) end, page + PAGE_OFF(end), + PAGE_SIZE - PAGE_OFF(end)); + } + if (!(flags & MAP_ANONYMOUS)) { + /* read the file contents */ + inode =3D file->f_dentry->d_inode; + if (!inode->i_fop || !file->f_op->read + || ((*file->f_op->read)(file, (char *) start, end - start, &off) < 0= )) + { + ret =3D -EINVAL; + goto out; + } + } + if (!(prot & PROT_WRITE)) + ret =3D sys_mprotect(PAGE_START(start), PAGE_SIZE, prot | old_prot); + out: + free_page((unsigned long) page); + return ret; +} + +static unsigned long +emulate_mmap (struct file *file, unsigned long start, unsigned long len, i= nt prot, int flags, + loff_t off) +{ + unsigned long tmp, end, pend, pstart, ret, is_congruent, fudge =3D 0; + struct inode *inode; + loff_t poff; + + end =3D start + len; + pstart =3D PAGE_START(start); + pend =3D PAGE_ALIGN(end); + + if (flags & MAP_FIXED) { + if (start > pstart) { + if (flags & MAP_SHARED) + printk(KERN_INFO + "%s(%d): emulate_mmap() can't share head (addr=3D0x%lx)\n", + current->comm, current->pid, start); + ret =3D mmap_subpage(file, start, min(PAGE_ALIGN(start), end), prot, fl= ags, + off); + if (IS_ERR((void *) ret)) + return ret; + pstart +=3D PAGE_SIZE; + if (pstart >=3D pend) + return start; /* done */ + } + if (end < pend) { + if (flags & MAP_SHARED) + printk(KERN_INFO + "%s(%d): emulate_mmap() can't share tail (end=3D0x%lx)\n", + current->comm, current->pid, end); + ret =3D mmap_subpage(file, max(start, PAGE_START(end)), end, prot, flag= s, + (off + len) - PAGE_OFF(end)); + if (IS_ERR((void *) ret)) + return ret; + pend -=3D PAGE_SIZE; + if (pstart >=3D pend) + return start; /* done */ + } + } else { + /* + * If a start address was specified, use it if the entire rounded out ar= ea + * is available. 
+ */ + if (start && !pstart) + fudge =3D 1; /* handle case of mapping to range (0,PAGE_SIZE) */ + tmp =3D arch_get_unmapped_area(file, pstart - fudge, pend - pstart, 0, f= lags); + if (tmp !=3D pstart) { + pstart =3D tmp; + start =3D pstart + PAGE_OFF(off); /* make start congruent with off */ + end =3D start + len; + pend =3D PAGE_ALIGN(end); } - __copy_user(back, (char *)addr + len, PAGE_SIZE - ((addr + len) & ~PAGE_= MASK)); } + + poff =3D off + (pstart - start); /* note: (pstart - start) may be negativ= e */ + is_congruent =3D (flags & MAP_ANONYMOUS) || (PAGE_OFF(poff) =3D 0); + + if ((flags & MAP_SHARED) && !is_congruent) + printk(KERN_INFO "%s(%d): emulate_mmap() can't share contents of incongr= uent mmap " + "(addr=3D0x%lx,off=3D0x%llx)\n", current->comm, current->pid, sta= rt, off); + + DBG("mmap_body: mapping [0x%lx-0x%lx) %s with poff 0x%llx\n", pstart, pen= d, + is_congruent ? "congruent" : "not congruent", poff); + down_write(¤t->mm->mmap_sem); - r =3D do_mmap(0, baddr, len + (addr - baddr), prot, flags | MAP_ANONYMOUS= , 0); + { + if (!(flags & MAP_ANONYMOUS) && is_congruent) + ret =3D do_mmap(file, pstart, pend - pstart, prot, flags | MAP_FIXED, p= off); + else + ret =3D do_mmap(0, pstart, pend - pstart, + prot | ((flags & MAP_ANONYMOUS) ? 
0 : PROT_WRITE), + flags | MAP_FIXED | MAP_ANONYMOUS, 0); + } up_write(¤t->mm->mmap_sem); - if (r < 0) - return(r); - if (addr =3D 0) - addr =3D r; - if (back) { - __copy_user((char *)addr + len, back, PAGE_SIZE - ((addr + len) & ~PAGE_= MASK)); - kfree(back); - } - if (front) { - __copy_user((void *)baddr, front, addr - baddr); - kfree(front); - } - if (flags & MAP_ANONYMOUS) { - clear_user((char *)addr, len); - return(addr); + + if (IS_ERR((void *) ret)) + return ret; + + if (!is_congruent) { + /* read the file contents */ + inode =3D file->f_dentry->d_inode; + if (!inode->i_fop || !file->f_op->read + || ((*file->f_op->read)(file, (char *) pstart, pend - pstart, &poff)= < 0)) + { + sys_munmap(pstart, pend - pstart); + return -EINVAL; + } + if (!(prot & PROT_WRITE) && sys_mprotect(pstart, pend - pstart, prot) < = 0) + return EINVAL; } - if (!file) - return -EINVAL; - inode =3D file->f_dentry->d_inode; - if (!inode->i_fop) - return -EINVAL; - if (!file->f_op->read) - return -EINVAL; - r =3D file->f_op->read(file, (char *)addr, len, &off); - return (r < 0) ? 
-EINVAL : addr; + return start; } =20 -long -ia32_do_mmap (struct file *file, unsigned int addr, unsigned int len, unsi= gned int prot, - unsigned int flags, unsigned int fd, unsigned int offset) +#endif /* PAGE_SHIFT > IA32_PAGE_SHIFT */ + +static inline unsigned int +get_prot32 (unsigned int prot) { - long error =3D -EFAULT; - unsigned int poff; + if (prot & PROT_WRITE) + /* on x86, PROT_WRITE implies PROT_READ which implies PROT_EEC */ + prot |=3D PROT_READ | PROT_WRITE | PROT_EXEC; + else if (prot & (PROT_READ | PROT_EXEC)) + /* on x86, there is no distinction between PROT_READ and PROT_EXEC */ + prot |=3D (PROT_READ | PROT_EXEC); =20 - flags &=3D ~(MAP_EXECUTABLE | MAP_DENYWRITE); - prot |=3D PROT_EXEC; + return prot; +} =20 - if ((flags & MAP_FIXED) && ((addr & ~PAGE_MASK) || (offset & ~PAGE_MASK))) - error =3D do_mmap_fake(file, addr, len, prot, flags, (loff_t)offset); - else { - poff =3D offset & PAGE_MASK; - len +=3D offset - poff; +unsigned long +ia32_do_mmap (struct file *file, unsigned long addr, unsigned long len, in= t prot, int flags, + loff_t offset) +{ + DBG("ia32_do_mmap(file=3D%p,addr=3D0x%lx,len=3D0x%lx,prot=3D%x,flags=3D%x= ,offset=3D0x%llx)\n", + file, addr, len, prot, flags, offset); + + if (file && (!file->f_op || !file->f_op->mmap)) + return -ENODEV; + + len =3D IA32_PAGE_ALIGN(len); + if (len =3D 0) + return addr; + + if (len > IA32_PAGE_OFFSET || addr > IA32_PAGE_OFFSET - len) + return -EINVAL; + + if (OFFSET4K(offset)) + return -EINVAL; =20 - down_write(¤t->mm->mmap_sem); - error =3D do_mmap_pgoff(file, addr, len, prot, flags, poff >> PAGE_SHIFT= ); - up_write(¤t->mm->mmap_sem); + prot =3D get_prot32(prot); =20 - if (!IS_ERR((void *) error)) - error +=3D offset - poff; +#if PAGE_SHIFT > IA32_PAGE_SHIFT + down(&ia32_mmap_sem); + { + addr =3D emulate_mmap(file, addr, len, prot, flags, offset); } - return error; + up(&ia32_mmap_sem); +#else + down_write(¤t->mm->mmap_sem); + { + addr =3D do_mmap(file, addr, len, prot, flags, offset); + } + 
up_write(¤t->mm->mmap_sem); +#endif + DBG("ia32_do_mmap: returning 0x%lx\n", addr); + return addr; } =20 /* - * Linux/i386 didn't use to be able to handle more than - * 4 system call parameters, so these system calls used a memory - * block for parameter passing.. + * Linux/i386 didn't use to be able to handle more than 4 system call para= meters, so these + * system calls used a memory block for parameter passing.. */ =20 struct mmap_arg_struct { @@ -317,180 +486,166 @@ }; =20 asmlinkage long -sys32_mmap(struct mmap_arg_struct *arg) +sys32_mmap (struct mmap_arg_struct *arg) { struct mmap_arg_struct a; struct file *file =3D NULL; - long retval; + unsigned long addr; + int flags; =20 if (copy_from_user(&a, arg, sizeof(a))) return -EFAULT; =20 - if (PAGE_ALIGN(a.len) =3D 0) - return a.addr; + if (OFFSET4K(a.offset)) + return -EINVAL; + + flags =3D a.flags; =20 - if (!(a.flags & MAP_ANONYMOUS)) { + flags &=3D ~(MAP_EXECUTABLE | MAP_DENYWRITE); + if (!(flags & MAP_ANONYMOUS)) { file =3D fget(a.fd); if (!file) return -EBADF; } -#ifdef CONFIG_IA64_PAGE_SIZE_4KB - if ((a.offset & ~PAGE_MASK) !=3D 0) - return -EINVAL; =20 - down_write(¤t->mm->mmap_sem); - retval =3D do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags, a.offset >= > PAGE_SHIFT); - up_write(¤t->mm->mmap_sem); -#else - retval =3D ia32_do_mmap(file, a.addr, a.len, a.prot, a.flags, a.fd, a.off= set); -#endif + addr =3D ia32_do_mmap(file, a.addr, a.len, a.prot, flags, a.offset); + if (file) fput(file); - return retval; + return addr; } =20 asmlinkage long -sys32_mprotect(unsigned long start, size_t len, unsigned long prot) +sys32_mmap2 (unsigned int addr, unsigned int len, unsigned int prot, unsig= ned int flags, + unsigned int fd, unsigned int pgoff) { + struct file *file =3D NULL; + unsigned long retval; =20 -#ifdef CONFIG_IA64_PAGE_SIZE_4KB - return(sys_mprotect(start, len, prot)); -#else // CONFIG_IA64_PAGE_SIZE_4KB - if (prot =3D 0) - return(0); - len +=3D start & ~PAGE_MASK; - if ((start & ~PAGE_MASK) && 
(prot & PROT_WRITE)) - prot |=3D PROT_EXEC; - return(sys_mprotect(start & PAGE_MASK, len & PAGE_MASK, prot)); -#endif // CONFIG_IA64_PAGE_SIZE_4KB -} + flags &=3D ~(MAP_EXECUTABLE | MAP_DENYWRITE); + if (!(flags & MAP_ANONYMOUS)) { + file =3D fget(fd); + if (!file) + return -EBADF; + } =20 -asmlinkage long -sys32_pipe(int *fd) -{ - int retval; - int fds[2]; + retval =3D ia32_do_mmap(file, addr, len, prot, flags, + (unsigned long) pgoff << IA32_PAGE_SHIFT); =20 - retval =3D do_pipe(fds); - if (retval) - goto out; - if (copy_to_user(fd, fds, sizeof(fds))) - retval =3D -EFAULT; - out: + if (file) + fput(file); return retval; } =20 asmlinkage long -sys32_signal (int sig, unsigned int handler) +sys32_munmap (unsigned int start, unsigned int len) { - struct k_sigaction new_sa, old_sa; - int ret; + unsigned int end =3D start + len; + long ret; + +#if PAGE_SHIFT <=3D IA32_PAGE_SHIFT + ret =3D sys_munmap(start, end - start); +#else + if (start > end) + return -EINVAL; + + start =3D PAGE_ALIGN(start); + end =3D PAGE_START(end); + + if (start >=3D end) + return 0; + + down(&ia32_mmap_sem); + { + ret =3D sys_munmap(start, end - start); + } + up(&ia32_mmap_sem); +#endif + return ret; +} =20 - new_sa.sa.sa_handler =3D (__sighandler_t) A(handler); - new_sa.sa.sa_flags =3D SA_ONESHOT | SA_NOMASK; +#if PAGE_SHIFT > IA32_PAGE_SHIFT + +/* + * When mprotect()ing a partial page, we set the permission to the union o= f the old + * settings and the new settings. In other words, it's only possible to m= ake access to a + * partial page less restrictive. + */ +static long +mprotect_subpage (unsigned long address, int new_prot) +{ + int old_prot; =20 - ret =3D do_sigaction(sig, &new_sa, &old_sa); + if (new_prot =3D PROT_NONE) + return 0; /* optimize case where nothing changes... */ =20 - return ret ? 
ret : (unsigned long)old_sa.sa.sa_handler; + old_prot =3D get_page_prot(address); + return sys_mprotect(address, PAGE_SIZE, new_prot | old_prot); } =20 +#endif /* PAGE_SHIFT > IA32_PAGE_SHIFT */ + asmlinkage long -sys32_rt_sigaction(int sig, struct sigaction32 *act, - struct sigaction32 *oact, unsigned int sigsetsize) +sys32_mprotect (unsigned int start, unsigned int len, int prot) { - struct k_sigaction new_ka, old_ka; - int ret; - sigset32_t set32; + unsigned long end =3D start + len; +#if PAGE_SHIFT > IA32_PAGE_SHIFT + long retval =3D 0; +#endif + + prot =3D get_prot32(prot); =20 - /* XXX: Don't preclude handling different sized sigset_t's. */ - if (sigsetsize !=3D sizeof(sigset32_t)) +#if PAGE_SHIFT <=3D IA32_PAGE_SHIFT + return sys_mprotect(start, end - start, prot); +#else + if (OFFSET4K(start)) return -EINVAL; =20 - if (act) { - ret =3D get_user((long)new_ka.sa.sa_handler, &act->sa_handler); - ret |=3D __copy_from_user(&set32, &act->sa_mask, - sizeof(sigset32_t)); - switch (_NSIG_WORDS) { - case 4: new_ka.sa.sa_mask.sig[3] =3D set32.sig[6] - | (((long)set32.sig[7]) << 32); - case 3: new_ka.sa.sa_mask.sig[2] =3D set32.sig[4] - | (((long)set32.sig[5]) << 32); - case 2: new_ka.sa.sa_mask.sig[1] =3D set32.sig[2] - | (((long)set32.sig[3]) << 32); - case 1: new_ka.sa.sa_mask.sig[0] =3D set32.sig[0] - | (((long)set32.sig[1]) << 32); - } - ret |=3D __get_user(new_ka.sa.sa_flags, &act->sa_flags); + end =3D IA32_PAGE_ALIGN(end); + if (end < start) + return -EINVAL; =20 - if (ret) - return -EFAULT; - } + down(&ia32_mmap_sem); + { + if (PAGE_OFF(start)) { + /* start address is 4KB aligned but not page aligned. */ + retval =3D mprotect_subpage(PAGE_START(start), prot); + if (retval < 0) + goto out; =20 - ret =3D do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL); + start =3D PAGE_ALIGN(start); + if (start >=3D end) + goto out; /* retval is already zero... 
*/ + } =20 - if (!ret && oact) { - switch (_NSIG_WORDS) { - case 4: - set32.sig[7] =3D (old_ka.sa.sa_mask.sig[3] >> 32); - set32.sig[6] =3D old_ka.sa.sa_mask.sig[3]; - case 3: - set32.sig[5] =3D (old_ka.sa.sa_mask.sig[2] >> 32); - set32.sig[4] =3D old_ka.sa.sa_mask.sig[2]; - case 2: - set32.sig[3] =3D (old_ka.sa.sa_mask.sig[1] >> 32); - set32.sig[2] =3D old_ka.sa.sa_mask.sig[1]; - case 1: - set32.sig[1] =3D (old_ka.sa.sa_mask.sig[0] >> 32); - set32.sig[0] =3D old_ka.sa.sa_mask.sig[0]; + if (PAGE_OFF(end)) { + /* end address is 4KB aligned but not page aligned. */ + retval =3D mprotect_subpage(PAGE_START(end), prot); + if (retval < 0) + return retval; + end =3D PAGE_START(end); } - ret =3D put_user((long)old_ka.sa.sa_handler, &oact->sa_handler); - ret |=3D __copy_to_user(&oact->sa_mask, &set32, - sizeof(sigset32_t)); - ret |=3D __put_user(old_ka.sa.sa_flags, &oact->sa_flags); + retval =3D sys_mprotect(start, end - start, prot); } - - return ret; + out: + up(&ia32_mmap_sem); + return retval; +#endif } =20 - -extern asmlinkage long sys_rt_sigprocmask(int how, sigset_t *set, sigset_t= *oset, - size_t sigsetsize); - asmlinkage long -sys32_rt_sigprocmask(int how, sigset32_t *set, sigset32_t *oset, - unsigned int sigsetsize) +sys32_pipe (int *fd) { - sigset_t s; - sigset32_t s32; - int ret; - mm_segment_t old_fs =3D get_fs(); + int retval; + int fds[2]; =20 - if (set) { - if (copy_from_user (&s32, set, sizeof(sigset32_t))) - return -EFAULT; - switch (_NSIG_WORDS) { - case 4: s.sig[3] =3D s32.sig[6] | (((long)s32.sig[7]) << 32); - case 3: s.sig[2] =3D s32.sig[4] | (((long)s32.sig[5]) << 32); - case 2: s.sig[1] =3D s32.sig[2] | (((long)s32.sig[3]) << 32); - case 1: s.sig[0] =3D s32.sig[0] | (((long)s32.sig[1]) << 32); - } - } - set_fs (KERNEL_DS); - ret =3D sys_rt_sigprocmask(how, set ? &s : NULL, oset ? 
&s : NULL,
-				 sigsetsize);
-	set_fs (old_fs);
-	if (ret) return ret;
-	if (oset) {
-		switch (_NSIG_WORDS) {
-		case 4: s32.sig[7] = (s.sig[3] >> 32); s32.sig[6] = s.sig[3];
-		case 3: s32.sig[5] = (s.sig[2] >> 32); s32.sig[4] = s.sig[2];
-		case 2: s32.sig[3] = (s.sig[1] >> 32); s32.sig[2] = s.sig[1];
-		case 1: s32.sig[1] = (s.sig[0] >> 32); s32.sig[0] = s.sig[0];
-		}
-		if (copy_to_user (oset, &s32, sizeof(sigset32_t)))
-			return -EFAULT;
-	}
-	return 0;
+	retval = do_pipe(fds);
+	if (retval)
+		goto out;
+	if (copy_to_user(fd, fds, sizeof(fds)))
+		retval = -EFAULT;
+ out:
+	return retval;
 }
 
 static inline int
@@ -498,31 +653,34 @@
 {
 	int err;
 
-	err = put_user (kbuf->f_type, &ubuf->f_type);
-	err |= __put_user (kbuf->f_bsize, &ubuf->f_bsize);
-	err |= __put_user (kbuf->f_blocks, &ubuf->f_blocks);
-	err |= __put_user (kbuf->f_bfree, &ubuf->f_bfree);
-	err |= __put_user (kbuf->f_bavail, &ubuf->f_bavail);
-	err |= __put_user (kbuf->f_files, &ubuf->f_files);
-	err |= __put_user (kbuf->f_ffree, &ubuf->f_ffree);
-	err |= __put_user (kbuf->f_namelen, &ubuf->f_namelen);
-	err |= __put_user (kbuf->f_fsid.val[0], &ubuf->f_fsid.val[0]);
-	err |= __put_user (kbuf->f_fsid.val[1], &ubuf->f_fsid.val[1]);
+	if (!access_ok(VERIFY_WRITE, ubuf, sizeof(*ubuf)))
+		return -EFAULT;
+
+	err = __put_user(kbuf->f_type, &ubuf->f_type);
+	err |= __put_user(kbuf->f_bsize, &ubuf->f_bsize);
+	err |= __put_user(kbuf->f_blocks, &ubuf->f_blocks);
+	err |= __put_user(kbuf->f_bfree, &ubuf->f_bfree);
+	err |= __put_user(kbuf->f_bavail, &ubuf->f_bavail);
+	err |= __put_user(kbuf->f_files, &ubuf->f_files);
+	err |= __put_user(kbuf->f_ffree, &ubuf->f_ffree);
+	err |= __put_user(kbuf->f_namelen, &ubuf->f_namelen);
+	err |= __put_user(kbuf->f_fsid.val[0], &ubuf->f_fsid.val[0]);
+	err |= __put_user(kbuf->f_fsid.val[1], &ubuf->f_fsid.val[1]);
 	return err;
 }
 
 extern asmlinkage long sys_statfs(const char * path, struct statfs * buf);
 
 asmlinkage
long
-sys32_statfs(const char * path, struct statfs32 *buf)
+sys32_statfs (const char *path, struct statfs32 *buf)
 {
 	int ret;
 	struct statfs s;
 	mm_segment_t old_fs = get_fs();
 
-	set_fs (KERNEL_DS);
-	ret = sys_statfs((const char *)path, &s);
-	set_fs (old_fs);
+	set_fs(KERNEL_DS);
+	ret = sys_statfs(path, &s);
+	set_fs(old_fs);
 	if (put_statfs(buf, &s))
 		return -EFAULT;
 	return ret;
@@ -531,15 +689,15 @@
 extern asmlinkage long sys_fstatfs(unsigned int fd, struct statfs * buf);
 
 asmlinkage long
-sys32_fstatfs(unsigned int fd, struct statfs32 *buf)
+sys32_fstatfs (unsigned int fd, struct statfs32 *buf)
 {
 	int ret;
 	struct statfs s;
 	mm_segment_t old_fs = get_fs();
 
-	set_fs (KERNEL_DS);
+	set_fs(KERNEL_DS);
 	ret = sys_fstatfs(fd, &s);
-	set_fs (old_fs);
+	set_fs(old_fs);
 	if (put_statfs(buf, &s))
 		return -EFAULT;
 	return ret;
@@ -557,23 +715,21 @@
 };
 
 static inline long
-get_tv32(struct timeval *o, struct timeval32 *i)
+get_tv32 (struct timeval *o, struct timeval32 *i)
 {
 	return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
-		(__get_user(o->tv_sec, &i->tv_sec) |
-		 __get_user(o->tv_usec, &i->tv_usec)));
+		(__get_user(o->tv_sec, &i->tv_sec) | __get_user(o->tv_usec, &i->tv_usec)));
 }
 
 static inline long
-put_tv32(struct timeval32 *o, struct timeval *i)
+put_tv32 (struct timeval32 *o, struct timeval *i)
 {
 	return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
-		(__put_user(i->tv_sec, &o->tv_sec) |
-		 __put_user(i->tv_usec, &o->tv_usec)));
+		(__put_user(i->tv_sec, &o->tv_sec) | __put_user(i->tv_usec, &o->tv_usec)));
 }
 
 static inline long
-get_it32(struct itimerval *o, struct itimerval32 *i)
+get_it32 (struct itimerval *o, struct itimerval32 *i)
 {
 	return (!access_ok(VERIFY_READ, i, sizeof(*i)) ||
 		(__get_user(o->it_interval.tv_sec, &i->it_interval.tv_sec) |
@@ -583,7 +739,7 @@
 }
 
 static inline long
-put_it32(struct itimerval32 *o, struct itimerval *i)
+put_it32 (struct itimerval32 *o, struct itimerval *i)
 {
 	return (!access_ok(VERIFY_WRITE, o, sizeof(*o)) ||
		(__put_user(i->it_interval.tv_sec, &o->it_interval.tv_sec) |
@@ -592,10 +748,10 @@
 		 __put_user(i->it_value.tv_usec, &o->it_value.tv_usec)));
 }
 
-extern int do_getitimer(int which, struct itimerval *value);
+extern int do_getitimer (int which, struct itimerval *value);
 
 asmlinkage long
-sys32_getitimer(int which, struct itimerval32 *it)
+sys32_getitimer (int which, struct itimerval32 *it)
 {
 	struct itimerval kit;
 	int error;
@@ -607,10 +763,10 @@
 	return error;
 }
 
-extern int do_setitimer(int which, struct itimerval *, struct itimerval *);
+extern int do_setitimer (int which, struct itimerval *, struct itimerval *);
 
 asmlinkage long
-sys32_setitimer(int which, struct itimerval32 *in, struct itimerval32 *out)
+sys32_setitimer (int which, struct itimerval32 *in, struct itimerval32 *out)
 {
 	struct itimerval kin, kout;
 	int error;
@@ -630,8 +786,9 @@
 	return 0;
 
 }
+
 asmlinkage unsigned long
-sys32_alarm(unsigned int seconds)
+sys32_alarm (unsigned int seconds)
 {
 	struct itimerval it_new, it_old;
 	unsigned int oldalarm;
@@ -660,7 +817,7 @@
 extern asmlinkage long sys_gettimeofday (struct timeval *tv, struct timezone *tz);
 
 asmlinkage long
-ia32_utime(char * filename, struct utimbuf_32 *times32)
+sys32_utime (char *filename, struct utimbuf_32 *times32)
 {
 	mm_segment_t old_fs = get_fs();
 	struct timeval tv[2], *tvp;
@@ -673,20 +830,20 @@
 		if (get_user(tv[1].tv_sec, &times32->mtime))
 			return -EFAULT;
 		tv[1].tv_usec = 0;
-		set_fs (KERNEL_DS);
+		set_fs(KERNEL_DS);
 		tvp = tv;
 	} else
 		tvp = NULL;
 	ret = sys_utimes(filename, tvp);
-	set_fs (old_fs);
+	set_fs(old_fs);
 	return ret;
 }
 
 extern struct timezone sys_tz;
-extern int do_sys_settimeofday(struct timeval *tv, struct timezone *tz);
+extern int do_sys_settimeofday (struct timeval *tv, struct timezone *tz);
 
 asmlinkage long
-sys32_gettimeofday(struct timeval32 *tv, struct timezone *tz)
+sys32_gettimeofday (struct timeval32 *tv, struct timezone *tz)
 {
 	if (tv) {
 		struct timeval ktv;
@@ -702,7 +859,7 @@
 }
 
 asmlinkage long
-sys32_settimeofday(struct timeval32 *tv, struct timezone *tz)
+sys32_settimeofday (struct timeval32 *tv, struct timezone *tz)
 {
 	struct timeval ktv;
 	struct timezone ktz;
@@ -719,20 +876,6 @@
 	return do_sys_settimeofday(tv ? &ktv : NULL, tz ? &ktz : NULL);
 }
 
-struct linux32_dirent {
-	u32	d_ino;
-	u32	d_off;
-	u16	d_reclen;
-	char	d_name[1];
-};
-
-struct old_linux32_dirent {
-	u32	d_ino;
-	u32	d_offset;
-	u16	d_namlen;
-	char	d_name[1];
-};
-
 struct getdents32_callback {
 	struct linux32_dirent * current_dir;
 	struct linux32_dirent * previous;
@@ -775,7 +918,7 @@
 }
 
 asmlinkage long
-sys32_getdents (unsigned int fd, void * dirent, unsigned int count)
+sys32_getdents (unsigned int fd, struct linux32_dirent *dirent, unsigned int count)
 {
 	struct file * file;
 	struct linux32_dirent * lastdirent;
@@ -787,7 +930,7 @@
 	if (!file)
 		goto out;
 
-	buf.current_dir = (struct linux32_dirent *) dirent;
+	buf.current_dir = dirent;
 	buf.previous = NULL;
 	buf.count = count;
 	buf.error = 0;
@@ -831,7 +974,7 @@
 }
 
 asmlinkage long
-sys32_readdir (unsigned int fd, void * dirent, unsigned int count)
+sys32_readdir (unsigned int fd, void *dirent, unsigned int count)
 {
 	int error;
 	struct file * file;
@@ -866,7 +1009,7 @@
 #define ROUND_UP_TIME(x,y) (((x)+(y)-1)/(y))
 
 asmlinkage long
-sys32_select(int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
+sys32_select (int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
 {
 	fd_set_bits fds;
 	char *bits;
@@ -878,8 +1021,7 @@
 		time_t sec, usec;
 
 		ret = -EFAULT;
-		if (get_user(sec, &tvp32->tv_sec)
-		    || get_user(usec, &tvp32->tv_usec))
+		if (get_user(sec, &tvp32->tv_sec) || get_user(usec, &tvp32->tv_usec))
 			goto out_nofds;
 
 		ret = -EINVAL;
@@ -933,9 +1075,7 @@
 			usec = timeout % HZ;
 			usec *= (1000000/HZ);
 		}
-		if (put_user(sec, (int *)&tvp32->tv_sec)
-		    || put_user(usec, (int *)&tvp32->tv_usec))
-		{
+		if (put_user(sec, &tvp32->tv_sec) || put_user(usec, &tvp32->tv_usec)) {
			ret = -EFAULT;
 			goto out;
 		}
@@ -969,50 +1109,43 @@
 };
 
 asmlinkage long
-old_select(struct sel_arg_struct *arg)
+sys32_old_select (struct sel_arg_struct *arg)
 {
 	struct sel_arg_struct a;
 
 	if (copy_from_user(&a, arg, sizeof(a)))
 		return -EFAULT;
-	return sys32_select(a.n, (fd_set *)A(a.inp), (fd_set *)A(a.outp), (fd_set *)A(a.exp),
-			    (struct timeval32 *)A(a.tvp));
+	return sys32_select(a.n, (fd_set *) A(a.inp), (fd_set *) A(a.outp), (fd_set *) A(a.exp),
+			    (struct timeval32 *) A(a.tvp));
 }
 
-struct timespec32 {
-	int	tv_sec;
-	int	tv_nsec;
-};
-
-extern asmlinkage long sys_nanosleep(struct timespec *rqtp, struct timespec *rmtp);
+extern asmlinkage long sys_nanosleep (struct timespec *rqtp, struct timespec *rmtp);
 
 asmlinkage long
-sys32_nanosleep(struct timespec32 *rqtp, struct timespec32 *rmtp)
+sys32_nanosleep (struct timespec32 *rqtp, struct timespec32 *rmtp)
 {
 	struct timespec t;
 	int ret;
-	mm_segment_t old_fs = get_fs ();
+	mm_segment_t old_fs = get_fs();
 
-	if (get_user (t.tv_sec, &rqtp->tv_sec) ||
-	    __get_user (t.tv_nsec, &rqtp->tv_nsec))
+	if (get_user (t.tv_sec, &rqtp->tv_sec) || get_user (t.tv_nsec, &rqtp->tv_nsec))
 		return -EFAULT;
-	set_fs (KERNEL_DS);
+	set_fs(KERNEL_DS);
 	ret = sys_nanosleep(&t, rmtp ?
&t : NULL);
-	set_fs (old_fs);
+	set_fs(old_fs);
 	if (rmtp && ret == -EINTR) {
-		if (__put_user (t.tv_sec, &rmtp->tv_sec) ||
-		    __put_user (t.tv_nsec, &rmtp->tv_nsec))
+		if (put_user(t.tv_sec, &rmtp->tv_sec) || put_user(t.tv_nsec, &rmtp->tv_nsec))
 			return -EFAULT;
 	}
 	return ret;
 }
 
 struct iovec32 { unsigned int iov_base; int iov_len; };
-asmlinkage ssize_t sys_readv(unsigned long,const struct iovec *,unsigned long);
-asmlinkage ssize_t sys_writev(unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_readv (unsigned long,const struct iovec *,unsigned long);
+asmlinkage ssize_t sys_writev (unsigned long,const struct iovec *,unsigned long);
 
 static struct iovec *
-get_iovec32(struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
+get_iovec32 (struct iovec32 *iov32, struct iovec *iov_buf, u32 count, int type)
 {
 	int i;
 	u32 buf, len;
@@ -1022,24 +1155,23 @@
 
 	if (!count)
 		return 0;
-	if(verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
-		return(struct iovec *)0;
+	if (verify_area(VERIFY_READ, iov32, sizeof(struct iovec32)*count))
+		return NULL;
 	if (count > UIO_MAXIOV)
-		return(struct iovec *)0;
+		return NULL;
 	if (count > UIO_FASTIOV) {
 		iov = kmalloc(count*sizeof(struct iovec), GFP_KERNEL);
 		if (!iov)
-			return((struct iovec *)0);
+			return NULL;
 	} else
 		iov = iov_buf;
 
 	ivp = iov;
 	for (i = 0; i < count; i++) {
-		if (__get_user(len, &iov32->iov_len) ||
-		    __get_user(buf, &iov32->iov_base)) {
+		if (__get_user(len, &iov32->iov_len) || __get_user(buf, &iov32->iov_base)) {
 			if (iov != iov_buf)
 				kfree(iov);
-			return((struct iovec *)0);
+			return NULL;
 		}
 		if (verify_area(type, (void *)A(buf), len)) {
 			if (iov != iov_buf)
@@ -1047,22 +1179,23 @@
 			return((struct iovec *)0);
 		}
 		ivp->iov_base = (void *)A(buf);
-		ivp->iov_len = (__kernel_size_t)len;
+		ivp->iov_len = (__kernel_size_t) len;
 		iov32++;
 		ivp++;
 	}
-	return(iov);
+	return iov;
 }
 
 asmlinkage long
-sys32_readv(int fd, struct iovec32 *vector, u32
count)
+sys32_readv (int fd, struct iovec32 *vector, u32 count)
 {
 	struct iovec iovstack[UIO_FASTIOV];
 	struct iovec *iov;
-	int ret;
+	long ret;
 	mm_segment_t old_fs = get_fs();
 
-	if ((iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE)) == (struct iovec *)0)
+	iov = get_iovec32(vector, iovstack, count, VERIFY_WRITE);
+	if (!iov)
 		return -EFAULT;
 	set_fs(KERNEL_DS);
 	ret = sys_readv(fd, iov, count);
@@ -1073,14 +1206,15 @@
 }
 
 asmlinkage long
-sys32_writev(int fd, struct iovec32 *vector, u32 count)
+sys32_writev (int fd, struct iovec32 *vector, u32 count)
 {
 	struct iovec iovstack[UIO_FASTIOV];
 	struct iovec *iov;
-	int ret;
+	long ret;
 	mm_segment_t old_fs = get_fs();
 
-	if ((iov = get_iovec32(vector, iovstack, count, VERIFY_READ)) == (struct iovec *)0)
+	iov = get_iovec32(vector, iovstack, count, VERIFY_READ);
+	if (!iov)
 		return -EFAULT;
 	set_fs(KERNEL_DS);
 	ret = sys_writev(fd, iov, count);
@@ -1098,45 +1232,66 @@
 	int	rlim_max;
 };
 
-extern asmlinkage long sys_getrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_getrlimit (unsigned int resource, struct rlimit *rlim);
 
 asmlinkage long
-sys32_getrlimit(unsigned int resource, struct rlimit32 *rlim)
+sys32_old_getrlimit (unsigned int resource, struct rlimit32 *rlim)
 {
+	mm_segment_t old_fs = get_fs();
+	struct rlimit r;
+	int ret;
+
+	set_fs(KERNEL_DS);
+	ret = sys_getrlimit(resource, &r);
+	set_fs(old_fs);
+	if (!ret) {
+		ret = put_user(RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
+		ret |= put_user(RESOURCE32(r.rlim_max), &rlim->rlim_max);
+	}
+	return ret;
+}
+
+asmlinkage long
+sys32_getrlimit (unsigned int resource, struct rlimit32 *rlim)
+{
+	mm_segment_t old_fs = get_fs();
 	struct rlimit r;
 	int ret;
-	mm_segment_t old_fs = get_fs ();
 
-	set_fs (KERNEL_DS);
+	set_fs(KERNEL_DS);
 	ret = sys_getrlimit(resource, &r);
-	set_fs (old_fs);
+	set_fs(old_fs);
 	if (!ret) {
-		ret = put_user (RESOURCE32(r.rlim_cur), &rlim->rlim_cur);
-		ret |=
__put_user (RESOURCE32(r.rlim_max), &rlim->rlim_max);
+		if (r.rlim_cur >= 0xffffffff)
+			r.rlim_cur = 0xffffffff;
+		if (r.rlim_max >= 0xffffffff)
+			r.rlim_max = 0xffffffff;
+		ret = put_user(r.rlim_cur, &rlim->rlim_cur);
+		ret |= put_user(r.rlim_max, &rlim->rlim_max);
 	}
 	return ret;
 }
 
-extern asmlinkage long sys_setrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_setrlimit (unsigned int resource, struct rlimit *rlim);
 
 asmlinkage long
-sys32_setrlimit(unsigned int resource, struct rlimit32 *rlim)
+sys32_setrlimit (unsigned int resource, struct rlimit32 *rlim)
 {
 	struct rlimit r;
 	int ret;
-	mm_segment_t old_fs = get_fs ();
+	mm_segment_t old_fs = get_fs();
 
-	if (resource >= RLIM_NLIMITS) return -EINVAL;
-	if (get_user (r.rlim_cur, &rlim->rlim_cur) ||
-	    __get_user (r.rlim_max, &rlim->rlim_max))
+	if (resource >= RLIM_NLIMITS)
+		return -EINVAL;
+	if (get_user(r.rlim_cur, &rlim->rlim_cur) || get_user(r.rlim_max, &rlim->rlim_max))
 		return -EFAULT;
 	if (r.rlim_cur == RLIM_INFINITY32)
 		r.rlim_cur = RLIM_INFINITY;
 	if (r.rlim_max == RLIM_INFINITY32)
 		r.rlim_max = RLIM_INFINITY;
-	set_fs (KERNEL_DS);
+	set_fs(KERNEL_DS);
 	ret = sys_setrlimit(resource, &r);
-	set_fs (old_fs);
+	set_fs(old_fs);
 	return ret;
 }
 
@@ -1154,25 +1309,141 @@
 	unsigned msg_flags;
 };
 
-static inline int
-shape_msg(struct msghdr *mp, struct msghdr32 *mp32)
-{
-	int ret;
-	unsigned int i;
+struct cmsghdr32 {
+	__kernel_size_t32 cmsg_len;
+	int               cmsg_level;
+	int               cmsg_type;
+};
 
-	if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
-		return(-EFAULT);
-	ret = __get_user(i, &mp32->msg_name);
-	mp->msg_name = (void *)A(i);
-	ret |= __get_user(mp->msg_namelen, &mp32->msg_namelen);
-	ret |= __get_user(i, &mp32->msg_iov);
+/* Bleech...
 */
+#define __CMSG32_NXTHDR(ctl, len, cmsg, cmsglen) __cmsg32_nxthdr((ctl),(len),(cmsg),(cmsglen))
+#define CMSG32_NXTHDR(mhdr, cmsg, cmsglen) cmsg32_nxthdr((mhdr), (cmsg), (cmsglen))
+#define CMSG32_ALIGN(len) ( ((len)+sizeof(int)-1) & ~(sizeof(int)-1) )
+#define CMSG32_DATA(cmsg) \
+	((void *)((char *)(cmsg) + CMSG32_ALIGN(sizeof(struct cmsghdr32))))
+#define CMSG32_SPACE(len) \
+	(CMSG32_ALIGN(sizeof(struct cmsghdr32)) + CMSG32_ALIGN(len))
+#define CMSG32_LEN(len) (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + (len))
+#define __CMSG32_FIRSTHDR(ctl,len) \
+	((len) >= sizeof(struct cmsghdr32) ? (struct cmsghdr32 *)(ctl) : (struct cmsghdr32 *)NULL)
+#define CMSG32_FIRSTHDR(msg) __CMSG32_FIRSTHDR((msg)->msg_control, (msg)->msg_controllen)
+
+static inline struct cmsghdr32 *
+__cmsg32_nxthdr (void *ctl, __kernel_size_t size, struct cmsghdr32 *cmsg, int cmsg_len)
+{
+	struct cmsghdr32 * ptr;
+
+	ptr = (struct cmsghdr32 *)(((unsigned char *) cmsg) + CMSG32_ALIGN(cmsg_len));
+	if ((unsigned long)((char*)(ptr+1) - (char *) ctl) > size)
+		return NULL;
+	return ptr;
+}
+
+static inline struct cmsghdr32 *
+cmsg32_nxthdr (struct msghdr *msg, struct cmsghdr32 *cmsg, int cmsg_len)
+{
+	return __cmsg32_nxthdr(msg->msg_control, msg->msg_controllen, cmsg, cmsg_len);
+}
+
+static inline int
+get_msghdr32 (struct msghdr *mp, struct msghdr32 *mp32)
+{
+	int ret;
+	unsigned int i;
+
+	if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
+		return -EFAULT;
+	ret = __get_user(i, &mp32->msg_name);
+	mp->msg_name = (void *)A(i);
+	ret |= __get_user(mp->msg_namelen, &mp32->msg_namelen);
+	ret |= __get_user(i, &mp32->msg_iov);
 	mp->msg_iov = (struct iovec *)A(i);
 	ret |= __get_user(mp->msg_iovlen, &mp32->msg_iovlen);
 	ret |= __get_user(i, &mp32->msg_control);
 	mp->msg_control = (void *)A(i);
 	ret |= __get_user(mp->msg_controllen, &mp32->msg_controllen);
 	ret |= __get_user(mp->msg_flags, &mp32->msg_flags);
-	return(ret ? -EFAULT : 0);
+	return ret ?
-EFAULT : 0;
+}
+
+/*
+ * There is a lot of hair here because the alignment rules (and thus placement) of cmsg
+ * headers and length are different for 32-bit apps.  -DaveM
+ */
+static int
+get_cmsghdr32 (struct msghdr *kmsg, unsigned char *stackbuf, struct sock *sk, size_t *bufsize)
+{
+	struct cmsghdr *kcmsg, *kcmsg_base;
+	__kernel_size_t kcmlen, tmp;
+	__kernel_size_t32 ucmlen;
+	struct cmsghdr32 *ucmsg;
+	long err;
+
+	kcmlen = 0;
+	kcmsg_base = kcmsg = (struct cmsghdr *)stackbuf;
+	ucmsg = CMSG32_FIRSTHDR(kmsg);
+	while (ucmsg != NULL) {
+		if (get_user(ucmlen, &ucmsg->cmsg_len))
+			return -EFAULT;
+
+		/* Catch bogons. */
+		if (CMSG32_ALIGN(ucmlen) < CMSG32_ALIGN(sizeof(struct cmsghdr32)))
+			return -EINVAL;
+		if ((unsigned long)(((char *)ucmsg - (char *)kmsg->msg_control) + ucmlen)
+		    > kmsg->msg_controllen)
+			return -EINVAL;
+
+		tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
+		       CMSG_ALIGN(sizeof(struct cmsghdr)));
+		kcmlen += tmp;
+		ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
+	}
+	if (kcmlen == 0)
+		return -EINVAL;
+
+	/*
+	 * The kcmlen holds the 64-bit version of the control length.  It may not be
+	 * modified as we do not stick it into the kmsg until we have successfully copied
+	 * over all of the data from the user.
+	 */
+	if (kcmlen > *bufsize) {
+		*bufsize = kcmlen;
+		kcmsg_base = kcmsg = sock_kmalloc(sk, kcmlen, GFP_KERNEL);
+	}
+	if (kcmsg == NULL)
+		return -ENOBUFS;
+
+	/* Now copy them over neatly. */
+	memset(kcmsg, 0, kcmlen);
+	ucmsg = CMSG32_FIRSTHDR(kmsg);
+	while (ucmsg != NULL) {
+		err = get_user(ucmlen, &ucmsg->cmsg_len);
+		tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
+		       CMSG_ALIGN(sizeof(struct cmsghdr)));
+		kcmsg->cmsg_len = tmp;
+		err |= get_user(kcmsg->cmsg_level, &ucmsg->cmsg_level);
+		err |= get_user(kcmsg->cmsg_type, &ucmsg->cmsg_type);
+
+		/* Copy over the data.
*/
+		err |= copy_from_user(CMSG_DATA(kcmsg), CMSG32_DATA(ucmsg),
+				      (ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))));
+		if (err)
+			goto out_free_efault;
+
+		/* Advance. */
+		kcmsg = (struct cmsghdr *)((char *)kcmsg + CMSG_ALIGN(tmp));
+		ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
+	}
+
+	/* Ok, looks like we made it.  Hook it up and return success. */
+	kmsg->msg_control = kcmsg_base;
+	kmsg->msg_controllen = kcmlen;
+	return 0;
+
+out_free_efault:
+	if (kcmsg_base != (struct cmsghdr *)stackbuf)
+		sock_kfree_s(sk, kcmsg_base, kcmlen);
+	return -EFAULT;
 }
 
 /*
@@ -1187,20 +1458,17 @@
  */
 
 static inline int
-verify_iovec32(struct msghdr *m, struct iovec *iov, char *address, int mode)
+verify_iovec32 (struct msghdr *m, struct iovec *iov, char *address, int mode)
 {
 	int size, err, ct;
 	struct iovec32 *iov32;
 
-	if(m->msg_namelen)
-	{
-		if(mode==VERIFY_READ)
-		{
-			err=move_addr_to_kernel(m->msg_name, m->msg_namelen, address);
-			if(err<0)
+	if (m->msg_namelen) {
+		if (mode == VERIFY_READ) {
+			err = move_addr_to_kernel(m->msg_name, m->msg_namelen, address);
+			if (err < 0)
 				goto out;
 		}
-
 		m->msg_name = address;
 	} else
 		m->msg_name = NULL;
@@ -1209,7 +1477,7 @@
 	size = m->msg_iovlen * sizeof(struct iovec32);
 	if (copy_from_user(iov, m->msg_iov, size))
 		goto out;
-	m->msg_iov=iov;
+	m->msg_iov = iov;
 
 	err = 0;
 	iov32 = (struct iovec32 *)iov;
@@ -1222,8 +1490,188 @@
 	return err;
 }
 
-extern __inline__ void
-sockfd_put(struct socket *sock)
+static void
+put_cmsg32(struct msghdr *kmsg, int level, int type, int len, void *data)
+{
+	struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
+	struct cmsghdr32 cmhdr;
+	int cmlen = CMSG32_LEN(len);
+
+	if(cm == NULL || kmsg->msg_controllen < sizeof(*cm)) {
+		kmsg->msg_flags |= MSG_CTRUNC;
+		return;
+	}
+
+	if(kmsg->msg_controllen < cmlen) {
+		kmsg->msg_flags |= MSG_CTRUNC;
+		cmlen = kmsg->msg_controllen;
+	}
+	cmhdr.cmsg_level = level;
+	cmhdr.cmsg_type = type;
+	cmhdr.cmsg_len =
cmlen;
+
+	if(copy_to_user(cm, &cmhdr, sizeof cmhdr))
+		return;
+	if(copy_to_user(CMSG32_DATA(cm), data,
+			cmlen - sizeof(struct cmsghdr32)))
+		return;
+	cmlen = CMSG32_SPACE(len);
+	kmsg->msg_control += cmlen;
+	kmsg->msg_controllen -= cmlen;
+}
+
+static void
+scm_detach_fds32 (struct msghdr *kmsg, struct scm_cookie *scm)
+{
+	struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
+	int fdmax = (kmsg->msg_controllen - sizeof(struct cmsghdr32))
+		/ sizeof(int);
+	int fdnum = scm->fp->count;
+	struct file **fp = scm->fp->fp;
+	int *cmfptr;
+	int err = 0, i;
+
+	if (fdnum < fdmax)
+		fdmax = fdnum;
+
+	for (i = 0, cmfptr = (int *) CMSG32_DATA(cm);
+	     i < fdmax;
+	     i++, cmfptr++) {
+		int new_fd;
+		err = get_unused_fd();
+		if (err < 0)
+			break;
+		new_fd = err;
+		err = put_user(new_fd, cmfptr);
+		if (err) {
+			put_unused_fd(new_fd);
+			break;
+		}
+		/* Bump the usage count and install the file. */
+		get_file(fp[i]);
+		current->files->fd[new_fd] = fp[i];
+	}
+
+	if (i > 0) {
+		int cmlen = CMSG32_LEN(i * sizeof(int));
+		if (!err)
+			err = put_user(SOL_SOCKET, &cm->cmsg_level);
+		if (!err)
+			err = put_user(SCM_RIGHTS, &cm->cmsg_type);
+		if (!err)
+			err = put_user(cmlen, &cm->cmsg_len);
+		if (!err) {
+			cmlen = CMSG32_SPACE(i * sizeof(int));
+			kmsg->msg_control += cmlen;
+			kmsg->msg_controllen -= cmlen;
+		}
+	}
+	if (i < fdnum)
+		kmsg->msg_flags |= MSG_CTRUNC;
+
+	/*
+	 * All of the files that fit in the message have had their
+	 * usage counts incremented, so we just free the list.
+	 */
+	__scm_destroy(scm);
+}
+
+/*
+ * In these cases we (currently) can just copy to data over verbatim because all CMSGs
+ * created by the kernel have well defined types which have the same layout in both the
+ * 32-bit and 64-bit API.  One must add some special cased conversions here if we start
+ * sending control messages with incompatible types.
+ *
+ * SCM_RIGHTS and SCM_CREDENTIALS are done by hand in recvmsg32 right after
+ * we do our work.  The remaining cases are:
+ *
+ * SOL_IP	IP_PKTINFO	struct in_pktinfo	32-bit clean
+ *		IP_TTL		int			32-bit clean
+ *		IP_TOS		__u8			32-bit clean
+ *		IP_RECVOPTS	variable length		32-bit clean
+ *		IP_RETOPTS	variable length		32-bit clean
+ *		(these last two are clean because the types are defined
+ *		 by the IPv4 protocol)
+ *		IP_RECVERR	struct sock_extended_err +
+ *				struct sockaddr_in	32-bit clean
+ * SOL_IPV6	IPV6_RECVERR	struct sock_extended_err +
+ *				struct sockaddr_in6	32-bit clean
+ *		IPV6_PKTINFO	struct in6_pktinfo	32-bit clean
+ *		IPV6_HOPLIMIT	int			32-bit clean
+ *		IPV6_FLOWINFO	u32			32-bit clean
+ *		IPV6_HOPOPTS	ipv6 hop exthdr		32-bit clean
+ *		IPV6_DSTOPTS	ipv6 dst exthdr(s)	32-bit clean
+ *		IPV6_RTHDR	ipv6 routing exthdr	32-bit clean
+ *		IPV6_AUTHHDR	ipv6 auth exthdr	32-bit clean
+ */
+static void
+cmsg32_recvmsg_fixup (struct msghdr *kmsg, unsigned long orig_cmsg_uptr)
+{
+	unsigned char *workbuf, *wp;
+	unsigned long bufsz, space_avail;
+	struct cmsghdr *ucmsg;
+	long err;
+
+	bufsz = ((unsigned long)kmsg->msg_control) - orig_cmsg_uptr;
+	space_avail = kmsg->msg_controllen + bufsz;
+	wp = workbuf = kmalloc(bufsz, GFP_KERNEL);
+	if (workbuf == NULL)
+		goto fail;
+
+	/* To make this more sane we assume the kernel sends back properly
+	 * formatted control messages.  Because of how the kernel will truncate
+	 * the cmsg_len for MSG_TRUNC cases, we need not check that case either.
+	 */
+	ucmsg = (struct cmsghdr *) orig_cmsg_uptr;
+	while (((unsigned long)ucmsg) < ((unsigned long)kmsg->msg_control)) {
+		struct cmsghdr32 *kcmsg32 = (struct cmsghdr32 *) wp;
+		int clen64, clen32;
+
+		/*
+		 * UCMSG is the 64-bit format CMSG entry in user-space.  KCMSG32 is within
+		 * the kernel space temporary buffer we use to convert into a 32-bit style
+		 * CMSG.
+		 */
+		err = get_user(kcmsg32->cmsg_len, &ucmsg->cmsg_len);
+		err |= get_user(kcmsg32->cmsg_level, &ucmsg->cmsg_level);
+		err |= get_user(kcmsg32->cmsg_type, &ucmsg->cmsg_type);
+		if (err)
+			goto fail2;
+
+		clen64 = kcmsg32->cmsg_len;
+		copy_from_user(CMSG32_DATA(kcmsg32), CMSG_DATA(ucmsg),
+			       clen64 - CMSG_ALIGN(sizeof(*ucmsg)));
+		clen32 = ((clen64 - CMSG_ALIGN(sizeof(*ucmsg))) +
+			  CMSG32_ALIGN(sizeof(struct cmsghdr32)));
+		kcmsg32->cmsg_len = clen32;
+
+		ucmsg = (struct cmsghdr *) (((char *)ucmsg) + CMSG_ALIGN(clen64));
+		wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
+	}
+
+	/* Copy back fixed up data, and adjust pointers. */
+	bufsz = (wp - workbuf);
+	if (copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz))
+		goto fail2;
+
+	kmsg->msg_control = (struct cmsghdr *) (((char *)orig_cmsg_uptr) + bufsz);
+	kmsg->msg_controllen = space_avail - bufsz;
+	kfree(workbuf);
+	return;
+
+ fail2:
+	kfree(workbuf);
+ fail:
+	/*
+	 * If we leave the 64-bit format CMSG chunks in there, the application could get
+	 * confused and crash.  So to ensure greater recovery, we report no CMSGs.
+	 */
+	kmsg->msg_controllen += bufsz;
+	kmsg->msg_control = (void *) orig_cmsg_uptr;
+}
+
+static inline void
+sockfd_put (struct socket *sock)
 {
 	fput(sock->file);
 }
@@ -1234,13 +1682,14 @@
    24 for IPv6,
    about 80 for AX.25 */
 
-extern struct socket *sockfd_lookup(int fd, int *err);
+extern struct socket *sockfd_lookup (int fd, int *err);
 
 /*
  *	BSD sendmsg interface
 */
 
-int sys32_sendmsg(int fd, struct msghdr32 *msg, unsigned flags)
+int
+sys32_sendmsg (int fd, struct msghdr32 *msg, unsigned flags)
 {
 	struct socket *sock;
 	char address[MAX_SOCK_ADDR];
@@ -1248,10 +1697,11 @@
 	unsigned char ctl[sizeof(struct cmsghdr) + 20];	/* 20 is size of ipv6_pktinfo */
 	unsigned char *ctl_buf = ctl;
 	struct msghdr msg_sys;
-	int err, ctl_len, iov_size, total_len;
+	int err, iov_size, total_len;
+	size_t ctl_len;
 
 	err = -EFAULT;
-	if (shape_msg(&msg_sys, msg))
+	if (get_msghdr32(&msg_sys, msg))
 		goto out;
 
 	sock = sockfd_lookup(fd, &err);
@@ -1282,20 +1732,12 @@
 
 	if (msg_sys.msg_controllen > INT_MAX)
 		goto out_freeiov;
-	ctl_len = msg_sys.msg_controllen;
-	if (ctl_len)
-	{
-		if (ctl_len > sizeof(ctl))
-		{
-			err = -ENOBUFS;
-			ctl_buf = sock_kmalloc(sock->sk, ctl_len, GFP_KERNEL);
-			if (ctl_buf == NULL)
-				goto out_freeiov;
-		}
-		err = -EFAULT;
-		if (copy_from_user(ctl_buf, msg_sys.msg_control, ctl_len))
-			goto out_freectl;
-		msg_sys.msg_control = ctl_buf;
+	if (msg_sys.msg_controllen) {
+		ctl_len = sizeof(ctl);
+		err = get_cmsghdr32(&msg_sys, ctl_buf, sock->sk, &ctl_len);
+		if (err)
+			goto out_freeiov;
+		ctl_buf = msg_sys.msg_control;
 	}
 	msg_sys.msg_flags = flags;
 
@@ -1303,7 +1745,6 @@
 		msg_sys.msg_flags |= MSG_DONTWAIT;
 	err = sock_sendmsg(sock, &msg_sys, total_len);
 
-out_freectl:
 	if (ctl_buf != ctl)
 		sock_kfree_s(sock->sk, ctl_buf, ctl_len);
 out_freeiov:
@@ -1328,6 +1769,7 @@
 	struct msghdr msg_sys;
 	unsigned long cmsg_ptr;
 	int err, iov_size, total_len, len;
+	struct scm_cookie scm;
 
 	/* kernel mode address */
 	char
addr[MAX_SOCK_ADDR];
@@ -1336,8 +1778,8 @@
 	struct sockaddr *uaddr;
 	int *uaddr_len;
 
-	err=-EFAULT;
-	if (shape_msg(&msg_sys, msg))
+	err = -EFAULT;
+	if (get_msghdr32(&msg_sys, msg))
 		goto out;
 
 	sock = sockfd_lookup(fd, &err);
@@ -1374,13 +1816,42 @@
 
 	if (sock->file->f_flags & O_NONBLOCK)
 		flags |= MSG_DONTWAIT;
-	err = sock_recvmsg(sock, &msg_sys, total_len, flags);
-	if (err < 0)
-		goto out_freeiov;
-	len = err;
 
-	if (uaddr != NULL) {
-		err = move_addr_to_user(addr, msg_sys.msg_namelen, uaddr, uaddr_len);
+	memset(&scm, 0, sizeof(scm));
+
+	lock_kernel();
+	{
+		err = sock->ops->recvmsg(sock, &msg_sys, total_len, flags, &scm);
+		if (err < 0)
+			goto out_unlock_freeiov;
+
+		len = err;
+		if (!msg_sys.msg_control) {
+			if (sock->passcred || scm.fp)
+				msg_sys.msg_flags |= MSG_CTRUNC;
+			if (scm.fp)
+				__scm_destroy(&scm);
+		} else {
+			/*
+			 * If recvmsg processing itself placed some control messages into
+			 * user space, it is using 64-bit CMSG processing, so we need to
+			 * fix it up before we tack on more stuff.
+			 */
+			if ((unsigned long) msg_sys.msg_control != cmsg_ptr)
+				cmsg32_recvmsg_fixup(&msg_sys, cmsg_ptr);
+
+			/* Wheee...
*/
+			if (sock->passcred)
+				put_cmsg32(&msg_sys, SOL_SOCKET, SCM_CREDENTIALS,
+					   sizeof(scm.creds), &scm.creds);
+			if (scm.fp != NULL)
+				scm_detach_fds32(&msg_sys, &scm);
+		}
+	}
+	unlock_kernel();
+
+	if (uaddr != NULL) {
+		err = move_addr_to_user(addr, msg_sys.msg_namelen, uaddr, uaddr_len);
 		if (err < 0)
 			goto out_freeiov;
 	}
@@ -1393,20 +1864,23 @@
 		goto out_freeiov;
 	err = len;
 
-out_freeiov:
+ out_freeiov:
 	if (iov != iovstack)
 		sock_kfree_s(sock->sk, iov, iov_size);
-out_put:
+ out_put:
 	sockfd_put(sock);
-out:
+ out:
 	return err;
+
+ out_unlock_freeiov:
+	goto out_freeiov;
 }
 
 /* Argument list sizes for sys_socketcall */
 #define AL(x) ((x) * sizeof(u32))
-static unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
-			      AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
-			      AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
+static const unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
+				    AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
+				    AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
 #undef AL
 
 extern asmlinkage long sys_bind(int fd, struct sockaddr *umyaddr, int addrlen);
@@ -1435,7 +1909,8 @@
 extern asmlinkage long sys_shutdown(int fd, int how);
 extern asmlinkage long sys_listen(int fd, int backlog);
 
-asmlinkage long sys32_socketcall(int call, u32 *args)
+asmlinkage long
+sys32_socketcall (int call, u32 *args)
 {
 	int ret;
 	u32 a[6];
@@ -1463,16 +1938,13 @@
 		ret = sys_listen(a0, a1);
 		break;
 	case SYS_ACCEPT:
-		ret = sys_accept(a0, (struct sockaddr *)A(a1),
-				 (int *)A(a[2]));
+		ret = sys_accept(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
 		break;
 	case SYS_GETSOCKNAME:
-		ret = sys_getsockname(a0, (struct sockaddr *)A(a1),
-				      (int *)A(a[2]));
+		ret = sys_getsockname(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
 		break;
 	case SYS_GETPEERNAME:
-		ret = sys_getpeername(a0, (struct sockaddr *)A(a1),
-				      (int *)A(a[2]));
+		ret = sys_getpeername(a0, (struct sockaddr *)A(a1), (int *)A(a[2]));
 		break;
 	case SYS_SOCKETPAIR:
 		ret = sys_socketpair(a0, a1, a[2], (int *)A(a[3]));
@@
-1500,12 +1972,10 @@
 		ret = sys_getsockopt(a0, a1, a[2], a[3], a[4]);
 		break;
 	case SYS_SENDMSG:
-		ret = sys32_sendmsg(a0, (struct msghdr32 *)A(a1),
-				    a[2]);
+		ret = sys32_sendmsg(a0, (struct msghdr32 *) A(a1), a[2]);
 		break;
 	case SYS_RECVMSG:
-		ret = sys32_recvmsg(a0, (struct msghdr32 *)A(a1),
-				    a[2]);
+		ret = sys32_recvmsg(a0, (struct msghdr32 *) A(a1), a[2]);
 		break;
 	default:
 		ret = EINVAL;
@@ -1522,15 +1992,28 @@
 
 struct msgbuf32 { s32 mtype; char mtext[1]; };
 
-struct ipc_perm32
-{
-	key_t key;
-	__kernel_uid_t32 uid;
-	__kernel_gid_t32 gid;
-	__kernel_uid_t32 cuid;
-	__kernel_gid_t32 cgid;
+struct ipc_perm32 {
+	key_t key;
+	__kernel_uid_t32 uid;
+	__kernel_gid_t32 gid;
+	__kernel_uid_t32 cuid;
+	__kernel_gid_t32 cgid;
+	__kernel_mode_t32 mode;
+	unsigned short seq;
+};
+
+struct ipc64_perm32 {
+	key_t key;
+	__kernel_uid32_t32 uid;
+	__kernel_gid32_t32 gid;
+	__kernel_uid32_t32 cuid;
+	__kernel_gid32_t32 cgid;
 	__kernel_mode_t32 mode;
-	unsigned short seq;
+	unsigned short __pad1;
+	unsigned short seq;
+	unsigned short __pad2;
+	unsigned int unused1;
+	unsigned int unused2;
 };
 
 struct semid_ds32 {
@@ -1544,8 +2027,18 @@
 	unsigned short sem_nsems;	/* no.
of semaphores in array */ }; =20 -struct msqid_ds32 -{ +struct semid64_ds32 { + struct ipc64_perm32 sem_perm; + __kernel_time_t32 sem_otime; + unsigned int __unused1; + __kernel_time_t32 sem_ctime; + unsigned int __unused2; + unsigned int sem_nsems; + unsigned int __unused3; + unsigned int __unused4; +}; + +struct msqid_ds32 { struct ipc_perm32 msg_perm; u32 msg_first; u32 msg_last; @@ -1561,110 +2054,206 @@ __kernel_ipc_pid_t32 msg_lrpid; }; =20 +struct msqid64_ds32 { + struct ipc64_perm32 msg_perm; + __kernel_time_t32 msg_stime; + unsigned int __unused1; + __kernel_time_t32 msg_rtime; + unsigned int __unused2; + __kernel_time_t32 msg_ctime; + unsigned int __unused3; + unsigned int msg_cbytes; + unsigned int msg_qnum; + unsigned int msg_qbytes; + __kernel_pid_t32 msg_lspid; + __kernel_pid_t32 msg_lrpid; + unsigned int __unused4; + unsigned int __unused5; +}; + struct shmid_ds32 { - struct ipc_perm32 shm_perm; - int shm_segsz; - __kernel_time_t32 shm_atime; - __kernel_time_t32 shm_dtime; - __kernel_time_t32 shm_ctime; - __kernel_ipc_pid_t32 shm_cpid; - __kernel_ipc_pid_t32 shm_lpid; - unsigned short shm_nattch; + struct ipc_perm32 shm_perm; + int shm_segsz; + __kernel_time_t32 shm_atime; + __kernel_time_t32 shm_dtime; + __kernel_time_t32 shm_ctime; + __kernel_ipc_pid_t32 shm_cpid; + __kernel_ipc_pid_t32 shm_lpid; + unsigned short shm_nattch; +}; + +struct shmid64_ds32 { + struct ipc64_perm shm_perm; + __kernel_size_t32 shm_segsz; + __kernel_time_t32 shm_atime; + unsigned int __unused1; + __kernel_time_t32 shm_dtime; + unsigned int __unused2; + __kernel_time_t32 shm_ctime; + unsigned int __unused3; + __kernel_pid_t32 shm_cpid; + __kernel_pid_t32 shm_lpid; + unsigned int shm_nattch; + unsigned int __unused4; + unsigned int __unused5; +}; + +struct shminfo64_32 { + unsigned int shmmax; + unsigned int shmmin; + unsigned int shmmni; + unsigned int shmseg; + unsigned int shmall; + unsigned int __unused1; + unsigned int __unused2; + unsigned int __unused3; + unsigned int 
__unused4; }; =20 +struct shm_info32 { + int used_ids; + u32 shm_tot, shm_rss, shm_swp; + u32 swap_attempts, swap_successes; +}; + +struct ipc_kludge { + struct msgbuf *msgp; + long msgtyp; +}; + +#define SEMOP 1 +#define SEMGET 2 +#define SEMCTL 3 +#define MSGSND 11 +#define MSGRCV 12 +#define MSGGET 13 +#define MSGCTL 14 +#define SHMAT 21 +#define SHMDT 22 +#define SHMGET 23 +#define SHMCTL 24 + #define IPCOP_MASK(__x) (1UL << (__x)) =20 static int -do_sys32_semctl(int first, int second, int third, void *uptr) +ipc_parse_version32 (int *cmd) +{ + if (*cmd & IPC_64) { + *cmd ^=3D IPC_64; + return IPC_64; + } else { + return IPC_OLD; + } +} + +static int +semctl32 (int first, int second, int third, void *uptr) { union semun fourth; u32 pad; int err =3D 0, err2; struct semid64_ds s; - struct semid_ds32 *usp; mm_segment_t old_fs; + int version =3D ipc_parse_version32(&third); =20 if (!uptr) return -EINVAL; if (get_user(pad, (u32 *)uptr)) return -EFAULT; - if(third =3D=3D SETVAL) + if (third =3D=3D SETVAL) fourth.val =3D (int)pad; else fourth.__pad =3D (void *)A(pad); switch (third) { - - case IPC_INFO: - case IPC_RMID: - case IPC_SET: - case SEM_INFO: - case GETVAL: - case GETPID: - case GETNCNT: - case GETZCNT: - case GETALL: - case SETVAL: - case SETALL: - err =3D sys_semctl (first, second, third, fourth); + case IPC_INFO: + case IPC_RMID: + case IPC_SET: + case SEM_INFO: + case GETVAL: + case GETPID: + case GETNCNT: + case GETZCNT: + case GETALL: + case SETVAL: + case SETALL: + err =3D sys_semctl(first, second, third, fourth); break; =20 - case IPC_STAT: - case SEM_STAT: - usp =3D (struct semid_ds32 *)A(pad); + case IPC_STAT: + case SEM_STAT: fourth.__pad =3D &s; - old_fs =3D get_fs (); - set_fs (KERNEL_DS); - err =3D sys_semctl (first, second, third, fourth); - set_fs (old_fs); - err2 =3D put_user(s.sem_perm.key, &usp->sem_perm.key); - err2 |=3D __put_user(s.sem_perm.uid, &usp->sem_perm.uid); - err2 |=3D __put_user(s.sem_perm.gid, &usp->sem_perm.gid); - err2 |=3D
__put_user(s.sem_perm.cuid, - &usp->sem_perm.cuid); - err2 |=3D __put_user (s.sem_perm.cgid, - &usp->sem_perm.cgid); - err2 |=3D __put_user (s.sem_perm.mode, - &usp->sem_perm.mode); - err2 |=3D __put_user (s.sem_perm.seq, &usp->sem_perm.seq); - err2 |=3D __put_user (s.sem_otime, &usp->sem_otime); - err2 |=3D __put_user (s.sem_ctime, &usp->sem_ctime); - err2 |=3D __put_user (s.sem_nsems, &usp->sem_nsems); + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_semctl(first, second, third, fourth); + set_fs(old_fs); + + if (version =3D=3D IPC_64) { + struct semid64_ds32 *usp64 =3D (struct semid64_ds32 *) A(pad); + + if (!access_ok(VERIFY_WRITE, usp64, sizeof(*usp64))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(s.sem_perm.key, &usp64->sem_perm.key); + err2 |=3D __put_user(s.sem_perm.uid, &usp64->sem_perm.uid); + err2 |=3D __put_user(s.sem_perm.gid, &usp64->sem_perm.gid); + err2 |=3D __put_user(s.sem_perm.cuid, &usp64->sem_perm.cuid); + err2 |=3D __put_user(s.sem_perm.cgid, &usp64->sem_perm.cgid); + err2 |=3D __put_user(s.sem_perm.mode, &usp64->sem_perm.mode); + err2 |=3D __put_user(s.sem_perm.seq, &usp64->sem_perm.seq); + err2 |=3D __put_user(s.sem_otime, &usp64->sem_otime); + err2 |=3D __put_user(s.sem_ctime, &usp64->sem_ctime); + err2 |=3D __put_user(s.sem_nsems, &usp64->sem_nsems); + } else { + struct semid_ds32 *usp32 =3D (struct semid_ds32 *) A(pad); + + if (!access_ok(VERIFY_WRITE, usp32, sizeof(*usp32))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(s.sem_perm.key, &usp32->sem_perm.key); + err2 |=3D __put_user(s.sem_perm.uid, &usp32->sem_perm.uid); + err2 |=3D __put_user(s.sem_perm.gid, &usp32->sem_perm.gid); + err2 |=3D __put_user(s.sem_perm.cuid, &usp32->sem_perm.cuid); + err2 |=3D __put_user(s.sem_perm.cgid, &usp32->sem_perm.cgid); + err2 |=3D __put_user(s.sem_perm.mode, &usp32->sem_perm.mode); + err2 |=3D __put_user(s.sem_perm.seq, &usp32->sem_perm.seq); + err2 |=3D __put_user(s.sem_otime, &usp32->sem_otime); + err2 |=3D
__put_user(s.sem_ctime, &usp32->sem_ctime); + err2 |=3D __put_user(s.sem_nsems, &usp32->sem_nsems); + } if (err2) - err =3D -EFAULT; + err =3D -EFAULT; break; - } - return err; } =20 static int do_sys32_msgsnd (int first, int second, int third, void *uptr) { - struct msgbuf *p =3D kmalloc (second + sizeof (struct msgbuf) - + 4, GFP_USER); + struct msgbuf *p =3D kmalloc(second + sizeof(struct msgbuf) + 4, GFP_USER= ); struct msgbuf32 *up =3D (struct msgbuf32 *)uptr; mm_segment_t old_fs; int err; =20 if (!p) return -ENOMEM; - err =3D get_user (p->mtype, &up->mtype); - err |=3D __copy_from_user (p->mtext, &up->mtext, second); + err =3D get_user(p->mtype, &up->mtype); + err |=3D copy_from_user(p->mtext, &up->mtext, second); if (err) goto out; - old_fs =3D get_fs (); - set_fs (KERNEL_DS); - err =3D sys_msgsnd (first, p, second, third); - set_fs (old_fs); -out: - kfree (p); + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_msgsnd(first, p, second, third); + set_fs(old_fs); + out: + kfree(p); return err; } =20 static int -do_sys32_msgrcv (int first, int second, int msgtyp, int third, - int version, void *uptr) +do_sys32_msgrcv (int first, int second, int msgtyp, int third, int version= , void *uptr) { struct msgbuf32 *up; struct msgbuf *p; @@ -1679,185 +2268,281 @@ if (!uptr) goto out; err =3D -EFAULT; - if (copy_from_user (&ipck, uipck, sizeof (struct ipc_kludge))) + if (copy_from_user(&ipck, uipck, sizeof(struct ipc_kludge))) goto out; uptr =3D (void *)A(ipck.msgp); msgtyp =3D ipck.msgtyp; } err =3D -ENOMEM; - p =3D kmalloc (second + sizeof (struct msgbuf) + 4, GFP_USER); + p =3D kmalloc(second + sizeof(struct msgbuf) + 4, GFP_USER); if (!p) goto out; - old_fs =3D get_fs (); - set_fs (KERNEL_DS); - err =3D sys_msgrcv (first, p, second + 4, msgtyp, third); - set_fs (old_fs); + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_msgrcv(first, p, second + 4, msgtyp, third); + set_fs(old_fs); if (err < 0) goto free_then_out; up =3D (struct msgbuf32 *)uptr; - if 
(put_user (p->mtype, &up->mtype) || - __copy_to_user (&up->mtext, p->mtext, err)) + if (put_user(p->mtype, &up->mtype) || copy_to_user(&up->mtext, p->mtext, = err)) err =3D -EFAULT; free_then_out: - kfree (p); + kfree(p); out: return err; } =20 static int -do_sys32_msgctl (int first, int second, void *uptr) +msgctl32 (int first, int second, void *uptr) { int err =3D -EINVAL, err2; struct msqid_ds m; struct msqid64_ds m64; - struct msqid_ds32 *up =3D (struct msqid_ds32 *)uptr; + struct msqid_ds32 *up32 =3D (struct msqid_ds32 *)uptr; + struct msqid64_ds32 *up64 =3D (struct msqid64_ds32 *)uptr; mm_segment_t old_fs; + int version =3D ipc_parse_version32(&second); =20 switch (second) { - - case IPC_INFO: - case IPC_RMID: - case MSG_INFO: - err =3D sys_msgctl (first, second, (struct msqid_ds *)uptr); - break; - - case IPC_SET: - err =3D get_user (m.msg_perm.uid, &up->msg_perm.uid); - err |=3D __get_user (m.msg_perm.gid, &up->msg_perm.gid); - err |=3D __get_user (m.msg_perm.mode, &up->msg_perm.mode); - err |=3D __get_user (m.msg_qbytes, &up->msg_qbytes); + case IPC_INFO: + case IPC_RMID: + case MSG_INFO: + err =3D sys_msgctl(first, second, (struct msqid_ds *)uptr); + break; + + case IPC_SET: + if (version =3D=3D IPC_64) { + err =3D get_user(m.msg_perm.uid, &up64->msg_perm.uid); + err |=3D get_user(m.msg_perm.gid, &up64->msg_perm.gid); + err |=3D get_user(m.msg_perm.mode, &up64->msg_perm.mode); + err |=3D get_user(m.msg_qbytes, &up64->msg_qbytes); + } else { + err =3D get_user(m.msg_perm.uid, &up32->msg_perm.uid); + err |=3D get_user(m.msg_perm.gid, &up32->msg_perm.gid); + err |=3D get_user(m.msg_perm.mode, &up32->msg_perm.mode); + err |=3D get_user(m.msg_qbytes, &up32->msg_qbytes); + } if (err) break; - old_fs =3D get_fs (); - set_fs (KERNEL_DS); - err =3D sys_msgctl (first, second, &m); - set_fs (old_fs); + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_msgctl(first, second, &m); + set_fs(old_fs); break; =20 - case IPC_STAT: - case MSG_STAT: - old_fs =3D get_fs ();
- set_fs (KERNEL_DS); - err =3D sys_msgctl (first, second, (void *) &m64); - set_fs (old_fs); - err2 =3D put_user (m64.msg_perm.key, &up->msg_perm.key); - err2 |=3D __put_user(m64.msg_perm.uid, &up->msg_perm.uid); - err2 |=3D __put_user(m64.msg_perm.gid, &up->msg_perm.gid); - err2 |=3D __put_user(m64.msg_perm.cuid, &up->msg_perm.cuid); - err2 |=3D __put_user(m64.msg_perm.cgid, &up->msg_perm.cgid); - err2 |=3D __put_user(m64.msg_perm.mode, &up->msg_perm.mode); - err2 |=3D __put_user(m64.msg_perm.seq, &up->msg_perm.seq); - err2 |=3D __put_user(m64.msg_stime, &up->msg_stime); - err2 |=3D __put_user(m64.msg_rtime, &up->msg_rtime); - err2 |=3D __put_user(m64.msg_ctime, &up->msg_ctime); - err2 |=3D __put_user(m64.msg_cbytes, &up->msg_cbytes); - err2 |=3D __put_user(m64.msg_qnum, &up->msg_qnum); - err2 |=3D __put_user(m64.msg_qbytes, &up->msg_qbytes); - err2 |=3D __put_user(m64.msg_lspid, &up->msg_lspid); - err2 |=3D __put_user(m64.msg_lrpid, &up->msg_lrpid); - if (err2) - err =3D -EFAULT; - break; + case IPC_STAT: + case MSG_STAT: + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_msgctl(first, second, (void *) &m64); + set_fs(old_fs); =20 + if (version =3D=3D IPC_64) { + if (!access_ok(VERIFY_WRITE, up64, sizeof(*up64))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(m64.msg_perm.key, &up64->msg_perm.key); + err2 |=3D __put_user(m64.msg_perm.uid, &up64->msg_perm.uid); + err2 |=3D __put_user(m64.msg_perm.gid, &up64->msg_perm.gid); + err2 |=3D __put_user(m64.msg_perm.cuid, &up64->msg_perm.cuid); + err2 |=3D __put_user(m64.msg_perm.cgid, &up64->msg_perm.cgid); + err2 |=3D __put_user(m64.msg_perm.mode, &up64->msg_perm.mode); + err2 |=3D __put_user(m64.msg_perm.seq, &up64->msg_perm.seq); + err2 |=3D __put_user(m64.msg_stime, &up64->msg_stime); + err2 |=3D __put_user(m64.msg_rtime, &up64->msg_rtime); + err2 |=3D __put_user(m64.msg_ctime, &up64->msg_ctime); + err2 |=3D __put_user(m64.msg_cbytes, &up64->msg_cbytes); + err2 |=3D __put_user(m64.msg_qnum,
&up64->msg_qnum); + err2 |=3D __put_user(m64.msg_qbytes, &up64->msg_qbytes); + err2 |=3D __put_user(m64.msg_lspid, &up64->msg_lspid); + err2 |=3D __put_user(m64.msg_lrpid, &up64->msg_lrpid); + if (err2) + err =3D -EFAULT; + } else { + if (!access_ok(VERIFY_WRITE, up32, sizeof(*up32))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(m64.msg_perm.key, &up32->msg_perm.key); + err2 |=3D __put_user(m64.msg_perm.uid, &up32->msg_perm.uid); + err2 |=3D __put_user(m64.msg_perm.gid, &up32->msg_perm.gid); + err2 |=3D __put_user(m64.msg_perm.cuid, &up32->msg_perm.cuid); + err2 |=3D __put_user(m64.msg_perm.cgid, &up32->msg_perm.cgid); + err2 |=3D __put_user(m64.msg_perm.mode, &up32->msg_perm.mode); + err2 |=3D __put_user(m64.msg_perm.seq, &up32->msg_perm.seq); + err2 |=3D __put_user(m64.msg_stime, &up32->msg_stime); + err2 |=3D __put_user(m64.msg_rtime, &up32->msg_rtime); + err2 |=3D __put_user(m64.msg_ctime, &up32->msg_ctime); + err2 |=3D __put_user(m64.msg_cbytes, &up32->msg_cbytes); + err2 |=3D __put_user(m64.msg_qnum, &up32->msg_qnum); + err2 |=3D __put_user(m64.msg_qbytes, &up32->msg_qbytes); + err2 |=3D __put_user(m64.msg_lspid, &up32->msg_lspid); + err2 |=3D __put_user(m64.msg_lrpid, &up32->msg_lrpid); + if (err2) + err =3D -EFAULT; + } + break; } - return err; } =20 static int -do_sys32_shmat (int first, int second, int third, int version, void *uptr) +shmat32 (int first, int second, int third, int version, void *uptr) { unsigned long raddr; u32 *uaddr =3D (u32 *)A((u32)third); int err; =20 if (version =3D=3D 1) - return -EINVAL; - err =3D sys_shmat (first, uptr, second, &raddr); + return -EINVAL; /* iBCS2 emulator entry point: unsupported */ + err =3D sys_shmat(first, uptr, second, &raddr); if (err) return err; return put_user(raddr, uaddr); } =20 static int -do_sys32_shmctl (int first, int second, void *uptr) +shmctl32 (int first, int second, void *uptr) { int err =3D -EFAULT, err2; struct shmid_ds s; struct shmid64_ds s64; - struct shmid_ds32 *up =3D (struct
shmid_ds32 *)uptr; + struct shmid_ds32 *up32 =3D (struct shmid_ds32 *)uptr; + struct shmid64_ds32 *up64 =3D (struct shmid64_ds32 *)uptr; mm_segment_t old_fs; - struct shm_info32 { - int used_ids; - u32 shm_tot, shm_rss, shm_swp; - u32 swap_attempts, swap_successes; - } *uip =3D (struct shm_info32 *)uptr; + struct shm_info32 *uip =3D (struct shm_info32 *)uptr; struct shm_info si; + int version =3D ipc_parse_version32(&second); + struct shminfo64 smi; + struct shminfo *usi32 =3D (struct shminfo *) uptr; + struct shminfo64_32 *usi64 =3D (struct shminfo64_32 *) uptr; =20 switch (second) { + case IPC_INFO: + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_shmctl(first, second, (struct shmid_ds *)&smi); + set_fs(old_fs); + + if (version =3D=3D IPC_64) { + if (!access_ok(VERIFY_WRITE, usi64, sizeof(*usi64))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(smi.shmmax, &usi64->shmmax); + err2 |=3D __put_user(smi.shmmin, &usi64->shmmin); + err2 |=3D __put_user(smi.shmmni, &usi64->shmmni); + err2 |=3D __put_user(smi.shmseg, &usi64->shmseg); + err2 |=3D __put_user(smi.shmall, &usi64->shmall); + } else { + if (!access_ok(VERIFY_WRITE, usi32, sizeof(*usi32))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(smi.shmmax, &usi32->shmmax); + err2 |=3D __put_user(smi.shmmin, &usi32->shmmin); + err2 |=3D __put_user(smi.shmmni, &usi32->shmmni); + err2 |=3D __put_user(smi.shmseg, &usi32->shmseg); + err2 |=3D __put_user(smi.shmall, &usi32->shmall); + } + if (err2) + err =3D -EFAULT; + break; =20 - case IPC_INFO: - case IPC_RMID: - case SHM_LOCK: - case SHM_UNLOCK: - err =3D sys_shmctl (first, second, (struct shmid_ds *)uptr); + case IPC_RMID: + case SHM_LOCK: + case SHM_UNLOCK: + err =3D sys_shmctl(first, second, (struct shmid_ds *)uptr); break; - case IPC_SET: - err =3D get_user (s.shm_perm.uid, &up->shm_perm.uid); - err |=3D __get_user (s.shm_perm.gid, &up->shm_perm.gid); - err |=3D __get_user (s.shm_perm.mode, &up->shm_perm.mode); + + case IPC_SET: + if (version
=3D=3D IPC_64) { + err =3D get_user(s.shm_perm.uid, &up64->shm_perm.uid); + err |=3D get_user(s.shm_perm.gid, &up64->shm_perm.gid); + err |=3D get_user(s.shm_perm.mode, &up64->shm_perm.mode); + } else { + err =3D get_user(s.shm_perm.uid, &up32->shm_perm.uid); + err |=3D get_user(s.shm_perm.gid, &up32->shm_perm.gid); + err |=3D get_user(s.shm_perm.mode, &up32->shm_perm.mode); + } if (err) break; - old_fs =3D get_fs (); - set_fs (KERNEL_DS); - err =3D sys_shmctl (first, second, &s); - set_fs (old_fs); + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_shmctl(first, second, &s); + set_fs(old_fs); break; =20 - case IPC_STAT: - case SHM_STAT: - old_fs =3D get_fs (); - set_fs (KERNEL_DS); - err =3D sys_shmctl (first, second, (void *) &s64); - set_fs (old_fs); + case IPC_STAT: + case SHM_STAT: + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_shmctl(first, second, (void *) &s64); + set_fs(old_fs); if (err < 0) break; - err2 =3D put_user (s64.shm_perm.key, &up->shm_perm.key); - err2 |=3D __put_user (s64.shm_perm.uid, &up->shm_perm.uid); - err2 |=3D __put_user (s64.shm_perm.gid, &up->shm_perm.gid); - err2 |=3D __put_user (s64.shm_perm.cuid, - &up->shm_perm.cuid); - err2 |=3D __put_user (s64.shm_perm.cgid, - &up->shm_perm.cgid); - err2 |=3D __put_user (s64.shm_perm.mode, - &up->shm_perm.mode); - err2 |=3D __put_user (s64.shm_perm.seq, &up->shm_perm.seq); - err2 |=3D __put_user (s64.shm_atime, &up->shm_atime); - err2 |=3D __put_user (s64.shm_dtime, &up->shm_dtime); - err2 |=3D __put_user (s64.shm_ctime, &up->shm_ctime); - err2 |=3D __put_user (s64.shm_segsz, &up->shm_segsz); - err2 |=3D __put_user (s64.shm_nattch, &up->shm_nattch); - err2 |=3D __put_user (s64.shm_cpid, &up->shm_cpid); - err2 |=3D __put_user (s64.shm_lpid, &up->shm_lpid); + if (version =3D=3D IPC_64) { + if (!access_ok(VERIFY_WRITE, up64, sizeof(*up64))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(s64.shm_perm.key, &up64->shm_perm.key); + err2 |=3D __put_user(s64.shm_perm.uid,
&up64->shm_perm.uid); + err2 |=3D __put_user(s64.shm_perm.gid, &up64->shm_perm.gid); + err2 |=3D __put_user(s64.shm_perm.cuid, &up64->shm_perm.cuid); + err2 |=3D __put_user(s64.shm_perm.cgid, &up64->shm_perm.cgid); + err2 |=3D __put_user(s64.shm_perm.mode, &up64->shm_perm.mode); + err2 |=3D __put_user(s64.shm_perm.seq, &up64->shm_perm.seq); + err2 |=3D __put_user(s64.shm_atime, &up64->shm_atime); + err2 |=3D __put_user(s64.shm_dtime, &up64->shm_dtime); + err2 |=3D __put_user(s64.shm_ctime, &up64->shm_ctime); + err2 |=3D __put_user(s64.shm_segsz, &up64->shm_segsz); + err2 |=3D __put_user(s64.shm_nattch, &up64->shm_nattch); + err2 |=3D __put_user(s64.shm_cpid, &up64->shm_cpid); + err2 |=3D __put_user(s64.shm_lpid, &up64->shm_lpid); + } else { + if (!access_ok(VERIFY_WRITE, up32, sizeof(*up32))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(s64.shm_perm.key, &up32->shm_perm.key); + err2 |=3D __put_user(s64.shm_perm.uid, &up32->shm_perm.uid); + err2 |=3D __put_user(s64.shm_perm.gid, &up32->shm_perm.gid); + err2 |=3D __put_user(s64.shm_perm.cuid, &up32->shm_perm.cuid); + err2 |=3D __put_user(s64.shm_perm.cgid, &up32->shm_perm.cgid); + err2 |=3D __put_user(s64.shm_perm.mode, &up32->shm_perm.mode); + err2 |=3D __put_user(s64.shm_perm.seq, &up32->shm_perm.seq); + err2 |=3D __put_user(s64.shm_atime, &up32->shm_atime); + err2 |=3D __put_user(s64.shm_dtime, &up32->shm_dtime); + err2 |=3D __put_user(s64.shm_ctime, &up32->shm_ctime); + err2 |=3D __put_user(s64.shm_segsz, &up32->shm_segsz); + err2 |=3D __put_user(s64.shm_nattch, &up32->shm_nattch); + err2 |=3D __put_user(s64.shm_cpid, &up32->shm_cpid); + err2 |=3D __put_user(s64.shm_lpid, &up32->shm_lpid); + } if (err2) err =3D -EFAULT; break; =20 - case SHM_INFO: - old_fs =3D get_fs (); - set_fs (KERNEL_DS); - err =3D sys_shmctl (first, second, (void *)&si); - set_fs (old_fs); + case SHM_INFO: + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + err =3D sys_shmctl(first, second, (void *)&si); + set_fs(old_fs); if (err < 0) 
break; - err2 =3D put_user (si.used_ids, &uip->used_ids); - err2 |=3D __put_user (si.shm_tot, &uip->shm_tot); - err2 |=3D __put_user (si.shm_rss, &uip->shm_rss); - err2 |=3D __put_user (si.shm_swp, &uip->shm_swp); - err2 |=3D __put_user (si.swap_attempts, - &uip->swap_attempts); - err2 |=3D __put_user (si.swap_successes, - &uip->swap_successes); + + if (!access_ok(VERIFY_WRITE, uip, sizeof(*uip))) { + err =3D -EFAULT; + break; + } + err2 =3D __put_user(si.used_ids, &uip->used_ids); + err2 |=3D __put_user(si.shm_tot, &uip->shm_tot); + err2 |=3D __put_user(si.shm_rss, &uip->shm_rss); + err2 |=3D __put_user(si.shm_swp, &uip->shm_swp); + err2 |=3D __put_user(si.swap_attempts, &uip->swap_attempts); + err2 |=3D __put_user(si.swap_successes, &uip->swap_successes); if (err2) err =3D -EFAULT; break; @@ -1869,59 +2554,42 @@ asmlinkage long sys32_ipc (u32 call, int first, int second, int third, u32 ptr, u32 fifth) { - int version, err; + int version; =20 version =3D call >> 16; /* hack for backward compatibility */ call &=3D 0xffff; =20 switch (call) { - - case SEMOP: + case SEMOP: /* struct sembuf is the same on 32 and 64bit :)) */ - err =3D sys_semop (first, (struct sembuf *)AA(ptr), - second); - break; - case SEMGET: - err =3D sys_semget (first, second, third); - break; - case SEMCTL: - err =3D do_sys32_semctl (first, second, third, - (void *)AA(ptr)); - break; - - case MSGSND: - err =3D do_sys32_msgsnd (first, second, third, - (void *)AA(ptr)); - break; - case MSGRCV: - err =3D do_sys32_msgrcv (first, second, fifth, third, - version, (void *)AA(ptr)); - break; - case MSGGET: - err =3D sys_msgget ((key_t) first, second); - break; - case MSGCTL: - err =3D do_sys32_msgctl (first, second, (void *)AA(ptr)); - break; + return sys_semop(first, (struct sembuf *)AA(ptr), second); + case SEMGET: + return sys_semget(first, second, third); + case SEMCTL: + return semctl32(first, second, third, (void *)AA(ptr)); + + case MSGSND: + return do_sys32_msgsnd(first, second, third, (void 
*)AA(ptr)); + case MSGRCV: + return do_sys32_msgrcv(first, second, fifth, third, version, (void *)AA(= ptr)); + case MSGGET: + return sys_msgget((key_t) first, second); + case MSGCTL: + return msgctl32(first, second, (void *)AA(ptr)); + + case SHMAT: + return shmat32(first, second, third, version, (void *)AA(ptr)); + break; + case SHMDT: + return sys_shmdt((char *)AA(ptr)); + case SHMGET: + return sys_shmget(first, second, third); + case SHMCTL: + return shmctl32(first, second, (void *)AA(ptr)); =20 - case SHMAT: - err =3D do_sys32_shmat (first, second, third, version, (void *)AA(ptr)); - break; - case SHMDT: - err =3D sys_shmdt ((char *)AA(ptr)); - break; - case SHMGET: - err =3D sys_shmget (first, second, third); - break; - case SHMCTL: - err =3D do_sys32_shmctl (first, second, (void *)AA(ptr)); - break; - default: - err =3D -EINVAL; - break; + default: + return -EINVAL; } - - return err; } =20 /* @@ -1929,7 +2597,8 @@ * sys_gettimeofday(). IA64 did this but i386 Linux did not * so we have to implement this system call here. */ -asmlinkage long sys32_time(int * tloc) +asmlinkage long +sys32_time (int *tloc) { int i; =20 @@ -1937,7 +2606,7 @@ stuff it to user space. 
No side effects */ i =3D CURRENT_TIME; if (tloc) { - if (put_user(i,tloc)) + if (put_user(i, tloc)) i =3D -EFAULT; } return i; @@ -1967,7 +2636,10 @@ { int err; =20 - err =3D put_user (r->ru_utime.tv_sec, &ru->ru_utime.tv_sec); + if (!access_ok(VERIFY_WRITE, ru, sizeof(*ru))) + return -EFAULT; + + err =3D __put_user (r->ru_utime.tv_sec, &ru->ru_utime.tv_sec); err |=3D __put_user (r->ru_utime.tv_usec, &ru->ru_utime.tv_usec); err |=3D __put_user (r->ru_stime.tv_sec, &ru->ru_stime.tv_sec); err |=3D __put_user (r->ru_stime.tv_usec, &ru->ru_stime.tv_usec); @@ -1989,8 +2661,7 @@ } =20 asmlinkage long -sys32_wait4(__kernel_pid_t32 pid, unsigned int *stat_addr, int options, - struct rusage32 *ru) +sys32_wait4 (int pid, unsigned int *stat_addr, int options, struct rusage3= 2 *ru) { if (!ru) return sys_wait4(pid, stat_addr, options, NULL); @@ -2000,37 +2671,38 @@ unsigned int status; mm_segment_t old_fs =3D get_fs(); =20 - set_fs (KERNEL_DS); + set_fs(KERNEL_DS); ret =3D sys_wait4(pid, stat_addr ? 
&status : NULL, options, &r); - set_fs (old_fs); - if (put_rusage (ru, &r)) return -EFAULT; - if (stat_addr && put_user (status, stat_addr)) + set_fs(old_fs); + if (put_rusage(ru, &r)) + return -EFAULT; + if (stat_addr && put_user(status, stat_addr)) return -EFAULT; return ret; } } =20 asmlinkage long -sys32_waitpid(__kernel_pid_t32 pid, unsigned int *stat_addr, int options) +sys32_waitpid (int pid, unsigned int *stat_addr, int options) { return sys32_wait4(pid, stat_addr, options, NULL); } =20 =20 -extern asmlinkage long -sys_getrusage(int who, struct rusage *ru); +extern asmlinkage long sys_getrusage (int who, struct rusage *ru); =20 asmlinkage long -sys32_getrusage(int who, struct rusage32 *ru) +sys32_getrusage (int who, struct rusage32 *ru) { struct rusage r; int ret; mm_segment_t old_fs =3D get_fs(); =20 - set_fs (KERNEL_DS); + set_fs(KERNEL_DS); ret =3D sys_getrusage(who, &r); - set_fs (old_fs); - if (put_rusage (ru, &r)) return -EFAULT; + set_fs(old_fs); + if (put_rusage (ru, &r)) + return -EFAULT; return ret; } =20 @@ -2041,41 +2713,41 @@ __kernel_clock_t32 tms_cstime; }; =20 -extern asmlinkage long sys_times(struct tms * tbuf); +extern asmlinkage long sys_times (struct tms * tbuf); =20 asmlinkage long -sys32_times(struct tms32 *tbuf) +sys32_times (struct tms32 *tbuf) { + mm_segment_t old_fs =3D get_fs(); struct tms t; long ret; - mm_segment_t old_fs =3D get_fs (); int err; =20 - set_fs (KERNEL_DS); + set_fs(KERNEL_DS); ret =3D sys_times(tbuf ? 
&t : NULL); - set_fs (old_fs); + set_fs(old_fs); if (tbuf) { err =3D put_user (IA32_TICK(t.tms_utime), &tbuf->tms_utime); - err |=3D __put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime); - err |=3D __put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime); - err |=3D __put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime); + err |=3D put_user (IA32_TICK(t.tms_stime), &tbuf->tms_stime); + err |=3D put_user (IA32_TICK(t.tms_cutime), &tbuf->tms_cutime); + err |=3D put_user (IA32_TICK(t.tms_cstime), &tbuf->tms_cstime); if (err) ret =3D -EFAULT; } return IA32_TICK(ret); } =20 -unsigned int +static unsigned int ia32_peek (struct pt_regs *regs, struct task_struct *child, unsigned long = addr, unsigned int *val) { size_t copied; unsigned int ret; =20 copied =3D access_process_vm(child, addr, val, sizeof(*val), 0); - return(copied !=3D sizeof(ret) ? -EIO : 0); + return (copied !=3D sizeof(ret)) ? -EIO : 0; } =20 -unsigned int +static unsigned int ia32_poke (struct pt_regs *regs, struct task_struct *child, unsigned long = addr, unsigned int val) { =20 @@ -2105,135 +2777,87 @@ #define PT_UESP 15 #define PT_SS 16 =20 -unsigned int -getreg(struct task_struct *child, int regno) +static unsigned int +getreg (struct task_struct *child, int regno) { struct pt_regs *child_regs; =20 child_regs =3D ia64_task_regs(child); switch (regno / sizeof(int)) { - - case PT_EBX: - return(child_regs->r11); - case PT_ECX: - return(child_regs->r9); - case PT_EDX: - return(child_regs->r10); - case PT_ESI: - return(child_regs->r14); - case PT_EDI: - return(child_regs->r15); - case PT_EBP: - return(child_regs->r13); - case PT_EAX: - case PT_ORIG_EAX: - return(child_regs->r8); - case PT_EIP: - return(child_regs->cr_iip); - case PT_UESP: - return(child_regs->r12); - case PT_EFL: - return(child->thread.eflag); - case PT_DS: - case PT_ES: - case PT_FS: - case PT_GS: - case PT_SS: - return((unsigned int)__USER_DS); - case PT_CS: - return((unsigned int)__USER_CS); - default: - printk(KERN_ERR 
"getregs:unknown register %d\n", regno); + case PT_EBX: return child_regs->r11; + case PT_ECX: return child_regs->r9; + case PT_EDX: return child_regs->r10; + case PT_ESI: return child_regs->r14; + case PT_EDI: return child_regs->r15; + case PT_EBP: return child_regs->r13; + case PT_EAX: return child_regs->r8; + case PT_ORIG_EAX: return child_regs->r1; /* see dispatch_to_ia32_ha= ndler() */ + case PT_EIP: return child_regs->cr_iip; + case PT_UESP: return child_regs->r12; + case PT_EFL: return child->thread.eflag; + case PT_DS: case PT_ES: case PT_FS: case PT_GS: case PT_SS: + return __USER_DS; + case PT_CS: return __USER_CS; + default: + printk(KERN_ERR "ia32.getreg(): unknown register %d\n", regno); break; - } - return(0); + return 0; } =20 -void -putreg(struct task_struct *child, int regno, unsigned int value) +static void +putreg (struct task_struct *child, int regno, unsigned int value) { struct pt_regs *child_regs; =20 child_regs =3D ia64_task_regs(child); switch (regno / sizeof(int)) { - - case PT_EBX: - child_regs->r11 =3D value; - break; - case PT_ECX: - child_regs->r9 =3D value; - break; - case PT_EDX: - child_regs->r10 =3D value; - break; - case PT_ESI: - child_regs->r14 =3D value; - break; - case PT_EDI: - child_regs->r15 =3D value; - break; - case PT_EBP: - child_regs->r13 =3D value; - break; - case PT_EAX: - case PT_ORIG_EAX: - child_regs->r8 =3D value; - break; - case PT_EIP: - child_regs->cr_iip =3D value; - break; - case PT_UESP: - child_regs->r12 =3D value; - break; - case PT_EFL: - child->thread.eflag =3D value; - break; - case PT_DS: - case PT_ES: - case PT_FS: - case PT_GS: - case PT_SS: + case PT_EBX: child_regs->r11 =3D value; break; + case PT_ECX: child_regs->r9 =3D value; break; + case PT_EDX: child_regs->r10 =3D value; break; + case PT_ESI: child_regs->r14 =3D value; break; + case PT_EDI: child_regs->r15 =3D value; break; + case PT_EBP: child_regs->r13 =3D value; break; + case PT_EAX: child_regs->r8 =3D value; break; + case PT_ORIG_EAX: 
child_regs->r1 =3D value; break; + case PT_EIP: child_regs->cr_iip =3D value; break; + case PT_UESP: child_regs->r12 =3D value; break; + case PT_EFL: child->thread.eflag =3D value; break; + case PT_DS: case PT_ES: case PT_FS: case PT_GS: case PT_SS: if (value !=3D __USER_DS) - printk(KERN_ERR "setregs:try to set invalid segment register %d =3D %x\= n", + printk(KERN_ERR + "ia32.putreg: attempt to set invalid segment register %d =3D %x\= n", regno, value); break; - case PT_CS: + case PT_CS: if (value !=3D __USER_CS) - printk(KERN_ERR "setregs:try to set invalid segment register %d =3D %x\= n", + printk(KERN_ERR + "ia32.putreg: attempt to set invalid segment register %d =3D = %x\n", regno, value); break; - default: - printk(KERN_ERR "getregs:unknown register %d\n", regno); + default: + printk(KERN_ERR "ia32.putreg: unknown register %d\n", regno); break; - } } =20 static inline void -ia32f2ia64f(void *dst, void *src) +ia32f2ia64f (void *dst, void *src) { - - __asm__ ("ldfe f6=3D[%1] ;;\n\t" - "stf.spill [%0]=3Df6" - : - : "r"(dst), "r"(src)); + asm volatile ("ldfe f6=3D[%1];; stf.spill [%0]=3Df6" :: "r"(dst), "r"(src) = : "memory"); return; } =20 static inline void -ia64f2ia32f(void *dst, void *src) +ia64f2ia32f (void *dst, void *src) { - - __asm__ ("ldf.fill f6=3D[%1] ;;\n\t" - "stfe [%0]=3Df6" - : - : "r"(dst), "r"(src)); + asm volatile ("ldf.fill f6=3D[%1];; stfe [%0]=3Df6" :: "r"(dst), "r"(src) = : "memory"); return; } =20 -void -put_fpreg(int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct = switch_stack *swp, int tos) +static void +put_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct= switch_stack *swp, + int tos) { struct _fpreg_ia32 *f; char buf[32]; @@ -2242,62 +2866,59 @@ if ((regno +=3D tos) >=3D 8) regno -=3D 8; switch (regno) { - - case 0: + case 0: ia64f2ia32f(f, &ptp->f8); break; - case 1: + case 1: ia64f2ia32f(f, &ptp->f9); break; - case 2: - case 3: - case 4: +
case 5: + case 6: + case 7: ia64f2ia32f(f, &swp->f10 + (regno - 2)); break; - } - __copy_to_user(reg, f, sizeof(*reg)); + copy_to_user(reg, f, sizeof(*reg)); } =20 -void -get_fpreg(int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct = switch_stack *swp, int tos) +static void +get_fpreg (int regno, struct _fpreg_ia32 *reg, struct pt_regs *ptp, struct= switch_stack *swp, + int tos) { =20 if ((regno +=3D tos) >=3D 8) regno -=3D 8; switch (regno) { - - case 0: - __copy_from_user(&ptp->f8, reg, sizeof(*reg)); + case 0: + copy_from_user(&ptp->f8, reg, sizeof(*reg)); break; - case 1: - __copy_from_user(&ptp->f9, reg, sizeof(*reg)); + case 1: + copy_from_user(&ptp->f9, reg, sizeof(*reg)); break; - case 2: - case 3: - case 4: - case 5: - case 6: - case 7: - __copy_from_user(&swp->f10 + (regno - 2), reg, sizeof(*reg)); + case 2: + case 3: + case 4: + case 5: + case 6: + case 7: + copy_from_user(&swp->f10 + (regno - 2), reg, sizeof(*reg)); break; - } return; } =20 -int -save_ia32_fpstate(struct task_struct *tsk, struct _fpstate_ia32 *save) +static int +save_ia32_fpstate (struct task_struct *tsk, struct _fpstate_ia32 *save) { struct switch_stack *swp; struct pt_regs *ptp; int i, tos; =20 if (!access_ok(VERIFY_WRITE, save, sizeof(*save))) - return(-EIO); + return -EIO; __put_user(tsk->thread.fcr, &save->cw); __put_user(tsk->thread.fsr, &save->sw); __put_user(tsk->thread.fsr >> 32, &save->tag); @@ -2313,11 +2934,11 @@ tos =3D (tsk->thread.fsr >> 11) & 3; for (i =3D 0; i < 8; i++) put_fpreg(i, &save->_st[i], ptp, swp, tos); - return(0); + return 0; } =20 -int -restore_ia32_fpstate(struct task_struct *tsk, struct _fpstate_ia32 *save) +static int +restore_ia32_fpstate (struct task_struct *tsk, struct _fpstate_ia32 *save) { struct switch_stack *swp; struct pt_regs *ptp; @@ -2340,10 +2961,11 @@ tos =3D (tsk->thread.fsr >> 11) & 3; for (i =3D 0; i < 8; i++) get_fpreg(i, &save->_st[i], ptp, swp, tos); - return(ret ? -EFAULT : 0); + return ret ? 
-EFAULT : 0; } =20 -asmlinkage long sys_ptrace(long, pid_t, unsigned long, unsigned long, long= , long, long, long, long); +extern asmlinkage long sys_ptrace (long, pid_t, unsigned long, unsigned lo= ng, long, long, long, + long, long); =20 /* * Note that the IA32 version of `ptrace' calls the IA64 routine for @@ -2358,13 +2980,12 @@ { struct pt_regs *regs =3D (struct pt_regs *) &stack; struct task_struct *child; + unsigned int value, tmp; long i, ret; - unsigned int value; =20 lock_kernel(); if (request =3D PTRACE_TRACEME) { - ret =3D sys_ptrace(request, pid, addr, data, - arg4, arg5, arg6, arg7, stack); + ret =3D sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, sta= ck); goto out; } =20 @@ -2379,8 +3000,7 @@ goto out; =20 if (request =3D PTRACE_ATTACH) { - ret =3D sys_ptrace(request, pid, addr, data, - arg4, arg5, arg6, arg7, stack); + ret =3D sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, sta= ck); goto out; } ret =3D -ESRCH; @@ -2398,21 +3018,32 @@ case PTRACE_PEEKDATA: /* read word at location addr */ ret =3D ia32_peek(regs, child, addr, &value); if (ret =3D 0) - ret =3D put_user(value, (unsigned int *)A(data)); + ret =3D put_user(value, (unsigned int *) A(data)); else ret =3D -EIO; goto out; =20 case PTRACE_POKETEXT: case PTRACE_POKEDATA: /* write the word at location addr */ - ret =3D ia32_poke(regs, child, addr, (unsigned int)data); + ret =3D ia32_poke(regs, child, addr, data); goto out; =20 case PTRACE_PEEKUSR: /* read word at addr in USER area */ - ret =3D 0; + ret =3D -EIO; + if ((addr & 3) || addr > 17*sizeof(int)) + break; + + tmp =3D getreg(child, addr); + if (!put_user(tmp, (unsigned int *) A(data))) + ret =3D 0; break; =20 case PTRACE_POKEUSR: /* write word at addr in USER area */ + ret =3D -EIO; + if ((addr & 3) || addr > 17*sizeof(int)) + break; + + putreg(child, addr, data); ret =3D 0; break; =20 @@ -2421,28 +3052,25 @@ ret =3D -EIO; break; } - for ( i =3D 0; i < 17*sizeof(int); i +=3D sizeof(int) ) { - 
__put_user(getreg(child, i), (unsigned int *) A(data)); + for (i =3D 0; i < 17*sizeof(int); i +=3D sizeof(int) ) { + put_user(getreg(child, i), (unsigned int *) A(data)); data +=3D sizeof(int); } ret =3D 0; break; =20 case IA32_PTRACE_SETREGS: - { - unsigned int tmp; if (!access_ok(VERIFY_READ, (int *) A(data), 17*sizeof(int))) { ret =3D -EIO; break; } - for ( i =3D 0; i < 17*sizeof(int); i +=3D sizeof(int) ) { - __get_user(tmp, (unsigned int *) A(data)); + for (i =3D 0; i < 17*sizeof(int); i +=3D sizeof(int) ) { + get_user(tmp, (unsigned int *) A(data)); putreg(child, i, tmp); data +=3D sizeof(int); } ret =3D 0; break; - } =20 case IA32_PTRACE_GETFPREGS: ret =3D save_ia32_fpstate(child, (struct _fpstate_ia32 *) A(data)); @@ -2457,10 +3085,8 @@ case PTRACE_KILL: case PTRACE_SINGLESTEP: /* execute chile for one instruction */ case PTRACE_DETACH: /* detach a process */ - unlock_kernel(); - ret =3D sys_ptrace(request, pid, addr, data, - arg4, arg5, arg6, arg7, stack); - return(ret); + ret =3D sys_ptrace(request, pid, addr, data, arg4, arg5, arg6, arg7, sta= ck); + break; =20 default: ret =3D -EIO; @@ -2477,7 +3103,10 @@ { int err; =20 - err =3D get_user(kfl->l_type, &ufl->l_type); + if (!access_ok(VERIFY_READ, ufl, sizeof(*ufl))) + return -EFAULT; + + err =3D __get_user(kfl->l_type, &ufl->l_type); err |=3D __get_user(kfl->l_whence, &ufl->l_whence); err |=3D __get_user(kfl->l_start, &ufl->l_start); err |=3D __get_user(kfl->l_len, &ufl->l_len); @@ -2490,6 +3119,9 @@ { int err; =20 + if (!access_ok(VERIFY_WRITE, ufl, sizeof(*ufl))) + return -EFAULT; + err =3D __put_user(kfl->l_type, &ufl->l_type); err |=3D __put_user(kfl->l_whence, &ufl->l_whence); err |=3D __put_user(kfl->l_start, &ufl->l_start); @@ -2498,71 +3130,43 @@ return err; } =20 -extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd, - unsigned long arg); +extern asmlinkage long sys_fcntl (unsigned int fd, unsigned int cmd, unsig= ned long arg); =20 asmlinkage long -sys32_fcntl(unsigned int fd, 
unsigned int cmd, int arg) +sys32_fcntl (unsigned int fd, unsigned int cmd, unsigned int arg) { - struct flock f; mm_segment_t old_fs; + struct flock f; long ret; =20 switch (cmd) { - case F_GETLK: - case F_SETLK: - case F_SETLKW: - if(get_flock32(&f, (struct flock32 *)((long)arg))) + case F_GETLK: + case F_SETLK: + case F_SETLKW: + if (get_flock32(&f, (struct flock32 *) A(arg))) return -EFAULT; old_fs =3D get_fs(); set_fs(KERNEL_DS); - ret =3D sys_fcntl(fd, cmd, (unsigned long)&f); + ret =3D sys_fcntl(fd, cmd, (unsigned long) &f); set_fs(old_fs); - if(cmd =3D F_GETLK && put_flock32(&f, (struct flock32 *)((long)arg))) + if (cmd =3D F_GETLK && put_flock32(&f, (struct flock32 *) A(arg))) return -EFAULT; return ret; - default: + + default: /* * `sys_fcntl' lies about arg, for the F_SETOWN * sub-function arg can have a negative value. */ - return sys_fcntl(fd, cmd, (unsigned long)((long)arg)); - } -} - -asmlinkage long -sys32_sigaction (int sig, struct old_sigaction32 *act, struct old_sigactio= n32 *oact) -{ - struct k_sigaction new_ka, old_ka; - int ret; - - if (act) { - old_sigset32_t mask; - - ret =3D get_user((long)new_ka.sa.sa_handler, &act->sa_handler); - ret |=3D __get_user(new_ka.sa.sa_flags, &act->sa_flags); - ret |=3D __get_user(mask, &act->sa_mask); - if (ret) - return ret; - siginitset(&new_ka.sa.sa_mask, mask); - } - - ret =3D do_sigaction(sig, act ? &new_ka : NULL, oact ? 
&old_ka : NULL); - - if (!ret && oact) { - ret =3D put_user((long)old_ka.sa.sa_handler, &oact->sa_handler); - ret |=3D __put_user(old_ka.sa.sa_flags, &oact->sa_flags); - ret |=3D __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask); + return sys_fcntl(fd, cmd, arg); } - - return ret; } =20 asmlinkage long sys_ni_syscall(void); =20 asmlinkage long -sys32_ni_syscall(int dummy0, int dummy1, int dummy2, int dummy3, - int dummy4, int dummy5, int dummy6, int dummy7, int stack) +sys32_ni_syscall (int dummy0, int dummy1, int dummy2, int dummy3, int dumm= y4, int dummy5, + int dummy6, int dummy7, int stack) { struct pt_regs *regs =3D (struct pt_regs *)&stack; =20 @@ -2577,7 +3181,7 @@ #define IOLEN ((65536 / 4) * 4096) =20 asmlinkage long -sys_iopl (int level) +sys32_iopl (int level) { extern unsigned long ia64_iobase; int fd; @@ -2612,9 +3216,8 @@ up_write(¤t->mm->mmap_sem); =20 if (addr >=3D 0) { - ia64_set_kr(IA64_KR_IO_BASE, addr); old =3D (old & ~0x3000) | (level << 12); - __asm__ __volatile__("mov ar.eflag=3D%0 ;;" :: "r"(old)); + asm volatile ("mov ar.eflag=3D%0;;" :: "r"(old)); } =20 fput(file); @@ -2623,7 +3226,7 @@ } =20 asmlinkage long -sys_ioperm (unsigned int from, unsigned int num, int on) +sys32_ioperm (unsigned int from, unsigned int num, int on) { =20 /* @@ -2636,7 +3239,7 @@ * XXX proper ioperm() support should be emulated by * manipulating the page protections... 
*/ - return sys_iopl(3); + return sys32_iopl(3); } =20 typedef struct { @@ -2646,10 +3249,8 @@ } ia32_stack_t; =20 asmlinkage long -sys32_sigaltstack (const ia32_stack_t *uss32, ia32_stack_t *uoss32, -long arg2, long arg3, long arg4, -long arg5, long arg6, long arg7, -long stack) +sys32_sigaltstack (ia32_stack_t *uss32, ia32_stack_t *uoss32, + long arg2, long arg3, long arg4, long arg5, long arg6, long arg7, lon= g stack) { struct pt_regs *pt =3D (struct pt_regs *) &stack; stack_t uss, uoss; @@ -2658,8 +3259,8 @@ mm_segment_t old_fs =3D get_fs(); =20 if (uss32) - if (copy_from_user(&buf32, (void *)A(uss32), sizeof(ia32_stack_t))) - return(-EFAULT); + if (copy_from_user(&buf32, uss32, sizeof(ia32_stack_t))) + return -EFAULT; uss.ss_sp =3D (void *) (long) buf32.ss_sp; uss.ss_flags =3D buf32.ss_flags; uss.ss_size =3D buf32.ss_size; @@ -2672,34 +3273,34 @@ buf32.ss_sp =3D (long) uoss.ss_sp; buf32.ss_flags =3D uoss.ss_flags; buf32.ss_size =3D uoss.ss_size; - if (copy_to_user((void*)A(uoss32), &buf32, sizeof(ia32_stack_t))) - return(-EFAULT); + if (copy_to_user(uoss32, &buf32, sizeof(ia32_stack_t))) + return -EFAULT; } - return(ret); + return ret; } =20 asmlinkage int -sys_pause (void) +sys32_pause (void) { current->state =3D TASK_INTERRUPTIBLE; schedule(); return -ERESTARTNOHAND; } =20 -asmlinkage long sys_msync(unsigned long start, size_t len, int flags); +asmlinkage long sys_msync (unsigned long start, size_t len, int flags); =20 asmlinkage int -sys32_msync(unsigned int start, unsigned int len, int flags) +sys32_msync (unsigned int start, unsigned int len, int flags) { unsigned int addr; =20 if (OFFSET4K(start)) return -EINVAL; - addr =3D start & PAGE_MASK; - return(sys_msync(addr, len + (start - addr), flags)); + addr =3D PAGE_START(start); + return sys_msync(addr, len + (start - addr), flags); } =20 -struct sysctl_ia32 { +struct sysctl32 { unsigned int name; int nlen; unsigned int oldval; @@ -2712,16 +3313,16 @@ extern asmlinkage long sys_sysctl(struct __sysctl_args 
*args); =20 asmlinkage long -sys32_sysctl(struct sysctl_ia32 *args32) +sys32_sysctl (struct sysctl32 *args) { - struct sysctl_ia32 a32; + struct sysctl32 a32; mm_segment_t old_fs =3D get_fs (); void *oldvalp, *newvalp; size_t oldlen; int *namep; long ret; =20 - if (copy_from_user(&a32, args32, sizeof (a32))) + if (copy_from_user(&a32, args, sizeof(a32))) return -EFAULT; =20 /* @@ -2754,7 +3355,7 @@ } =20 asmlinkage long -sys32_newuname(struct new_utsname * name) +sys32_newuname (struct new_utsname *name) { extern asmlinkage long sys_newuname(struct new_utsname * name); int ret =3D sys_newuname(name); @@ -2765,10 +3366,10 @@ return ret; } =20 -extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid= ); +extern asmlinkage long sys_getresuid (uid_t *ruid, uid_t *euid, uid_t *sui= d); =20 asmlinkage long -sys32_getresuid (u16 *ruid, u16 *euid, u16 *suid) +sys32_getresuid16 (u16 *ruid, u16 *euid, u16 *suid) { uid_t a, b, c; int ret; @@ -2786,7 +3387,7 @@ extern asmlinkage long sys_getresgid (gid_t *rgid, gid_t *egid, gid_t *sgi= d); =20 asmlinkage long -sys32_getresgid(u16 *rgid, u16 *egid, u16 *sgid) +sys32_getresgid16 (u16 *rgid, u16 *egid, u16 *sgid) { gid_t a, b, c; int ret; @@ -2796,15 +3397,13 @@ ret =3D sys_getresgid(&a, &b, &c); set_fs(old_fs); =20 - if (!ret) { - ret =3D put_user(a, rgid); - ret |=3D put_user(b, egid); - ret |=3D put_user(c, sgid); - } - return ret; + if (ret) + return ret; + + return put_user(a, rgid) | put_user(b, egid) | put_user(c, sgid); } =20 -int +asmlinkage long sys32_lseek (unsigned int fd, int offset, unsigned int whence) { extern off_t sys_lseek (unsigned int fd, off_t offset, unsigned int origi= n); @@ -2813,36 +3412,272 @@ return sys_lseek(fd, offset, whence); } =20 -#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */ +extern asmlinkage long sys_getgroups (int gidsetsize, gid_t *grouplist); =20 -/* In order to reduce some races, while at the same time doing additional - * checking and hopefully speeding things 
up, we copy filenames to the - * kernel data space before using them.. - * - * POSIX.1 2.4: an empty pathname is invalid (ENOENT). - */ -static inline int -do_getname32(const char *filename, char *page) +asmlinkage long +sys32_getgroups16 (int gidsetsize, short *grouplist) { - int retval; + mm_segment_t old_fs =3D get_fs(); + gid_t gl[NGROUPS]; + int ret, i; =20 - /* 32bit pointer will be always far below TASK_SIZE :)) */ - retval =3D strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE); - if (retval > 0) { - if (retval < PAGE_SIZE) - return 0; - return -ENAMETOOLONG; - } else if (!retval) - retval =3D -ENOENT; - return retval; + set_fs(KERNEL_DS); + ret =3D sys_getgroups(gidsetsize, gl); + set_fs(old_fs); + + if (gidsetsize && ret > 0 && ret <=3D NGROUPS) + for (i =3D 0; i < ret; i++, grouplist++) + if (put_user(gl[i], grouplist)) + return -EFAULT; + return ret; } =20 -char * -getname32(const char *filename) +extern asmlinkage long sys_setgroups (int gidsetsize, gid_t *grouplist); + +asmlinkage long +sys32_setgroups16 (int gidsetsize, short *grouplist) { - char *tmp, *result; + mm_segment_t old_fs =3D get_fs(); + gid_t gl[NGROUPS]; + int ret, i; =20 - result =3D ERR_PTR(-ENOMEM); + if ((unsigned) gidsetsize > NGROUPS) + return -EINVAL; + for (i =3D 0; i < gidsetsize; i++, grouplist++) + if (get_user(gl[i], grouplist)) + return -EFAULT; + set_fs(KERNEL_DS); + ret =3D sys_setgroups(gidsetsize, gl); + set_fs(old_fs); + return ret; +} + +/* + * Unfortunately, the x86 compiler aligns variables of type "long long" to= a 4 byte boundary + * only, which means that the x86 version of "struct flock64" doesn't matc= h the ia64 version + * of struct flock. 
+ */ + +static inline long +ia32_put_flock (struct flock *l, unsigned long addr) +{ + return (put_user(l->l_type, (short *) addr) + | put_user(l->l_whence, (short *) (addr + 2)) + | put_user(l->l_start, (long *) (addr + 4)) + | put_user(l->l_len, (long *) (addr + 12)) + | put_user(l->l_pid, (int *) (addr + 20))); +} + +static inline long +ia32_get_flock (struct flock *l, unsigned long addr) +{ + unsigned int start_lo, start_hi, len_lo, len_hi; + int err =3D (get_user(l->l_type, (short *) addr) + | get_user(l->l_whence, (short *) (addr + 2)) + | get_user(start_lo, (int *) (addr + 4)) + | get_user(start_hi, (int *) (addr + 8)) + | get_user(len_lo, (int *) (addr + 12)) + | get_user(len_hi, (int *) (addr + 16)) + | get_user(l->l_pid, (int *) (addr + 20))); + l->l_start =3D ((unsigned long) start_hi << 32) | start_lo; + l->l_len =3D ((unsigned long) len_hi << 32) | len_lo; + return err; +} + +asmlinkage long +sys32_fcntl64 (unsigned int fd, unsigned int cmd, unsigned int arg) +{ + mm_segment_t old_fs; + struct flock f; + long ret; + + switch (cmd) { + case F_GETLK64: + case F_SETLK64: + case F_SETLKW64: + if (ia32_get_flock(&f, arg)) + return -EFAULT; + old_fs =3D get_fs(); + set_fs(KERNEL_DS); + ret =3D sys_fcntl(fd, cmd, (unsigned long) &f); + set_fs(old_fs); + if (cmd =3D F_GETLK && ia32_put_flock(&f, arg)) + return -EFAULT; + break; + + default: + ret =3D sys32_fcntl(fd, cmd, arg); + break; + } + return ret; +} + +asmlinkage long +sys32_truncate64 (unsigned int path, unsigned int len_lo, unsigned int len= _hi) +{ + extern asmlinkage long sys_truncate (const char *path, unsigned long leng= th); + + return sys_truncate((const char *) A(path), ((unsigned long) len_hi << 32= ) | len_lo); +} + +asmlinkage long +sys32_ftruncate64 (int fd, unsigned int len_lo, unsigned int len_hi) +{ + extern asmlinkage long sys_ftruncate (int fd, unsigned long length); + + return sys_ftruncate(fd, ((unsigned long) len_hi << 32) | len_lo); +} + +static int +putstat64 (struct stat64 *ubuf, 
struct stat *kbuf) +{ + int err; + + if (clear_user(ubuf, sizeof(*ubuf))) + return 1; + + err =3D __put_user(kbuf->st_dev, &ubuf->st_dev); + err |=3D __put_user(kbuf->st_ino, &ubuf->__st_ino); + err |=3D __put_user(kbuf->st_ino, &ubuf->st_ino_lo); + err |=3D __put_user(kbuf->st_ino >> 32, &ubuf->st_ino_hi); + err |=3D __put_user(kbuf->st_mode, &ubuf->st_mode); + err |=3D __put_user(kbuf->st_nlink, &ubuf->st_nlink); + err |=3D __put_user(kbuf->st_uid, &ubuf->st_uid); + err |=3D __put_user(kbuf->st_gid, &ubuf->st_gid); + err |=3D __put_user(kbuf->st_rdev, &ubuf->st_rdev); + err |=3D __put_user(kbuf->st_size, &ubuf->st_size_lo); + err |=3D __put_user((kbuf->st_size >> 32), &ubuf->st_size_hi); + err |=3D __put_user(kbuf->st_atime, &ubuf->st_atime); + err |=3D __put_user(kbuf->st_mtime, &ubuf->st_mtime); + err |=3D __put_user(kbuf->st_ctime, &ubuf->st_ctime); + err |=3D __put_user(kbuf->st_blksize, &ubuf->st_blksize); + err |=3D __put_user(kbuf->st_blocks, &ubuf->st_blocks); + return err; +} + +asmlinkage long +sys32_stat64 (char *filename, struct stat64 *statbuf) +{ + mm_segment_t old_fs =3D get_fs(); + struct stat s; + long ret; + + set_fs(KERNEL_DS); + ret =3D sys_newstat(filename, &s); + set_fs(old_fs); + if (putstat64(statbuf, &s)) + return -EFAULT; + return ret; +} + +asmlinkage long +sys32_lstat64 (char *filename, struct stat64 *statbuf) +{ + mm_segment_t old_fs =3D get_fs(); + struct stat s; + long ret; + + set_fs(KERNEL_DS); + ret =3D sys_newlstat(filename, &s); + set_fs(old_fs); + if (putstat64(statbuf, &s)) + return -EFAULT; + return ret; +} + +asmlinkage long +sys32_fstat64 (unsigned int fd, struct stat64 *statbuf) +{ + mm_segment_t old_fs =3D get_fs(); + struct stat s; + long ret; + + set_fs(KERNEL_DS); + ret =3D sys_newfstat(fd, &s); + set_fs(old_fs); + if (putstat64(statbuf, &s)) + return -EFAULT; + return ret; +} + +asmlinkage long +sys32_sigpending (unsigned int *set) +{ + return do_sigpending(set, sizeof(*set)); +} + +struct sysinfo32 { + s32 uptime; + 
u32 loads[3]; + u32 totalram; + u32 freeram; + u32 sharedram; + u32 bufferram; + u32 totalswap; + u32 freeswap; + unsigned short procs; + char _f[22]; +}; + +asmlinkage long +sys32_sysinfo (struct sysinfo32 *info) +{ + extern asmlinkage long sys_sysinfo (struct sysinfo *); + mm_segment_t old_fs =3D get_fs(); + struct sysinfo s; + long ret, err; + + set_fs(KERNEL_DS); + ret =3D sys_sysinfo(&s); + set_fs(old_fs); + + if (!access_ok(VERIFY_WRITE, info, sizeof(*info))) + return -EFAULT; + + err =3D __put_user(s.uptime, &info->uptime); + err |=3D __put_user(s.loads[0], &info->loads[0]); + err |=3D __put_user(s.loads[1], &info->loads[1]); + err |=3D __put_user(s.loads[2], &info->loads[2]); + err |=3D __put_user(s.totalram, &info->totalram); + err |=3D __put_user(s.freeram, &info->freeram); + err |=3D __put_user(s.sharedram, &info->sharedram); + err |=3D __put_user(s.bufferram, &info->bufferram); + err |=3D __put_user(s.totalswap, &info->totalswap); + err |=3D __put_user(s.freeswap, &info->freeswap); + err |=3D __put_user(s.procs, &info->procs); + if (err) + return -EFAULT; + return ret; +} + +/* In order to reduce some races, while at the same time doing additional + * checking and hopefully speeding things up, we copy filenames to the + * kernel data space before using them.. + * + * POSIX.1 2.4: an empty pathname is invalid (ENOENT). 
+ */ +static inline int +do_getname32 (const char *filename, char *page) +{ + int retval; + + /* 32bit pointer will be always far below TASK_SIZE :)) */ + retval =3D strncpy_from_user((char *)page, (char *)filename, PAGE_SIZE); + if (retval > 0) { + if (retval < PAGE_SIZE) + return 0; + return -ENAMETOOLONG; + } else if (!retval) + retval =3D -ENOENT; + return retval; +} + +static char * +getname32 (const char *filename) +{ + char *tmp, *result; + + result =3D ERR_PTR(-ENOMEM); tmp =3D (char *)__get_free_page(GFP_KERNEL); if (tmp) { int retval =3D do_getname32(filename, tmp); @@ -2856,178 +3691,132 @@ return result; } =20 -/* 32-bit timeval and related flotsam. */ - -extern asmlinkage long sys_ioperm(unsigned long from, unsigned long num, i= nt on); - -asmlinkage long -sys32_ioperm(u32 from, u32 num, int on) -{ - return sys_ioperm((unsigned long)from, (unsigned long)num, on); -} - struct dqblk32 { - __u32 dqb_bhardlimit; - __u32 dqb_bsoftlimit; - __u32 dqb_curblocks; - __u32 dqb_ihardlimit; - __u32 dqb_isoftlimit; - __u32 dqb_curinodes; - __kernel_time_t32 dqb_btime; - __kernel_time_t32 dqb_itime; + __u32 dqb_bhardlimit; + __u32 dqb_bsoftlimit; + __u32 dqb_curblocks; + __u32 dqb_ihardlimit; + __u32 dqb_isoftlimit; + __u32 dqb_curinodes; + __kernel_time_t32 dqb_btime; + __kernel_time_t32 dqb_itime; }; =20 -extern asmlinkage long sys_quotactl(int cmd, const char *special, int id, - caddr_t addr); - asmlinkage long -sys32_quotactl(int cmd, const char *special, int id, unsigned long addr) +sys32_quotactl (int cmd, unsigned int special, int id, struct dqblk32 *add= r) { + extern asmlinkage long sys_quotactl (int, const char *, int, caddr_t); int cmds =3D cmd >> SUBCMDSHIFT; - int err; - struct dqblk d; mm_segment_t old_fs; + struct dqblk d; char *spec; + long err; =20 switch (cmds) { - case Q_GETQUOTA: + case Q_GETQUOTA: break; - case Q_SETQUOTA: - case Q_SETUSE: - case Q_SETQLIM: - if (copy_from_user (&d, (struct dqblk32 *)addr, - sizeof (struct dqblk32))) + case 
Q_SETQUOTA: + case Q_SETUSE: + case Q_SETQLIM: + if (copy_from_user (&d, addr, sizeof(struct dqblk32))) return -EFAULT; d.dqb_itime =3D ((struct dqblk32 *)&d)->dqb_itime; d.dqb_btime =3D ((struct dqblk32 *)&d)->dqb_btime; break; - default: - return sys_quotactl(cmd, special, - id, (caddr_t)addr); + default: + return sys_quotactl(cmd, (void *) A(special), id, (caddr_t) addr); } - spec =3D getname32 (special); + spec =3D getname32((void *) A(special)); err =3D PTR_ERR(spec); - if (IS_ERR(spec)) return err; + if (IS_ERR(spec)) + return err; old_fs =3D get_fs (); - set_fs (KERNEL_DS); + set_fs(KERNEL_DS); err =3D sys_quotactl(cmd, (const char *)spec, id, (caddr_t)&d); - set_fs (old_fs); - putname (spec); + set_fs(old_fs); + putname(spec); if (cmds =3D Q_GETQUOTA) { __kernel_time_t b =3D d.dqb_btime, i =3D d.dqb_itime; ((struct dqblk32 *)&d)->dqb_itime =3D i; ((struct dqblk32 *)&d)->dqb_btime =3D b; - if (copy_to_user ((struct dqblk32 *)addr, &d, - sizeof (struct dqblk32))) + if (copy_to_user(addr, &d, sizeof(struct dqblk32))) return -EFAULT; } return err; } =20 -extern asmlinkage long sys_utime(char * filename, struct utimbuf * times); - -struct utimbuf32 { - __kernel_time_t32 actime, modtime; -}; - asmlinkage long -sys32_utime(char * filename, struct utimbuf32 *times) +sys32_sched_rr_get_interval (pid_t pid, struct timespec32 *interval) { - struct utimbuf t; - mm_segment_t old_fs; - int ret; - char *filenam; + extern asmlinkage long sys_sched_rr_get_interval (pid_t, struct timespec = *); + mm_segment_t old_fs =3D get_fs(); + struct timespec t; + long ret; =20 - if (!times) - return sys_utime(filename, NULL); - if (get_user (t.actime, ×->actime) || - __get_user (t.modtime, ×->modtime)) - return -EFAULT; - filenam =3D getname32 (filename); - ret =3D PTR_ERR(filenam); - if (!IS_ERR(filenam)) { - old_fs =3D get_fs(); - set_fs (KERNEL_DS); - ret =3D sys_utime(filenam, &t); - set_fs (old_fs); - putname (filenam); - } + set_fs(KERNEL_DS); + ret =3D 
sys_sched_rr_get_interval(pid, &t); + set_fs(old_fs); + if (put_user (t.tv_sec, &interval->tv_sec) || put_user (t.tv_nsec, &inter= val->tv_nsec)) + return -EFAULT; return ret; } =20 -/* - * Ooo, nasty. We need here to frob 32-bit unsigned longs to - * 64-bit unsigned longs. - */ - -static inline int -get_fd_set32(unsigned long n, unsigned long *fdset, u32 *ufdset) +asmlinkage long +sys32_pread (unsigned int fd, void *buf, unsigned int count, u32 pos_lo, u= 32 pos_hi) { - if (ufdset) { - unsigned long odd; - - if (verify_area(VERIFY_WRITE, ufdset, n*sizeof(u32))) - return -EFAULT; + extern asmlinkage long sys_pread (unsigned int, char *, size_t, loff_t); + return sys_pread(fd, buf, count, ((unsigned long) pos_hi << 32) | pos_lo); +} =20 - odd =3D n & 1UL; - n &=3D ~1UL; - while (n) { - unsigned long h, l; - __get_user(l, ufdset); - __get_user(h, ufdset+1); - ufdset +=3D 2; - *fdset++ =3D h << 32 | l; - n -=3D 2; - } - if (odd) - __get_user(*fdset, ufdset); - } else { - /* Tricky, must clear full unsigned long in the - * kernel fdset at the end, this makes sure that - * actually happens. 
- */ - memset(fdset, 0, ((n + 1) & ~1)*sizeof(u32)); - } - return 0; +asmlinkage long +sys32_pwrite (unsigned int fd, void *buf, unsigned int count, u32 pos_lo, = u32 pos_hi) +{ + extern asmlinkage long sys_pwrite (unsigned int, const char *, size_t, lo= ff_t); + return sys_pwrite(fd, buf, count, ((unsigned long) pos_hi << 32) | pos_lo= ); } =20 -static inline void -set_fd_set32(unsigned long n, u32 *ufdset, unsigned long *fdset) +asmlinkage long +sys32_sendfile (int out_fd, int in_fd, int *offset, unsigned int count) { - unsigned long odd; + extern asmlinkage long sys_sendfile (int, int, off_t *, size_t); + mm_segment_t old_fs =3D get_fs(); + long ret; + off_t of; =20 - if (!ufdset) - return; + if (offset && get_user(of, offset)) + return -EFAULT; =20 - odd =3D n & 1UL; - n &=3D ~1UL; - while (n) { - unsigned long h, l; - l =3D *fdset++; - h =3D l >> 32; - __put_user(l, ufdset); - __put_user(h, ufdset+1); - ufdset +=3D 2; - n -=3D 2; - } - if (odd) - __put_user(*fdset, ufdset); -} + set_fs(KERNEL_DS); + ret =3D sys_sendfile(out_fd, in_fd, offset ? 
&of : NULL, count); + set_fs(old_fs); + + if (!ret && offset && put_user(of, offset)) + return -EFAULT; =20 -extern asmlinkage long sys_sysfs(int option, unsigned long arg1, - unsigned long arg2); + return ret; +} =20 asmlinkage long -sys32_sysfs(int option, u32 arg1, u32 arg2) +sys32_personality (unsigned int personality) { - return sys_sysfs(option, arg1, arg2); + extern asmlinkage long sys_personality (unsigned long); + long ret; + + if (current->personality =3D PER_LINUX32 && personality =3D PER_LINUX) + personality =3D PER_LINUX32; + ret =3D sys_personality(personality); + if (ret =3D PER_LINUX32) + ret =3D PER_LINUX; + return ret; } =20 +#ifdef NOTYET /* UNTESTED FOR IA64 FROM HERE DOWN */ + struct ncp_mount_data32 { int version; unsigned int ncp_fd; __kernel_uid_t32 mounted_uid; - __kernel_pid_t32 wdog_pid; + int wdog_pid; unsigned char mounted_vol[NCP_VOLNAME_LEN + 1]; unsigned int time_out; unsigned int retry_count; @@ -3061,1485 +3850,169 @@ __kernel_uid_t32 uid; __kernel_gid_t32 gid; __kernel_mode_t32 file_mode; - __kernel_mode_t32 dir_mode; -}; - -static void * -do_smb_super_data_conv(void *raw_data) -{ - struct smb_mount_data *s =3D (struct smb_mount_data *)raw_data; - struct smb_mount_data32 *s32 =3D (struct smb_mount_data32 *)raw_data; - - s->version =3D s32->version; - s->mounted_uid =3D s32->mounted_uid; - s->uid =3D s32->uid; - s->gid =3D s32->gid; - s->file_mode =3D s32->file_mode; - s->dir_mode =3D s32->dir_mode; - return raw_data; -} - -static int -copy_mount_stuff_to_kernel(const void *user, unsigned long *kernel) -{ - int i; - unsigned long page; - struct vm_area_struct *vma; - - *kernel =3D 0; - if(!user) - return 0; - vma =3D find_vma(current->mm, (unsigned long)user); - if(!vma || (unsigned long)user < vma->vm_start) - return -EFAULT; - if(!(vma->vm_flags & VM_READ)) - return -EFAULT; - i =3D vma->vm_end - (unsigned long) user; - if(PAGE_SIZE <=3D (unsigned long) i) - i =3D PAGE_SIZE - 1; - if(!(page =3D __get_free_page(GFP_KERNEL))) - 
return -ENOMEM; - if(copy_from_user((void *) page, user, i)) { - free_page(page); - return -EFAULT; - } - *kernel =3D page; - return 0; -} - -extern asmlinkage long sys_mount(char * dev_name, char * dir_name, char * = type, - unsigned long new_flags, void *data); - -#define SMBFS_NAME "smbfs" -#define NCPFS_NAME "ncpfs" - -asmlinkage long -sys32_mount(char *dev_name, char *dir_name, char *type, - unsigned long new_flags, u32 data) -{ - unsigned long type_page; - int err, is_smb, is_ncp; - - if(!capable(CAP_SYS_ADMIN)) - return -EPERM; - is_smb =3D is_ncp =3D 0; - err =3D copy_mount_stuff_to_kernel((const void *)type, &type_page); - if(err) - return err; - if(type_page) { - is_smb =3D !strcmp((char *)type_page, SMBFS_NAME); - is_ncp =3D !strcmp((char *)type_page, NCPFS_NAME); - } - if(!is_smb && !is_ncp) { - if(type_page) - free_page(type_page); - return sys_mount(dev_name, dir_name, type, new_flags, - (void *)AA(data)); - } else { - unsigned long dev_page, dir_page, data_page; - - err =3D copy_mount_stuff_to_kernel((const void *)dev_name, - &dev_page); - if(err) - goto out; - err =3D copy_mount_stuff_to_kernel((const void *)dir_name, - &dir_page); - if(err) - goto dev_out; - err =3D copy_mount_stuff_to_kernel((const void *)AA(data), - &data_page); - if(err) - goto dir_out; - if(is_ncp) - do_ncp_super_data_conv((void *)data_page); - else if(is_smb) - do_smb_super_data_conv((void *)data_page); - else - panic("The problem is here..."); - err =3D do_mount((char *)dev_page, (char *)dir_page, - (char *)type_page, new_flags, - (void *)data_page); - if(data_page) - free_page(data_page); - dir_out: - if(dir_page) - free_page(dir_page); - dev_out: - if(dev_page) - free_page(dev_page); - out: - if(type_page) - free_page(type_page); - return err; - } -} - -struct sysinfo32 { - s32 uptime; - u32 loads[3]; - u32 totalram; - u32 freeram; - u32 sharedram; - u32 bufferram; - u32 totalswap; - u32 freeswap; - unsigned short procs; - char _f[22]; -}; - -extern asmlinkage long 
sys_sysinfo(struct sysinfo *info); - -asmlinkage long -sys32_sysinfo(struct sysinfo32 *info) -{ - struct sysinfo s; - int ret, err; - mm_segment_t old_fs =3D get_fs (); - - set_fs (KERNEL_DS); - ret =3D sys_sysinfo(&s); - set_fs (old_fs); - err =3D put_user (s.uptime, &info->uptime); - err |=3D __put_user (s.loads[0], &info->loads[0]); - err |=3D __put_user (s.loads[1], &info->loads[1]); - err |=3D __put_user (s.loads[2], &info->loads[2]); - err |=3D __put_user (s.totalram, &info->totalram); - err |=3D __put_user (s.freeram, &info->freeram); - err |=3D __put_user (s.sharedram, &info->sharedram); - err |=3D __put_user (s.bufferram, &info->bufferram); - err |=3D __put_user (s.totalswap, &info->totalswap); - err |=3D __put_user (s.freeswap, &info->freeswap); - err |=3D __put_user (s.procs, &info->procs); - if (err) - return -EFAULT; - return ret; -} - -extern asmlinkage long sys_sched_rr_get_interval(pid_t pid, - struct timespec *interval); - -asmlinkage long -sys32_sched_rr_get_interval(__kernel_pid_t32 pid, struct timespec32 *inter= val) -{ - struct timespec t; - int ret; - mm_segment_t old_fs =3D get_fs (); - - set_fs (KERNEL_DS); - ret =3D sys_sched_rr_get_interval(pid, &t); - set_fs (old_fs); - if (put_user (t.tv_sec, &interval->tv_sec) || - __put_user (t.tv_nsec, &interval->tv_nsec)) - return -EFAULT; - return ret; -} - -extern asmlinkage long sys_sigprocmask(int how, old_sigset_t *set, - old_sigset_t *oset); - -asmlinkage long -sys32_sigprocmask(int how, old_sigset_t32 *set, old_sigset_t32 *oset) -{ - old_sigset_t s; - int ret; - mm_segment_t old_fs =3D get_fs(); - - if (set && get_user (s, set)) return -EFAULT; - set_fs (KERNEL_DS); - ret =3D sys_sigprocmask(how, set ? &s : NULL, oset ? 
&s : NULL); - set_fs (old_fs); - if (ret) return ret; - if (oset && put_user (s, oset)) return -EFAULT; - return 0; -} - -extern asmlinkage long sys_sigpending(old_sigset_t *set); - -asmlinkage long -sys32_sigpending(old_sigset_t32 *set) -{ - old_sigset_t s; - int ret; - mm_segment_t old_fs =3D get_fs(); - - set_fs (KERNEL_DS); - ret =3D sys_sigpending(&s); - set_fs (old_fs); - if (put_user (s, set)) return -EFAULT; - return ret; -} - -extern asmlinkage long sys_rt_sigpending(sigset_t *set, size_t sigsetsize); - -asmlinkage long -sys32_rt_sigpending(sigset_t32 *set, __kernel_size_t32 sigsetsize) -{ - sigset_t s; - sigset_t32 s32; - int ret; - mm_segment_t old_fs =3D get_fs(); - - set_fs (KERNEL_DS); - ret =3D sys_rt_sigpending(&s, sigsetsize); - set_fs (old_fs); - if (!ret) { - switch (_NSIG_WORDS) { - case 4: s32.sig[7] =3D (s.sig[3] >> 32); s32.sig[6] =3D s.sig[3]; - case 3: s32.sig[5] =3D (s.sig[2] >> 32); s32.sig[4] =3D s.sig[2]; - case 2: s32.sig[3] =3D (s.sig[1] >> 32); s32.sig[2] =3D s.sig[1]; - case 1: s32.sig[1] =3D (s.sig[0] >> 32); s32.sig[0] =3D s.sig[0]; - } - if (copy_to_user (set, &s32, sizeof(sigset_t32))) - return -EFAULT; - } - return ret; -} - -siginfo_t32 * -siginfo64to32(siginfo_t32 *d, siginfo_t *s) -{ - memset(d, 0, sizeof(siginfo_t32)); - d->si_signo =3D s->si_signo; - d->si_errno =3D s->si_errno; - d->si_code =3D s->si_code; - if (s->si_signo >=3D SIGRTMIN) { - d->si_pid =3D s->si_pid; - d->si_uid =3D s->si_uid; - /* XXX: Ouch, how to find this out??? */ - d->si_int =3D s->si_int; - } else switch (s->si_signo) { - /* XXX: What about POSIX1.b timers */ - case SIGCHLD: - d->si_pid =3D s->si_pid; - d->si_status =3D s->si_status; - d->si_utime =3D s->si_utime; - d->si_stime =3D s->si_stime; - break; - case SIGSEGV: - case SIGBUS: - case SIGFPE: - case SIGILL: - d->si_addr =3D (long)(s->si_addr); - /* XXX: Do we need to translate this from ia64 to ia32 traps? 
*/ - d->si_trapno =3D s->si_trapno; - break; - case SIGPOLL: - d->si_band =3D s->si_band; - d->si_fd =3D s->si_fd; - break; - default: - d->si_pid =3D s->si_pid; - d->si_uid =3D s->si_uid; - break; - } - return d; -} - -siginfo_t * -siginfo32to64(siginfo_t *d, siginfo_t32 *s) -{ - d->si_signo =3D s->si_signo; - d->si_errno =3D s->si_errno; - d->si_code =3D s->si_code; - if (s->si_signo >=3D SIGRTMIN) { - d->si_pid =3D s->si_pid; - d->si_uid =3D s->si_uid; - /* XXX: Ouch, how to find this out??? */ - d->si_int =3D s->si_int; - } else switch (s->si_signo) { - /* XXX: What about POSIX1.b timers */ - case SIGCHLD: - d->si_pid =3D s->si_pid; - d->si_status =3D s->si_status; - d->si_utime =3D s->si_utime; - d->si_stime =3D s->si_stime; - break; - case SIGSEGV: - case SIGBUS: - case SIGFPE: - case SIGILL: - d->si_addr =3D (void *)A(s->si_addr); - /* XXX: Do we need to translate this from ia32 to ia64 traps? */ - d->si_trapno =3D s->si_trapno; - break; - case SIGPOLL: - d->si_band =3D s->si_band; - d->si_fd =3D s->si_fd; - break; - default: - d->si_pid =3D s->si_pid; - d->si_uid =3D s->si_uid; - break; - } - return d; -} - -extern asmlinkage long -sys_rt_sigtimedwait(const sigset_t *uthese, siginfo_t *uinfo, - const struct timespec *uts, size_t sigsetsize); - -asmlinkage long -sys32_rt_sigtimedwait(sigset_t32 *uthese, siginfo_t32 *uinfo, - struct timespec32 *uts, __kernel_size_t32 sigsetsize) -{ - sigset_t s; - sigset_t32 s32; - struct timespec t; - int ret; - mm_segment_t old_fs =3D get_fs(); - siginfo_t info; - siginfo_t32 info32; - - if (copy_from_user (&s32, uthese, sizeof(sigset_t32))) - return -EFAULT; - switch (_NSIG_WORDS) { - case 4: s.sig[3] =3D s32.sig[6] | (((long)s32.sig[7]) << 32); - case 3: s.sig[2] =3D s32.sig[4] | (((long)s32.sig[5]) << 32); - case 2: s.sig[1] =3D s32.sig[2] | (((long)s32.sig[3]) << 32); - case 1: s.sig[0] =3D s32.sig[0] | (((long)s32.sig[1]) << 32); - } - if (uts) { - ret =3D get_user (t.tv_sec, &uts->tv_sec); - ret |=3D __get_user 
(t.tv_nsec, &uts->tv_nsec);
- if (ret)
- return -EFAULT;
- }
- set_fs (KERNEL_DS);
- ret = sys_rt_sigtimedwait(&s, &info, &t, sigsetsize);
- set_fs (old_fs);
- if (ret >= 0 && uinfo) {
- if (copy_to_user (uinfo, siginfo64to32(&info32, &info),
- sizeof(siginfo_t32)))
- return -EFAULT;
- }
- return ret;
-}
-
-extern asmlinkage long
-sys_rt_sigqueueinfo(int pid, int sig, siginfo_t *uinfo);
-
-asmlinkage long
-sys32_rt_sigqueueinfo(int pid, int sig, siginfo_t32 *uinfo)
-{
- siginfo_t info;
- siginfo_t32 info32;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- if (copy_from_user (&info32, uinfo, sizeof(siginfo_t32)))
- return -EFAULT;
- /* XXX: Is this correct? */
- siginfo32to64(&info, &info32);
- set_fs (KERNEL_DS);
- ret = sys_rt_sigqueueinfo(pid, sig, &info);
- set_fs (old_fs);
- return ret;
-}
-
-extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
-
-asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
-{
- uid_t sruid, seuid;
-
- sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
- seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
- return sys_setreuid(sruid, seuid);
-}
-
-extern asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
-
-asmlinkage long
-sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
- __kernel_uid_t32 suid)
-{
- uid_t sruid, seuid, ssuid;
-
- sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
- seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
- ssuid = (suid == (__kernel_uid_t32)-1) ?
((uid_t)-1) : ((uid_t)suid);
- return sys_setresuid(sruid, seuid, ssuid);
-}
-
-extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
-
-asmlinkage long
-sys32_getresuid(__kernel_uid_t32 *ruid, __kernel_uid_t32 *euid,
- __kernel_uid_t32 *suid)
-{
- uid_t a, b, c;
- int ret;
- mm_segment_t old_fs = get_fs();
-
- set_fs (KERNEL_DS);
- ret = sys_getresuid(&a, &b, &c);
- set_fs (old_fs);
- if (put_user (a, ruid) || put_user (b, euid) || put_user (c, suid))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
-
-asmlinkage long
-sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
-{
- gid_t srgid, segid;
-
- srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
- segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
- return sys_setregid(srgid, segid);
-}
-
-extern asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
-
-asmlinkage long
-sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
- __kernel_gid_t32 sgid)
-{
- gid_t srgid, segid, ssgid;
-
- srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
- segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
- ssgid = (sgid == (__kernel_gid_t32)-1) ?
((gid_t)-1) : ((gid_t)sgid);
- return sys_setresgid(srgid, segid, ssgid);
-}
-
-extern asmlinkage long sys_getgroups(int gidsetsize, gid_t *grouplist);
-
-asmlinkage long
-sys32_getgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
-{
- gid_t gl[NGROUPS];
- int ret, i;
- mm_segment_t old_fs = get_fs ();
-
- set_fs (KERNEL_DS);
- ret = sys_getgroups(gidsetsize, gl);
- set_fs (old_fs);
- if (gidsetsize && ret > 0 && ret <= NGROUPS)
- for (i = 0; i < ret; i++, grouplist++)
- if (__put_user (gl[i], grouplist))
- return -EFAULT;
- return ret;
-}
-
-extern asmlinkage long sys_setgroups(int gidsetsize, gid_t *grouplist);
-
-asmlinkage long
-sys32_setgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
-{
- gid_t gl[NGROUPS];
- int ret, i;
- mm_segment_t old_fs = get_fs ();
-
- if ((unsigned) gidsetsize > NGROUPS)
- return -EINVAL;
- for (i = 0; i < gidsetsize; i++, grouplist++)
- if (__get_user (gl[i], grouplist))
- return -EFAULT;
- set_fs (KERNEL_DS);
- ret = sys_setgroups(gidsetsize, gl);
- set_fs (old_fs);
- return ret;
-}
-
-
-/* XXX These as well... */
-extern __inline__ struct socket *
-socki_lookup(struct inode *inode)
-{
- return &inode->u.socket_i;
-}
-
-extern __inline__ struct socket *
-sockfd_lookup(int fd, int *err)
-{
- struct file *file;
- struct inode *inode;
-
- if (!(file = fget(fd)))
- {
- *err = -EBADF;
- return NULL;
- }
-
- inode = file->f_dentry->d_inode;
- if (!inode->i_sock || !socki_lookup(inode))
- {
- *err = -ENOTSOCK;
- fput(file);
- return NULL;
- }
-
- return socki_lookup(inode);
-}
-
-struct msghdr32 {
- u32 msg_name;
- int msg_namelen;
- u32 msg_iov;
- __kernel_size_t32 msg_iovlen;
- u32 msg_control;
- __kernel_size_t32 msg_controllen;
- unsigned msg_flags;
-};
-
-struct cmsghdr32 {
- __kernel_size_t32 cmsg_len;
- int cmsg_level;
- int cmsg_type;
-};
-
-/* Bleech...
*/
-#define __CMSG32_NXTHDR(ctl, len, cmsg, cmsglen) \
- __cmsg32_nxthdr((ctl),(len),(cmsg),(cmsglen))
-#define CMSG32_NXTHDR(mhdr, cmsg, cmsglen) \
- cmsg32_nxthdr((mhdr), (cmsg), (cmsglen))
-
-#define CMSG32_ALIGN(len) ( ((len)+sizeof(int)-1) & ~(sizeof(int)-1) )
-
-#define CMSG32_DATA(cmsg) \
- ((void *)((char *)(cmsg) + CMSG32_ALIGN(sizeof(struct cmsghdr32))))
-#define CMSG32_SPACE(len) \
- (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + CMSG32_ALIGN(len))
-#define CMSG32_LEN(len) (CMSG32_ALIGN(sizeof(struct cmsghdr32)) + (len))
-
-#define __CMSG32_FIRSTHDR(ctl,len) ((len) >= sizeof(struct cmsghdr32) ? \
- (struct cmsghdr32 *)(ctl) : \
- (struct cmsghdr32 *)NULL)
-#define CMSG32_FIRSTHDR(msg) \
- __CMSG32_FIRSTHDR((msg)->msg_control, (msg)->msg_controllen)
-
-__inline__ struct cmsghdr32 *
-__cmsg32_nxthdr(void *__ctl, __kernel_size_t __size,
- struct cmsghdr32 *__cmsg, int __cmsg_len)
-{
- struct cmsghdr32 * __ptr;
-
- __ptr = (struct cmsghdr32 *)(((unsigned char *) __cmsg) +
- CMSG32_ALIGN(__cmsg_len));
- if ((unsigned long)((char*)(__ptr+1) - (char *) __ctl) > __size)
- return NULL;
-
- return __ptr;
-}
-
-__inline__ struct cmsghdr32 *
-cmsg32_nxthdr (struct msghdr *__msg, struct cmsghdr32 *__cmsg, int __cmsg_len)
-{
- return __cmsg32_nxthdr(__msg->msg_control, __msg->msg_controllen,
- __cmsg, __cmsg_len);
-}
-
-static inline int
-iov_from_user32_to_kern(struct iovec *kiov, struct iovec32 *uiov32, int niov)
-{
- int tot_len = 0;
-
- while(niov > 0) {
- u32 len, buf;
-
- if(get_user(len, &uiov32->iov_len) ||
- get_user(buf, &uiov32->iov_base)) {
- tot_len = -EFAULT;
- break;
- }
- tot_len += len;
- kiov->iov_base = (void *)A(buf);
- kiov->iov_len = (__kernel_size_t) len;
- uiov32++;
- kiov++;
- niov--;
- }
- return tot_len;
-}
-
-static inline int
-msghdr_from_user32_to_kern(struct msghdr *kmsg, struct msghdr32 *umsg)
-{
- u32 tmp1, tmp2, tmp3;
- int err;
-
- err = get_user(tmp1, &umsg->msg_name);
- err |= __get_user(tmp2, &umsg->msg_iov);
- err |= __get_user(tmp3, &umsg->msg_control);
- if (err)
- return -EFAULT;
-
- kmsg->msg_name = (void *)A(tmp1);
- kmsg->msg_iov = (struct iovec *)A(tmp2);
- kmsg->msg_control = (void *)A(tmp3);
-
- err = get_user(kmsg->msg_namelen, &umsg->msg_namelen);
- err |= get_user(kmsg->msg_iovlen, &umsg->msg_iovlen);
- err |= get_user(kmsg->msg_controllen, &umsg->msg_controllen);
- err |= get_user(kmsg->msg_flags, &umsg->msg_flags);
-
- return err;
-}
-
-/* I've named the args so it is easy to tell whose space the pointers are in. */
-static int
-verify_iovec32(struct msghdr *kern_msg, struct iovec *kern_iov,
- char *kern_address, int mode)
-{
- int tot_len;
-
- if(kern_msg->msg_namelen) {
- if(mode==VERIFY_READ) {
- int err = move_addr_to_kernel(kern_msg->msg_name,
- kern_msg->msg_namelen,
- kern_address);
- if(err < 0)
- return err;
- }
- kern_msg->msg_name = kern_address;
- } else
- kern_msg->msg_name = NULL;
-
- if(kern_msg->msg_iovlen > UIO_FASTIOV) {
- kern_iov = kmalloc(kern_msg->msg_iovlen * sizeof(struct iovec),
- GFP_KERNEL);
- if(!kern_iov)
- return -ENOMEM;
- }
-
- tot_len = iov_from_user32_to_kern(kern_iov,
- (struct iovec32 *)kern_msg->msg_iov,
- kern_msg->msg_iovlen);
- if(tot_len >= 0)
- kern_msg->msg_iov = kern_iov;
- else if(kern_msg->msg_iovlen > UIO_FASTIOV)
- kfree(kern_iov);
-
- return tot_len;
-}
-
-/* There is a lot of hair here because the alignment rules (and
- * thus placement) of cmsg headers and length are different for
- * 32-bit apps. -DaveM
- */
-static int
-cmsghdr_from_user32_to_kern(struct msghdr *kmsg, unsigned char *stackbuf,
- int stackbuf_size)
-{
- struct cmsghdr32 *ucmsg;
- struct cmsghdr *kcmsg, *kcmsg_base;
- __kernel_size_t32 ucmlen;
- __kernel_size_t kcmlen, tmp;
-
- kcmlen = 0;
- kcmsg_base = kcmsg = (struct cmsghdr *)stackbuf;
- ucmsg = CMSG32_FIRSTHDR(kmsg);
- while(ucmsg != NULL) {
- if(get_user(ucmlen, &ucmsg->cmsg_len))
- return -EFAULT;
-
- /* Catch bogons.
*/
- if(CMSG32_ALIGN(ucmlen) <
- CMSG32_ALIGN(sizeof(struct cmsghdr32)))
- return -EINVAL;
- if((unsigned long)(((char *)ucmsg - (char *)kmsg->msg_control)
- + ucmlen) > kmsg->msg_controllen)
- return -EINVAL;
-
- tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
- CMSG_ALIGN(sizeof(struct cmsghdr)));
- kcmlen += tmp;
- ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
- }
- if(kcmlen == 0)
- return -EINVAL;
-
- /* The kcmlen holds the 64-bit version of the control length.
- * It may not be modified as we do not stick it into the kmsg
- * until we have successfully copied over all of the data
- * from the user.
- */
- if(kcmlen > stackbuf_size)
- kcmsg_base = kcmsg = kmalloc(kcmlen, GFP_KERNEL);
- if(kcmsg == NULL)
- return -ENOBUFS;
-
- /* Now copy them over neatly. */
- memset(kcmsg, 0, kcmlen);
- ucmsg = CMSG32_FIRSTHDR(kmsg);
- while(ucmsg != NULL) {
- __get_user(ucmlen, &ucmsg->cmsg_len);
- tmp = ((ucmlen - CMSG32_ALIGN(sizeof(*ucmsg))) +
- CMSG_ALIGN(sizeof(struct cmsghdr)));
- kcmsg->cmsg_len = tmp;
- __get_user(kcmsg->cmsg_level, &ucmsg->cmsg_level);
- __get_user(kcmsg->cmsg_type, &ucmsg->cmsg_type);
-
- /* Copy over the data. */
- if(copy_from_user(CMSG_DATA(kcmsg),
- CMSG32_DATA(ucmsg),
- (ucmlen - CMSG32_ALIGN(sizeof(*ucmsg)))))
- goto out_free_efault;
-
- /* Advance. */
- kcmsg = (struct cmsghdr *)((char *)kcmsg + CMSG_ALIGN(tmp));
- ucmsg = CMSG32_NXTHDR(kmsg, ucmsg, ucmlen);
- }
-
- /* Ok, looks like we made it. Hook it up and return success.
*/
- kmsg->msg_control = kcmsg_base;
- kmsg->msg_controllen = kcmlen;
- return 0;
-
-out_free_efault:
- if(kcmsg_base != (struct cmsghdr *)stackbuf)
- kfree(kcmsg_base);
- return -EFAULT;
-}
-
-static void
-put_cmsg32(struct msghdr *kmsg, int level, int type, int len, void *data)
-{
- struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
- struct cmsghdr32 cmhdr;
- int cmlen = CMSG32_LEN(len);
-
- if(cm == NULL || kmsg->msg_controllen < sizeof(*cm)) {
- kmsg->msg_flags |= MSG_CTRUNC;
- return;
- }
-
- if(kmsg->msg_controllen < cmlen) {
- kmsg->msg_flags |= MSG_CTRUNC;
- cmlen = kmsg->msg_controllen;
- }
- cmhdr.cmsg_level = level;
- cmhdr.cmsg_type = type;
- cmhdr.cmsg_len = cmlen;
-
- if(copy_to_user(cm, &cmhdr, sizeof cmhdr))
- return;
- if(copy_to_user(CMSG32_DATA(cm), data,
- cmlen - sizeof(struct cmsghdr32)))
- return;
- cmlen = CMSG32_SPACE(len);
- kmsg->msg_control += cmlen;
- kmsg->msg_controllen -= cmlen;
-}
-
-static void scm_detach_fds32(struct msghdr *kmsg, struct scm_cookie *scm)
-{
- struct cmsghdr32 *cm = (struct cmsghdr32 *) kmsg->msg_control;
- int fdmax = (kmsg->msg_controllen - sizeof(struct cmsghdr32))
- / sizeof(int);
- int fdnum = scm->fp->count;
- struct file **fp = scm->fp->fp;
- int *cmfptr;
- int err = 0, i;
-
- if (fdnum < fdmax)
- fdmax = fdnum;
-
- for (i = 0, cmfptr = (int *) CMSG32_DATA(cm);
- i < fdmax;
- i++, cmfptr++) {
- int new_fd;
- err = get_unused_fd();
- if (err < 0)
- break;
- new_fd = err;
- err = put_user(new_fd, cmfptr);
- if (err) {
- put_unused_fd(new_fd);
- break;
- }
- /* Bump the usage count and install the file.
*/
- fp[i]->f_count++;
- current->files->fd[new_fd] = fp[i];
- }
-
- if (i > 0) {
- int cmlen = CMSG32_LEN(i * sizeof(int));
- if (!err)
- err = put_user(SOL_SOCKET, &cm->cmsg_level);
- if (!err)
- err = put_user(SCM_RIGHTS, &cm->cmsg_type);
- if (!err)
- err = put_user(cmlen, &cm->cmsg_len);
- if (!err) {
- cmlen = CMSG32_SPACE(i * sizeof(int));
- kmsg->msg_control += cmlen;
- kmsg->msg_controllen -= cmlen;
- }
- }
- if (i < fdnum)
- kmsg->msg_flags |= MSG_CTRUNC;
-
- /*
- * All of the files that fit in the message have had their
- * usage counts incremented, so we just free the list.
- */
- __scm_destroy(scm);
-}
-
-/* In these cases we (currently) can just copy to data over verbatim
- * because all CMSGs created by the kernel have well defined types which
- * have the same layout in both the 32-bit and 64-bit API. One must add
- * some special cased conversions here if we start sending control messages
- * with incompatible types.
- *
- * SCM_RIGHTS and SCM_CREDENTIALS are done by hand in recvmsg32 right after
- * we do our work.
The remaining cases are:
- *
- * SOL_IP IP_PKTINFO struct in_pktinfo 32-bit clean
- * IP_TTL int 32-bit clean
- * IP_TOS __u8 32-bit clean
- * IP_RECVOPTS variable length 32-bit clean
- * IP_RETOPTS variable length 32-bit clean
- * (these last two are clean because the types are defined
- * by the IPv4 protocol)
- * IP_RECVERR struct sock_extended_err +
- * struct sockaddr_in 32-bit clean
- * SOL_IPV6 IPV6_RECVERR struct sock_extended_err +
- * struct sockaddr_in6 32-bit clean
- * IPV6_PKTINFO struct in6_pktinfo 32-bit clean
- * IPV6_HOPLIMIT int 32-bit clean
- * IPV6_FLOWINFO u32 32-bit clean
- * IPV6_HOPOPTS ipv6 hop exthdr 32-bit clean
- * IPV6_DSTOPTS ipv6 dst exthdr(s) 32-bit clean
- * IPV6_RTHDR ipv6 routing exthdr 32-bit clean
- * IPV6_AUTHHDR ipv6 auth exthdr 32-bit clean
- */
-static void
-cmsg32_recvmsg_fixup(struct msghdr *kmsg, unsigned long orig_cmsg_uptr)
-{
- unsigned char *workbuf, *wp;
- unsigned long bufsz, space_avail;
- struct cmsghdr *ucmsg;
-
- bufsz = ((unsigned long)kmsg->msg_control) - orig_cmsg_uptr;
- space_avail = kmsg->msg_controllen + bufsz;
- wp = workbuf = kmalloc(bufsz, GFP_KERNEL);
- if(workbuf == NULL)
- goto fail;
-
- /* To make this more sane we assume the kernel sends back properly
- * formatted control messages. Because of how the kernel will truncate
- * the cmsg_len for MSG_TRUNC cases, we need not check that case either.
- */
- ucmsg = (struct cmsghdr *) orig_cmsg_uptr;
- while(((unsigned long)ucmsg) < ((unsigned long)kmsg->msg_control)) {
- struct cmsghdr32 *kcmsg32 = (struct cmsghdr32 *) wp;
- int clen64, clen32;
-
- /* UCMSG is the 64-bit format CMSG entry in user-space.
- * KCMSG32 is within the kernel space temporary buffer
- * we use to convert into a 32-bit style CMSG.
- */
- __get_user(kcmsg32->cmsg_len, &ucmsg->cmsg_len);
- __get_user(kcmsg32->cmsg_level, &ucmsg->cmsg_level);
- __get_user(kcmsg32->cmsg_type, &ucmsg->cmsg_type);
-
- clen64 = kcmsg32->cmsg_len;
- copy_from_user(CMSG32_DATA(kcmsg32), CMSG_DATA(ucmsg),
- clen64 - CMSG_ALIGN(sizeof(*ucmsg)));
- clen32 = ((clen64 - CMSG_ALIGN(sizeof(*ucmsg))) +
- CMSG32_ALIGN(sizeof(struct cmsghdr32)));
- kcmsg32->cmsg_len = clen32;
-
- ucmsg = (struct cmsghdr *) (((char *)ucmsg) +
- CMSG_ALIGN(clen64));
- wp = (((char *)kcmsg32) + CMSG32_ALIGN(clen32));
- }
-
- /* Copy back fixed up data, and adjust pointers. */
- bufsz = (wp - workbuf);
- copy_to_user((void *)orig_cmsg_uptr, workbuf, bufsz);
-
- kmsg->msg_control = (struct cmsghdr *)
- (((char *)orig_cmsg_uptr) + bufsz);
- kmsg->msg_controllen = space_avail - bufsz;
-
- kfree(workbuf);
- return;
-
-fail:
- /* If we leave the 64-bit format CMSG chunks in there,
- * the application could get confused and crash. So to
- * ensure greater recovery, we report no CMSGs.
- */
- kmsg->msg_controllen += bufsz;
- kmsg->msg_control = (void *) orig_cmsg_uptr;
-}
-
-asmlinkage long
-sys32_sendmsg(int fd, struct msghdr32 *user_msg, unsigned user_flags)
-{
- struct socket *sock;
- char address[MAX_SOCK_ADDR];
- struct iovec iov[UIO_FASTIOV];
- unsigned char ctl[sizeof(struct cmsghdr) + 20];
- unsigned char *ctl_buf = ctl;
- struct msghdr kern_msg;
- int err, total_len;
-
- if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
- return -EFAULT;
- if(kern_msg.msg_iovlen > UIO_MAXIOV)
- return -EINVAL;
- err = verify_iovec32(&kern_msg, iov, address, VERIFY_READ);
- if (err < 0)
- goto out;
- total_len = err;
-
- if(kern_msg.msg_controllen) {
- err = cmsghdr_from_user32_to_kern(&kern_msg, ctl, sizeof(ctl));
- if(err)
- goto out_freeiov;
- ctl_buf = kern_msg.msg_control;
- }
- kern_msg.msg_flags = user_flags;
-
- sock = sockfd_lookup(fd, &err);
- if (sock != NULL) {
- if (sock->file->f_flags & O_NONBLOCK)
- kern_msg.msg_flags |= MSG_DONTWAIT;
- err = sock_sendmsg(sock, &kern_msg, total_len);
- sockfd_put(sock);
- }
-
- /* N.B. Use kfree here, as kern_msg.msg_controllen might change?
*/
- if(ctl_buf != ctl)
- kfree(ctl_buf);
-out_freeiov:
- if(kern_msg.msg_iov != iov)
- kfree(kern_msg.msg_iov);
-out:
- return err;
-}
-
-asmlinkage long
-sys32_recvmsg(int fd, struct msghdr32 *user_msg, unsigned int user_flags)
-{
- struct iovec iovstack[UIO_FASTIOV];
- struct msghdr kern_msg;
- char addr[MAX_SOCK_ADDR];
- struct socket *sock;
- struct iovec *iov = iovstack;
- struct sockaddr *uaddr;
- int *uaddr_len;
- unsigned long cmsg_ptr;
- int err, total_len, len = 0;
-
- if(msghdr_from_user32_to_kern(&kern_msg, user_msg))
- return -EFAULT;
- if(kern_msg.msg_iovlen > UIO_MAXIOV)
- return -EINVAL;
-
- uaddr = kern_msg.msg_name;
- uaddr_len = &user_msg->msg_namelen;
- err = verify_iovec32(&kern_msg, iov, addr, VERIFY_WRITE);
- if (err < 0)
- goto out;
- total_len = err;
-
- cmsg_ptr = (unsigned long) kern_msg.msg_control;
- kern_msg.msg_flags = 0;
-
- sock = sockfd_lookup(fd, &err);
- if (sock != NULL) {
- struct scm_cookie scm;
-
- if (sock->file->f_flags & O_NONBLOCK)
- user_flags |= MSG_DONTWAIT;
- memset(&scm, 0, sizeof(scm));
- lock_kernel();
- err = sock->ops->recvmsg(sock, &kern_msg, total_len,
- user_flags, &scm);
- if(err >= 0) {
- len = err;
- if(!kern_msg.msg_control) {
- if(sock->passcred || scm.fp)
- kern_msg.msg_flags |= MSG_CTRUNC;
- if(scm.fp)
- __scm_destroy(&scm);
- } else {
- /* If recvmsg processing itself placed some
- * control messages into user space, it's is
- * using 64-bit CMSG processing, so we need
- * to fix it up before we tack on more stuff.
- */
- if((unsigned long) kern_msg.msg_control
- != cmsg_ptr)
- cmsg32_recvmsg_fixup(&kern_msg,
- cmsg_ptr);
-
- /* Wheee...
*/
- if(sock->passcred)
- put_cmsg32(&kern_msg,
- SOL_SOCKET, SCM_CREDENTIALS,
- sizeof(scm.creds),
- &scm.creds);
- if(scm.fp != NULL)
- scm_detach_fds32(&kern_msg, &scm);
- }
- }
- unlock_kernel();
- sockfd_put(sock);
- }
-
- if(uaddr != NULL && err >= 0)
- err = move_addr_to_user(addr, kern_msg.msg_namelen, uaddr,
- uaddr_len);
- if(cmsg_ptr != 0 && err >= 0) {
- unsigned long ucmsg_ptr = ((unsigned long)kern_msg.msg_control);
- __kernel_size_t32 uclen = (__kernel_size_t32) (ucmsg_ptr -
- cmsg_ptr);
- err |= __put_user(uclen, &user_msg->msg_controllen);
- }
- if(err >= 0)
- err = __put_user(kern_msg.msg_flags, &user_msg->msg_flags);
- if(kern_msg.msg_iov != iov)
- kfree(kern_msg.msg_iov);
-out:
- if(err < 0)
- return err;
- return len;
-}
-
-extern void check_pending(int signum);
-
-#ifdef CONFIG_MODULES
-
-extern asmlinkage unsigned long sys_create_module(const char *name_user,
- size_t size);
-
-asmlinkage unsigned long
-sys32_create_module(const char *name_user, __kernel_size_t32 size)
-{
- return sys_create_module(name_user, (size_t)size);
-}
-
-extern asmlinkage long sys_init_module(const char *name_user,
- struct module *mod_user);
-
-/* Hey, when you're trying to init module, take time and prepare us a nice 64bit
- * module structure, even if from 32bit modutils... Why to pollute kernel... :))
- */
-asmlinkage long
-sys32_init_module(const char *name_user, struct module *mod_user)
-{
- return sys_init_module(name_user, mod_user);
-}
-
-extern asmlinkage long sys_delete_module(const char *name_user);
-
-asmlinkage long
-sys32_delete_module(const char *name_user)
-{
- return sys_delete_module(name_user);
-}
-
-struct module_info32 {
- u32 addr;
- u32 size;
- u32 flags;
- s32 usecount;
-};
-
-/* Query various bits about modules.
*/
-
-static inline long
-get_mod_name(const char *user_name, char **buf)
-{
- unsigned long page;
- long retval;
-
- if ((unsigned long)user_name >= TASK_SIZE
- && !segment_eq(get_fs (), KERNEL_DS))
- return -EFAULT;
-
- page = __get_free_page(GFP_KERNEL);
- if (!page)
- return -ENOMEM;
-
- retval = strncpy_from_user((char *)page, user_name, PAGE_SIZE);
- if (retval > 0) {
- if (retval < PAGE_SIZE) {
- *buf = (char *)page;
- return retval;
- }
- retval = -ENAMETOOLONG;
- } else if (!retval)
- retval = -EINVAL;
-
- free_page(page);
- return retval;
-}
-
-static inline void
-put_mod_name(char *buf)
-{
- free_page((unsigned long)buf);
-}
-
-static __inline__ struct module *
-find_module(const char *name)
-{
- struct module *mod;
-
- for (mod = module_list; mod ; mod = mod->next) {
- if (mod->flags & MOD_DELETED)
- continue;
- if (!strcmp(mod->name, name))
- break;
- }
-
- return mod;
-}
-
-static int
-qm_modules(char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- struct module *mod;
- size_t nmod, space, len;
-
- nmod = space = 0;
-
- for (mod = module_list; mod->next != NULL; mod = mod->next, ++nmod) {
- len = strlen(mod->name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, mod->name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(nmod, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while ((mod = mod->next)->next != NULL)
- space += strlen(mod->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static int
-qm_deps(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- size_t i, space, len;
-
- if (mod->next == NULL)
- return -EINVAL;
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = 0;
- for (i = 0; i < mod->ndeps; ++i) {
- const char *dep_name =
mod->deps[i].dep->name;
-
- len = strlen(dep_name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, dep_name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(i, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while (++i < mod->ndeps)
- space += strlen(mod->deps[i].dep->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static int
-qm_refs(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
-{
- size_t nrefs, space, len;
- struct module_ref *ref;
-
- if (mod->next == NULL)
- return -EINVAL;
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = 0;
- for (nrefs = 0, ref = mod->refs; ref ; ++nrefs, ref = ref->next_ref) {
- const char *ref_name = ref->ref->name;
-
- len = strlen(ref_name)+1;
- if (len > bufsize)
- goto calc_space_needed;
- if (copy_to_user(buf, ref_name, len))
- return -EFAULT;
- buf += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(nrefs, ret))
- return -EFAULT;
- else
- return 0;
-
-calc_space_needed:
- space += len;
- while ((ref = ref->next_ref) != NULL)
- space += strlen(ref->ref->name)+1;
-
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
-}
-
-static inline int
-qm_symbols(struct module *mod, char *buf, size_t bufsize,
- __kernel_size_t32 *ret)
-{
- size_t i, space, len;
- struct module_symbol *s;
- char *strings;
- unsigned *vals;
-
- if ((mod->flags & (MOD_RUNNING | MOD_DELETED)) != MOD_RUNNING)
- if (put_user(0, ret))
- return -EFAULT;
- else
- return 0;
-
- space = mod->nsyms * 2*sizeof(u32);
-
- i = len = 0;
- s = mod->syms;
-
- if (space > bufsize)
- goto calc_space_needed;
-
- if (!access_ok(VERIFY_WRITE, buf, space))
- return -EFAULT;
-
- bufsize -= space;
- vals = (unsigned *)buf;
- strings = buf+space;
-
- for (; i < mod->nsyms ; ++i, ++s, vals += 2) {
- len = strlen(s->name)+1;
- if (len > bufsize)
- goto calc_space_needed;
-
- if (copy_to_user(strings, s->name, len)
- || __put_user(s->value, vals+0)
- || __put_user(space, vals+1))
- return -EFAULT;
-
- strings += len;
- bufsize -= len;
- space += len;
- }
-
- if (put_user(i, ret))
- return -EFAULT;
- else
- return 0;
+ __kernel_mode_t32 dir_mode;
+};
 
-calc_space_needed:
- for (; i < mod->nsyms; ++i, ++s)
- space += strlen(s->name)+1;
+static void *
+do_smb_super_data_conv(void *raw_data)
+{
+ struct smb_mount_data *s = (struct smb_mount_data *)raw_data;
+ struct smb_mount_data32 *s32 = (struct smb_mount_data32 *)raw_data;
 
- if (put_user(space, ret))
- return -EFAULT;
- else
- return -ENOSPC;
+ s->version = s32->version;
+ s->mounted_uid = s32->mounted_uid;
+ s->uid = s32->uid;
+ s->gid = s32->gid;
+ s->file_mode = s32->file_mode;
+ s->dir_mode = s32->dir_mode;
+ return raw_data;
 }
 
-static inline int
-qm_info(struct module *mod, char *buf, size_t bufsize, __kernel_size_t32 *ret)
+static int
+copy_mount_stuff_to_kernel(const void *user, unsigned long *kernel)
 {
- int error = 0;
-
- if (mod->next == NULL)
- return -EINVAL;
-
- if (sizeof(struct module_info32) <= bufsize) {
- struct module_info32 info;
- info.addr = (unsigned long)mod;
- info.size = mod->size;
- info.flags = mod->flags;
- info.usecount = ((mod_member_present(mod, can_unload)
- && mod->can_unload)
- ?
-1 : atomic_read(&mod->uc.usecount));
-
- if (copy_to_user(buf, &info, sizeof(struct module_info32)))
- return -EFAULT;
- } else
- error = -ENOSPC;
+ int i;
+ unsigned long page;
+ struct vm_area_struct *vma;
 
- if (put_user(sizeof(struct module_info32), ret))
+ *kernel = 0;
+ if(!user)
+ return 0;
+ vma = find_vma(current->mm, (unsigned long)user);
+ if(!vma || (unsigned long)user < vma->vm_start)
 return -EFAULT;
-
- return error;
+ if(!(vma->vm_flags & VM_READ))
+ return -EFAULT;
+ i = vma->vm_end - (unsigned long) user;
+ if(PAGE_SIZE <= (unsigned long) i)
+ i = PAGE_SIZE - 1;
+ if(!(page = __get_free_page(GFP_KERNEL)))
+ return -ENOMEM;
+ if(copy_from_user((void *) page, user, i)) {
+ free_page(page);
+ return -EFAULT;
+ }
+ *kernel = page;
+ return 0;
 }
 
+extern asmlinkage long sys_mount(char * dev_name, char * dir_name, char * type,
+ unsigned long new_flags, void *data);
+
+#define SMBFS_NAME "smbfs"
+#define NCPFS_NAME "ncpfs"
+
 asmlinkage long
-sys32_query_module(char *name_user, int which, char *buf,
- __kernel_size_t32 bufsize, u32 ret)
+sys32_mount(char *dev_name, char *dir_name, char *type,
+ unsigned long new_flags, u32 data)
 {
- struct module *mod;
- int err;
+ unsigned long type_page;
+ int err, is_smb, is_ncp;
 
- lock_kernel();
- if (name_user == 0) {
- /* This finds "kernel_module" which is not exported.
*/
- for(mod = module_list; mod->next != NULL; mod = mod->next)
- ;
+ if(!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ is_smb = is_ncp = 0;
+ err = copy_mount_stuff_to_kernel((const void *)type, &type_page);
+ if(err)
+ return err;
+ if(type_page) {
+ is_smb = !strcmp((char *)type_page, SMBFS_NAME);
+ is_ncp = !strcmp((char *)type_page, NCPFS_NAME);
+ }
+ if(!is_smb && !is_ncp) {
+ if(type_page)
+ free_page(type_page);
+ return sys_mount(dev_name, dir_name, type, new_flags,
+ (void *)AA(data));
 } else {
- long namelen;
- char *name;
+ unsigned long dev_page, dir_page, data_page;
 
- if ((namelen = get_mod_name(name_user, &name)) < 0) {
- err = namelen;
- goto out;
- }
- err = -ENOENT;
- if (namelen == 0) {
- /* This finds "kernel_module" which is not exported. */
- for(mod = module_list;
- mod->next != NULL;
- mod = mod->next) ;
- } else if ((mod = find_module(name)) == NULL) {
- put_mod_name(name);
+ err = copy_mount_stuff_to_kernel((const void *)dev_name,
+ &dev_page);
+ if(err)
 goto out;
- }
- put_mod_name(name);
- }
-
- switch (which)
- {
- case 0:
- err = 0;
- break;
- case QM_MODULES:
- err = qm_modules(buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_DEPS:
- err = qm_deps(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_REFS:
- err = qm_refs(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- case QM_SYMBOLS:
- err = qm_symbols(mod, buf, bufsize,
- (__kernel_size_t32 *)AA(ret));
- break;
- case QM_INFO:
- err = qm_info(mod, buf, bufsize, (__kernel_size_t32 *)AA(ret));
- break;
- default:
- err = -EINVAL;
- break;
+ err = copy_mount_stuff_to_kernel((const void *)dir_name,
+ &dir_page);
+ if(err)
+ goto dev_out;
+ err = copy_mount_stuff_to_kernel((const void *)AA(data),
+ &data_page);
+ if(err)
+ goto dir_out;
+ if(is_ncp)
+ do_ncp_super_data_conv((void *)data_page);
+ else if(is_smb)
+ do_smb_super_data_conv((void *)data_page);
+ else
+ panic("The problem is here...");
+ err = do_mount((char *)dev_page, (char *)dir_page,
+ (char *)type_page, new_flags,
+ (void *)data_page);
+ if(data_page)
+ free_page(data_page);
+ dir_out:
+ if(dir_page)
+ free_page(dir_page);
+ dev_out:
+ if(dev_page)
+ free_page(dev_page);
+ out:
+ if(type_page)
+ free_page(type_page);
+ return err;
 }
-out:
- unlock_kernel();
- return err;
 }
 
-struct kernel_sym32 {
- u32 value;
- char name[60];
-};
-
-extern asmlinkage long sys_get_kernel_syms(struct kernel_sym *table);
+extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
 
-asmlinkage long
-sys32_get_kernel_syms(struct kernel_sym32 *table)
+asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
 {
- int len, i;
- struct kernel_sym *tbl;
- mm_segment_t old_fs;
+ uid_t sruid, seuid;
 
- len = sys_get_kernel_syms(NULL);
- if (!table) return len;
- tbl = kmalloc (len * sizeof (struct kernel_sym), GFP_KERNEL);
- if (!tbl) return -ENOMEM;
- old_fs = get_fs();
- set_fs (KERNEL_DS);
- sys_get_kernel_syms(tbl);
- set_fs (old_fs);
- for (i = 0; i < len; i++, table += sizeof (struct kernel_sym32)) {
- if (put_user (tbl[i].value, &table->value) ||
- copy_to_user (table->name, tbl[i].name, 60))
- break;
- }
- kfree (tbl);
- return i;
+ sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
+ seuid = (euid == (__kernel_uid_t32)-1) ?
((uid_t)-1) : ((uid_t)euid);
+ return sys_setreuid(sruid, seuid);
 }
 
-#else /* CONFIG_MODULES */
-
-asmlinkage unsigned long
-sys32_create_module(const char *name_user, size_t size)
-{
- return -ENOSYS;
-}
+extern asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
 
 asmlinkage long
-sys32_init_module(const char *name_user, struct module *mod_user)
+sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
+ __kernel_uid_t32 suid)
 {
- return -ENOSYS;
-}
+ uid_t sruid, seuid, ssuid;
 
-asmlinkage long
-sys32_delete_module(const char *name_user)
-{
- return -ENOSYS;
+ sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
+ seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
+ ssuid = (suid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)suid);
+ return sys_setresuid(sruid, seuid, ssuid);
 }
 
+extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
+
 asmlinkage long
-sys32_query_module(const char *name_user, int which, char *buf, size_t bufsize,
- size_t *ret)
+sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
 {
- /* Let the program know about the new interface. Not that
- it'll do them much good. */
- if (which == 0)
- return 0;
+ gid_t srgid, segid;
 
- return -ENOSYS;
+ srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
+ segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
+ return sys_setregid(srgid, segid);
 }
 
+extern asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
+
 asmlinkage long
-sys32_get_kernel_syms(struct kernel_sym *table)
+sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
+ __kernel_gid_t32 sgid)
 {
- return -ENOSYS;
-}
+ gid_t srgid, segid, ssgid;
 
-#endif /* CONFIG_MODULES */
+ srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
+ segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
+ ssgid = (sgid == (__kernel_gid_t32)-1) ?
((gid_t)-1) : ((gid_t)sgid); + return sys_setresgid(srgid, segid, ssgid); +} =20 /* Stuff for NFS server syscalls... */ struct nfsctl_svc32 { @@ -4820,154 +4293,6 @@ return err; } =20 -asmlinkage long sys_utimes(char *, struct timeval *); - -asmlinkage long -sys32_utimes(char *filename, struct timeval32 *tvs) -{ - char *kfilename; - struct timeval ktvs[2]; - mm_segment_t old_fs; - int ret; - - kfilename =3D getname32(filename); - ret =3D PTR_ERR(kfilename); - if (!IS_ERR(kfilename)) { - if (tvs) { - if (get_tv32(&ktvs[0], tvs) || - get_tv32(&ktvs[1], 1+tvs)) - return -EFAULT; - } - - old_fs =3D get_fs(); - set_fs(KERNEL_DS); - ret =3D sys_utimes(kfilename, &ktvs[0]); - set_fs(old_fs); - - putname(kfilename); - } - return ret; -} - -/* These are here just in case some old ia32 binary calls it. */ -asmlinkage long -sys32_pause(void) -{ - current->state =3D TASK_INTERRUPTIBLE; - schedule(); - return -ERESTARTNOHAND; -} - -/* PCI config space poking. */ -extern asmlinkage long sys_pciconfig_read(unsigned long bus, - unsigned long dfn, - unsigned long off, - unsigned long len, - unsigned char *buf); - -extern asmlinkage long sys_pciconfig_write(unsigned long bus, - unsigned long dfn, - unsigned long off, - unsigned long len, - unsigned char *buf); - -asmlinkage long -sys32_pciconfig_read(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf) -{ - return sys_pciconfig_read((unsigned long) bus, - (unsigned long) dfn, - (unsigned long) off, - (unsigned long) len, - (unsigned char *)AA(ubuf)); -} - -asmlinkage long -sys32_pciconfig_write(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf) -{ - return sys_pciconfig_write((unsigned long) bus, - (unsigned long) dfn, - (unsigned long) off, - (unsigned long) len, - (unsigned char *)AA(ubuf)); -} - -extern asmlinkage long sys_prctl(int option, unsigned long arg2, - unsigned long arg3, unsigned long arg4, - unsigned long arg5); - -asmlinkage long -sys32_prctl(int option, u32 arg2, u32 arg3, u32 arg4, u32 arg5) -{ - return sys_prctl(option, - 
(unsigned long) arg2, - (unsigned long) arg3, - (unsigned long) arg4, - (unsigned long) arg5); -} - - -extern asmlinkage ssize_t sys_pread(unsigned int fd, char * buf, - size_t count, loff_t pos); - -extern asmlinkage ssize_t sys_pwrite(unsigned int fd, const char * buf, - size_t count, loff_t pos); - -typedef __kernel_ssize_t32 ssize_t32; - -asmlinkage ssize_t32 -sys32_pread(unsigned int fd, char *ubuf, __kernel_size_t32 count, - u32 poshi, u32 poslo) -{ - return sys_pread(fd, ubuf, count, - ((loff_t)AA(poshi) << 32) | AA(poslo)); -} - -asmlinkage ssize_t32 -sys32_pwrite(unsigned int fd, char *ubuf, __kernel_size_t32 count, - u32 poshi, u32 poslo) -{ - return sys_pwrite(fd, ubuf, count, - ((loff_t)AA(poshi) << 32) | AA(poslo)); -} - - -extern asmlinkage long sys_personality(unsigned long); - -asmlinkage long -sys32_personality(unsigned long personality) -{ - int ret; - if (current->personality =3D=3D PER_LINUX32 && personality =3D=3D PER_LINUX) - personality =3D PER_LINUX32; - ret =3D sys_personality(personality); - if (ret =3D=3D PER_LINUX32) - ret =3D PER_LINUX; - return ret; -} - -extern asmlinkage ssize_t sys_sendfile(int out_fd, int in_fd, off_t *offse= t, - size_t count); - -asmlinkage long -sys32_sendfile(int out_fd, int in_fd, __kernel_off_t32 *offset, s32 count) -{ - mm_segment_t old_fs =3D get_fs(); - int ret; - off_t of; - - if (offset && get_user(of, offset)) - return -EFAULT; - - set_fs(KERNEL_DS); - ret =3D sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count); - set_fs(old_fs); - - if (!ret && offset && put_user(of, offset)) - return -EFAULT; - - return ret; -} - /* Handle adjtimex compatability.
*/ =20 struct timex32 { @@ -5041,4 +4366,4 @@ =20 return ret; } -#endif // NOTYET +#endif /* NOTYET */ diff -urN linux-2.4.13/arch/ia64/kernel/Makefile linux-2.4.13-lia/arch/ia64= /kernel/Makefile --- linux-2.4.13/arch/ia64/kernel/Makefile Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/Makefile Wed Oct 10 17:55:55 2001 @@ -16,7 +16,7 @@ obj-y :=3D acpi.o entry.o gate.o efi.o efi_stub.o ia64_ksyms.o irq.o irq_i= a64.o irq_lsapic.o ivt.o \ machvec.o pal.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o \ signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o -obj-$(CONFIG_IA64_GENERIC) +=3D machvec.o iosapic.o +obj-$(CONFIG_IA64_GENERIC) +=3D iosapic.o obj-$(CONFIG_IA64_DIG) +=3D iosapic.o obj-$(CONFIG_IA64_PALINFO) +=3D palinfo.o obj-$(CONFIG_EFI_VARS) +=3D efivars.o diff -urN linux-2.4.13/arch/ia64/kernel/acpi.c linux-2.4.13-lia/arch/ia64/k= ernel/acpi.c --- linux-2.4.13/arch/ia64/kernel/acpi.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/acpi.c Thu Oct 4 00:21:39 2001 @@ -9,7 +9,7 @@ * Copyright (C) 2000 Hewlett-Packard Co. * Copyright (C) 2000 David Mosberger-Tang * Copyright (C) 2000 Intel Corp. - * Copyright (C) 2000 J.I. Lee + * Copyright (C) 2000,2001 J.I. Lee * ACPI based kernel configuration manager. * ACPI 2.0 & IA64 ext 0.71 */ @@ -34,6 +34,9 @@ =20 #undef ACPI_DEBUG /* Guess what this does? 
*/ =20 +/* global array to record platform interrupt vectors for generic int routi= ng */ +int platform_irq_list[ACPI_MAX_PLATFORM_IRQS]; + /* These are ugly but will be reclaimed by the kernel */ int __initdata available_cpus; int __initdata total_cpus; @@ -42,7 +45,9 @@ void (*pm_power_off) (void); =20 asm (".weak iosapic_register_legacy_irq"); +asm (".weak iosapic_register_platform_irq"); asm (".weak iosapic_init"); +asm (".weak iosapic_version"); =20 const char * acpi_get_sysname (void) @@ -55,6 +60,8 @@ return "hpsim"; # elif defined (CONFIG_IA64_SGI_SN1) return "sn1"; +# elif defined (CONFIG_IA64_SGI_SN2) + return "sn2"; # elif defined (CONFIG_IA64_DIG) return "dig"; # else @@ -65,6 +72,25 @@ } =20 /* + * Interrupt routing API for device drivers. + * Provides the interrupt vector for a generic platform event + * (currently only CPEI implemented) + */ +int +acpi_request_vector(u32 int_type) +{ + int vector =3D -1; + + if (int_type < ACPI_MAX_PLATFORM_IRQS) { + /* correctable platform error interrupt */ + vector =3D platform_irq_list[int_type]; + } else + printk("acpi_request_vector(): invalid interrupt type\n"); + + return vector; +} + +/* * Configure legacy IRQ information. */ static void __init @@ -139,15 +165,93 @@ } =20 /* - * Info on platform interrupt sources: NMI. PMI, INIT, etc. 
+ * Extract iosapic info from madt (again) to determine which iosapic + * this platform interrupt resides in + */ +static int __init +acpi20_which_iosapic (int global_vector, acpi_madt_t *madt, u32 *irq_base,= char **iosapic_address) +{ + acpi_entry_iosapic_t *iosapic; + char *p, *end; + int ver, max_pin; + + p =3D (char *) (madt + 1); + end =3D p + (madt->header.length - sizeof(acpi_madt_t)); + + while (p < end) { + switch (*p) { + case ACPI20_ENTRY_IO_SAPIC: + /* collect IOSAPIC info for platform int use later */ + iosapic =3D (acpi_entry_iosapic_t *)p; + *irq_base =3D iosapic->irq_base; + *iosapic_address =3D ioremap(iosapic->address, 0); + /* is this the iosapic we're looking for? */ + ver =3D iosapic_version(*iosapic_address); + max_pin =3D (ver >> 16) & 0xff; + if ((global_vector - *irq_base) <=3D max_pin) + return 0; /* found it! */ + break; + default: + break; + } + p +=3D p[1]; + } + return 1; +} + +/* + * Info on platform interrupt sources: NMI, PMI, INIT, etc. */ static void __init -acpi20_platform (char *p) +acpi20_platform (char *p, acpi_madt_t *madt) { + int vector; + u32 irq_base; + char *iosapic_address; + unsigned long polarity =3D 0, trigger =3D 0; acpi20_entry_platform_src_t *plat =3D (acpi20_entry_platform_src_t *) p; =20 printk("PLATFORM: IOSAPIC %x -> Vector %x on CPU %.04u:%.04u\n", plat->iosapic_vector, plat->global_vector, plat->eid, plat->id); + + /* record platform interrupt vectors for generic int routing code */ + + if (!iosapic_register_platform_irq) { + printk("acpi20_platform(): no ACPI platform IRQ support\n"); + return; + } + + /* extract polarity and trigger info from flags */ + switch (plat->flags) { + case 0x5: polarity =3D 1; trigger =3D 1; break; + case 0x7: polarity =3D 0; trigger =3D 1; break; + case 0xd: polarity =3D 1; trigger =3D 0; break; + case 0xf: polarity =3D 0; trigger =3D 0; break; + default: + printk("acpi20_platform(): unknown flags 0x%x\n", plat->flags); + break; + } + + /* which iosapic does this IRQ belong to? 
*/ + if (acpi20_which_iosapic(plat->global_vector, madt, &irq_base, &iosapic_a= ddress)) { + printk("acpi20_platform(): I/O SAPIC not found!\n"); + return; + } + + /* + * get vector assignment for this IRQ, set attributes, and program the IO= SAPIC + * routing table + */ + vector =3D iosapic_register_platform_irq(plat->int_type, + plat->global_vector, + plat->iosapic_vector, + plat->eid, + plat->id, + polarity, + trigger, + irq_base, + iosapic_address); + platform_irq_list[plat->int_type] =3D vector; } =20 /* @@ -173,8 +277,10 @@ static void __init acpi20_parse_madt (acpi_madt_t *madt) { - acpi_entry_iosapic_t *iosapic; + acpi_entry_iosapic_t *iosapic =3D NULL; + acpi20_entry_lsapic_t *lsapic =3D NULL; char *p, *end; + int i; =20 /* Base address of IPI Message Block */ if (madt->lapic_address) { @@ -186,23 +292,27 @@ p =3D (char *) (madt + 1); end =3D p + (madt->header.length - sizeof(acpi_madt_t)); =20 + /* Initialize platform interrupt vector array */ + for (i =3D 0; i < ACPI_MAX_PLATFORM_IRQS; i++) + platform_irq_list[i] =3D -1; + /* - * Splitted entry parsing to ensure ordering. + * Split-up entry parsing to ensure ordering. 
*/ - while (p < end) { switch (*p) { - case ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE: + case ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE: printk("ACPI 2.0 MADT: LOCAL APIC Override\n"); acpi20_lapic_addr_override(p); break; =20 - case ACPI20_ENTRY_LOCAL_SAPIC: + case ACPI20_ENTRY_LOCAL_SAPIC: printk("ACPI 2.0 MADT: LOCAL SAPIC\n"); + lsapic =3D (acpi20_entry_lsapic_t *) p; acpi20_lsapic(p); break; =20 - case ACPI20_ENTRY_IO_SAPIC: + case ACPI20_ENTRY_IO_SAPIC: iosapic =3D (acpi_entry_iosapic_t *) p; if (iosapic_init) /* @@ -218,26 +328,25 @@ ); break; =20 - case ACPI20_ENTRY_PLATFORM_INT_SOURCE: + case ACPI20_ENTRY_PLATFORM_INT_SOURCE: printk("ACPI 2.0 MADT: PLATFORM INT SOURCE\n"); - acpi20_platform(p); + acpi20_platform(p, madt); break; =20 - case ACPI20_ENTRY_LOCAL_APIC: + case ACPI20_ENTRY_LOCAL_APIC: printk("ACPI 2.0 MADT: LOCAL APIC entry\n"); break; - case ACPI20_ENTRY_IO_APIC: + case ACPI20_ENTRY_IO_APIC: printk("ACPI 2.0 MADT: IO APIC entry\n"); break; - case ACPI20_ENTRY_NMI_SOURCE: + case ACPI20_ENTRY_NMI_SOURCE: printk("ACPI 2.0 MADT: NMI SOURCE entry\n"); break; - case ACPI20_ENTRY_LOCAL_APIC_NMI: + case ACPI20_ENTRY_LOCAL_APIC_NMI: printk("ACPI 2.0 MADT: LOCAL APIC NMI entry\n"); break; - case ACPI20_ENTRY_INT_SRC_OVERRIDE: + case ACPI20_ENTRY_INT_SRC_OVERRIDE: break; - default: + default: printk("ACPI 2.0 MADT: unknown entry skip\n"); break; break; } - p +=3D p[1]; } =20 @@ -245,16 +354,35 @@ end =3D p + (madt->header.length - sizeof(acpi_madt_t)); =20 while (p < end) { + switch (*p) { + case ACPI20_ENTRY_LOCAL_APIC: + if (lsapic) break; + printk("ACPI 2.0 MADT: LOCAL APIC entry\n"); + /* parse local apic if there's no local Sapic */ + break; + case ACPI20_ENTRY_IO_APIC: + if (iosapic) break; + printk("ACPI 2.0 MADT: IO APIC entry\n"); + /* parse ioapic if there's no ioSapic */ + break; + default: + break; + } + p +=3D p[1]; + } =20 + p =3D (char *) (madt + 1); + end =3D p + (madt->header.length - sizeof(acpi_madt_t)); + + while (p < end) { switch (*p) { - 
case ACPI20_ENTRY_INT_SRC_OVERRIDE: + case ACPI20_ENTRY_INT_SRC_OVERRIDE: printk("ACPI 2.0 MADT: INT SOURCE Override\n"); acpi_legacy_irq(p); break; - default: + default: break; } - p +=3D p[1]; } =20 diff -urN linux-2.4.13/arch/ia64/kernel/efi.c linux-2.4.13-lia/arch/ia64/ke= rnel/efi.c --- linux-2.4.13/arch/ia64/kernel/efi.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/efi.c Thu Oct 4 00:21:39 2001 @@ -482,5 +482,7 @@ static void __exit efivars_exit(void) { +#ifdef CONFIG_PROC_FS remove_proc_entry(efi_dir->name, NULL); +#endif } diff -urN linux-2.4.13/arch/ia64/kernel/efi_stub.S linux-2.4.13-lia/arch/ia= 64/kernel/efi_stub.S --- linux-2.4.13/arch/ia64/kernel/efi_stub.S Thu Apr 5 12:51:47 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/efi_stub.S Thu Oct 4 00:21:39 2001 @@ -1,8 +1,8 @@ /* * EFI call stub. * - * Copyright (C) 1999-2000 Hewlett-Packard Co - * Copyright (C) 1999-2000 David Mosberger + * Copyright (C) 1999-2001 Hewlett-Packard Co + * David Mosberger * * This stub allows us to make EFI calls in physical mode with interrupts * turned off. 
We need this because we can't call SetVirtualMap() until @@ -68,17 +68,17 @@ ;; andcm r16=3Dloc3,r16 // get psr with IT, DT, and RT bits cleared mov out3=3Din4 - br.call.sptk.few rp=3Dia64_switch_mode + br.call.sptk.many rp=3Dia64_switch_mode .ret0: mov out4=3Din5 mov out5=3Din6 mov out6=3Din7 - br.call.sptk.few rp=3Db6 // call the EFI function + br.call.sptk.many rp=3Db6 // call the EFI function .ret1: mov ar.rsc=3D0 // put RSE in enforced lazy, LE mode mov r16=3Dloc3 - br.call.sptk.few rp=3Dia64_switch_mode // return to virtual mode + br.call.sptk.many rp=3Dia64_switch_mode // return to virtual mode .ret2: mov ar.rsc=3Dloc4 // restore RSE configuration mov ar.pfs=3Dloc1 mov rp=3Dloc0 mov gp=3Dloc2 - br.ret.sptk.few rp + br.ret.sptk.many rp END(efi_call_phys) diff -urN linux-2.4.13/arch/ia64/kernel/efivars.c linux-2.4.13-lia/arch/ia6= 4/kernel/efivars.c --- linux-2.4.13/arch/ia64/kernel/efivars.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/efivars.c Wed Oct 10 17:40:37 2001 @@ -65,6 +65,7 @@ =20 MODULE_AUTHOR("Matt Domsch "); MODULE_DESCRIPTION("/proc interface to EFI Variables"); +MODULE_LICENSE("GPL"); =20 #define EFIVARS_VERSION "0.03 2001-Apr-20" =20 @@ -276,21 +277,20 @@ if (!capable(CAP_SYS_ADMIN)) return -EACCES; =20 - spin_lock(&efivars_lock); MOD_INC_USE_COUNT; =20 var_data =3D kmalloc(size, GFP_KERNEL); if (!var_data) { MOD_DEC_USE_COUNT; - spin_unlock(&efivars_lock); return -ENOMEM; } if (copy_from_user(var_data, buffer, size)) { MOD_DEC_USE_COUNT; - spin_unlock(&efivars_lock); + kfree(var_data); return -EFAULT; } =20 + spin_lock(&efivars_lock); =20 /* Since the data ptr we've currently got is probably for a different variable find the right variable. diff -urN linux-2.4.13/arch/ia64/kernel/entry.S linux-2.4.13-lia/arch/ia64/= kernel/entry.S --- linux-2.4.13/arch/ia64/kernel/entry.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/entry.S Wed Oct 24 18:13:32 2001 @@ -4,7 +4,7 @@ * Kernel entry points.
* * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang + * David Mosberger-Tang * Copyright (C) 1999 VA Linux Systems * Copyright (C) 1999 Walt Drummond * Copyright (C) 1999 Asit Mallick @@ -15,7 +15,7 @@ * kernel stack. This allows us to handle interrupts without changing * to physical mode. * - * Jonathan Nickin + * Jonathan Nicklin * Patrick O'Rourke * 11/07/2000 / @@ -55,7 +55,7 @@ mov out1=3Din1 // argv mov out2=3Din2 // envp add out3=3D16,sp // regs - br.call.sptk.few rp=3Dsys_execve + br.call.sptk.many rp=3Dsys_execve .ret0: cmp4.ge p6,p7=3Dr8,r0 mov ar.pfs=3Dloc1 // restore ar.pfs sxt4 r8=3Dr8 // return 64-bit result @@ -64,7 +64,7 @@ (p6) cmp.ne pKern,pUser=3Dr0,r0 // a successful execve() lands us in user-= mode... mov rp=3Dloc0 (p6) mov ar.pfs=3Dr0 // clear ar.pfs on success -(p7) br.ret.sptk.few rp +(p7) br.ret.sptk.many rp =20 /* * In theory, we'd have to zap this state only to prevent leaking of @@ -85,7 +85,7 @@ ldf.fill f26=3D[sp]; ldf.fill f27=3D[sp]; mov f28=3Df0 ldf.fill f29=3D[sp]; ldf.fill f30=3D[sp]; mov f31=3Df0 mov ar.lc=3D0 - br.ret.sptk.few rp + br.ret.sptk.many rp END(ia64_execve) =20 GLOBAL_ENTRY(sys_clone2) @@ -99,7 +99,7 @@ mov out3=3Din2 adds out2=3DIA64_SWITCH_STACK_SIZE+16,sp // out2 =3D &regs mov out0=3Din0 // out0 =3D clone_flags - br.call.sptk.few rp=3Ddo_fork + br.call.sptk.many rp=3Ddo_fork .ret1: .restore sp adds sp=3DIA64_SWITCH_STACK_SIZE,sp // pop the switch stack mov ar.pfs=3Dloc1 @@ -118,7 +118,7 @@ mov out3=3D0 adds out2=3DIA64_SWITCH_STACK_SIZE+16,sp // out2 =3D &regs mov out0=3Din0 // out0 =3D clone_flags - br.call.sptk.few rp=3Ddo_fork + br.call.sptk.many rp=3Ddo_fork .ret2: .restore sp adds sp=3DIA64_SWITCH_STACK_SIZE,sp // pop the switch stack mov ar.pfs=3Dloc1 @@ -143,7 +143,7 @@ shr.u r26=3Dr20,KERNEL_PG_SHIFT mov r16=3DKERNEL_PG_NUM ;; - cmp.ne p6,p7=3Dr26,r16 // check >=3D 64M && < 128M + cmp.ne p6,p7=3Dr26,r16 // check whether r26 !=3D KERNEL_PG_NUM adds
r21=3DIA64_TASK_THREAD_KSP_OFFSET,in0 ;; /* @@ -151,12 +151,13 @@ * again. */ (p6) cmp.eq p7,p6=3Dr26,r27 -(p6) br.cond.dpnt.few .map +(p6) br.cond.dpnt .map ;; -.done: ld8 sp=3D[r21] // load kernel stack pointer of new task +.done: (p6) ssm psr.ic // if we had to map, re-enable the psr.ic bit FIRST!!! ;; (p6) srlz.d + ld8 sp=3D[r21] // load kernel stack pointer of new task mov IA64_KR(CURRENT)=3Dr20 // update "current" application register mov r8=3Dr13 // return pointer to previously running task mov r13=3Din0 // set "current" pointer @@ -167,7 +168,7 @@ #ifdef CONFIG_SMP sync.i // ensure "fc"s done by this CPU are visible on other CPUs #endif - br.ret.sptk.few rp // boogie on out in new context + br.ret.sptk.many rp // boogie on out in new context =20 .map: rsm psr.i | psr.ic @@ -184,7 +185,7 @@ mov IA64_KR(CURRENT_STACK)=3Dr26 // remember last page we mapped... ;; itr.d dtr[r25]=3Dr23 // wire in new mapping... - br.cond.sptk.many .done + br.cond.sptk .done END(ia64_switch_to) =20 /* @@ -212,24 +213,18 @@ .save @priunat,r17 mov r17=3Dar.unat // preserve caller's .body -#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPE= CIFIC)) adds r3=3D80,sp ;; lfetch.fault.excl.nt1 [r3],128 -#endif mov ar.rsc=3D0 // put RSE in mode: enforced lazy, little endian, pl 0 -#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPE= CIFIC)) adds r2=3D16+128,sp ;; lfetch.fault.excl.nt1 [r2],128 lfetch.fault.excl.nt1 [r3],128 -#endif adds r14=3DSW(R4)+16,sp -#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPE= CIFIC)) ;; lfetch.fault.excl [r2] lfetch.fault.excl [r3] -#endif adds r15=3DSW(R5)+16,sp ;; mov r18=3Dar.fpsr // preserve fpsr @@ -309,7 +304,7 @@ st8 [r2]=3Dr20 // save ar.bspstore st8 [r3]=3Dr21 // save predicate registers mov ar.rsc=3D3 // put RSE back into eager mode, pl 0 - br.cond.sptk.few b7 + br.cond.sptk.many b7 END(save_switch_stack) =20 /* @@ -321,11 +316,9 @@ ENTRY(load_switch_stack) .prologue .altrp b7 -
.body -#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPE= CIFIC)) =20 + .body lfetch.fault.nt1 [sp] -#endif adds r2=3DSW(AR_BSPSTORE)+16,sp adds r3=3DSW(AR_UNAT)+16,sp mov ar.rsc=3D0 // put RSE into enforced lazy mode @@ -426,7 +419,7 @@ ;; (p6) st4 [r2]=3Dr8 (p6) mov r8=3D-1 - br.ret.sptk.few rp + br.ret.sptk.many rp END(__ia64_syscall) =20 /* @@ -441,11 +434,11 @@ .body mov loc2=3Db6 ;; - br.call.sptk.few rp=3Dsyscall_trace + br.call.sptk.many rp=3Dsyscall_trace .ret3: mov rp=3Dloc0 mov ar.pfs=3Dloc1 mov b6=3Dloc2 - br.ret.sptk.few rp + br.ret.sptk.many rp END(invoke_syscall_trace) =20 /* @@ -462,21 +455,21 @@ =20 GLOBAL_ENTRY(ia64_trace_syscall) PT_REGS_UNWIND_INFO(0) - br.call.sptk.few rp=3Dinvoke_syscall_trace // give parent a chance to cat= ch syscall args -.ret6: br.call.sptk.few rp=3Db6 // do the syscall + br.call.sptk.many rp=3Dinvoke_syscall_trace // give parent a chance to ca= tch syscall args +.ret6: br.call.sptk.many rp=3Db6 // do the syscall strace_check_retval: cmp.lt p6,p0=3Dr8,r0 // syscall failed?
adds r2=3DPT(R8)+16,sp // r2 =3D &pt_regs.r8 adds r3=3DPT(R10)+16,sp // r3 =3D &pt_regs.r10 mov r10=3D0 -(p6) br.cond.sptk.few strace_error // syscall failed -> +(p6) br.cond.sptk strace_error // syscall failed -> ;; // avoid RAW on r10 strace_save_retval: .mem.offset 0,0; st8.spill [r2]=3Dr8 // store return value in slot for r8 .mem.offset 8,0; st8.spill [r3]=3Dr10 // clear error indication in slot fo= r r10 ia64_strace_leave_kernel: - br.call.sptk.few rp=3Dinvoke_syscall_trace // give parent a chance to cat= ch return value -.rety: br.cond.sptk.many ia64_leave_kernel + br.call.sptk.many rp=3Dinvoke_syscall_trace // give parent a chance to ca= tch return value +.rety: br.cond.sptk ia64_leave_kernel =20 strace_error: ld8 r3=3D[r2] // load pt_regs.r8 @@ -487,7 +480,7 @@ ;; (p6) mov r10=3D-1 (p6) mov r8=3Dr9 - br.cond.sptk.few strace_save_retval + br.cond.sptk strace_save_retval END(ia64_trace_syscall) =20 GLOBAL_ENTRY(ia64_ret_from_clone) @@ -497,7 +490,7 @@ * Called by ia64_switch_to after do_fork()->copy_thread(). r8 contains = the * address of the previously executing task. 
*/ - br.call.sptk.few rp=3Dinvoke_schedule_tail + br.call.sptk.many rp=3Dia64_invoke_schedule_tail .ret8: adds r2=3DIA64_TASK_PTRACE_OFFSET,r13 ;; @@ -505,7 +498,7 @@ ;; mov r8=3D0 tbit.nz p6,p0=3Dr2,PT_TRACESYS_BIT -(p6) br strace_check_retval +(p6) br.cond.spnt strace_check_retval ;; // added stop bits to prevent r8 dependency END(ia64_ret_from_clone) // fall through @@ -519,7 +512,7 @@ (p6) st8.spill [r2]=3Dr8 // store return value in slot for r8 and set unat= bit .mem.offset 8,0 (p6) st8.spill [r3]=3Dr0 // clear error indication in slot for r10 and set= unat bit -(p7) br.cond.spnt.few handle_syscall_error // handle potential syscall fai= lure +(p7) br.cond.spnt handle_syscall_error // handle potential syscall failure END(ia64_ret_from_syscall) // fall through GLOBAL_ENTRY(ia64_leave_kernel) @@ -527,22 +520,22 @@ lfetch.fault [sp] movl r14=3D.restart ;; - MOVBR(.ret.sptk,rp,r14,.restart) + mov.ret.sptk rp=3Dr14,.restart .restart: adds r17=3DIA64_TASK_NEED_RESCHED_OFFSET,r13 adds r18=3DIA64_TASK_SIGPENDING_OFFSET,r13 #ifdef CONFIG_PERFMON - adds r19=3DIA64_TASK_PFM_NOTIFY_OFFSET,r13 + adds r19=3DIA64_TASK_PFM_MUST_BLOCK_OFFSET,r13 #endif ;; #ifdef CONFIG_PERFMON - ld8 r19=3D[r19] // load current->task.pfm_notify +(pUser) ld8 r19=3D[r19] // load current->thread.pfm_must_block #endif - ld8 r17=3D[r17] // load current->need_resched - ld4 r18=3D[r18] // load current->sigpending +(pUser) ld8 r17=3D[r17] // load current->need_resched +(pUser) ld4 r18=3D[r18] // load current->sigpending ;; #ifdef CONFIG_PERFMON - cmp.ne p9,p0=3Dr19,r0 // current->task.pfm_notify !=3D 0? +(pUser) cmp.ne.unc p9,p0=3Dr19,r0 // current->thread.pfm_must_block !=3D= 0? #endif (pUser) cmp.ne.unc p7,p0=3Dr17,r0 // current->need_resched !=3D 0? (pUser) cmp.ne.unc p8,p0=3Dr18,r0 // current->sigpending !=3D 0? 
@@ -550,7 +543,7 @@ adds r2=3DPT(R8)+16,r12 adds r3=3DPT(R9)+16,r12 #ifdef CONFIG_PERFMON -(p9) br.call.spnt.many b7=3Dpfm_overflow_notify +(p9) br.call.spnt.many b7=3Dpfm_block_on_overflow #endif #if __GNUC__ < 3 (p7) br.call.spnt.many b7=3Dinvoke_schedule @@ -650,13 +643,13 @@ movl r17=3DPERCPU_ADDR+IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET ;; ld4 r17=3D[r17] // r17 =3D cpu_data->phys_stacked_size_p8 -(pKern) br.cond.dpnt.few skip_rbs_switch +(pKern) br.cond.dpnt skip_rbs_switch /* * Restore user backing store. * * NOTE: alloc, loadrs, and cover can't be predicated. */ -(pNonSys) br.cond.dpnt.few dont_preserve_current_frame +(pNonSys) br.cond.dpnt dont_preserve_current_frame cover // add current frame into dirty partition ;; mov r19=3Dar.bsp // get new backing store pointer @@ -687,7 +680,7 @@ shladd in0=3Dloc1,3,r17 mov in1=3D0 ;; - .align 32 +// .align 32 // gas-2.11.90 is unable to generate a stop bit after .align rse_clear_invalid: // cycle 0 { .mii @@ -706,7 +699,7 @@ }{ .mib mov loc3=3D0 mov loc4=3D0 -(pRecurse) br.call.sptk.few b6=3Drse_clear_invalid +(pRecurse) br.call.sptk.many b6=3Drse_clear_invalid =20 }{ .mfi // cycle 2 mov loc5=3D0 @@ -715,7 +708,7 @@ }{ .mib mov loc6=3D0 mov loc7=3D0 -(pReturn) br.ret.sptk.few b6 +(pReturn) br.ret.sptk.many b6 } # undef pRecurse # undef pReturn @@ -761,24 +754,24 @@ ;; .mem.offset 0,0; st8.spill [r2]=3Dr9 // store errno in pt_regs.r8 and set = unat bit .mem.offset 8,0; st8.spill [r3]=3Dr10 // store error indication in pt_regs= .r10 and set unat bit - br.cond.sptk.many ia64_leave_kernel + br.cond.sptk ia64_leave_kernel END(handle_syscall_error) =20 /* * Invoke schedule_tail(task) while preserving in0-in7, which may be need= ed * in case a system call gets restarted. 
*/ -ENTRY(invoke_schedule_tail) +GLOBAL_ENTRY(ia64_invoke_schedule_tail) .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8) alloc loc1=3Dar.pfs,8,2,1,0 mov loc0=3Drp mov out0=3Dr8 // Address of previous task ;; - br.call.sptk.few rp=3Dschedule_tail + br.call.sptk.many rp=3Dschedule_tail .ret11: mov ar.pfs=3Dloc1 mov rp=3Dloc0 br.ret.sptk.many rp -END(invoke_schedule_tail) +END(ia64_invoke_schedule_tail) =20 #if __GNUC__ < 3 =20 @@ -797,7 +790,7 @@ mov loc0=3Drp ;; .body - br.call.sptk.few rp=3Dschedule + br.call.sptk.many rp=3Dschedule .ret14: mov ar.pfs=3Dloc1 mov rp=3Dloc0 br.ret.sptk.many rp @@ -824,7 +817,7 @@ .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!) st8 [sp]=3Dr9,-16 // allocate space for ar.unat and save it .body - br.call.sptk.few rp=3Dia64_do_signal + br.call.sptk.many rp=3Dia64_do_signal .ret15: .restore sp adds sp=3D16,sp // pop scratch stack space ;; @@ -849,7 +842,7 @@ .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!) st8 [sp]=3Dr9,-16 // allocate space for ar.unat and save it .body - br.call.sptk.few rp=3Dia64_rt_sigsuspend + br.call.sptk.many rp=3Dia64_rt_sigsuspend .ret17: .restore sp adds sp=3D16,sp // pop scratch stack space ;; @@ -871,15 +864,15 @@ cmp.eq pNonSys,pSys=3Dr0,r0 // sigreturn isn't a normal syscall...
;; adds out0=3D16,sp // out0 =3D &sigscratch - br.call.sptk.few rp=3Dia64_rt_sigreturn + br.call.sptk.many rp=3Dia64_rt_sigreturn .ret19: .restore sp 0 adds sp=3D16,sp ;; ld8 r9=3D[sp] // load new ar.unat - MOVBR(.sptk,b7,r8,ia64_leave_kernel) + mov.sptk b7=3Dr8,ia64_leave_kernel ;; mov ar.unat=3Dr9 - br b7 + br.many b7 END(sys_rt_sigreturn) =20 GLOBAL_ENTRY(ia64_prepare_handle_unaligned) @@ -890,7 +883,7 @@ mov r16=3Dr0 .prologue DO_SAVE_SWITCH_STACK - br.call.sptk.few rp=3Dia64_handle_unaligned // stack frame setup in ivt + br.call.sptk.many rp=3Dia64_handle_unaligned // stack frame setup in ivt .ret21: .body DO_LOAD_SWITCH_STACK br.cond.sptk.many rp // goes to ia64_leave_kernel @@ -920,14 +913,14 @@ adds out0=3D16,sp // &info mov out1=3Dr13 // current adds out2=3D16+EXTRA_FRAME_SIZE,sp // &switch_stack - br.call.sptk.few rp=3Dunw_init_frame_info + br.call.sptk.many rp=3Dunw_init_frame_info 1: adds out0=3D16,sp // &info mov b6=3Dloc2 mov loc2=3Dgp // save gp across indirect function call ;; ld8 gp=3D[in0] mov out1=3Din1 // arg - br.call.sptk.few rp=3Db6 // invoke the callback function + br.call.sptk.many rp=3Db6 // invoke the callback function 1: mov gp=3Dloc2 // restore gp =20 // For now, we don't allow changing registers from within @@ -1026,7 +1019,7 @@ data8 sys_setpriority data8 sys_statfs data8 sys_fstatfs - data8 ia64_ni_syscall // 1105 + data8 sys_gettid // 1105 data8 sys_semget data8 sys_semop data8 sys_semctl @@ -1137,7 +1130,7 @@ data8 sys_clone2 data8 sys_getdents64 data8 sys_getunwind // 1215 - data8 ia64_ni_syscall + data8 sys_readahead data8 ia64_ni_syscall data8 ia64_ni_syscall data8 ia64_ni_syscall diff -urN linux-2.4.13/arch/ia64/kernel/entry.h linux-2.4.13-lia/arch/ia64/= kernel/entry.h --- linux-2.4.13/arch/ia64/kernel/entry.h Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/entry.h Thu Oct 4 00:21:39 2001 @@ -1,12 +1,5 @@ #include =20 -/* XXX fixme */ -#if defined(CONFIG_ITANIUM_B1_SPECIFIC) -# define MOVBR(type,br,gr,lbl) mov br=3Dgr -#else
-# define MOVBR(type,br,gr,lbl) mov##type br=3Dgr,lbl -#endif - /* * Preserved registers that are shared between code in ivt.S and entry.S. = Be * careful not to step on these! @@ -62,7 +55,7 @@ ;; \ .fframe IA64_SWITCH_STACK_SIZE; \ adds sp=3D-IA64_SWITCH_STACK_SIZE,sp; \ - MOVBR(.ret.sptk,b7,r28,1f); \ + mov.ret.sptk b7=3Dr28,1f; \ SWITCH_STACK_SAVES(0); \ br.cond.sptk.many save_switch_stack; \ 1: @@ -71,7 +64,7 @@ movl r28=3D1f; \ ;; \ invala; \ - MOVBR(.ret.sptk,b7,r28,1f); \ + mov.ret.sptk b7=3Dr28,1f; \ br.cond.sptk.many load_switch_stack; \ 1: .restore sp; \ adds sp=3DIA64_SWITCH_STACK_SIZE,sp diff -urN linux-2.4.13/arch/ia64/kernel/fw-emu.c linux-2.4.13-lia/arch/ia64= /kernel/fw-emu.c --- linux-2.4.13/arch/ia64/kernel/fw-emu.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/fw-emu.c Wed Oct 24 18:13:46 2001 @@ -174,6 +174,43 @@ " ;;\n" " mov ar.lc=3Dr9\n" " mov r8=3Dr0\n" +" ;;\n" +"1: cmp.eq p6,p7=3D15,r28 /* PAL_PERF_MON_INFO */\n" +"(p7) br.cond.sptk.few 1f\n" +" mov r8=3D0 /* status =3D 0 */\n" +" movl r9 =3D0x12082004 /* generic=3D4 width=3D32 retired=3D8 cycles=3D18 */\n" +" mov r10=3D0 /* reserved */\n" +" mov r11=3D0 /* reserved */\n" +" mov r16=3D0xffff /* implemented PMC */\n" +" mov r17=3D0xffff /* implemented PMD */\n" +" add r18=3D8,r29 /* second index */\n" +" ;;\n" +" st8 [r29]=3Dr16,16 /* store implemented PMC */\n" +" st8 [r18]=3Dr0,16 /* clear remaining bits */\n" +" ;;\n" +" st8 [r29]=3Dr0,16 /* store implemented PMC */\n" +" st8 [r18]=3Dr0,16 /* clear remaining bits */\n" +" ;;\n" +" st8 [r29]=3Dr17,16 /* store implemented PMD */\n" +" st8 [r18]=3Dr0,16 /* clear remaining bits */\n" +" mov r16=3D0xf0 /* cycles count capable PMC */\n" +" ;;\n" +" st8 [r29]=3Dr0,16 /* store implemented PMC */\n" +" st8 [r18]=3Dr0,16 /* clear remaining bits */\n" +" mov r17=3D0x10 /* retired bundles capable PMC */\n" +" ;;\n" +" st8 [r29]=3Dr16,16 /* store cycles capable */\n" +" st8 [r18]=3Dr0,16 /* clear remaining bits */\n" +" ;;\n" +" st8
[r29]=3Dr0,16 /* store implemented PMC */\n" +" st8 [r18]=3Dr0,16 /* clear remaining bits */\n" +" ;;\n" +" st8 [r29]=3Dr17,16 /* store retired bundle capable */\n" +" st8 [r18]=3Dr0,16 /* clear remaining bits */\n" +" ;;\n" +" st8 [r29]=3Dr0,16 /* store implemented PMC */\n" +" st8 [r18]=3Dr0,16 /* clear remaining bits */\n" +" ;;\n" "1: br.cond.sptk.few rp\n" "stacked:\n" " br.ret.sptk.few rp\n" @@ -414,11 +451,6 @@ #ifdef CONFIG_IA64_SDV strcpy(sal_systab->oem_id, "Intel"); strcpy(sal_systab->product_id, "SDV"); -#endif - -#ifdef CONFIG_IA64_SGI_SN1_SIM - strcpy(sal_systab->oem_id, "SGI"); - strcpy(sal_systab->product_id, "SN1"); #endif =20 /* fill in an entry point: */ diff -urN linux-2.4.13/arch/ia64/kernel/gate.S linux-2.4.13-lia/arch/ia64/k= ernel/gate.S --- linux-2.4.13/arch/ia64/kernel/gate.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/gate.S Thu Oct 4 00:21:39 2001 @@ -3,7 +3,7 @@ * region. For now, it contains the signal trampoline code only. * * Copyright (C) 1999-2001 Hewlett-Packard Co - * Copyright (C) 1999-2001 David Mosberger-Tang + * David Mosberger-Tang */ =20 #include @@ -18,7 +18,6 @@ # define ARG0_OFF (16 + IA64_SIGFRAME_ARG0_OFFSET) # define ARG1_OFF (16 + IA64_SIGFRAME_ARG1_OFFSET) # define ARG2_OFF (16 + IA64_SIGFRAME_ARG2_OFFSET) -# define RBS_BASE_OFF (16 + IA64_SIGFRAME_RBS_BASE_OFFSET) # define SIGHANDLER_OFF (16 + IA64_SIGFRAME_HANDLER_OFFSET) # define SIGCONTEXT_OFF (16 + IA64_SIGFRAME_SIGCONTEXT_OFFSET) =20 @@ -32,6 +31,8 @@ # define PR_OFF IA64_SIGCONTEXT_PR_OFFSET # define RP_OFF IA64_SIGCONTEXT_B0_OFFSET # define SP_OFF IA64_SIGCONTEXT_R12_OFFSET +# define RBS_BASE_OFF IA64_SIGCONTEXT_RBS_BASE_OFFSET +# define LOADRS_OFF IA64_SIGCONTEXT_LOADRS_OFFSET # define base0 r2 # define base1 r3 /* @@ -73,34 +74,37 @@ .vframesp SP_OFF+SIGCONTEXT_OFF .body =20 - .prologue + .label_state 1 + adds base0=3DSIGHANDLER_OFF,sp - adds base1=3DRBS_BASE_OFF,sp + adds base1=3DRBS_BASE_OFF+SIGCONTEXT_OFF,sp br.call.sptk.many rp=3D1f 1:
ld8 r17=3D[base0],(ARG0_OFF-SIGHANDLER_OFF) // get pointer to signal hand= ler's plabel - ld8 r15=3D[base1],(ARG1_OFF-RBS_BASE_OFF) // get address of new RBS base= (or NULL) + ld8 r15=3D[base1] // get address of new RBS base (or NULL) cover // push args in interrupted frame onto backing store ;; + cmp.ne p8,p0=3Dr15,r0 // do we need to switch the rbs? + mov.m r9=3Dar.bsp // fetch ar.bsp + .spillsp.p p8, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF +(p8) br.cond.spnt setup_rbs // yup -> (clobbers r14, r15, and r16) +back_from_setup_rbs: + .save ar.pfs, r8 alloc r8=3Dar.pfs,0,0,3,0 // get CFM0, EC0, and CPL0 into r8 ld8 out0=3D[base0],16 // load arg0 (signum) + adds base1=3D(ARG1_OFF-(RBS_BASE_OFF+SIGCONTEXT_OFF)),base1 ;; ld8 out1=3D[base1] // load arg1 (siginfop) ld8 r10=3D[r17],8 // get signal handler entry point ;; ld8 out2=3D[base0] // load arg2 (sigcontextp) ld8 gp=3D[r17] // get signal handler's global pointer - cmp.ne p8,p0=3Dr15,r0 // do we need to switch the rbs? =20 - mov.m r17=3Dar.bsp // fetch ar.bsp - .spillsp.p p8, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF -(p8) br.cond.spnt.few setup_rbs // yup -> (clobbers r14 and r16) -back_from_setup_rbs: adds base0=3D(BSP_OFF+SIGCONTEXT_OFF),sp ;; .spillsp ar.bsp, BSP_OFF+SIGCONTEXT_OFF - st8 [base0]=3Dr17,(CFM_OFF-BSP_OFF) // save sc_ar_bsp + st8 [base0]=3Dr9,(CFM_OFF-BSP_OFF) // save sc_ar_bsp dep r8=3D0,r8,38,26 // clear EC0, CPL0 and reserved bits adds base1=3D(FR6_OFF+16+SIGCONTEXT_OFF),sp ;; @@ -123,7 +127,7 @@ ;; stf.spill [base0]=3Df14,32 stf.spill [base1]=3Df15,32 - br.call.sptk.few rp=3Db6 // call the signal handler + br.call.sptk.many rp=3Db6 // call the signal handler .ret0: adds base0=3D(BSP_OFF+SIGCONTEXT_OFF),sp ;; ld8 r15=3D[base0],(CFM_OFF-BSP_OFF) // fetch sc_ar_bsp and advance to CFM= _OFF @@ -131,7 +135,7 @@ ;; ld8 r8=3D[base0] // restore (perhaps modified) CFM0, EC0, and CPL0 cmp.ne p8,p0=3Dr14,r15 // do we need to restore the rbs?
-(p8) br.cond.spnt.few restore_rbs // yup -> (clobbers r14 and r16) +(p8) br.cond.spnt restore_rbs // yup -> (clobbers r14 and r16) ;; back_from_restore_rbs: adds base0=(FR6_OFF+SIGCONTEXT_OFF),sp @@ -154,30 +158,52 @@ mov r15=__NR_rt_sigreturn break __BREAK_SYSCALL + .body + .copy_state 1 setup_rbs: - flushrs // must be first in insn mov ar.rsc=0 // put RSE into enforced lazy mode - adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp ;; - mov r14=ar.rnat // get rnat as updated by flushrs - mov ar.bspstore=r15 // set new register backing store area + .save ar.rnat, r16 + mov r16=ar.rnat // save RNaT before switching backing store area + adds r14=(RNAT_OFF+SIGCONTEXT_OFF),sp + + mov ar.bspstore=r15 // switch over to new register backing store area ;; .spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF - st8 [r16]=r14 // save sc_ar_rnat + st8 [r14]=r16 // save sc_ar_rnat + adds r14=(LOADRS_OFF+SIGCONTEXT_OFF),sp + + mov.m r16=ar.bsp // sc_loadrs <- (new bsp - new bspstore) << 16 + ;; + invala + sub r15=r16,r15 + ;; + shl r15=r15,16 + ;; + st8 [r14]=r15 // save sc_loadrs mov ar.rsc=0xf // set RSE into eager mode, pl 3 - invala // invalidate ALAT - br.cond.sptk.many back_from_setup_rbs + br.cond.sptk back_from_setup_rbs + .prologue + .copy_state 1 + .spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF + .body restore_rbs: - flushrs - mov ar.rsc=0 // put RSE into enforced lazy mode + alloc r2=ar.pfs,0,0,0,0 // alloc null frame + adds r16=(LOADRS_OFF+SIGCONTEXT_OFF),sp + ;; + ld8 r14=[r16] adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp ;; + mov ar.rsc=r14 // put RSE into enforced lazy mode ld8 r14=[r16] // get new rnat - mov ar.bspstore=r15 // set old register backing store area ;; - mov ar.rnat=r14 // establish new rnat + loadrs // restore dirty partition + ;; + mov ar.bspstore=r15 // switch back to old register backing store area + ;; + mov ar.rnat=r14 // restore RNaT mov ar.rsc=0xf // (will be restored later on from sc_ar_rsc) // invala not
necessary as that will happen when returning to user-mode - br.cond.sptk.many back_from_restore_rbs + br.cond.sptk back_from_restore_rbs END(ia64_sigtramp) diff -urN linux-2.4.13/arch/ia64/kernel/head.S linux-2.4.13-lia/arch/ia64/kernel/head.S --- linux-2.4.13/arch/ia64/kernel/head.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/head.S Thu Oct 4 00:21:39 2001 @@ -6,8 +6,8 @@ * entry point. * * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang - * Copyright (C) 2001 Stephane Eranian + * David Mosberger-Tang + * Stephane Eranian * Copyright (C) 1999 VA Linux Systems * Copyright (C) 1999 Walt Drummond * Copyright (C) 1999 Intel Corp. @@ -86,7 +86,8 @@ /* * Switch into virtual mode: */ - movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN) + movl r16=(IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN \ + |IA64_PSR_DI) ;; mov cr.ipsr=r16 movl r17=1f @@ -183,31 +184,31 @@ alloc r2=ar.pfs,0,0,2,0 movl out0=alive_msg ;; - br.call.sptk.few rp=early_printk + br.call.sptk.many rp=early_printk 1: // force new bundle #endif /* CONFIG_IA64_EARLY_PRINTK */ #ifdef CONFIG_SMP -(isAP) br.call.sptk.few rp=start_secondary +(isAP) br.call.sptk.many rp=start_secondary .ret0: -(isAP) br.cond.sptk.few self +(isAP) br.cond.sptk self #endif // This is executed by the bootstrap processor (bsp) only: #ifdef CONFIG_IA64_FW_EMU // initialize PAL & SAL emulator: - br.call.sptk.few rp=sys_fw_init + br.call.sptk.many rp=sys_fw_init .ret1: #endif - br.call.sptk.few rp=start_kernel + br.call.sptk.many rp=start_kernel .ret2: addl r3=@ltoff(halt_msg),gp ;; alloc r2=ar.pfs,8,0,2,0 ;; ld8 out0=[r3] - br.call.sptk.few b0=console_print -self: br.sptk.few self // endless loop + br.call.sptk.many b0=console_print +self: br.sptk.many self // endless loop END(_start) GLOBAL_ENTRY(ia64_save_debug_regs) @@ -218,7 +219,7 @@ add
r19=IA64_NUM_DBG_REGS*8,in0 ;; 1: mov r16=dbr[r18] -#if defined(CONFIG_ITANIUM_C0_SPECIFIC) +#ifdef CONFIG_ITANIUM ;; srlz.d #endif @@ -227,17 +228,15 @@ ;; st8.nta [in0]=r16,8 st8.nta [r19]=r17,8 - br.cloop.sptk.few 1b + br.cloop.sptk.many 1b ;; mov ar.lc=r20 // restore ar.lc - br.ret.sptk.few rp + br.ret.sptk.many rp END(ia64_save_debug_regs) GLOBAL_ENTRY(ia64_load_debug_regs) alloc r16=ar.pfs,1,0,0,0 -#if !(defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECIFIC)) lfetch.nta [in0] -#endif mov r20=ar.lc // preserve ar.lc add r19=IA64_NUM_DBG_REGS*8,in0 mov ar.lc=IA64_NUM_DBG_REGS-1 @@ -248,15 +247,15 @@ add r18=1,r18 ;; mov dbr[r18]=r16 -#if defined(CONFIG_ITANIUM_BSTEP_SPECIFIC) || defined(CONFIG_ITANIUM_C0_SPECIFIC) +#ifdef CONFIG_ITANIUM ;; - srlz.d + srlz.d // Errata 132 (NoFix status) #endif mov ibr[r18]=r17 - br.cloop.sptk.few 1b + br.cloop.sptk.many 1b ;; mov ar.lc=r20 // restore ar.lc - br.ret.sptk.few rp + br.ret.sptk.many rp END(ia64_load_debug_regs) GLOBAL_ENTRY(__ia64_save_fpu) @@ -406,7 +405,7 @@ ;; stf.spill.nta [in0]=f126,32 stf.spill.nta [ r3]=f127,32 - br.ret.sptk.few rp + br.ret.sptk.many rp END(__ia64_save_fpu) GLOBAL_ENTRY(__ia64_load_fpu) @@ -556,7 +555,7 @@ ;; ldf.fill.nta f126=[in0],32 ldf.fill.nta f127=[ r3],32 - br.ret.sptk.few rp + br.ret.sptk.many rp END(__ia64_load_fpu) GLOBAL_ENTRY(__ia64_init_fpu) @@ -690,7 +689,7 @@ ;; ldf.fill f126=[sp] mov f127=f0 - br.ret.sptk.few rp + br.ret.sptk.many rp END(__ia64_init_fpu) /* @@ -738,7 +737,7 @@ rfi // must be last insn in group ;; 1: mov rp=r14 - br.ret.sptk.few rp + br.ret.sptk.many rp END(ia64_switch_mode) #ifdef CONFIG_IA64_BRL_EMU @@ -752,7 +751,7 @@ alloc r16=ar.pfs,1,0,0,0; \ mov reg=r32; \ ;; \ - br.ret.sptk rp; \ + br.ret.sptk.many rp; \ END(ia64_set_##reg) SET_REG(b1); @@ -816,12 +815,11 @@ ;; cmp.ne p15,p0=tmp,r0 mov tmp=ar.itc -(p15)
br.cond.sptk .retry // lock is still busy ;; // try acquiring lock (we know ar.ccv is still zero!): mov tmp=1 ;; - IA64_SEMFIX_INSN cmpxchg4.acq tmp=[r31],tmp,ar.ccv ;; cmp.eq p15,p0=tmp,r0 diff -urN linux-2.4.13/arch/ia64/kernel/ia64_ksyms.c linux-2.4.13-lia/arch/ia64/kernel/ia64_ksyms.c --- linux-2.4.13/arch/ia64/kernel/ia64_ksyms.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/ia64_ksyms.c Thu Oct 4 00:21:39 2001 @@ -145,4 +145,3 @@ #include extern struct proc_dir_entry *efi_dir; EXPORT_SYMBOL(efi_dir); - diff -urN linux-2.4.13/arch/ia64/kernel/iosapic.c linux-2.4.13-lia/arch/ia64/kernel/iosapic.c --- linux-2.4.13/arch/ia64/kernel/iosapic.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/iosapic.c Thu Oct 4 00:21:39 2001 @@ -53,6 +53,7 @@ #include #include #include +#include #include #include #include @@ -325,7 +326,7 @@ set_affinity: iosapic_set_affinity }; -static unsigned int +unsigned int iosapic_version (char *addr) { /* @@ -342,6 +343,113 @@ } /* + * ACPI can describe IOSAPIC interrupts via static tables and namespace + * methods. This provides an interface to register those interrupts and + * program the IOSAPIC RTE. + */ +int +iosapic_register_irq (u32 global_vector, unsigned long polarity, unsigned long + edge_triggered, u32 base_irq, char *iosapic_address) +{ + irq_desc_t *idesc; + struct hw_interrupt_type *irq_type; + int vector; + + vector = iosapic_irq_to_vector(global_vector); + if (vector < 0) + vector = ia64_alloc_irq(); + + /* fill in information from this vector's IOSAPIC */ + iosapic_irq[vector].addr = iosapic_address; + iosapic_irq[vector].base_irq = base_irq; + iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq; + iosapic_irq[vector].polarity = polarity ?
IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW; + iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY; + + if (edge_triggered) { + iosapic_irq[vector].trigger = IOSAPIC_EDGE; + irq_type = &irq_type_iosapic_edge; + } else { + iosapic_irq[vector].trigger = IOSAPIC_LEVEL; + irq_type = &irq_type_iosapic_level; + } + + idesc = irq_desc(vector); + if (idesc->handler != irq_type) { + if (idesc->handler != &no_irq_type) + printk("iosapic_register_irq(): changing vector 0x%02x from" + "%s to %s\n", vector, idesc->handler->typename, irq_type->typename); + idesc->handler = irq_type; + } + + printk("IOSAPIC %x(%s,%s) -> Vector %x\n", global_vector, + (polarity ? "high" : "low"), (edge_triggered ? "edge" : "level"), vector); + + /* program the IOSAPIC routing table */ + set_rte(vector, (ia64_get_lid() >> 16) & 0xffff); + return vector; +} + +/* + * ACPI calls this when it finds an entry for a platform interrupt. + * Note that the irq_base and IOSAPIC address must be set in iosapic_init(). + */ +int +iosapic_register_platform_irq (u32 int_type, u32 global_vector, u32 iosapic_vector, + u16 eid, u16 id, unsigned long polarity, + unsigned long edge_triggered, u32 base_irq, char *iosapic_address) +{ + struct hw_interrupt_type *irq_type; + irq_desc_t *idesc; + int vector; + + switch (int_type) { + case ACPI20_ENTRY_PIS_CPEI: + vector = IA64_PCE_VECTOR; + iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY; + break; + case ACPI20_ENTRY_PIS_INIT: + vector = ia64_alloc_irq(); + iosapic_irq[vector].dmode = IOSAPIC_INIT; + break; + default: + printk("iosapic_register_platform_irq(): invalid int type\n"); + return -1; + } + + /* fill in information from this vector's IOSAPIC */ + iosapic_irq[vector].addr = iosapic_address; + iosapic_irq[vector].base_irq = base_irq; + iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq; + iosapic_irq[vector].polarity = polarity ?
IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW; + + if (edge_triggered) { + iosapic_irq[vector].trigger = IOSAPIC_EDGE; + irq_type = &irq_type_iosapic_edge; + } else { + iosapic_irq[vector].trigger = IOSAPIC_LEVEL; + irq_type = &irq_type_iosapic_level; + } + + idesc = irq_desc(vector); + if (idesc->handler != irq_type) { + if (idesc->handler != &no_irq_type) + printk("iosapic_register_platform_irq(): changing vector 0x%02x from" + "%s to %s\n", vector, idesc->handler->typename, irq_type->typename); + idesc->handler = irq_type; + } + + printk("PLATFORM int %x: IOSAPIC %x(%s,%s) -> Vector %x CPU %.02u:%.02u\n", + int_type, global_vector, (polarity ? "high" : "low"), + (edge_triggered ? "edge" : "level"), vector, eid, id); + + /* program the IOSAPIC routing table */ + set_rte(vector, ((id << 8) | eid) & 0xffff); + return vector; +} + + +/* * ACPI calls this when it finds an entry for a legacy ISA interrupt. Note that the * irq_base and IOSAPIC address must be set in iosapic_init(). */ @@ -436,7 +544,7 @@ /* the interrupt route is for another controller...
*/ continue; - if (irq < 16) + if (pcat_compat && (irq < 16)) vector = isa_irq_to_vector(irq); else { vector = iosapic_irq_to_vector(irq); @@ -515,6 +623,23 @@ printk("PCI->APIC IRQ transform: (B%d,I%d,P%d) -> 0x%02x\n", dev->bus->number, PCI_SLOT(dev->devfn), pin, vector); dev->irq = vector; + +#ifdef CONFIG_SMP + /* + * For platforms that do not support interrupt redirect + * via the XTP interface, we can round-robin the PCI + * device interrupts to the processors + */ + if (!(smp_int_redirect & SMP_IRQ_REDIRECTION)) { + static int cpu_index = 0; + + set_rte(vector, cpu_physical_id(cpu_index) & 0xffff); + + cpu_index++; + if (cpu_index >= smp_num_cpus) + cpu_index = 0; + } +#endif } } /* diff -urN linux-2.4.13/arch/ia64/kernel/irq.c linux-2.4.13-lia/arch/ia64/kernel/irq.c --- linux-2.4.13/arch/ia64/kernel/irq.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/irq.c Thu Oct 4 00:21:39 2001 @@ -33,6 +33,7 @@ #include #include +#include #include #include #include @@ -121,7 +122,10 @@ end_none }; -volatile unsigned long irq_err_count; +atomic_t irq_err_count; +#if defined(CONFIG_X86) && defined(CONFIG_X86_IO_APIC) && defined(APIC_MISMATCH_DEBUG) +atomic_t irq_mis_count; +#endif /* * Generic, controller-independent functions: @@ -164,14 +168,17 @@ p += sprintf(p, "%10u ", nmi_count(cpu_logical_map(j))); p += sprintf(p, "\n"); -#if defined(CONFIG_SMP) && defined(__i386__) +#if defined(CONFIG_SMP) && defined(CONFIG_X86) p += sprintf(p, "LOC: "); for (j = 0; j < smp_num_cpus; j++) p += sprintf(p, "%10u ", apic_timer_irqs[cpu_logical_map(j)]); p += sprintf(p, "\n"); #endif - p += sprintf(p, "ERR: %10lu\n", irq_err_count); + p += sprintf(p, "ERR: %10u\n", atomic_read(&irq_err_count)); +#if defined(CONFIG_X86) && defined(CONFIG_X86_IO_APIC) && defined(APIC_MISMATCH_DEBUG) + p += sprintf(p, "MIS: %10u\n", atomic_read(&irq_mis_count)); +#endif return p - buf; } @@ -183,7 +190,7 @@ #ifdef CONFIG_SMP
unsigned int global_irq_holder = NO_PROC_ID; -volatile unsigned long global_irq_lock; /* long for set_bit --RR */ +unsigned volatile long global_irq_lock; /* pedantic: long for set_bit --RR */ extern void show_stack(unsigned long* esp); @@ -201,14 +208,14 @@ printk(" %d",bh_count(i)); printk(" ]\nStack dumps:"); -#if defined(__ia64__) +#if defined(CONFIG_IA64) /* * We can't unwind the stack of another CPU without access to * the registers of that CPU. And sending an IPI when we're * in a potentially wedged state doesn't sound like a smart * idea. */ -#elif defined(__i386__) +#elif defined(CONFIG_X86) for(i=0;i< smp_num_cpus;i++) { unsigned long esp; if(i==cpu) @@ -261,7 +268,7 @@ /* * We have to allow irqs to arrive between __sti and __cli */ -# ifdef __ia64__ +# ifdef CONFIG_IA64 # define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop 0") # else # define SYNC_OTHER_CORES(x) __asm__ __volatile__ ("nop") @@ -331,6 +338,9 @@ /* Uhhuh.. Somebody else got it. Wait.. */ do { do { +#ifdef CONFIG_X86 + rep_nop(); +#endif } while (test_bit(0,&global_irq_lock)); } while (test_and_set_bit(0,&global_irq_lock)); } @@ -364,7 +374,7 @@ { unsigned int flags; -#ifdef __ia64__ +#ifdef CONFIG_IA64 __save_flags(flags); if (flags & IA64_PSR_I) { __cli(); @@ -403,7 +413,7 @@ int cpu = smp_processor_id(); __save_flags(flags); -#ifdef __ia64__ +#ifdef CONFIG_IA64 local_enabled = (flags & IA64_PSR_I) != 0; #else local_enabled = (flags >> EFLAGS_IF_SHIFT) & 1; @@ -476,13 +486,19 @@ return status; } -/* - * Generic enable/disable code: this just calls - * down into the PIC-specific version for the actual - * hardware disable after having gotten the irq - * controller lock. +/** + * disable_irq_nosync - disable an irq without waiting + * @irq: Interrupt to disable + * + * Disable the selected interrupt line. Disables and Enables are + * nested.
+ * Unlike disable_irq(), this function does not ensure existing + * instances of the IRQ handler have completed before returning. + * + * This function may be called from IRQ context. */ -void inline disable_irq_nosync(unsigned int irq) + +inline void disable_irq_nosync(unsigned int irq) { irq_desc_t *desc = irq_desc(irq); unsigned long flags; @@ -495,10 +511,19 @@ spin_unlock_irqrestore(&desc->lock, flags); } -/* - * Synchronous version of the above, making sure the IRQ is - * no longer running on any other IRQ.. +/** + * disable_irq - disable an irq and wait for completion + * @irq: Interrupt to disable + * + * Disable the selected interrupt line. Enables and Disables are + * nested. + * This function waits for any pending IRQ handlers for this interrupt + * to complete before returning. If you use this function while + * holding a resource the IRQ handler may need you will deadlock. + * + * This function may be called - with care - from IRQ context. */ + void disable_irq(unsigned int irq) { disable_irq_nosync(irq); @@ -512,6 +537,17 @@ #endif } +/** + * enable_irq - enable handling of an irq + * @irq: Interrupt to enable + * + * Undoes the effect of one call to disable_irq(). If this + * matches the last disable, processing of interrupts on this + * IRQ line is re-enabled. + * + * This function may be called from IRQ context.
+ */ + void enable_irq(unsigned int irq) { irq_desc_t *desc = irq_desc(irq); @@ -533,7 +569,8 @@ desc->depth--; break; case 0: - printk("enable_irq() unbalanced from %p\n", (void *) __builtin_return_address(0)); + printk("enable_irq(%u) unbalanced from %p\n", + irq, (void *) __builtin_return_address(0)); } spin_unlock_irqrestore(&desc->lock, flags); } @@ -626,11 +663,41 @@ desc->handler->end(irq); spin_unlock(&desc->lock); } - if (local_softirq_pending()) - do_softirq(); return 1; } +/** + * request_irq - allocate an interrupt line + * @irq: Interrupt line to allocate + * @handler: Function to be called when the IRQ occurs + * @irqflags: Interrupt type flags + * @devname: An ascii name for the claiming device + * @dev_id: A cookie passed back to the handler function + * + * This call allocates interrupt resources and enables the + * interrupt line and IRQ handling. From the point this + * call is made your handler function may be invoked. Since + * your handler function must clear any interrupt the board + * raises, you must take care both to initialise your hardware + * and to set up the interrupt handler in the right order. + * + * Dev_id must be globally unique. Normally the address of the + * device data structure is used as the cookie. Since the handler + * receives this value it makes sense to use it. + * + * If your interrupt is shared you must pass a non NULL dev_id + * as this is required when freeing the interrupt. + * + * Flags: + * + * SA_SHIRQ Interrupt is shared + * + * SA_INTERRUPT Disable local interrupts while processing + * + * SA_SAMPLE_RANDOM The interrupt can be used for entropy + * + */ + int request_irq(unsigned int irq, void (*handler)(int, void *, struct pt_regs *), unsigned long irqflags, @@ -676,6 +743,24 @@ return retval; } +/** + * free_irq - free an interrupt + * @irq: Interrupt line to free + * @dev_id: Device identity to free + * + * Remove an interrupt handler.
The handler is removed and if the + * interrupt line is no longer in use by any driver it is disabled. + * On a shared IRQ the caller must ensure the interrupt is disabled + * on the card it drives before calling this function. The function + * does not return until any executing interrupts for this IRQ + * have completed. + * + * This function may be called from interrupt context. + * + * Bugs: Attempting to free an irq in a handler for the same irq hangs + * the machine. + */ + void free_irq(unsigned int irq, void *dev_id) { irq_desc_t *desc; @@ -726,6 +811,17 @@ * with "IRQ_WAITING" cleared and the interrupt * disabled. */ + +static DECLARE_MUTEX(probe_sem); + +/** + * probe_irq_on - begin an interrupt autodetect + * + * Commence probing for an interrupt. The interrupts are scanned + * and a mask of potential interrupt lines is returned. + * + */ + unsigned long probe_irq_on(void) { unsigned int i; @@ -733,6 +829,7 @@ unsigned long val; unsigned long delay; + down(&probe_sem); /* * something may have generated an irq long ago and we want to * flush such a longstanding irq before considering it as spurious. @@ -799,10 +896,19 @@ return val; } -/* - * Return a mask of triggered interrupts (this - * can handle only legacy ISA interrupts). +/** + * probe_irq_mask - scan a bitmap of interrupt lines + * @val: mask of interrupts to consider + * + * Scan the ISA bus interrupt lines and return a bitmap of + * active interrupts. The interrupt probe logic state is then + * returned to its previous value. + * + * Note: we need to scan all the irq's even though we will + * only return ISA irq numbers - just so that we reset them + * all to a known state.
*/ + unsigned int probe_irq_mask(unsigned long val) { int i; @@ -825,14 +931,29 @@ } spin_unlock_irq(&desc->lock); } + up(&probe_sem); return mask & val; } -/* - * Return the one interrupt that triggered (this can - * handle any interrupt source) +/** + * probe_irq_off - end an interrupt autodetect + * @val: mask of potential interrupts (unused) + * + * Scans the unused interrupt lines and returns the line which + * appears to have triggered the interrupt. If no interrupt was + * found then zero is returned. If more than one interrupt is + * found then minus the first candidate is returned to indicate + * their is doubt. + * + * The interrupt probe logic state is returned to its previous + * value. + * + * BUGS: When used in a module (which arguably shouldnt happen) + * nothing prevents two IRQ probe callers from overlapping. The + * results of this are non-optimal. */ + int probe_irq_off(unsigned long val) { int i, irq_found, nr_irqs; @@ -857,6 +978,7 @@ } spin_unlock_irq(&desc->lock); } + up(&probe_sem); if (nr_irqs > 1) irq_found = -irq_found; @@ -911,7 +1033,7 @@ if (!shared) { desc->depth = 0; - desc->status &= ~IRQ_DISABLED; + desc->status &= ~(IRQ_DISABLED | IRQ_AUTODETECT | IRQ_WAITING); desc->handler->startup(irq); } spin_unlock_irqrestore(&desc->lock,flags); @@ -922,20 +1044,9 @@ static struct proc_dir_entry * root_irq_dir; static struct proc_dir_entry * irq_dir [NR_IRQS]; -static struct proc_dir_entry * smp_affinity_entry [NR_IRQS]; - -static unsigned long irq_affinity [NR_IRQS] = { [0 ...
NR_IRQS-1] = ~0UL }; #define HEX_DIGITS 8 -static int irq_affinity_read_proc (char *page, char **start, off_t off, - int count, int *eof, void *data) -{ - if (count < HEX_DIGITS+1) - return -EINVAL; - return sprintf (page, "%08lx\n", irq_affinity[(long)data]); -} - static unsigned int parse_hex_value (const char *buffer, unsigned long count, unsigned long *ret) { @@ -973,6 +1084,20 @@ return 0; } +#if CONFIG_SMP + +static struct proc_dir_entry * smp_affinity_entry [NR_IRQS]; + +static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL }; + +static int irq_affinity_read_proc (char *page, char **start, off_t off, + int count, int *eof, void *data) +{ + if (count < HEX_DIGITS+1) + return -EINVAL; + return sprintf (page, "%08lx\n", irq_affinity[(long)data]); +} + static int irq_affinity_write_proc (struct file *file, const char *buffer, unsigned long count, void *data) { @@ -984,7 +1109,6 @@ err = parse_hex_value(buffer, count, &new_value); -#if CONFIG_SMP /* * Do not allow disabling IRQs completely - it's a too easy * way to make the system unusable accidentally :-) At least @@ -992,7 +1116,6 @@ */ if (!(new_value & cpu_online_map)) return -EINVAL; -#endif irq_affinity[irq] = new_value; irq_desc(irq)->handler->set_affinity(irq, new_value); @@ -1000,6 +1123,8 @@ return full_count; } +#endif /* CONFIG_SMP */ + static int prof_cpu_mask_read_proc (char *page, char **start, off_t off, int count, int *eof, void *data) { @@ -1027,7 +1152,6 @@ static void register_irq_proc (unsigned int irq) { - struct proc_dir_entry *entry; char name [MAX_NAMELEN]; if (!root_irq_dir || (irq_desc(irq)->handler == &no_irq_type)) return; @@ -1039,15 +1163,22 @@ /* create /proc/irq/1234 */ irq_dir[irq] = proc_mkdir(name, root_irq_dir); - /* create /proc/irq/1234/smp_affinity */ - entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]); - - entry->nlink = 1; - entry->data = (void *)(long)irq; - entry->read_proc =
irq_affinity_read_proc; - entry->write_proc = irq_affinity_write_proc; +#if CONFIG_SMP + { + struct proc_dir_entry *entry; + /* create /proc/irq/1234/smp_affinity */ + entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]); + + if (entry) { + entry->nlink = 1; + entry->data = (void *)(long)irq; + entry->read_proc = irq_affinity_read_proc; + entry->write_proc = irq_affinity_write_proc; + } - smp_affinity_entry[irq] = entry; + smp_affinity_entry[irq] = entry; + } +#endif } unsigned long prof_cpu_mask = -1; @@ -1062,6 +1193,9 @@ /* create /proc/irq/prof_cpu_mask */ entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir); + + if (!entry) + return; entry->nlink = 1; entry->data = (void *)&prof_cpu_mask; diff -urN linux-2.4.13/arch/ia64/kernel/irq_ia64.c linux-2.4.13-lia/arch/ia64/kernel/irq_ia64.c --- linux-2.4.13/arch/ia64/kernel/irq_ia64.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/irq_ia64.c Thu Oct 4 00:21:39 2001 @@ -1,9 +1,9 @@ /* * linux/arch/ia64/kernel/irq.c * - * Copyright (C) 1998-2000 Hewlett-Packard Co - * Copyright (C) 1998, 1999 Stephane Eranian - * Copyright (C) 1999-2000 David Mosberger-Tang + * Copyright (C) 1998-2001 Hewlett-Packard Co + * Stephane Eranian + * David Mosberger-Tang * * 6/10/99: Updated to bring in sync with x86 version to facilitate * support for SMP and different interrupt controllers. @@ -131,6 +131,13 @@ ia64_eoi(); vector = ia64_get_ivr(); } + /* + * This must be done *after* the ia64_eoi(). For example, the keyboard softirq + * handler needs to be able to wait for further keyboard interrupts, which can't + * come through until ia64_eoi() has been done.
+ */ + if (local_softirq_pending()) + do_softirq(); } #ifdef CONFIG_SMP diff -urN linux-2.4.13/arch/ia64/kernel/ivt.S linux-2.4.13-lia/arch/ia64/kernel/ivt.S --- linux-2.4.13/arch/ia64/kernel/ivt.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/ivt.S Wed Oct 10 17:58:45 2001 @@ -2,8 +2,8 @@ * arch/ia64/kernel/ivt.S * * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998, 1999 Stephane Eranian - * Copyright (C) 1998-2001 David Mosberger + * Stephane Eranian + * David Mosberger * * 00/08/23 Asit Mallick TLB handling for SMP * 00/12/20 David Mosberger-Tang DTLB/ITLB handler now uses virtual PT. @@ -157,7 +157,7 @@ ;; (p10) itc.i r18 // insert the instruction TLB entry (p11) itc.d r18 // insert the data TLB entry -(p6) br.spnt.many page_fault // handle bad address/page not present (page fault) +(p6) br.cond.spnt.many page_fault // handle bad address/page not present (page fault) mov cr.ifa=r22 /* @@ -213,7 +213,7 @@ ;; mov b0=r29 tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared? -(p6) br.cond.spnt.many page_fault +(p6) br.cond.spnt page_fault ;; itc.i r18 ;; @@ -251,7 +251,7 @@ ;; mov b0=r29 tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.cond.spnt.many page_fault +(p6) br.cond.spnt page_fault ;; itc.d r18 ;; @@ -286,7 +286,7 @@ ;; (p8) mov cr.iha=r17 (p8) mov r29=b0 // save b0 -(p8) br.cond.dptk.many itlb_fault +(p8) br.cond.dptk itlb_fault #endif extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl shr.u r18=r16,57 // move address bit 61 to bit 4 @@ -297,7 +297,7 @@ dep r19=r17,r19,0,12 // insert PTE control bits into r19 ;; or r19=r19,r18 // set bit 4 (uncached) if the access was to region 6 -(p8) br.cond.spnt.many page_fault +(p8) br.cond.spnt page_fault ;; itc.i r19 // insert the TLB entry mov pr=r31,-1 @@ -324,7 +324,7 @@ ;; (p8) mov cr.iha=r17 (p8) mov r29=b0 // save b0 -(p8) br.cond.dptk.many dtlb_fault +(p8) br.cond.dptk dtlb_fault #endif extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl tbit.nz p6,p7=r20,IA64_ISR_SP_BIT // is speculation bit on? @@ -333,7 +333,7 @@ ;; andcm r18=0x10,r18 // bit 4=~address-bit(61) cmp.ne p8,p0=r0,r23 -(p8) br.cond.spnt.many page_fault +(p8) br.cond.spnt page_fault dep r21=-1,r21,IA64_PSR_ED_BIT,1 dep r19=r17,r19,0,12 // insert PTE control bits into r19 @@ -429,7 +429,7 @@ ;; (p7) cmp.eq.or.andcm p6,p7=r17,r0 // was L2 entry NULL? dep r17=r19,r17,3,(PAGE_SHIFT-3) // compute address of L3 page table entry -(p6) br.cond.spnt.many page_fault +(p6) br.cond.spnt page_fault mov b0=r30 br.sptk.many b0 // return to continuation point END(nested_dtlb_miss) @@ -534,15 +534,6 @@ ;; 1: ld8 r18=[r17] ;; -# if defined(CONFIG_IA32_SUPPORT) && defined(CONFIG_ITANIUM_B0_SPECIFIC) - /* - * Erratum 85 (Access bit fault could be reported before page not present fault) - * If the PTE is indicates the page is not present, then just turn this into a - * page fault. - */ - tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared?
-(p6) br.sptk page_fault // page wasn't present -# endif mov ar.ccv=r18 // set compare value for cmpxchg or r25=_PAGE_A,r18 // set the accessed bit ;; @@ -564,15 +555,6 @@ ;; 1: ld8 r18=[r17] ;; -# if defined(CONFIG_IA32_SUPPORT) && defined(CONFIG_ITANIUM_B0_SPECIFIC) - /* - * Erratum 85 (Access bit fault could be reported before page not present fault) - * If the PTE is indicates the page is not present, then just turn this into a - * page fault. - */ - tbit.z p6,p0=r18,_PAGE_P_BIT // page present bit cleared? -(p6) br.sptk page_fault // page wasn't present -# endif or r18=_PAGE_A,r18 // set the accessed bit mov b0=r29 // restore b0 ;; @@ -640,7 +622,7 @@ mov r31=pr // prepare to save predicates ;; cmp.eq p0,p7=r16,r17 // is this a system call? (p7 <- false, if so) -(p7) br.cond.spnt.many non_syscall +(p7) br.cond.spnt non_syscall SAVE_MIN // uses r31; defines r2: @@ -656,7 +638,7 @@ adds r3=8,r2 // set up second base pointer for SAVE_REST ;; SAVE_REST - br.call.sptk rp=demine_args // clear NaT bits in (potential) syscall args + br.call.sptk.many rp=demine_args // clear NaT bits in (potential) syscall args mov r3=255 adds r15=-1024,r15 // r15 contains the syscall number---subtract 1024 @@ -698,7 +680,7 @@ st8 [r16]=r18 // store new value for cr.isr (p8) br.call.sptk.many b6=b6 // ignore this return addr - br.cond.sptk.many ia64_trace_syscall + br.cond.sptk ia64_trace_syscall // NOT REACHED END(break_fault) @@ -811,8 +793,8 @@ mov b6=r8 ;; cmp.ne p6,p0=0,r8 -(p6) br.call.dpnt b6=b6 // call returns to ia64_leave_kernel - br.sptk ia64_leave_kernel +(p6) br.call.dpnt.many b6=b6 // call returns to ia64_leave_kernel + br.sptk.many ia64_leave_kernel END(dispatch_illegal_op_fault) .align 1024 @@ -855,30 +837,30 @@ adds r15=IA64_PT_REGS_R1_OFFSET + 16,sp ;; cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0 - st8 [r15]=r8 // save orignal EAX in r1 (IA32 procs don't use the GP) + st8 [r15]=r8 // save original
EAX in r1 (IA32 procs don't use the GP) ;; alloc r15=ar.pfs,0,0,6,0 // must first in an insn group ;; - ld4 r8=[r14],8 // r8 = EAX (syscall number) - mov r15=222 // sys_vfork - last implemented system call ;; - cmp.leu.unc p6,p7=r8,r15 - ld4 out1=[r14],8 // r9 = ecx + ld4 r8=[r14],8 // r8 = eax (syscall number) + mov r15=230 // number of entries in ia32 system call table ;; + cmp.ltu.unc p6,p7=r8,r15 + ld4 out1=[r14],8 // r9 = ecx ;; - ld4 out2=[r14],8 // r10 = edx + ld4 out2=[r14],8 // r10 = edx ;; - ld4 out0=[r14] // r11 = ebx + ld4 out0=[r14] // r11 = ebx adds r14=(IA64_PT_REGS_R8_OFFSET-(8*3)) + 16,sp ;; - ld4 out5=[r14],8 // r13 = ebp + ld4 out5=[r14],8 // r13 = ebp ;; - ld4 out3=[r14],8 // r14 = esi + ld4 out3=[r14],8 // r14 = esi adds r2=IA64_TASK_PTRACE_OFFSET,r13 // r2 = &current->ptrace ;; - ld4 out4=[r14] // R15 = edi + ld4 out4=[r14] // r15 = edi movl r16=ia32_syscall_table ;; -(p6) shladd r16=r8,3,r16 // Force ni_syscall if not valid syscall number +(p6) shladd r16=r8,3,r16 // force ni_syscall if not valid syscall number ld8 r2=[r2] // r2 = current->ptrace ;; ld8 r16=[r16] @@ -889,12 +871,12 @@ ;; mov rp=r15 (p8) br.call.sptk.many b6=b6 - br.cond.sptk.many ia32_trace_syscall + br.cond.sptk ia32_trace_syscall non_ia32_syscall: alloc r15=ar.pfs,0,0,2,0 - mov out0=r14 // interrupt # - add out1=16,sp // pointer to pt_regs + mov out0=r14 // interrupt # + add out1=16,sp // pointer to pt_regs ;; // avoid WAW on CFM br.call.sptk.many rp=ia32_bad_interrupt .ret1: movl r15=ia64_leave_kernel @@ -1085,7 +1067,7 @@ mov r31=pr ;; cmp4.eq p6,p0=0,r16 -(p6) br.sptk dispatch_illegal_op_fault +(p6) br.sptk.many dispatch_illegal_op_fault ;; mov r19=24 // fault number br.sptk.many dispatch_to_fault_handler diff -urN linux-2.4.13/arch/ia64/kernel/mca.c linux-2.4.13-lia/arch/ia64/kernel/mca.c --- linux-2.4.13/arch/ia64/kernel/mca.c Tue Jul 31 10:30:08 2001 +++
linux-2.4.13-lia/arch/ia64/kernel/mca.c Wed Oct 10 17:42:06 2001 @@ -3,12 +3,20 @@ * Purpose: Generic MCA handling layer * * Updated for latest kernel + * Copyright (C) 2001 Intel + * Copyright (C) Fred Lewis (frederick.v.lewis@intel.com) + * * Copyright (C) 2000 Intel * Copyright (C) Chuck Fleckenstein (cfleck@co.intel.com) * * Copyright (C) 1999 Silicon Graphics, Inc. * Copyright (C) Vijay Chander(vijay@engr.sgi.com) * + * 01/01/03 F. Lewis Added setup of CMCI and CPEI IRQs, logging of corr= ected + * platform errors, completed code for logging of + * corrected & uncorrected machine check errors, and + * updated for conformance with Nov. 2000 revision of= the + * SAL 3.0 spec. * 00/03/29 C. Fleckenstein Fixed PAL/SAL update issues, began MCA bug fi= xes, logging issues, * added min save state dump, added INIT handler. */ @@ -16,6 +24,7 @@ #include #include #include +#include #include #include =20 @@ -27,8 +36,10 @@ #include =20 #include -#include +#include +#include =20 +#undef MCA_PRT_XTRA_DATA =20 typedef struct ia64_fptr { unsigned long fp; @@ -38,22 +49,67 @@ ia64_mc_info_t ia64_mc_info; ia64_mca_sal_to_os_state_t ia64_sal_to_os_handoff_state; ia64_mca_os_to_sal_state_t ia64_os_to_sal_handoff_state; -u64 ia64_mca_proc_state_dump[256]; +u64 ia64_mca_proc_state_dump[512]; u64 ia64_mca_stack[1024]; u64 ia64_mca_stackframe[32]; u64 ia64_mca_bspstore[1024]; u64 ia64_init_stack[INIT_TASK_SIZE] __attribute__((aligned(16))); =20 -static void ia64_mca_cmc_vector_setup(int enable, - int_vector_t cmc_vector); static void ia64_mca_wakeup_ipi_wait(void); static void ia64_mca_wakeup(int cpu); static void ia64_mca_wakeup_all(void); -static void ia64_log_init(int,int); -static void ia64_log_get(int,int, prfunc_t); -static void ia64_log_clear(int,int,int, prfunc_t); +static void ia64_log_init(int); extern void ia64_monarch_init_handler (void); extern void ia64_slave_init_handler (void); +extern struct hw_interrupt_type irq_type_iosapic_level; + +static struct irqaction 
cmci_irqaction = { + handler: ia64_mca_cmc_int_handler, + flags: SA_INTERRUPT, + name: "cmc_hndlr" +}; + +static struct irqaction mca_rdzv_irqaction = { + handler: ia64_mca_rendez_int_handler, + flags: SA_INTERRUPT, + name: "mca_rdzv" +}; + +static struct irqaction mca_wkup_irqaction = { + handler: ia64_mca_wakeup_int_handler, + flags: SA_INTERRUPT, + name: "mca_wkup" +}; + +static struct irqaction mca_cpe_irqaction = { + handler: ia64_mca_cpe_int_handler, + flags: SA_INTERRUPT, + name: "cpe_hndlr" +}; + +/* + * ia64_mca_log_sal_error_record + * + * This function retrieves a specified error record type from SAL, sends it to + * the system log, and notifies SAL to clear the record from its non-volatile + * memory. + * + * Inputs : sal_info_type (Type of error record MCA/CMC/CPE/INIT) + * Outputs : None + */ +void +ia64_mca_log_sal_error_record(int sal_info_type) +{ + /* Get the error record */ + if (!ia64_log_get(sal_info_type, (prfunc_t)printk)) + return; // no record retrieved + + /* Log the error record */ + ia64_log_print(sal_info_type, (prfunc_t)printk); + + /* Clear the SAL logs now that they have been logged */ + ia64_sal_clear_state_info(sal_info_type); +} /* * hack for now, add platform dependent handlers * @@ -67,10 +123,14 @@ } void -cmci_handler_platform (int cmc_irq, void *arg, struct pt_regs *ptregs) +ia64_mca_cpe_int_handler (int cpe_irq, void *arg, struct pt_regs *ptregs) { + IA64_MCA_DEBUG("ia64_mca_cpe_int_handler: received interrupt. vector = %#x\n", cpe_irq); + /* Get the CPE error record and log it */ + ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CPE); } + /* * This routine will be used to deal with platform specific handling * of the init, i.e. 
drop into the kernel debugger on server machine, @@ -81,17 +141,72 @@ init_handler_platform (struct pt_regs *regs) { /* if a kernel debugger is available call it here else just dump the registers */ + show_regs(regs); /* dump the state info */ + while (1); /* hang city if no debugger */ } +/* + * ia64_mca_init_platform + * + * External entry for platform specific MCA initialization. + * + * Inputs + * None + * + * Outputs + * None + */ void -log_print_platform ( void *cur_buff_ptr, prfunc_t prfunc) +ia64_mca_init_platform (void) { + } +/* + * ia64_mca_check_errors + * + * External entry to check for error records which may have been posted by SAL + * for a prior failure which resulted in a machine shutdown before the + * error could be logged. This function must be called after the filesystem + * is initialized. + * + * Inputs : None + * + * Outputs : None + */ void -ia64_mca_init_platform (void) +ia64_mca_check_errors (void) { + /* + * If there is an MCA error record pending, get it and log it. + */ + ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA); +} + +/* + * ia64_mca_register_cpev + * + * Register the corrected platform error vector with SAL. 
+ * + * Inputs + * cpev Corrected Platform Error Vector number + * + * Outputs + * None + */ +static void +ia64_mca_register_cpev (int cpev) +{ + /* Register the CPE interrupt vector with SAL */ + if (ia64_sal_mc_set_params(SAL_MC_PARAM_CPE_INT, SAL_MC_PARAM_MECHANISM_INT, cpev, 0, 0)) { + printk("ia64_mca_platform_init: failed to register Corrected " + "Platform Error interrupt vector with SAL.\n"); + return; + } + + IA64_MCA_DEBUG("ia64_mca_platform_init: corrected platform error " + "vector %#x setup and enabled\n", cpev); } #endif /* PLATFORM_MCA_HANDLERS */ @@ -140,30 +255,36 @@ && !ia64_pmss_dump_bank0)) printk("\n"); } - /* hang city for now, until we include debugger or copy to ptregs to show: */ - while (1); } /* * ia64_mca_cmc_vector_setup - * Setup the correctable machine check vector register in the processor + * + * Setup the corrected machine check vector register in the processor and + * unmask interrupt. This function is invoked on a per-processor basis. + * * Inputs - * Enable (1 - enable cmc interrupt , 0 - disable) - * CMC handler entry point (if enabled) + * None * * Outputs * None */ -static void -ia64_mca_cmc_vector_setup(int enable, - int_vector_t cmc_vector) +void +ia64_mca_cmc_vector_setup (void) { cmcv_reg_t cmcv; cmcv.cmcv_regval = 0; - cmcv.cmcv_mask = enable; - cmcv.cmcv_vector = cmc_vector; + cmcv.cmcv_mask = 0; /* Unmask/enable interrupt */ + cmcv.cmcv_vector = IA64_CMC_VECTOR; ia64_set_cmcv(cmcv.cmcv_regval); + + IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d corrected " + "machine check vector %#x setup and enabled.\n", + smp_processor_id(), IA64_CMC_VECTOR); + + IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d CMCV = %#016lx\n", + smp_processor_id(), ia64_get_cmcv()); } @@ -174,26 +295,58 @@ void mca_test(void) { - slpi_buf.slpi_valid.slpi_psi = 1; - slpi_buf.slpi_valid.slpi_cache_check = 1; - slpi_buf.slpi_valid.slpi_tlb_check = 1; - slpi_buf.slpi_valid.slpi_bus_check = 1; - 
slpi_buf.slpi_valid.slpi_minstate = 1; - slpi_buf.slpi_valid.slpi_bank1_gr = 1; - slpi_buf.slpi_valid.slpi_br = 1; - slpi_buf.slpi_valid.slpi_cr = 1; - slpi_buf.slpi_valid.slpi_ar = 1; - slpi_buf.slpi_valid.slpi_rr = 1; - slpi_buf.slpi_valid.slpi_fr = 1; + slpi_buf.valid.psi_static_struct = 1; + slpi_buf.valid.num_cache_check = 1; + slpi_buf.valid.num_tlb_check = 1; + slpi_buf.valid.num_bus_check = 1; + slpi_buf.valid.processor_static_info.minstate = 1; + slpi_buf.valid.processor_static_info.br = 1; + slpi_buf.valid.processor_static_info.cr = 1; + slpi_buf.valid.processor_static_info.ar = 1; + slpi_buf.valid.processor_static_info.rr = 1; + slpi_buf.valid.processor_static_info.fr = 1; ia64_os_mca_dispatch(); } #endif /* #if defined(MCA_TEST) */ + +/* + * verify_guid + * + * Compares a test guid to a target guid and returns result. + * + * Inputs + * test_guid * (ptr to guid to be verified) + * target_guid * (ptr to standard guid to be verified against) + * + * Outputs + * 0 (test verifies against target) + * non-zero (test guid does not verify) + */ +static int +verify_guid (efi_guid_t *test, efi_guid_t *target) +{ + int rc; + + if ((rc = memcmp((void *)test, (void *)target, sizeof(efi_guid_t)))) { + IA64_MCA_DEBUG("ia64_mca_print: invalid guid = " + "{ %08x, %04x, %04x, { %#02x, %#02x, %#02x, %#02x, " + "%#02x, %#02x, %#02x, %#02x, } } \n ", + test->data1, test->data2, test->data3, test->data4[0], + test->data4[1], test->data4[2], test->data4[3], + test->data4[4], test->data4[5], test->data4[6], + test->data4[7]); + } + + return rc; +} + /* * ia64_mca_init - * Do all the mca specific initialization on a per-processor basis. + * + * Do all the system level mca specific initialization. * * 1. Register spinloop and wakeup request interrupt vectors * @@ -201,77 +354,80 @@ * * 3. Register OS_INIT handler entry point * - * 4. 
Initialize CMCV register to enable/disable CMC interrupt on the - * processor and hook a handler in the platform-specific ia64_mca_init. + * 4. Initialize MCA/CMC/INIT related log buffers maintained by the OS. * - * 5. Initialize MCA/CMC/INIT related log buffers maintained by the OS. + * Note that this initialization is done very early before some kernel + * services are available. * - * Inputs - * None - * Outputs - * None + * Inputs : None + * + * Outputs : None */ void __init ia64_mca_init(void) { ia64_fptr_t *mon_init_ptr = (ia64_fptr_t *)ia64_monarch_init_handler; ia64_fptr_t *slave_init_ptr = (ia64_fptr_t *)ia64_slave_init_handler; + ia64_fptr_t *mca_hldlr_ptr = (ia64_fptr_t *)ia64_os_mca_dispatch; int i; + s64 rc; - IA64_MCA_DEBUG("ia64_mca_init : begin\n"); + IA64_MCA_DEBUG("ia64_mca_init: begin\n"); /* Clear the Rendez checkin flag for all cpus */ - for(i = 0 ; i < IA64_MAXCPUS; i++) + for(i = 0 ; i < NR_CPUS; i++) ia64_mc_info.imi_rendez_checkin[i] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE; - /* NOTE : The actual irqs for the rendez, wakeup and - * cmc interrupts are requested in the platform-specific - * mca initialization code. - */ /* * Register the rendezvous spinloop and wakeup mechanism with SAL */ /* Register the rendezvous interrupt vector with SAL */ - if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT, - SAL_MC_PARAM_MECHANISM_INT, - IA64_MCA_RENDEZ_VECTOR, - IA64_MCA_RENDEZ_TIMEOUT, - 0)) + if ((rc = ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_INT, + SAL_MC_PARAM_MECHANISM_INT, + IA64_MCA_RENDEZ_VECTOR, + IA64_MCA_RENDEZ_TIMEOUT, + 0))) + { + printk("ia64_mca_init: Failed to register rendezvous interrupt " + "with SAL. 
rc = %ld\n", rc); return; + } /* Register the wakeup interrupt vector with SAL */ - if (ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP, - SAL_MC_PARAM_MECHANISM_INT, - IA64_MCA_WAKEUP_VECTOR, - 0, - 0)) + if ((rc = ia64_sal_mc_set_params(SAL_MC_PARAM_RENDEZ_WAKEUP, + SAL_MC_PARAM_MECHANISM_INT, + IA64_MCA_WAKEUP_VECTOR, + 0, 0))) + { + printk("ia64_mca_init: Failed to register wakeup interrupt with SAL. rc = %ld\n", + rc); return; + } - IA64_MCA_DEBUG("ia64_mca_init : registered mca rendezvous spinloop and wakeup mech.\n"); - /* - * Setup the correctable machine check vector - */ - ia64_mca_cmc_vector_setup(IA64_CMC_INT_ENABLE, IA64_CMC_VECTOR); - - IA64_MCA_DEBUG("ia64_mca_init : correctable mca vector setup done\n"); + IA64_MCA_DEBUG("ia64_mca_init: registered mca rendezvous spinloop and wakeup mech.\n"); - ia64_mc_info.imi_mca_handler = __pa(ia64_os_mca_dispatch); + ia64_mc_info.imi_mca_handler = __pa(mca_hldlr_ptr->fp); /* * XXX - disable SAL checksum by setting size to 0; should be * __pa(ia64_os_mca_dispatch_end) - __pa(ia64_os_mca_dispatch); */ ia64_mc_info.imi_mca_handler_size = 0; - /* Register the os mca handler with SAL */ - if (ia64_sal_set_vectors(SAL_VECTOR_OS_MCA, - ia64_mc_info.imi_mca_handler, - __pa(ia64_get_gp()), - ia64_mc_info.imi_mca_handler_size, - 0,0,0)) + /* Register the os mca handler with SAL */ + if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_MCA, + ia64_mc_info.imi_mca_handler, + mca_hldlr_ptr->gp, + ia64_mc_info.imi_mca_handler_size, + 0, 0, 0))) + { + printk("ia64_mca_init: Failed to register os mca handler with SAL. 
rc = %ld\n", + rc); return; + } - IA64_MCA_DEBUG("ia64_mca_init : registered os mca handler with SAL\n"); + IA64_MCA_DEBUG("ia64_mca_init: registered os mca handler with SAL at 0x%lx, gp = 0x%lx\n", + ia64_mc_info.imi_mca_handler, mca_hldlr_ptr->gp); /* * XXX - disable SAL checksum by setting size to 0, should be @@ -282,53 +438,87 @@ ia64_mc_info.imi_slave_init_handler = __pa(slave_init_ptr->fp); ia64_mc_info.imi_slave_init_handler_size = 0; - IA64_MCA_DEBUG("ia64_mca_init : os init handler at %lx\n",ia64_mc_info.imi_monarch_init_handler); + IA64_MCA_DEBUG("ia64_mca_init: os init handler at %lx\n", + ia64_mc_info.imi_monarch_init_handler); /* Register the os init handler with SAL */ - if (ia64_sal_set_vectors(SAL_VECTOR_OS_INIT, - ia64_mc_info.imi_monarch_init_handler, - __pa(ia64_get_gp()), - ia64_mc_info.imi_monarch_init_handler_size, - ia64_mc_info.imi_slave_init_handler, - __pa(ia64_get_gp()), - ia64_mc_info.imi_slave_init_handler_size)) + if ((rc = ia64_sal_set_vectors(SAL_VECTOR_OS_INIT, + ia64_mc_info.imi_monarch_init_handler, + __pa(ia64_get_gp()), + ia64_mc_info.imi_monarch_init_handler_size, + ia64_mc_info.imi_slave_init_handler, + __pa(ia64_get_gp()), + ia64_mc_info.imi_slave_init_handler_size))) + { + printk("ia64_mca_init: Failed to register m/s init handlers with SAL. rc = %ld\n", + rc); + return; + } + IA64_MCA_DEBUG("ia64_mca_init: registered os init handler with SAL\n"); - return; + /* + * Configure the CMCI vector and handler. Interrupts for CMC are + * per-processor, so AP CMC interrupts are setup in smp_callin() (smp.c). 
+ */ + register_percpu_irq(IA64_CMC_VECTOR, &cmci_irqaction); + ia64_mca_cmc_vector_setup(); /* Setup vector on BSP & enable */ - IA64_MCA_DEBUG("ia64_mca_init : registered os init handler with SAL\n"); + /* Setup the MCA rendezvous interrupt vector */ + register_percpu_irq(IA64_MCA_RENDEZ_VECTOR, &mca_rdzv_irqaction); + + /* Setup the MCA wakeup interrupt vector */ + register_percpu_irq(IA64_MCA_WAKEUP_VECTOR, &mca_wkup_irqaction); + + /* Setup the CPE interrupt vector */ + { + irq_desc_t *desc; + unsigned int irq; + int cpev = acpi_request_vector(ACPI20_ENTRY_PIS_CPEI); + + if (cpev >= 0) { + for (irq = 0; irq < NR_IRQS; ++irq) + if (irq_to_vector(irq) == cpev) { + desc = irq_desc(irq); + desc->status |= IRQ_PER_CPU; + desc->handler = &irq_type_iosapic_level; + setup_irq(irq, &mca_cpe_irqaction); + } + ia64_mca_register_cpev(cpev); + } else + printk("ia64_mca_init: Failed to get routed CPEI vector from ACPI.\n"); + } /* Initialize the areas set aside by the OS to buffer the * platform/processor error states for MCA/INIT/CMC * handling. 
*/ - ia64_log_init(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR); - ia64_log_init(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM); - ia64_log_init(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR); - ia64_log_init(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PLATFORM); - ia64_log_init(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR); - ia64_log_init(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM); - - ia64_mca_init_platform(); - - IA64_MCA_DEBUG("ia64_mca_init : platform-specific mca handling setup done\n"); + ia64_log_init(SAL_INFO_TYPE_MCA); + ia64_log_init(SAL_INFO_TYPE_INIT); + ia64_log_init(SAL_INFO_TYPE_CMC); + ia64_log_init(SAL_INFO_TYPE_CPE); #if defined(MCA_TEST) mca_test(); #endif /* #if defined(MCA_TEST) */ printk("Mca related initialization done\n"); + +#if 0 // Too early in initialization -- error log is lost + /* Do post-failure MCA error logging */ + ia64_mca_check_errors(); +#endif // Too early in initialization -- error log is lost } /* * ia64_mca_wakeup_ipi_wait + * * Wait for the inter-cpu interrupt to be sent by the * monarch processor once it is done with handling the * MCA. - * Inputs - * None - * Outputs - * None + * + * Inputs : None + * Outputs : None */ void ia64_mca_wakeup_ipi_wait(void) @@ -339,16 +529,16 @@ do { switch(irr_num) { - case 0: + case 0: irr = ia64_get_irr0(); break; - case 1: + case 1: irr = ia64_get_irr1(); break; - case 2: + case 2: irr = ia64_get_irr2(); break; - case 3: + case 3: irr = ia64_get_irr3(); break; } @@ -357,26 +547,28 @@ /* * ia64_mca_wakeup + * * Send an inter-cpu interrupt to wake-up a particular cpu * and mark that cpu to be out of rendez. - * Inputs - * cpuid - * Outputs - * None + * + * Inputs : cpuid + * Outputs : None */ void ia64_mca_wakeup(int cpu) { platform_send_ipi(cpu, IA64_MCA_WAKEUP_VECTOR, IA64_IPI_DM_INT, 0); ia64_mc_info.imi_rendez_checkin[cpu] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE; + } + /* * ia64_mca_wakeup_all + * * Wakeup all the cpus which have rendez'ed previously. 
- * Inputs - * None - * Outputs - * None + * + * Inputs : None + * Outputs : None */ void ia64_mca_wakeup_all(void) @@ -389,15 +581,16 @@ ia64_mca_wakeup(cpu); } + /* * ia64_mca_rendez_interrupt_handler + * * This is handler used to put slave processors into spinloop * while the monarch processor does the mca handling and later * wake each slave up once the monarch is done. - * Inputs - * None - * Outputs - * None + * + * Inputs : None + * Outputs : None */ void ia64_mca_rendez_int_handler(int rendez_irq, void *arg, struct pt_regs *ptregs) @@ -423,23 +616,22 @@ /* Enable all interrupts */ restore_flags(flags); - - } /* * ia64_mca_wakeup_int_handler + * * The interrupt handler for processing the inter-cpu interrupt to the * slave cpu which was spinning in the rendez loop. * Since this spinning is done by turning off the interrupts and * polling on the wakeup-interrupt bit in the IRR, there is * nothing useful to be done in the handler. - * Inputs - * wakeup_irq (Wakeup-interrupt bit) + * + * Inputs : wakeup_irq (Wakeup-interrupt bit) * arg (Interrupt handler specific argument) * ptregs (Exception frame at the time of the interrupt) - * Outputs + * Outputs : None * */ void @@ -450,16 +642,16 @@ /* * ia64_return_to_sal_check + * * This is function called before going back from the OS_MCA handler * to the OS_MCA dispatch code which finally takes the control back * to the SAL. * The main purpose of this routine is to setup the OS_MCA to SAL * return state which can be used by the OS_MCA dispatch code * just before going back to SAL. 
- * Inputs - * None - * Outputs - * None + * + * Inputs : None + * Outputs : None */ void @@ -474,11 +666,13 @@ ia64_os_to_sal_handoff_state.imots_sal_check_ra = ia64_sal_to_os_handoff_state.imsto_sal_check_ra; - /* For now ignore the MCA */ - ia64_os_to_sal_handoff_state.imots_os_status = IA64_MCA_CORRECTED; + /* Cold Boot for uncorrectable MCA */ + ia64_os_to_sal_handoff_state.imots_os_status = IA64_MCA_COLD_BOOT; } + /* * ia64_mca_ucmc_handler + * * This is uncorrectable machine check handler called from OS_MCA * dispatch code which is in turn called from SAL_CHECK(). * This is the place where the core of OS MCA handling is done. @@ -487,93 +681,92 @@ * monarch processor. Once the monarch is done with MCA handling * further MCA logging is enabled by clearing logs. * Monarch also has the duty of sending wakeup-IPIs to pull the - * slave processors out of rendez. spinloop. - * Inputs - * None - * Outputs - * None + * slave processors out of rendezvous spinloop. + * + * Inputs : None + * Outputs : None */ void ia64_mca_ucmc_handler(void) { +#if 0 /* stubbed out @FVL */ + /* + * Attempting to log a DBE error Causes "reserved register/field panic" + * in printk. + */ - /* Get the MCA processor log */ - ia64_log_get(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk); - /* Get the MCA platform log */ - ia64_log_get(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk); - - ia64_log_print(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk); + /* Get the MCA error record and log it */ + ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA); +#endif /* stubbed out @FVL */ /* - * Do some error handling - Platform-specific mca handler is called at this point + * Do Platform-specific mca error handling if required. 
*/ - mca_handler_platform() ; - /* Clear the SAL MCA logs */ - ia64_log_clear(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PROCESSOR, 1, printk); - ia64_log_clear(SAL_INFO_TYPE_MCA, SAL_SUB_INFO_TYPE_PLATFORM, 1, printk); - - /* Wakeup all the processors which are spinning in the rendezvous - * loop. + /* + * Wakeup all the processors which are spinning in the rendezvous + * loop. */ ia64_mca_wakeup_all(); + + /* Return to SAL */ ia64_return_to_sal_check(); } /* * ia64_mca_cmc_int_handler - * This is correctable machine check interrupt handler. + * + * This is corrected machine check interrupt handler. * Right now the logs are extracted and displayed in a well-defined * format. + * * Inputs - * None + * interrupt number + * client data arg ptr + * saved registers ptr + * * Outputs * None */ void ia64_mca_cmc_int_handler(int cmc_irq, void *arg, struct pt_regs *ptregs) { - /* Get the CMC processor log */ - ia64_log_get(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk); - /* Get the CMC platform log */ - ia64_log_get(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk); - - - ia64_log_print(SAL_INFO_TYPE_CMC, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk); - cmci_handler_platform(cmc_irq, arg, ptregs); + IA64_MCA_DEBUG("ia64_mca_cmc_int_handler: received interrupt vector = %#x on CPU %d\n", + cmc_irq, smp_processor_id()); - /* Clear the CMC SAL logs now that they have been saved in the OS buffer */ - ia64_sal_clear_state_info(SAL_INFO_TYPE_CMC); + /* Get the CMC error record and log it */ + ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CMC); } /* * IA64_MCA log support */ #define IA64_MAX_LOGS 2 /* Double-buffering for nested MCAs */ -#define IA64_MAX_LOG_TYPES 3 /* MCA, CMC, INIT */ -#define IA64_MAX_LOG_SUBTYPES 2 /* Processor, Platform */ +#define IA64_MAX_LOG_TYPES 4 /* MCA, INIT, CMC, CPE */ -typedef struct ia64_state_log_s { +typedef struct ia64_state_log_s +{ spinlock_t isl_lock; int isl_index; - ia64_psilog_t 
isl_log[IA64_MAX_LOGS]; /* need space to store header + error log */ + ia64_err_rec_t isl_log[IA64_MAX_LOGS]; /* need space to store header + error log */ } ia64_state_log_t; -static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES][IA64_MAX_LOG_SUBTYPES]; +static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES]; -#define IA64_LOG_LOCK_INIT(it, sit) spin_lock_init(&ia64_state_log[it][sit].isl_lock) -#define IA64_LOG_LOCK(it, sit) spin_lock_irqsave(&ia64_state_log[it][sit].isl_lock, s) -#define IA64_LOG_UNLOCK(it, sit) spin_unlock_irqrestore(&ia64_state_log[it][sit].isl_lock,\ - s) -#define IA64_LOG_NEXT_INDEX(it, sit) ia64_state_log[it][sit].isl_index -#define IA64_LOG_CURR_INDEX(it, sit) 1 - ia64_state_log[it][sit].isl_index -#define IA64_LOG_INDEX_INC(it, sit) \ - ia64_state_log[it][sit].isl_index = 1 - ia64_state_log[it][sit].isl_index -#define IA64_LOG_INDEX_DEC(it, sit) \ - ia64_state_log[it][sit].isl_index = 1 - ia64_state_log[it][sit].isl_index -#define IA64_LOG_NEXT_BUFFER(it, sit) (void *)(&(ia64_state_log[it][sit].isl_log[IA64_LOG_NEXT_INDEX(it,sit)])) -#define IA64_LOG_CURR_BUFFER(it, sit) (void *)(&(ia64_state_log[it][sit].isl_log[IA64_LOG_CURR_INDEX(it,sit)])) +/* Note: Some of these macros assume IA64_MAX_LOGS is always 2. Should be */ +/* fixed. 
@FVL */ +#define IA64_LOG_LOCK_INIT(it) spin_lock_init(&ia64_state_log[it].isl_lock) +#define IA64_LOG_LOCK(it) spin_lock_irqsave(&ia64_state_log[it].isl_lock, s) +#define IA64_LOG_UNLOCK(it) spin_unlock_irqrestore(&ia64_state_log[it].isl_lock,s) +#define IA64_LOG_NEXT_INDEX(it) ia64_state_log[it].isl_index +#define IA64_LOG_CURR_INDEX(it) 1 - ia64_state_log[it].isl_index +#define IA64_LOG_INDEX_INC(it) \ + ia64_state_log[it].isl_index = 1 - ia64_state_log[it].isl_index +#define IA64_LOG_INDEX_DEC(it) \ + ia64_state_log[it].isl_index = 1 - ia64_state_log[it].isl_index +#define IA64_LOG_NEXT_BUFFER(it) (void *)(&(ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)])) +#define IA64_LOG_CURR_BUFFER(it) (void *)(&(ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)])) /* * C portion of the OS INIT handler @@ -584,123 +777,217 @@ * * Returns: * 0 if SAL must warm boot the System - * 1 if SAL must retrun to interrupted context using PAL_MC_RESUME + * 1 if SAL must return to interrupted context using PAL_MC_RESUME * */ - void ia64_init_handler (struct pt_regs *regs) { sal_log_processor_info_t *proc_ptr; - ia64_psilog_t *plog_ptr; + ia64_err_rec_t *plog_ptr; printk("Entered OS INIT handler\n"); /* Get the INIT processor log */ - ia64_log_get(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk); - /* Get the INIT platform log */ - ia64_log_get(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PLATFORM, (prfunc_t)printk); + if (!ia64_log_get(SAL_INFO_TYPE_INIT, (prfunc_t)printk)) + return; // no record retrieved #ifdef IA64_DUMP_ALL_PROC_INFO - ia64_log_print(SAL_INFO_TYPE_INIT, SAL_SUB_INFO_TYPE_PROCESSOR, (prfunc_t)printk); + ia64_log_print(SAL_INFO_TYPE_INIT, (prfunc_t)printk); #endif /* * get pointer to min state save area * */ - plog_ptr=(ia64_psilog_t *)IA64_LOG_CURR_BUFFER(SAL_INFO_TYPE_INIT, - SAL_SUB_INFO_TYPE_PROCESSOR); - proc_ptr = &plog_ptr->devlog.proclog; + plog_ptr=(ia64_err_rec_t 
*)IA64_LOG_CURR_BUFFER(SAL_INFO_TYPE_INIT); + proc_ptr = &plog_ptr->proc_err; - ia64_process_min_state_save(&proc_ptr->slpi_min_state_area,regs); - - init_handler_platform(regs); /* call platform specific routines */ + ia64_process_min_state_save(&proc_ptr->processor_static_info.min_state_area, + regs); /* Clear the INIT SAL logs now that they have been saved in the OS buffer */ ia64_sal_clear_state_info(SAL_INFO_TYPE_INIT); + + init_handler_platform(regs); /* call platform specific routines */ +} + +/* + * ia64_log_prt_guid + * + * Print a formatted GUID. + * + * Inputs : p_guid (ptr to the GUID) + * prfunc (print function) + * Outputs : None + * + */ +void +ia64_log_prt_guid (efi_guid_t *p_guid, prfunc_t prfunc) +{ + printk("GUID = { %08x, %04x, %04x, { %#02x, %#02x, %#02x, %#02x, " + "%#02x, %#02x, %#02x, %#02x, } } \n ", p_guid->data1, + p_guid->data2, p_guid->data3, p_guid->data4[0], p_guid->data4[1], + p_guid->data4[2], p_guid->data4[3], p_guid->data4[4], + p_guid->data4[5], p_guid->data4[6], p_guid->data4[7]); +} + +static void +ia64_log_hexdump(unsigned char *p, unsigned long n_ch, prfunc_t prfunc) +{ + int i, j; + + if (!p) + return; + + for (i = 0; i < n_ch;) { + prfunc("%p ", (void *)p); + for (j = 0; (j < 16) && (i < n_ch); i++, j++, p++) { + prfunc("%02x ", *p); + } + prfunc("\n"); + } +} + +#ifdef MCA_PRT_XTRA_DATA // for test only @FVL + +static void +ia64_log_prt_record_header (sal_log_record_header_t *rh, prfunc_t prfunc) +{ + prfunc("SAL RECORD HEADER: Record buffer = %p, header size = %ld\n", + (void *)rh, sizeof(sal_log_record_header_t)); + ia64_log_hexdump((unsigned char *)rh, sizeof(sal_log_record_header_t), + (prfunc_t)prfunc); + prfunc("Total record length = %d\n", rh->len); + ia64_log_prt_guid(&rh->platform_guid, prfunc); + prfunc("End of SAL RECORD HEADER\n"); +} + +static void +ia64_log_prt_section_header (sal_log_section_hdr_t *sh, prfunc_t prfunc) +{ + prfunc("SAL SECTION HEADER: Record buffer = %p, header 
size = %ld\n", + (void *)sh, sizeof(sal_log_section_hdr_t)); + ia64_log_hexdump((unsigned char *)sh, sizeof(sal_log_section_hdr_t), + (prfunc_t)prfunc); + prfunc("Length of section & header = %d\n", sh->len); + ia64_log_prt_guid(&sh->guid, prfunc); + prfunc("End of SAL SECTION HEADER\n"); } +#endif // MCA_PRT_XTRA_DATA for test only @FVL /* * ia64_log_init * Reset the OS ia64 log buffer - * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC}) - * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM}) + * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE}) * Outputs : None */ void -ia64_log_init(int sal_info_type, int sal_sub_info_type) +ia64_log_init(int sal_info_type) { - IA64_LOG_LOCK_INIT(sal_info_type, sal_sub_info_type); - IA64_LOG_NEXT_INDEX(sal_info_type, sal_sub_info_type) = 0; - memset(IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type), 0, - sizeof(ia64_psilog_t) * IA64_MAX_LOGS); + IA64_LOG_LOCK_INIT(sal_info_type); + IA64_LOG_NEXT_INDEX(sal_info_type) = 0; + memset(IA64_LOG_NEXT_BUFFER(sal_info_type), 0, + sizeof(ia64_err_rec_t) * IA64_MAX_LOGS); } /* * ia64_log_get + * * Get the current MCA log from SAL and copy it into the OS log buffer. 
- * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC}) - * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM}) - * Outputs : None + * + * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE}) + * prfunc (fn ptr of log output function) + * Outputs : size (total record length) * */ -void -ia64_log_get(int sal_info_type, int sal_sub_info_type, prfunc_t prfunc) +u64 +ia64_log_get(int sal_info_type, prfunc_t prfunc) { - sal_log_header_t *log_buffer; - int s,total_len=0; - - IA64_LOG_LOCK(sal_info_type, sal_sub_info_type); + sal_log_record_header_t *log_buffer; + u64 total_len = 0; + int s; + IA64_LOG_LOCK(sal_info_type); /* Get the process state information */ - log_buffer = IA64_LOG_NEXT_BUFFER(sal_info_type, sal_sub_info_type); - - if (!(total_len=ia64_sal_get_state_info(sal_info_type,(u64 *)log_buffer))) - prfunc("ia64_mca_log_get : Getting processor log failed\n"); - - IA64_MCA_DEBUG("ia64_log_get: retrieved %d bytes of error information\n",total_len); + log_buffer = IA64_LOG_NEXT_BUFFER(sal_info_type); - IA64_LOG_INDEX_INC(sal_info_type, sal_sub_info_type); - - IA64_LOG_UNLOCK(sal_info_type, sal_sub_info_type); + total_len = ia64_sal_get_state_info(sal_info_type, (u64 *)log_buffer); + if (total_len) { + IA64_LOG_INDEX_INC(sal_info_type); + IA64_LOG_UNLOCK(sal_info_type); + IA64_MCA_DEBUG("ia64_log_get: SAL error record type %d retrieved. " + "Record length = %ld\n", sal_info_type, total_len); + return total_len; + } else { + IA64_LOG_UNLOCK(sal_info_type); + prfunc("ia64_log_get: Failed to retrieve SAL error record type %d\n", + sal_info_type); + return 0; + } } /* - * ia64_log_clear - * Clear the current MCA log from SAL and depending on the clear_os_buffer flags - * clear the OS log buffer also - * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC}) - * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM}) - * clear_os_buffer + * ia64_log_prt_oem_data + * + * Print OEM specific data if included. 
+ * + * Inputs : header_len (length passed in section header) + * sect_len (default length of section type) + * p_data (ptr to data) * prfunc (print function) * Outputs : None * */ void -ia64_log_clear(int sal_info_type, int sal_sub_info_type, int clear_os_buffer, prfunc_t prfunc) +ia64_log_prt_oem_data (int header_len, int sect_len, u8 *p_data, prfunc_t prfunc) { - if (ia64_sal_clear_state_info(sal_info_type)) - prfunc("ia64_mca_log_get : Clearing processor log failed\n"); - - if (clear_os_buffer) { - sal_log_header_t *log_buffer; - int s; - - IA64_LOG_LOCK(sal_info_type, sal_sub_info_type); + int oem_data_len, i; - /* Get the process state information */ - log_buffer = IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type); - - memset(log_buffer, 0, sizeof(ia64_psilog_t)); - - IA64_LOG_INDEX_DEC(sal_info_type, sal_sub_info_type); - - IA64_LOG_UNLOCK(sal_info_type, sal_sub_info_type); + if ((oem_data_len = header_len - sect_len) > 0) { + prfunc(" OEM Specific Data:"); + for (i = 0; i < oem_data_len; i++, p_data++) + prfunc(" %02x", *p_data); } + prfunc("\n"); +} /* + * ia64_log_rec_header_print + * + * Log info from the SAL error record header. 
+ * + * Inputs : lh * (ptr to SAL log error record header) + * prfunc (fn ptr of log output function to use) + * Outputs : None + */ +void +ia64_log_rec_header_print (sal_log_record_header_t *lh, prfunc_t prfunc) +{ + char str_buf[32]; + + sprintf(str_buf, "%2d.%02d", + (lh->revision.major >> 4) * 10 + (lh->revision.major & 0xf), + (lh->revision.minor >> 4) * 10 + (lh->revision.minor & 0xf)); + prfunc("+Err Record ID: %d SAL Rev: %s\n", lh->id, str_buf); + sprintf(str_buf, "%02d/%02d/%04d/ %02d:%02d:%02d", + (lh->timestamp.slh_month >> 4) * 10 + + (lh->timestamp.slh_month & 0xf), + (lh->timestamp.slh_day >> 4) * 10 + + (lh->timestamp.slh_day & 0xf), + (lh->timestamp.slh_century >> 4) * 1000 + + (lh->timestamp.slh_century & 0xf) * 100 + + (lh->timestamp.slh_year >> 4) * 10 + + (lh->timestamp.slh_year & 0xf), + (lh->timestamp.slh_hour >> 4) * 10 + + (lh->timestamp.slh_hour & 0xf), + (lh->timestamp.slh_minute >> 4) * 10 + + (lh->timestamp.slh_minute & 0xf), + (lh->timestamp.slh_second >> 4) * 10 + + (lh->timestamp.slh_second & 0xf)); + prfunc("+Time: %s Severity %d\n", str_buf, lh->severity); } /* @@ -729,6 +1016,33 @@ prfunc("+ %s[%d] 0x%lx\n", reg_prefix, i, regs[i]); } +/* + * ia64_log_processor_fp_regs_print + * Print the contents of the saved floating point register(s) in the format + * [] + * + * Inputs: ia64_fpreg (Register save buffer) + * reg_num (# of registers) + * reg_class (application/banked/control/bank1_general) + * reg_prefix (ar/br/cr/b1_gr) + * Outputs: None + * + */ +void +ia64_log_processor_fp_regs_print (struct ia64_fpreg *regs, + int reg_num, + char *reg_class, + char *reg_prefix, + prfunc_t prfunc) +{ + int i; + + prfunc("+%s Registers\n", reg_class); + for (i = 0; i < reg_num; i++) + prfunc("+ %s[%d] 0x%lx%016lx\n", reg_prefix, i, regs[i].u.bits[1], + regs[i].u.bits[0]); +} + static char *pal_mesi_state[] = { "Invalid", "Shared", @@ -754,69 +1068,91 @@ /* * ia64_log_cache_check_info_print * Display the machine check information 
related to cache error(s). - * Inputs : i (Multiple errors are logged, i - index of logged error) - * info (Machine check info logged by the PAL and later + * Inputs: i (Multiple errors are logged, i - index of logged error) + * cc_info * (Ptr to cache check info logged by the PAL and later * captured by the SAL) - * target_addr (Address which caused the cache error) - * Outputs : None + * prfunc (fn ptr of print function to be used for output) + * Outputs: None */ void -ia64_log_cache_check_info_print(int i, - pal_cache_check_info_t info, - u64 target_addr, - prfunc_t prfunc) +ia64_log_cache_check_info_print (int i, + sal_log_mod_error_info_t *cache_check_info, + prfunc_t prfunc) { + pal_cache_check_info_t *info; + u64 target_addr; + + if (!cache_check_info->valid.check_info) { + IA64_MCA_DEBUG("ia64_mca_log_print: invalid cache_check_info[%d]\n",i); + return; /* If check info data not valid, skip it */ + } + + info = (pal_cache_check_info_t *)&cache_check_info->check_info; + target_addr = cache_check_info->target_identifier; + prfunc("+ Cache check info[%d]\n+", i); - prfunc(" Level: L%d",info.level); - if (info.mv) - prfunc(" ,Mesi: %s",pal_mesi_state[info.mesi]); - prfunc(" ,Index: %d,", info.index); - if (info.ic) - prfunc(" ,Cache: Instruction"); - if (info.dc) - prfunc(" ,Cache: Data"); - if (info.tl) - prfunc(" ,Line: Tag"); - if (info.dl) - prfunc(" ,Line: Data"); - prfunc(" ,Operation: %s,", pal_cache_op[info.op]); - if (info.wv) - prfunc(" ,Way: %d,", info.way); - if (info.tv) - prfunc(" ,Target Addr: 0x%lx", target_addr); - if (info.mc) - prfunc(" ,MC: Corrected"); + prfunc(" Level: L%d,",info->level); + if (info->mv) + prfunc(" Mesi: %s,",pal_mesi_state[info->mesi]); + prfunc(" Index: %d,", info->index); + if (info->ic) + prfunc(" Cache: Instruction,"); + if (info->dc) + prfunc(" Cache: Data,"); + if (info->tl) + prfunc(" Line: Tag,"); + if (info->dl) + prfunc(" Line: Data,"); + prfunc(" Operation: %s,", pal_cache_op[info->op]); + if 
(info->wv) + prfunc(" Way: %d,", info->way); + if (cache_check_info->valid.target_identifier) + /* Hope target address is saved in target_identifier */ + if (info->tv) + prfunc(" Target Addr: 0x%lx,", target_addr); + if (info->mc) + prfunc(" MC: Corrected"); prfunc("\n"); } =20 /* * ia64_log_tlb_check_info_print * Display the machine check information related to tlb error(s). - * Inputs : i (Multiple errors are logged, i - index of logged error) - * info (Machine check info logged by the PAL and later + * Inputs: i (Multiple errors are logged, i - index of logged e= rror) + * tlb_info * (Ptr to machine check info logged by the PAL and l= ater * captured by the SAL) - * Outputs : None + * prfunc (fn ptr of print function to be used for output) + * Outputs: None */ - void -ia64_log_tlb_check_info_print(int i, - pal_tlb_check_info_t info, - prfunc_t prfunc) +ia64_log_tlb_check_info_print (int i, + sal_log_mod_error_info_t *tlb_check_info, + prfunc_t prfunc) + { + pal_tlb_check_info_t *info; + + if (!tlb_check_info->valid.check_info) { + IA64_MCA_DEBUG("ia64_mca_log_print: invalid tlb_check_info[%d]\n", i); + return; /* If check info data not valid, skip it */ + } + + info =3D (pal_tlb_check_info_t *)&tlb_check_info->check_info; + prfunc("+ TLB Check Info [%d]\n+", i); - if (info.itc) + if (info->itc) prfunc(" Failure: Instruction Translation Cache"); - if (info.dtc) + if (info->dtc) prfunc(" Failure: Data Translation Cache"); - if (info.itr) { + if (info->itr) { prfunc(" Failure: Instruction Translation Register"); - prfunc(" ,Slot: %d", info.tr_slot); + prfunc(" ,Slot: %d", info->tr_slot); } - if (info.dtr) { + if (info->dtr) { prfunc(" Failure: Data Translation Register"); - prfunc(" ,Slot: %d", info.tr_slot); + prfunc(" ,Slot: %d", info->tr_slot); } - if (info.mc) + if (info->mc) prfunc(" ,MC: Corrected"); prfunc("\n"); } @@ -824,159 +1160,719 @@ /* * ia64_log_bus_check_info_print * Display the machine check information related to bus error(s). 
- * Inputs : i (Multiple errors are logged, i - index of logged error) - * info (Machine check info logged by the PAL and later + * Inputs: i (Multiple errors are logged, i - index of logged e= rror) + * bus_info * (Ptr to machine check info logged by the PAL and l= ater * captured by the SAL) - * req_addr (Address of the requestor of the transaction) - * resp_addr (Address of the responder of the transaction) - * target_addr (Address where the data was to be delivered to or - * obtained from) - * Outputs : None + * prfunc (fn ptr of print function to be used for output) + * Outputs: None */ void -ia64_log_bus_check_info_print(int i, - pal_bus_check_info_t info, - u64 req_addr, - u64 resp_addr, - u64 targ_addr, - prfunc_t prfunc) -{ +ia64_log_bus_check_info_print (int i, + sal_log_mod_error_info_t *bus_check_info, + prfunc_t prfunc) +{ + pal_bus_check_info_t *info; + u64 req_addr; /* Address of the requestor of the transaction */ + u64 resp_addr; /* Address of the responder of the transaction */ + u64 targ_addr; /* Address where the data was to be delivered to = */ + /* or obtained from */ + + if (!bus_check_info->valid.check_info) { + IA64_MCA_DEBUG("ia64_mca_log_print: invalid bus_check_info[%d]\n", i); + return; /* If check info data not valid, skip it */ + } + + info =3D (pal_bus_check_info_t *)&bus_check_info->check_info; + req_addr =3D bus_check_info->requestor_identifier; + resp_addr =3D bus_check_info->responder_identifier; + targ_addr =3D bus_check_info->target_identifier; + prfunc("+ BUS Check Info [%d]\n+", i); - prfunc(" Status Info: %d", info.bsi); - prfunc(" ,Severity: %d", info.sev); - prfunc(" ,Transaction Type: %d", info.type); - prfunc(" ,Transaction Size: %d", info.size); - if (info.cc) + prfunc(" Status Info: %d", info->bsi); + prfunc(" ,Severity: %d", info->sev); + prfunc(" ,Transaction Type: %d", info->type); + prfunc(" ,Transaction Size: %d", info->size); + if (info->cc) prfunc(" ,Cache-cache-transfer"); - if (info.ib) + if (info->ib) 
prfunc(" ,Error: Internal"); - if (info.eb) + if (info->eb) prfunc(" ,Error: External"); - if (info.mc) + if (info->mc) prfunc(" ,MC: Corrected"); - if (info.tv) + if (info->tv) prfunc(" ,Target Address: 0x%lx", targ_addr); - if (info.rq) + if (info->rq) prfunc(" ,Requestor Address: 0x%lx", req_addr); - if (info.tv) + if (info->tv) prfunc(" ,Responder Address: 0x%lx", resp_addr); prfunc("\n"); } =20 /* + * ia64_log_mem_dev_err_info_print + * + * Format and log the platform memory device error record section data. + * + * Inputs: mem_dev_err_info * (Ptr to memory device error record section + * returned by SAL) + * prfunc (fn ptr of print function to be used for o= utput) + * Outputs: None + */ +void +ia64_log_mem_dev_err_info_print (sal_log_mem_dev_err_info_t *mdei, + prfunc_t prfunc) +{ + prfunc("+ Mem Error Detail: "); + + if (mdei->valid.error_status) + prfunc(" Error Status: %#lx,", mdei->error_status); + if (mdei->valid.physical_addr) + prfunc(" Physical Address: %#lx,", mdei->physical_addr); + if (mdei->valid.addr_mask) + prfunc(" Address Mask: %#lx,", mdei->addr_mask); + if (mdei->valid.node) + prfunc(" Node: %d,", mdei->node); + if (mdei->valid.card) + prfunc(" Card: %d,", mdei->card); + if (mdei->valid.module) + prfunc(" Module: %d,", mdei->module); + if (mdei->valid.bank) + prfunc(" Bank: %d,", mdei->bank); + if (mdei->valid.device) + prfunc(" Device: %d,", mdei->device); + if (mdei->valid.row) + prfunc(" Row: %d,", mdei->row); + if (mdei->valid.column) + prfunc(" Column: %d,", mdei->column); + if (mdei->valid.bit_position) + prfunc(" Bit Position: %d,", mdei->bit_position); + if (mdei->valid.target_id) + prfunc(" ,Target Address: %#lx,", mdei->target_id); + if (mdei->valid.requestor_id) + prfunc(" ,Requestor Address: %#lx,", mdei->requestor_id); + if (mdei->valid.responder_id) + prfunc(" ,Responder Address: %#lx,", mdei->responder_id); + if (mdei->valid.bus_spec_data) + prfunc(" Bus Specific Data: %#lx,", mdei->bus_spec_data); + prfunc("\n"); + + if 
(mdei->valid.oem_id) { + u8 *p_data =3D &(mdei->oem_id[0]); + int i; + + prfunc(" OEM Memory Controller ID:"); + for (i =3D 0; i < 16; i++, p_data++) + prfunc(" %02x", *p_data); + prfunc("\n"); + } + + if (mdei->valid.oem_data) { + ia64_log_prt_oem_data((int)mdei->header.len, + (int)sizeof(sal_log_mem_dev_err_info_t) - 1, + &(mdei->oem_data[0]), prfunc); + } +} + +/* + * ia64_log_sel_dev_err_info_print + * + * Format and log the platform SEL device error record section data. + * + * Inputs: sel_dev_err_info * (Ptr to the SEL device error record section + * returned by SAL) + * prfunc (fn ptr of print function to be used for o= utput) + * Outputs: None + */ +void +ia64_log_sel_dev_err_info_print (sal_log_sel_dev_err_info_t *sdei, + prfunc_t prfunc) +{ + int i; + + prfunc("+ SEL Device Error Detail: "); + + if (sdei->valid.record_id) + prfunc(" Record ID: %#x", sdei->record_id); + if (sdei->valid.record_type) + prfunc(" Record Type: %#x", sdei->record_type); + prfunc(" Time Stamp: "); + for (i =3D 0; i < 4; i++) + prfunc("%1d", sdei->timestamp[i]); + if (sdei->valid.generator_id) + prfunc(" Generator ID: %#x", sdei->generator_id); + if (sdei->valid.evm_rev) + prfunc(" Message Format Version: %#x", sdei->evm_rev); + if (sdei->valid.sensor_type) + prfunc(" Sensor Type: %#x", sdei->sensor_type); + if (sdei->valid.sensor_num) + prfunc(" Sensor Number: %#x", sdei->sensor_num); + if (sdei->valid.event_dir) + prfunc(" Event Direction Type: %#x", sdei->event_dir); + if (sdei->valid.event_data1) + prfunc(" Data1: %#x", sdei->event_data1); + if (sdei->valid.event_data2) + prfunc(" Data2: %#x", sdei->event_data2); + if (sdei->valid.event_data3) + prfunc(" Data3: %#x", sdei->event_data3); + prfunc("\n"); + +} + +/* + * ia64_log_pci_bus_err_info_print + * + * Format and log the platform PCI bus error record section data. 
+ * + * Inputs: pci_bus_err_info * (Ptr to the PCI bus error record section + * returned by SAL) + * prfunc (fn ptr of print function to be used for o= utput) + * Outputs: None + */ +void +ia64_log_pci_bus_err_info_print (sal_log_pci_bus_err_info_t *pbei, + prfunc_t prfunc) +{ + prfunc("+ PCI Bus Error Detail: "); + + if (pbei->valid.err_status) + prfunc(" Error Status: %#lx", pbei->err_status); + if (pbei->valid.err_type) + prfunc(" Error Type: %#x", pbei->err_type); + if (pbei->valid.bus_id) + prfunc(" Bus ID: %#x", pbei->bus_id); + if (pbei->valid.bus_address) + prfunc(" Bus Address: %#lx", pbei->bus_address); + if (pbei->valid.bus_data) + prfunc(" Bus Data: %#lx", pbei->bus_data); + if (pbei->valid.bus_cmd) + prfunc(" Bus Command: %#lx", pbei->bus_cmd); + if (pbei->valid.requestor_id) + prfunc(" Requestor ID: %#lx", pbei->requestor_id); + if (pbei->valid.responder_id) + prfunc(" Responder ID: %#lx", pbei->responder_id); + if (pbei->valid.target_id) + prfunc(" Target ID: %#lx", pbei->target_id); + if (pbei->valid.oem_data) + prfunc("\n"); + + if (pbei->valid.oem_data) { + ia64_log_prt_oem_data((int)pbei->header.len, + (int)sizeof(sal_log_pci_bus_err_info_t) - 1, + &(pbei->oem_data[0]), prfunc); + } +} + +/* + * ia64_log_smbios_dev_err_info_print + * + * Format and log the platform SMBIOS device error record section data. 
+ * + * Inputs: smbios_dev_err_info * (Ptr to the SMBIOS device error record + * section returned by SAL) + * prfunc (fn ptr of print function to be used for o= utput) + * Outputs: None + */ +void +ia64_log_smbios_dev_err_info_print (sal_log_smbios_dev_err_info_t *sdei, + prfunc_t prfunc) +{ + u8 i; + + prfunc("+ SMBIOS Device Error Detail: "); + + if (sdei->valid.event_type) + prfunc(" Event Type: %#x", sdei->event_type); + if (sdei->valid.time_stamp) { + prfunc(" Time Stamp: "); + for (i =3D 0; i < 6; i++) + prfunc("%d", sdei->time_stamp[i]); + } + if ((sdei->valid.data) && (sdei->valid.length)) { + prfunc(" Data: "); + for (i =3D 0; i < sdei->length; i++) + prfunc(" %02x", sdei->data[i]); + } + prfunc("\n"); +} + +/* + * ia64_log_pci_comp_err_info_print + * + * Format and log the platform PCI component error record section data. + * + * Inputs: pci_comp_err_info * (Ptr to the PCI component error record se= ction + * returned by SAL) + * prfunc (fn ptr of print function to be used for o= utput) + * Outputs: None + */ +void +ia64_log_pci_comp_err_info_print(sal_log_pci_comp_err_info_t *pcei, + prfunc_t prfunc) +{ + u32 n_mem_regs, n_io_regs; + u64 i, n_pci_data; + u64 *p_reg_data; + u8 *p_oem_data; + + prfunc("+ PCI Component Error Detail: "); + + if (pcei->valid.err_status) + prfunc(" Error Status: %#lx\n", pcei->err_status); + if (pcei->valid.comp_info) + prfunc(" Component Info: Vendor Id =3D %#x, Device Id =3D %#x," + " Class Code =3D %#x, Seg/Bus/Dev/Func =3D %d/%d/%d/%d\n", + pcei->comp_info.vendor_id, pcei->comp_info.device_id, + pcei->comp_info.class_code, pcei->comp_info.seg_num, + pcei->comp_info.bus_num, pcei->comp_info.dev_num, + pcei->comp_info.func_num); + + n_mem_regs =3D (pcei->valid.num_mem_regs) ? pcei->num_mem_regs : 0; + n_io_regs =3D (pcei->valid.num_io_regs) ? 
pcei->num_io_regs : 0; + p_reg_data =3D &(pcei->reg_data_pairs[0]); + p_oem_data =3D (u8 *)p_reg_data + + (n_mem_regs + n_io_regs) * 2 * sizeof(u64); + n_pci_data =3D p_oem_data - (u8 *)pcei; + + if (n_pci_data > pcei->header.len) { + prfunc(" Invalid PCI Component Error Record format: length =3D %ld, " + " Size PCI Data =3D %d, Num Mem-Map/IO-Map Regs =3D %ld/%ld\n", + pcei->header.len, n_pci_data, n_mem_regs, n_io_regs); + return; + } + + if (n_mem_regs) { + prfunc(" Memory Mapped Registers\n Address \tValue\n"); + for (i =3D 0; i < pcei->num_mem_regs; i++) { + prfunc(" %#lx %#lx\n", p_reg_data[0], p_reg_data[1]); + p_reg_data +=3D 2; + } + } + if (n_io_regs) { + prfunc(" I/O Mapped Registers\n Address \tValue\n"); + for (i =3D 0; i < pcei->num_io_regs; i++) { + prfunc(" %#lx %#lx\n", p_reg_data[0], p_reg_data[1]); + p_reg_data +=3D 2; + } + } + if (pcei->valid.oem_data) { + ia64_log_prt_oem_data((int)pcei->header.len, n_pci_data, + p_oem_data, prfunc); + prfunc("\n"); + } +} + +/* + * ia64_log_plat_specific_err_info_print + * + * Format and log the platform specifie error record section data. + * + * Inputs: sel_dev_err_info * (Ptr to the platform specific error record + * section returned by SAL) + * prfunc (fn ptr of print function to be used for o= utput) + * Outputs: None + */ +void +ia64_log_plat_specific_err_info_print (sal_log_plat_specific_err_info_t *p= sei, + prfunc_t pr= func) +{ + prfunc("+ Platform Specific Error Detail: "); + + if (psei->valid.err_status) + prfunc(" Error Status: %#lx", psei->err_status); + if (psei->valid.guid) { + prfunc(" GUID: "); + ia64_log_prt_guid(&psei->guid, prfunc); + } + if (psei->valid.oem_data) { + ia64_log_prt_oem_data((int)psei->header.len, + (int)sizeof(sal_log_plat_specific_err_info_t) - 1, + &(psei->oem_data[0]), prfunc); + } + prfunc("\n"); +} + +/* + * ia64_log_host_ctlr_err_info_print + * + * Format and log the platform host controller error record section data. 
+ * + * Inputs: host_ctlr_err_info * (Ptr to the host controller error record + * section returned by SAL) + * prfunc (fn ptr of print function to be used for o= utput) + * Outputs: None + */ +void +ia64_log_host_ctlr_err_info_print (sal_log_host_ctlr_err_info_t *hcei, + prfunc_t prfunc) +{ + prfunc("+ Host Controller Error Detail: "); + + if (hcei->valid.err_status) + prfunc(" Error Status: %#lx", hcei->err_status); + if (hcei->valid.requestor_id) + prfunc(" Requestor ID: %#lx", hcei->requestor_id); + if (hcei->valid.responder_id) + prfunc(" Responder ID: %#lx", hcei->responder_id); + if (hcei->valid.target_id) + prfunc(" Target ID: %#lx", hcei->target_id); + if (hcei->valid.bus_spec_data) + prfunc(" Bus Specific Data: %#lx", hcei->bus_spec_data); + if (hcei->valid.oem_data) { + ia64_log_prt_oem_data((int)hcei->header.len, + (int)sizeof(sal_log_host_ctlr_err_info_t) - 1, + &(hcei->oem_data[0]), prfunc); + } + prfunc("\n"); +} + +/* + * ia64_log_plat_bus_err_info_print + * + * Format and log the platform bus error record section data. 
+ * + * Inputs: plat_bus_err_info * (Ptr to the platform bus error record sec= tion + * returned by SAL) + * prfunc (fn ptr of print function to be used for o= utput) + * Outputs: None + */ +void +ia64_log_plat_bus_err_info_print (sal_log_plat_bus_err_info_t *pbei, + prfunc_t prfunc) +{ + prfunc("+ Platform Bus Error Detail: "); + + if (pbei->valid.err_status) + prfunc(" Error Status: %#lx", pbei->err_status); + if (pbei->valid.requestor_id) + prfunc(" Requestor ID: %#lx", pbei->requestor_id); + if (pbei->valid.responder_id) + prfunc(" Responder ID: %#lx", pbei->responder_id); + if (pbei->valid.target_id) + prfunc(" Target ID: %#lx", pbei->target_id); + if (pbei->valid.bus_spec_data) + prfunc(" Bus Specific Data: %#lx", pbei->bus_spec_data); + if (pbei->valid.oem_data) { + ia64_log_prt_oem_data((int)pbei->header.len, + (int)sizeof(sal_log_plat_bus_err_info_t) - 1, + &(pbei->oem_data[0]), prfunc); + } + prfunc("\n"); +} + +/* + * ia64_log_proc_dev_err_info_print + * + * Display the processor device error record. + * + * Inputs: sal_log_processor_info_t * (Ptr to processor device error rec= ord + * section body). + * prfunc (fn ptr of print function to be us= ed + * for output). 
+ * Outputs: None + */ +void +ia64_log_proc_dev_err_info_print (sal_log_processor_info_t *slpi, + prfunc_t prfunc) +{ + size_t d_len =3D slpi->header.len - sizeof(sal_log_section_hdr_t); + sal_processor_static_info_t *spsi; + int i; + sal_log_mod_error_info_t *p_data; + + prfunc("+Processor Device Error Info Section\n"); + +#ifdef MCA_PRT_XTRA_DATA // for test only @FVL + { + char *p_data =3D (char *)&slpi->valid; + + prfunc("SAL_PROC_DEV_ERR SECTION DATA: Data buffer =3D %p, " + "Data size =3D %ld\n", (void *)p_data, d_len); + ia64_log_hexdump(p_data, d_len, prfunc); + prfunc("End of SAL_PROC_DEV_ERR SECTION DATA\n"); + } +#endif // MCA_PRT_XTRA_DATA for test only @FVL + + if (slpi->valid.proc_error_map) + prfunc(" Processor Error Map: %#lx\n", slpi->proc_error_map); + + if (slpi->valid.proc_state_param) + prfunc(" Processor State Param: %#lx\n", slpi->proc_state_parameter); + + if (slpi->valid.proc_cr_lid) + prfunc(" Processor LID: %#lx\n", slpi->proc_cr_lid); + + /* + * Note: March 2001 SAL spec states that if the number of elements in any + * of the MOD_ERROR_INFO_STRUCT arrays is zero, the entire array is + * absent. Also, current implementations only allocate space for number = of + * elements used. So we walk the data pointer from here on. 
+ */ + p_data =3D &slpi->cache_check_info[0]; + + /* Print the cache check information if any*/ + for (i =3D 0 ; i < slpi->valid.num_cache_check; i++, p_data++) + ia64_log_cache_check_info_print(i, p_data, prfunc); + + /* Print the tlb check information if any*/ + for (i =3D 0 ; i < slpi->valid.num_tlb_check; i++, p_data++) + ia64_log_tlb_check_info_print(i, p_data, prfunc); + + /* Print the bus check information if any*/ + for (i =3D 0 ; i < slpi->valid.num_bus_check; i++, p_data++) + ia64_log_bus_check_info_print(i, p_data, prfunc); + + /* Print the reg file check information if any*/ + for (i =3D 0 ; i < slpi->valid.num_reg_file_check; i++, p_data++) + ia64_log_hexdump((u8 *)p_data, sizeof(sal_log_mod_error_info_t), + prfunc); /* Just hex dump for now */ + + /* Print the ms check information if any*/ + for (i =3D 0 ; i < slpi->valid.num_ms_check; i++, p_data++) + ia64_log_hexdump((u8 *)p_data, sizeof(sal_log_mod_error_info_t), + prfunc); /* Just hex dump for now */ + + /* Print CPUID registers if any*/ + if (slpi->valid.cpuid_info) { + u64 *p =3D (u64 *)p_data; + + prfunc(" CPUID Regs: %#lx %#lx %#lx %#lx\n", p[0], p[1], p[2], p[3]); + p_data++; + } + + /* Print processor static info if any */ + if (slpi->valid.psi_static_struct) { + spsi =3D (sal_processor_static_info_t *)p_data; + + /* Print branch register contents if valid */ + if (spsi->valid.br) + ia64_log_processor_regs_print(spsi->br, 8, "Branch", "br", + prfunc); + + /* Print control register contents if valid */ + if (spsi->valid.cr) + ia64_log_processor_regs_print(spsi->cr, 128, "Control", "cr", + prfunc); + + /* Print application register contents if valid */ + if (spsi->valid.ar) + ia64_log_processor_regs_print(spsi->ar, 128, "Application", + "ar", prfunc); + + /* Print region register contents if valid */ + if (spsi->valid.rr) + ia64_log_processor_regs_print(spsi->rr, 8, "Region", "rr", + prfunc); + + /* Print floating-point register contents if valid */ + if (spsi->valid.fr) + 
ia64_log_processor_fp_regs_print(spsi->fr, 128, "Floating-point", "fr", + prfunc); + } +} + +/* * ia64_log_processor_info_print + * * Display the processor-specific information logged by PAL as a part * of MCA or INIT or CMC. - * Inputs : lh (Pointer of the sal log header which specifies the format - * of SAL state info as specified by the SAL spec). + * + * Inputs : lh (Pointer of the sal log header which specifies the + * format of SAL state info as specified by the SAL = spec). + * prfunc (fn ptr of print function to be used for output). * Outputs : None */ void -ia64_log_processor_info_print(sal_log_header_t *lh, prfunc_t prfunc) +ia64_log_processor_info_print(sal_log_record_header_t *lh, prfunc_t prfunc) { - sal_log_processor_info_t *slpi; - int i; + sal_log_section_hdr_t *slsh; + int n_sects; + int ercd_pos; =20 if (!lh) return; =20 - if (lh->slh_log_type !=3D SAL_SUB_INFO_TYPE_PROCESSOR) +#ifdef MCA_PRT_XTRA_DATA // for test only @FVL + ia64_log_prt_record_header(lh, prfunc); +#endif // MCA_PRT_XTRA_DATA for test only @FVL + + if ((ercd_pos =3D sizeof(sal_log_record_header_t)) >=3D lh->len) { + IA64_MCA_DEBUG("ia64_mca_log_print: " + "truncated SAL CMC error record. 
len =3D %d\n", + lh->len); return; + } =20 - slpi =3D (sal_log_processor_info_t *)((char *)lh+sizeof(sal_log_header_t)= ); /* point to proc info */ + /* Print record header info */ + ia64_log_rec_header_print(lh, prfunc); =20 - if (!slpi) { - prfunc("No Processor Error Log found\n"); - return; + for (n_sects =3D 0; (ercd_pos < lh->len); n_sects++, ercd_pos +=3D slsh->= len) { + /* point to next section header */ + slsh =3D (sal_log_section_hdr_t *)((char *)lh + ercd_pos); + +#ifdef MCA_PRT_XTRA_DATA // for test only @FVL + ia64_log_prt_section_header(slsh, prfunc); +#endif // MCA_PRT_XTRA_DATA for test only @FVL + + if (verify_guid((void *)&slsh->guid, (void *)&(SAL_PROC_DEV_ERR_SECT_GUI= D))) { + IA64_MCA_DEBUG("ia64_mca_log_print: unsupported record section\n"); + continue; + } + + /* + * Now process processor device error record section + */ + ia64_log_proc_dev_err_info_print((sal_log_processor_info_t *)slsh, + printk); } =20 - /* Print branch register contents if valid */ - if (slpi->slpi_valid.slpi_br) - ia64_log_processor_regs_print(slpi->slpi_br, 8, "Branch", "br", prfunc); + IA64_MCA_DEBUG("ia64_mca_log_print: " + "found %d sections in SAL CMC error record. len =3D %d\n", + n_sects, lh->len); + if (!n_sects) { + prfunc("No Processor Device Error Info Section found\n"); + return; + } +} =20 - /* Print control register contents if valid */ - if (slpi->slpi_valid.slpi_cr) - ia64_log_processor_regs_print(slpi->slpi_cr, 128, "Control", "cr", prfun= c); +/* + * ia64_log_platform_info_print + * + * Format and Log the SAL Platform Error Record. + * + * Inputs : lh (Pointer to the sal error record header with format + * specified by the SAL spec). 
+ * prfunc (fn ptr of log output function to use) + * Outputs : None + */ +void +ia64_log_platform_info_print (sal_log_record_header_t *lh, prfunc_t prfunc) +{ + sal_log_section_hdr_t *slsh; + int n_sects; + int ercd_pos; =20 - /* Print application register contents if valid */ - if (slpi->slpi_valid.slpi_ar) - ia64_log_processor_regs_print(slpi->slpi_br, 128, "Application", "ar", p= rfunc); + if (!lh) + return; =20 - /* Print region register contents if valid */ - if (slpi->slpi_valid.slpi_rr) - ia64_log_processor_regs_print(slpi->slpi_rr, 8, "Region", "rr", prfunc); +#ifdef MCA_PRT_XTRA_DATA // for test only @FVL + ia64_log_prt_record_header(lh, prfunc); +#endif // MCA_PRT_XTRA_DATA for test only @FVL + + if ((ercd_pos =3D sizeof(sal_log_record_header_t)) >=3D lh->len) { + IA64_MCA_DEBUG("ia64_mca_log_print: " + "truncated SAL error record. len =3D %d\n", + lh->len); + return; + } =20 - /* Print floating-point register contents if valid */ - if (slpi->slpi_valid.slpi_fr) - ia64_log_processor_regs_print(slpi->slpi_fr, 128, "Floating-point", "fr", - prfunc); + /* Print record header info */ + ia64_log_rec_header_print(lh, prfunc); =20 - /* Print the cache check information if any*/ - for (i =3D 0 ; i < MAX_CACHE_ERRORS; i++) - ia64_log_cache_check_info_print(i, - slpi->slpi_cache_check_info[i].slpi_cache_check, - slpi->slpi_cache_check_info[i].slpi_target_address, - prfunc); - /* Print the tlb check information if any*/ - for (i =3D 0 ; i < MAX_TLB_ERRORS; i++) - ia64_log_tlb_check_info_print(i,slpi->slpi_tlb_check_info[i], prfunc); + for (n_sects =3D 0; (ercd_pos < lh->len); n_sects++, ercd_pos +=3D slsh->= len) { + /* point to next section header */ + slsh =3D (sal_log_section_hdr_t *)((char *)lh + ercd_pos); + +#ifdef MCA_PRT_XTRA_DATA // for test only @FVL + ia64_log_prt_section_header(slsh, prfunc); + + if (efi_guidcmp(slsh->guid, SAL_PROC_DEV_ERR_SECT_GUID) !=3D 0) { + size_t d_len =3D slsh->len - sizeof(sal_log_section_hdr_t); + char *p_data =3D (char 
*)&((sal_log_mem_dev_err_info_t *)slsh)->vali= d; + + prfunc("Start of Platform Err Data Section: Data buffer =3D %p, " + "Data size =3D %ld\n", (void *)p_data, d_len); + ia64_log_hexdump(p_data, d_len, prfunc); + prfunc("End of Platform Err Data Section\n"); + } +#endif // MCA_PRT_XTRA_DATA for test only @FVL =20 - /* Print the bus check information if any*/ - for (i =3D 0 ; i < MAX_BUS_ERRORS; i++) - ia64_log_bus_check_info_print(i, - slpi->slpi_bus_check_info[i].slpi_bus_check, - slpi->slpi_bus_check_info[i].slpi_requestor_addr, - slpi->slpi_bus_check_info[i].slpi_responder_addr, - slpi->slpi_bus_check_info[i].slpi_target_addr, - prfunc); + /* + * Now process CPE error record section + */ + if (efi_guidcmp(slsh->guid, SAL_PROC_DEV_ERR_SECT_GUID) =3D=3D 0) { + ia64_log_proc_dev_err_info_print((sal_log_processor_info_t *)slsh, + prfunc); + } else if (efi_guidcmp(slsh->guid, SAL_PLAT_MEM_DEV_ERR_SECT_GUID) =3D=3D 0= ) { + prfunc("+Platform Memory Device Error Info Section\n"); + ia64_log_mem_dev_err_info_print((sal_log_mem_dev_err_info_t *)slsh, + prfunc); + } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SEL_DEV_ERR_SECT_GUID) =3D=3D 0= ) { + prfunc("+Platform SEL Device Error Info Section\n"); + ia64_log_sel_dev_err_info_print((sal_log_sel_dev_err_info_t *)slsh, + prfunc); + } else if (efi_guidcmp(slsh->guid, SAL_PLAT_PCI_BUS_ERR_SECT_GUID) =3D=3D 0= ) { + prfunc("+Platform PCI Bus Error Info Section\n"); + ia64_log_pci_bus_err_info_print((sal_log_pci_bus_err_info_t *)slsh, + prfunc); + } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SMBIOS_DEV_ERR_SECT_GUID) = =3D=3D 0) { + prfunc("+Platform SMBIOS Device Error Info Section\n"); + ia64_log_smbios_dev_err_info_print((sal_log_smbios_dev_err_info_t *)sls= h, + prfunc); + } else if (efi_guidcmp(slsh->guid, SAL_PLAT_PCI_COMP_ERR_SECT_GUID) =3D=3D = 0) { + prfunc("+Platform PCI Component Error Info Section\n"); + ia64_log_pci_comp_err_info_print((sal_log_pci_comp_err_info_t *)slsh, + prfunc); + } else if (efi_guidcmp(slsh->guid, 
SAL_PLAT_SPECIFIC_ERR_SECT_GUID) =3D=3D = 0) { + prfunc("+Platform Specific Error Info Section\n"); + ia64_log_plat_specific_err_info_print((sal_log_plat_specific_err_info_t= *) + slsh, + prfunc); + } else if (efi_guidcmp(slsh->guid, SAL_PLAT_HOST_CTLR_ERR_SECT_GUID) =3D=3D= 0) { + prfunc("+Platform Host Controller Error Info Section\n"); + ia64_log_host_ctlr_err_info_print((sal_log_host_ctlr_err_info_t *)slsh, + prfunc); + } else if (efi_guidcmp(slsh->guid, SAL_PLAT_BUS_ERR_SECT_GUID) =3D=3D 0) { + prfunc("+Platform Bus Error Info Section\n"); + ia64_log_plat_bus_err_info_print((sal_log_plat_bus_err_info_t *)slsh, + prfunc); + } else { + IA64_MCA_DEBUG("ia64_mca_log_print: unsupported record section\n"); + continue; + } + } =20 + IA64_MCA_DEBUG("ia64_mca_log_print: found %d sections in SAL error record= . len =3D %d\n", + n_sects, lh->len); + if (!n_sects) { + prfunc("No Platform Error Info Sections found\n"); + return; + } } =20 /* * ia64_log_print - * Display the contents of the OS error log information - * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC}) - * sub_info_type (SAL_SUB_INFO_TYPE_{PROCESSOR,PLATFORM}) + * + * Displays the contents of the OS error log information + * + * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE}) + * prfunc (fn ptr of log output function to use) * Outputs : None */ void -ia64_log_print(int sal_info_type, int sal_sub_info_type, prfunc_t prfunc) +ia64_log_print(int sal_info_type, prfunc_t prfunc) { - char *info_type, *sub_info_type; - switch(sal_info_type) { - case SAL_INFO_TYPE_MCA: - info_type =3D "MCA"; + case SAL_INFO_TYPE_MCA: + prfunc("+BEGIN HARDWARE ERROR STATE AT MCA\n"); + ia64_log_platform_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc= ); + prfunc("+END HARDWARE ERROR STATE AT MCA\n"); break; - case SAL_INFO_TYPE_INIT: - info_type =3D "INIT"; + case SAL_INFO_TYPE_INIT: + prfunc("+MCA INIT ERROR LOG (UNIMPLEMENTED)\n"); break; - case SAL_INFO_TYPE_CMC: - info_type =3D "CMC"; + case SAL_INFO_TYPE_CMC: + 
prfunc("+BEGIN HARDWARE ERROR STATE AT CMC\n"); + ia64_log_processor_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfun= c); + prfunc("+END HARDWARE ERROR STATE AT CMC\n"); break; - default: - info_type =3D "UNKNOWN"; + case SAL_INFO_TYPE_CPE: + prfunc("+BEGIN HARDWARE ERROR STATE AT CPE\n"); + ia64_log_platform_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc= ); + prfunc("+END HARDWARE ERROR STATE AT CPE\n"); break; - } - - switch(sal_sub_info_type) { - case SAL_SUB_INFO_TYPE_PROCESSOR: - sub_info_type =3D "PROCESSOR"; - break; - case SAL_SUB_INFO_TYPE_PLATFORM: - sub_info_type =3D "PLATFORM"; - break; - default: - sub_info_type =3D "UNKNOWN"; + default: + prfunc("+MCA UNKNOWN ERROR LOG (UNIMPLEMENTED)\n"); break; } - - prfunc("+BEGIN HARDWARE ERROR STATE [%s %s]\n", info_type, sub_info_type); - if (sal_sub_info_type =3D SAL_SUB_INFO_TYPE_PROCESSOR) - ia64_log_processor_info_print( - IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type), - prfunc); - else - log_print_platform(IA64_LOG_CURR_BUFFER(sal_info_type, sal_sub_info_type= ),prfunc); - prfunc("+END HARDWARE ERROR STATE [%s %s]\n", info_type, sub_info_type); } diff -urN linux-2.4.13/arch/ia64/kernel/mca_asm.S linux-2.4.13-lia/arch/ia6= 4/kernel/mca_asm.S --- linux-2.4.13/arch/ia64/kernel/mca_asm.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/mca_asm.S Thu Oct 4 00:21:39 2001 @@ -9,6 +9,7 @@ // #include =20 +#include #include #include #include @@ -23,7 +24,7 @@ #include "minstate.h" =20 /* - * SAL_TO_OS_MCA_HANDOFF_STATE + * SAL_TO_OS_MCA_HANDOFF_STATE (SAL 3.0 spec) * 1. GR1 =3D OS GP * 2. GR8 =3D PAL_PROC physical address * 3. GR9 =3D SAL_PROC physical address @@ -33,6 +34,7 @@ */ #define SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(_tmp) \ movl _tmp=3Dia64_sal_to_os_handoff_state;; \ + DATA_VA_TO_PA(_tmp);; \ st8 [_tmp]=3Dr1,0x08;; \ st8 [_tmp]=3Dr8,0x08;; \ st8 [_tmp]=3Dr9,0x08;; \ @@ -41,47 +43,29 @@ st8 [_tmp]=3Dr12,0x08;; =20 /* - * OS_MCA_TO_SAL_HANDOFF_STATE - * 1. 
GR8 =3D OS_MCA status - * 2. GR9 =3D SAL GP (physical) - * 3. GR22 =3D New min state save area pointer + * OS_MCA_TO_SAL_HANDOFF_STATE (SAL 3.0 spec) + * 1. GR8 =3D OS_MCA return status + * 2. GR9 =3D SAL GP (physical) + * 3. GR10 =3D 0/1 returning same/new context + * 4. GR22 =3D New min state save area pointer + * returns ptr to SAL rtn save loc in _tmp */ -#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \ - movl _tmp=3Dia64_os_to_sal_handoff_state;; \ - DATA_VA_TO_PA(_tmp);; \ - ld8 r8=3D[_tmp],0x08;; \ - ld8 r9=3D[_tmp],0x08;; \ - ld8 r22=3D[_tmp],0x08;; - -/* - * BRANCH - * Jump to the instruction referenced by - * "to_label". - * Branch is taken only if the predicate - * register "p" is true. - * "ip" is the address of the instruction - * located at "from_label". - * "temp" is a scratch register like r2 - * "adjust" needed for HP compiler. - * A screwup somewhere with constant arithmetic. - */ -#define BRANCH(to_label, temp, p, adjust) \ -100: (p) mov temp=3Dip; \ - ;; \ - (p) adds temp=3Dto_label-100b,temp;\ - ;; \ - (p) adds temp=ADjust,temp; \ - ;; \ - (p) mov b1=3Dtemp ; \ - (p) br b1 +#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \ + movl _tmp=3Dia64_os_to_sal_handoff_state;; \ + DATA_VA_TO_PA(_tmp);; \ + ld8 r8=3D[_tmp],0x08;; \ + ld8 r9=3D[_tmp],0x08;; \ + ld8 r10=3D[_tmp],0x08;; \ + ld8 r22=3D[_tmp],0x08;; \ + movl _tmp=3Dia64_sal_to_os_handoff_state;; \ + DATA_VA_TO_PA(_tmp);; \ + add _tmp=3D0x28,_tmp;; // point to SAL rtn save location =20 .global ia64_os_mca_dispatch .global ia64_os_mca_dispatch_end .global ia64_sal_to_os_handoff_state .global ia64_os_to_sal_handoff_state - .global ia64_os_mca_ucmc_handler .global ia64_mca_proc_state_dump - .global ia64_mca_proc_state_restore .global ia64_mca_stack .global ia64_mca_stackframe .global ia64_mca_bspstore @@ -100,7 +84,7 @@ #endif /* #if defined(MCA_TEST) */ =20 // Save the SAL to OS MCA handoff state as defined - // by SAL SPEC 2.5 + // by SAL SPEC 3.0 // NOTE : The order in which the state gets 
saved // is dependent on the way the C-structure // for ia64_mca_sal_to_os_state_t has been @@ -110,15 +94,20 @@ // LOG PROCESSOR STATE INFO FROM HERE ON.. ;; begin_os_mca_dump: - BRANCH(ia64_os_mca_proc_state_dump, r2, p0, 0x0) - ;; + br ia64_os_mca_proc_state_dump;; + ia64_os_mca_done_dump: =20 // Setup new stack frame for OS_MCA handling - movl r2=3Dia64_mca_bspstore // local bspstore area location in r2 - movl r3=3Dia64_mca_stackframe // save stack frame to memory in r3 + movl r2=3Dia64_mca_bspstore;; // local bspstore area location in = r2 + DATA_VA_TO_PA(r2);; + movl r3=3Dia64_mca_stackframe;; // save stack frame to memory in r3 + DATA_VA_TO_PA(r3);; rse_switch_context(r6,r3,r2);; // RSC management in this= new context movl r12=3Dia64_mca_stack;; + mov r2=3D8*1024;; // stack size must be same as c arr= ay + add r12=3Dr2,r12;; // stack base @ bottom of array + DATA_VA_TO_PA(r12);; =20 // Enter virtual mode from physical mode VIRTUAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_begin, r4) @@ -127,7 +116,7 @@ // call our handler movl r2=3Dia64_mca_ucmc_handler;; mov b6=3Dr2;; - br.call.sptk.few b0=B6 + br.call.sptk.many b0=B6;; .ret0: // Revert back to physical mode before going back to SAL PHYSICAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_end, r4) @@ -135,9 +124,9 @@ =20 #if defined(MCA_TEST) // Pretend that we are in interrupt context - mov r2=3Dpsr - dep r2=3D0, r2, PSR_IC, 2; - mov psr.l =3D r2 + mov r2=3Dpsr;; + dep r2=3D0, r2, PSR_IC, 2;; + mov psr.l =3D r2;; #endif /* #if defined(MCA_TEST) */ =20 // restore the original stack frame here @@ -152,15 +141,14 @@ mov r8=3Dgp ;; begin_os_mca_restore: - BRANCH(ia64_os_mca_proc_state_restore, r2, p0, 0x0) - ;; + br ia64_os_mca_proc_state_restore;; =20 ia64_os_mca_done_restore: ;; // branch back to SALE_CHECK OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(r2) ld8 r3=3D[r2];; - mov b0=3Dr3 // SAL_CHECK return address + mov b0=3Dr3;; // SAL_CHECK return address br b0 ;; ia64_os_mca_dispatch_end: @@ -178,8 +166,10 @@ //-- =20 
ia64_os_mca_proc_state_dump:
-// Get and save GR0-31 from Proc. Min. State Save Area to SAL PSI
+// Save bank 1 GRs 16-31 which will be used by c-language code when we switch
+// to virtual addressing mode.
	movl	r2=ia64_mca_proc_state_dump;;	// Os state dump area
+	DATA_VA_TO_PA(r2)			// convert to to physical address

	// save ar.NaT
	mov	r5=ar.unat			// ar.unat
@@ -250,16 +240,16 @@
// if PSR.ic=0, reading interruption registers causes an illegal operation fault
	mov	r3=psr;;
	tbit.nz.unc p6,p0=r3,PSR_IC;;		// PSI Valid Log bit pos. test
-(p6)	st8	[r2]=r0,9*8+160			// increment by 168 byte inc.
+(p6)	st8	[r2]=r0,9*8+160			// increment by 232 byte inc.
begin_skip_intr_regs:
-	BRANCH(SkipIntrRegs, r9, p6, 0x0)
-	;;
+(p6)	br	SkipIntrRegs;;
+
	add	r4=8,r2				// duplicate r2 in r4
	add	r6=2*8,r2			// duplicate r2 in r6

	mov	r3=cr16				// cr.ipsr
	mov	r5=cr17				// cr.isr
-	mov	r7=r0;;				// cr.ida => cr18
+	mov	r7=r0;;				// cr.ida => cr18 (reserved)
	st8	[r2]=r3,3*8
	st8	[r4]=r5,3*8
	st8	[r6]=r7,3*8;;
@@ -394,8 +384,7 @@
	br.cloop.sptk.few	cStRR
	;;
end_os_mca_dump:
-	BRANCH(ia64_os_mca_done_dump, r2, p0, -0x10)
-	;;
+	br	ia64_os_mca_done_dump;;

//EndStub//////////////////////////////////////////////////////////////////////

@@ -484,11 +473,10 @@
// if PSR.ic=1, reading interruption registers causes an illegal operation fault
	mov	r3=psr;;
	tbit.nz.unc p6,p0=r3,PSR_IC;;		// PSI Valid Log bit pos. test
-(p6)	st8	[r2]=r0,9*8+160			// increment by 160 byte inc.
+(p6)	st8	[r2]=r0,9*8+160			// increment by 232 byte inc.

begin_rskip_intr_regs:
-	BRANCH(rSkipIntrRegs, r9, p6, 0x0)
-	;;
+(p6)	br	rSkipIntrRegs;;

	add	r4=8,r2				// duplicate r2 in r4
	add	r6=2*8,r2;;			// duplicate r2 in r4
@@ -498,7 +486,7 @@
	ld8	r7=[r6],3*8;;
	mov	cr16=r3				// cr.ipsr
	mov	cr17=r5				// cr.isr is read only
-//	mov	cr18=r7;;			// cr.ida
+//	mov	cr18=r7;;			// cr.ida (reserved - don't restore)

	ld8	r3=[r2],3*8
	ld8	r5=[r4],3*8
@@ -629,8 +617,8 @@
	mov	ar.lc=r5
	;;
end_os_mca_restore:
-	BRANCH(ia64_os_mca_done_restore, r2, p0, -0x20)
-	;;
+	br	ia64_os_mca_done_restore;;
+
//EndStub//////////////////////////////////////////////////////////////////////

// ok, the issue here is that we need to save state information so
@@ -660,12 +648,7 @@
//	6. GR12 = Return address to location within SAL_INIT procedure


-	.text
-	.align 16
-.global ia64_monarch_init_handler
-.proc ia64_monarch_init_handler
-ia64_monarch_init_handler:
-
+GLOBAL_ENTRY(ia64_monarch_init_handler)
 #if defined(CONFIG_SMP) && defined(SAL_MPINIT_WORKAROUND)
	//
	// work around SAL bug that sends all processors to monarch entry
@@ -741,13 +724,12 @@
	adds	out0=16,sp			// out0 = pointer to pt_regs
	;;
-	br.call.sptk.few rp=ia64_init_handler
+	br.call.sptk.many rp=ia64_init_handler
.ret1:

return_from_init:
	br.sptk return_from_init
-
-	.endp
+END(ia64_monarch_init_handler)

//
// SAL to OS entry point for INIT on the slave processor
// as a part of ia64_mca_init.
//

-	.text
-	.align 16
-.global ia64_slave_init_handler
-.proc ia64_slave_init_handler
-ia64_slave_init_handler:
-
-
-slave_init_spin_me:
-	br.sptk slave_init_spin_me
-	;;
-	.endp
+GLOBAL_ENTRY(ia64_slave_init_handler)
+1:	br.sptk 1b
+END(ia64_slave_init_handler)
diff -urN linux-2.4.13/arch/ia64/kernel/pal.S linux-2.4.13-lia/arch/ia64/kernel/pal.S
--- linux-2.4.13/arch/ia64/kernel/pal.S	Thu Apr  5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/pal.S	Thu Oct  4 00:21:39 2001
@@ -4,8 +4,9 @@
 *
 * Copyright (C) 1999 Don Dugger
 * Copyright (C) 1999 Walt Drummond
- * Copyright (C) 1999-2000 David Mosberger
- * Copyright (C) 2000 Stephane Eranian
+ * Copyright (C) 1999-2001 Hewlett-Packard Co
+ *	David Mosberger
+ *	Stephane Eranian
 *
 * 05/22/2000 eranian Added support for stacked register calls
 * 05/24/2000 eranian Added support for physical mode static calls
@@ -31,7 +32,7 @@
	movl	r2=pal_entry_point
	;;
	st8	[r2]=in0
-	br.ret.sptk.few rp
+	br.ret.sptk.many rp
END(ia64_pal_handler_init)

/*
@@ -41,7 +42,7 @@
 */
GLOBAL_ENTRY(ia64_pal_default_handler)
	mov	r8=-1
-	br.cond.sptk.few rp
+	br.cond.sptk.many rp
END(ia64_pal_default_handler)

/*
@@ -79,13 +80,13 @@
	;;
(p6)	srlz.i
	mov	rp = r8
-	br.cond.sptk.few b7
+	br.cond.sptk.many b7
1:	mov psr.l = loc3
	mov ar.pfs = loc1
	mov rp = loc0
	;;
	srlz.d				// seralize restoration of psr.l
-	br.ret.sptk.few b0
+	br.ret.sptk.many b0
END(ia64_pal_call_static)

/*
@@ -120,7 +121,7 @@
	mov rp = loc0
	;;
	srlz.d				// serialize restoration of psr.l
-	br.ret.sptk.few b0
+	br.ret.sptk.many b0
END(ia64_pal_call_stacked)

/*
@@ -173,13 +174,13 @@
	or	loc3=loc3,r17		// add in psr the bits to set
	;;
	andcm	r16=loc3,r16		// removes bits to clear from psr
-	br.call.sptk.few rp=ia64_switch_mode
+	br.call.sptk.many rp=ia64_switch_mode
.ret1:	mov	rp = r8		// install return address (physical)
-	br.cond.sptk.few b7
+	br.cond.sptk.many b7
1:	mov ar.rsc=0		// put RSE in enforced lazy, LE mode
	mov r16=loc3		// r16= original psr
-	br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+	br.call.sptk.many rp=ia64_switch_mode // return to virtual mode
.ret2:
	mov psr.l = loc3	// restore init PSR

@@ -188,7 +189,7 @@
	;;
	mov ar.rsc=loc4		// restore RSE configuration
	srlz.d			// seralize restoration of psr.l
-	br.ret.sptk.few b0
+	br.ret.sptk.many b0
END(ia64_pal_call_phys_static)

/*
@@ -227,13 +228,13 @@
	mov b7 = loc2		// install target to branch reg
	;;
	andcm r16=loc3,r16	// removes bits to clear from psr
-	br.call.sptk.few rp=ia64_switch_mode
+	br.call.sptk.many rp=ia64_switch_mode
.ret6:
	br.call.sptk.many rp=b7	// now make the call
.ret7:
	mov ar.rsc=0		// put RSE in enforced lazy, LE mode
	mov r16=loc3		// r16= original psr
-	br.call.sptk.few rp=ia64_switch_mode // return to virtual mode
+	br.call.sptk.many rp=ia64_switch_mode // return to virtual mode

.ret8:	mov psr.l  = loc3	// restore init PSR
	mov ar.pfs = loc1
@@ -241,6 +242,6 @@
	;;
	mov ar.rsc=loc4		// restore RSE configuration
	srlz.d			// seralize restoration of psr.l
-	br.ret.sptk.few b0
+	br.ret.sptk.many b0
END(ia64_pal_call_phys_stacked)

diff -urN linux-2.4.13/arch/ia64/kernel/palinfo.c linux-2.4.13-lia/arch/ia64/kernel/palinfo.c
--- linux-2.4.13/arch/ia64/kernel/palinfo.c	Thu Apr  5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/palinfo.c	Wed Oct 24 18:14:08 2001
@@ -6,12 +6,13 @@
 * Intel IA-64 Architecture Software Developer's Manual v1.0.
 *
 *
- * Copyright (C) 2000 Hewlett-Packard Co
- * Copyright (C) 2000 Stephane Eranian
+ * Copyright (C) 2000-2001 Hewlett-Packard Co
+ *	Stephane Eranian
 *
 * 05/26/2000 S.Eranian	initial release
 * 08/21/2000 S.Eranian	updated to July 2000 PAL specs
 * 02/05/2001 S.Eranian	fixed module support
+ * 10/23/2001 S.Eranian	updated pal_perf_mon_info bug fixes
 */
#include
#include
@@ -32,8 +33,9 @@

MODULE_AUTHOR("Stephane Eranian ");
MODULE_DESCRIPTION("/proc interface to IA-64 PAL");
+MODULE_LICENSE("GPL");

-#define PALINFO_VERSION "0.4"
+#define PALINFO_VERSION "0.5"

#ifdef CONFIG_SMP
#define cpu_is_online(i) (cpu_online_map & (1UL << i))
@@ -606,15 +608,6 @@

	if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0) return 0;

-#ifdef IA64_PAL_PERF_MON_INFO_BUG
-	/*
-	 * This bug has been fixed in PAL 2.2.9 and higher
-	 */
-	pm_buffer[5]=0x3;
-	pm_info.pal_perf_mon_info_s.cycles  = 0x12;
-	pm_info.pal_perf_mon_info_s.retired = 0x08;
-#endif
-
	p += sprintf(p, "PMC/PMD pairs        : %d\n" \
		"Counter width        : %d bits\n" \
		"Cycle event number   : %d\n" \
@@ -636,6 +629,14 @@
	p = bitregister_process(p, pm_buffer+8, 256);

	p += sprintf(p, "\nRetired bundles count capable : ");
+
+#ifdef CONFIG_ITANIUM
+	/*
+	 * PAL_PERF_MON_INFO reports that only PMC4 can be used to count CPU_CYCLES
+	 * which is wrong, both PMC4 and PMD5 support it.
+	 */
+	if (pm_buffer[12] == 0x10) pm_buffer[12]=0x30;
+#endif

	p = bitregister_process(p, pm_buffer+12, 256);

diff -urN linux-2.4.13/arch/ia64/kernel/pci.c linux-2.4.13-lia/arch/ia64/kernel/pci.c
--- linux-2.4.13/arch/ia64/kernel/pci.c	Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/pci.c	Thu Oct  4 00:21:39 2001
@@ -38,6 +38,10 @@
 #define DBG(x...)
 #endif

+#ifdef CONFIG_IA64_MCA
+extern void ia64_mca_check_errors( void );
+#endif
+
/*
 * This interrupt-safe spinlock protects all accesses to PCI
 * configuration space.
@@ -122,6 +126,10 @@
 #	define PCI_BUSES_TO_SCAN 255
	int i;

+#ifdef CONFIG_IA64_MCA
+	ia64_mca_check_errors();	/* For post-failure MCA error logging */
+#endif
+
	platform_pci_fixup(0);	/* phase 0 initialization (before PCI bus has been scanned) */

	printk("PCI: Probing PCI hardware\n");
@@ -194,4 +202,40 @@
pcibios_setup (char *str)
{
	return NULL;
+}
+
+int
+pci_mmap_page_range (struct pci_dev *dev, struct vm_area_struct *vma,
+		     enum pci_mmap_state mmap_state, int write_combine)
+{
+	/*
+	 * I/O space cannot be accessed via normal processor loads and stores on this
+	 * platform.
+	 */
+	if (mmap_state == pci_mmap_io)
+		/*
+		 * XXX we could relax this for I/O spaces for which ACPI indicates that
+		 * the space is 1-to-1 mapped.  But at the moment, we don't support
+		 * multiple PCI address spaces and the legacy I/O space is not 1-to-1
+		 * mapped, so this is moot.
+		 */
+		return -EINVAL;
+
+	/*
+	 * Leave vm_pgoff as-is, the PCI space address is the physical address on this
+	 * platform.
+	 */
+	vma->vm_flags |= (VM_SHM | VM_LOCKED | VM_IO);
+
+	if (write_combine)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+	else
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	if (remap_page_range(vma->vm_start, vma->vm_pgoff << PAGE_SHIFT,
+			     vma->vm_end - vma->vm_start,
+			     vma->vm_page_prot))
+		return -EAGAIN;
+
+	return 0;
}
diff -urN linux-2.4.13/arch/ia64/kernel/perfmon.c linux-2.4.13-lia/arch/ia64/kernel/perfmon.c
--- linux-2.4.13/arch/ia64/kernel/perfmon.c	Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/kernel/perfmon.c	Thu Oct  4 00:21:39 2001
@@ -38,7 +38,7 @@

 #ifdef CONFIG_PERFMON

-#define PFM_VERSION	"0.2"
+#define PFM_VERSION	"0.3"
 #define PFM_SMPL_HDR_VERSION 1

 #define PMU_FIRST_COUNTER 4	/* first generic counter */
@@ -52,6 +52,7 @@
 #define PFM_DISABLE	0xa6	/* freeze only */
 #define PFM_RESTART	0xcf
 #define PFM_CREATE_CONTEXT 0xa7
+#define PFM_DESTROY_CONTEXT 0xa8
/*
 * Those 2 are just meant for debugging. I considered using sysctl() for
 * that but it is a little bit too pervasive. This solution is at least
@@ -60,6 +61,8 @@
 #define PFM_DEBUG_ON	0xe0
 #define PFM_DEBUG_OFF	0xe1

+#define PFM_DEBUG_BASE	PFM_DEBUG_ON
+

/*
 * perfmon API flags
@@ -68,7 +71,8 @@
 #define PFM_FL_INHERIT_ONCE	0x01	/* clone pfm_context only once across fork() */
 #define PFM_FL_INHERIT_ALL	0x02	/* always clone pfm_context across fork() */
 #define PFM_FL_SMPL_OVFL_NOBLOCK 0x04	/* do not block on sampling buffer overflow */
-#define PFM_FL_SYSTEMWIDE	0x08	/* create a systemwide context */
+#define PFM_FL_SYSTEM_WIDE	0x08	/* create a system wide context */
+#define PFM_FL_EXCL_INTR	0x10	/* exclude interrupt from system wide monitoring */

/*
 * PMC API flags
@@ -87,7 +91,7 @@
 #endif

 #define PMC_IS_IMPL(i)	(i < pmu_conf.num_pmcs && pmu_conf.impl_regs[i>>6] & (1<< (i&~(64-1))))
-#define PMD_IS_IMPL(i)	(i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
+#define PMD_IS_IMPL(i)	(i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1))))
 #define PMD_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))
 #define PMC_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters))

@@ -197,7 +201,8 @@
	unsigned int noblock:1;	/* block/don't block on overflow with notification */
	unsigned int system:1;	/* do system wide monitoring */
	unsigned int frozen:1;	/* pmu must be kept frozen on ctxsw in */
-	unsigned int reserved:27;
+	unsigned int exclintr:1;/* exlcude interrupts from system wide monitoring */
+	unsigned int reserved:26;
} pfm_context_flags_t;

typedef struct pfm_context {
@@ -207,26 +212,33 @@
	unsigned long ctx_iear_counter;	/* which PMD holds I-EAR */
	unsigned long ctx_btb_counter;	/* which PMD holds BTB */

-	pid_t ctx_notify_pid;	/* who to notify on overflow */
-	int ctx_notify_sig;	/* XXX: SIGPROF or other */
-	pfm_context_flags_t ctx_flags;	/* block/noblock */
-	pid_t ctx_creator;	/* pid of creator (debug) */
-	unsigned long ctx_ovfl_regs;	/* which registers just overflowed (notification) */
-	unsigned long ctx_smpl_regs;	/* which registers to record on overflow */
+	spinlock_t ctx_notify_lock;
+	pfm_context_flags_t ctx_flags;	/* block/noblock */
+	int ctx_notify_sig;		/* XXX: SIGPROF or other */
+	struct task_struct *ctx_notify_task;	/* who to notify on overflow */
+	struct task_struct *ctx_creator;	/* pid of creator (debug) */
+
+	unsigned long ctx_ovfl_regs;	/* which registers just overflowed (notification) */
+	unsigned long ctx_smpl_regs;	/* which registers to record on overflow */
+
+	struct semaphore ctx_restart_sem;	/* use for blocking notification mode */

-	struct semaphore ctx_restart_sem;	/* use for blocking notification mode */
+	unsigned long ctx_used_pmds[4];	/* bitmask of used PMD (speedup ctxsw) */
+	unsigned long ctx_used_pmcs[4];	/* bitmask of used PMC (speedup ctxsw) */

	pfm_counter_t ctx_pmds[IA64_NUM_PMD_COUNTERS];	/* XXX: size should be dynamic */
+
} pfm_context_t;

+#define CTX_USED_PMD(ctx,n) (ctx)->ctx_used_pmds[(n)>>6] |= 1<< ((n) % 64)
+#define CTX_USED_PMC(ctx,n) (ctx)->ctx_used_pmcs[(n)>>6] |= 1<< ((n) % 64)
+
 #define ctx_fl_inherit	ctx_flags.inherit
 #define ctx_fl_noblock	ctx_flags.noblock
 #define ctx_fl_system	ctx_flags.system
 #define ctx_fl_frozen	ctx_flags.frozen
+#define ctx_fl_exclintr	ctx_flags.exclintr

-#define CTX_IS_DEAR(c,n)	((c)->ctx_dear_counter == (n))
-#define CTX_IS_IEAR(c,n)	((c)->ctx_iear_counter == (n))
-#define CTX_IS_BTB(c,n)		((c)->ctx_btb_counter == (n))
 #define CTX_OVFL_NOBLOCK(c)	((c)->ctx_fl_noblock == 1)
 #define CTX_INHERIT_MODE(c)	((c)->ctx_fl_inherit)
 #define CTX_HAS_SMPL(c)		((c)->ctx_smpl_buf != NULL)
@@ -234,17 +246,15 @@
static pmu_config_t pmu_conf;

/* for debug only */
-static unsigned long pfm_debug=0;	/* 0= nodebug, >0= debug output on */
+static int pfm_debug=0;	/* 0= nodebug, >0= debug output on */
+
 #define DBprintk(a) \
	do { \
-		if (pfm_debug >0) { printk(__FUNCTION__" "); printk a; } \
+		if (pfm_debug >0) { printk(__FUNCTION__" %d: ", __LINE__); printk a; } \
	} while (0);

-static void perfmon_softint(unsigned long ignored);
static void ia64_reset_pmu(void);

-DECLARE_TASKLET(pfm_tasklet, perfmon_softint, 0);
-
/*
 * structure used to pass information between the interrupt handler
 * and the tasklet.
@@ -256,26 +266,42 @@
	unsigned long bitvect;	/* which counters have overflowed */
} notification_info_t;

-#define notification_is_invalid(i)	(i->to_pid < 2)

-/* will need to be cache line padded */
-static notification_info_t notify_info[NR_CPUS];
+typedef struct {
+	unsigned long pfs_proc_sessions;
+	unsigned long pfs_sys_session;	/* can only be 0/1 */
+	unsigned long pfs_dfl_dcr;	/* XXX: hack */
+	unsigned int  pfs_pp;
+} pfm_session_t;

-/*
- * We force cache line alignment to avoid false sharing
- * given that we have one entry per CPU.
- */
-static struct {
+struct {
	struct task_struct *owner;
} ____cacheline_aligned pmu_owners[NR_CPUS];

-/* helper macros */
+
+
+/*
+ * helper macros
+ */
 #define SET_PMU_OWNER(t)	do { pmu_owners[smp_processor_id()].owner = (t); } while(0);
 #define PMU_OWNER()		pmu_owners[smp_processor_id()].owner

+#ifdef CONFIG_SMP
+#define PFM_CAN_DO_LAZY()	(smp_num_cpus==1 && pfs_info.pfs_sys_session==0)
+#else
+#define PFM_CAN_DO_LAZY()	(pfs_info.pfs_sys_session==0)
+#endif
+
+static void pfm_lazy_save_regs (struct task_struct *ta);
+
/* for debug only */
static struct proc_dir_entry *perfmon_dir;

/*
+ * XXX: hack to indicate that a system wide monitoring session is active
+ */
+static pfm_session_t pfs_info;
+
+/*
 * finds the number of PM(C|D) registers given
 * the bitvector returned by PAL
 */
@@ -339,8 +365,7 @@
static inline unsigned long
kvirt_to_pa(unsigned long adr)
{
-	__u64 pa;
-	__asm__ __volatile__ ("tpa %0 = %1" : "=r"(pa) : "r"(adr) : "memory");
+	__u64 pa = ia64_tpa(adr);
DBprintk(("kv2pa(%lx-->%lx)\n", adr, pa)); return pa; } @@ -568,25 +593,44 @@ static int pfx_is_sane(pfreq_context_t *pfx) { + int ctx_flags; + /* valid signal */ - if (pfx->notify_sig < 1 || pfx->notify_sig >=3D _NSIG) return 0; + //if (pfx->notify_sig < 1 || pfx->notify_sig >=3D _NSIG) return -EINVAL; + if (pfx->notify_sig !=3D0 && pfx->notify_sig !=3D SIGPROF) return -EINVAL; =20 /* cannot send to process 1, 0 means do not notify */ - if (pfx->notify_pid < 0 || pfx->notify_pid =3D 1) return 0; + if (pfx->notify_pid < 0 || pfx->notify_pid =3D 1) return -EINVAL; + + ctx_flags =3D pfx->flags; =20 + if (ctx_flags & PFM_FL_SYSTEM_WIDE) { +#ifdef CONFIG_SMP + if (smp_num_cpus > 1) { + printk("perfmon: system wide monitoring on SMP not yet supported\n"); + return -EINVAL; + } +#endif + if ((ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) =3D 0) { + printk("perfmon: system wide monitoring cannot use blocking notificatio= n mode\n"); + return -EINVAL; + } + } /* probably more to add here */ =20 - return 1; + return 0; } =20 static int -pfm_context_create(struct task_struct *task, int flags, perfmon_req_t *req) +pfm_context_create(int flags, perfmon_req_t *req) { pfm_context_t *ctx; + struct task_struct *task =3D NULL; perfmon_req_t tmp; void *uaddr =3D NULL; - int ret =3D -EFAULT; + int ret; int ctx_flags; + pid_t pid; =20 /* to go away */ if (flags) { @@ -595,48 +639,156 @@ =20 if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT; =20 + ret =3D pfx_is_sane(&tmp.pfr_ctx); + if (ret < 0) return ret; + ctx_flags =3D tmp.pfr_ctx.flags; =20 - /* not yet supported */ - if (ctx_flags & PFM_FL_SYSTEMWIDE) return -EINVAL; + if (ctx_flags & PFM_FL_SYSTEM_WIDE) { + /* + * XXX: This is not AT ALL SMP safe + */ + if (pfs_info.pfs_proc_sessions > 0) return -EBUSY; + if (pfs_info.pfs_sys_session > 0) return -EBUSY; + + pfs_info.pfs_sys_session =3D 1; =20 - if (!pfx_is_sane(&tmp.pfr_ctx)) return -EINVAL; + } else if (pfs_info.pfs_sys_session >0) { + /* no per-process monitoring while there 
is a system wide session */ + return -EBUSY; + } else + pfs_info.pfs_proc_sessions++; =20 ctx =3D pfm_context_alloc(); - if (!ctx) return -ENOMEM; + if (!ctx) goto error; + + /* record the creator (debug only) */ + ctx->ctx_creator =3D current; + + pid =3D tmp.pfr_ctx.notify_pid; + + spin_lock_init(&ctx->ctx_notify_lock); + + if (pid =3D current->pid) { + ctx->ctx_notify_task =3D task =3D current; + current->thread.pfm_context =3D ctx; + + atomic_set(¤t->thread.pfm_notifiers_check, 1); + + } else if (pid!=3D0) { + read_lock(&tasklist_lock); + + task =3D find_task_by_pid(pid); + if (task) { + /* + * record who to notify + */ + ctx->ctx_notify_task =3D task; + + /*=20 + * make visible + * must be done inside critical section + * + * if the initialization does not go through it is still + * okay because child will do the scan for nothing which + * won't hurt. + */ + current->thread.pfm_context =3D ctx; + + /* + * will cause task to check on exit for monitored + * processes that would notify it. see release_thread() + * Note: the scan MUST be done in release thread, once the + * task has been detached from the tasklist otherwise you are + * exposed to race conditions. 
+			 */
+			atomic_add(1, &task->thread.pfm_notifiers_check);
+		}
+		read_unlock(&tasklist_lock);
+	}

-	/* record who the creator is (for debug) */
-	ctx->ctx_creator = task->pid;
+	/*
+	 * notification process does not exist
+	 */
+	if (pid != 0 && task == NULL) {
+		ret = -EINVAL;
+		goto buffer_error;
+	}

-	ctx->ctx_notify_pid = tmp.pfr_ctx.notify_pid;
	ctx->ctx_notify_sig = SIGPROF;	/* siginfo imposes a fixed signal */

	if (tmp.pfr_ctx.smpl_entries) {
		DBprintk((" sampling entries=%ld\n",tmp.pfr_ctx.smpl_entries));
-		if ((ret=pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs, tmp.pfr_ctx.smpl_entries, &uaddr)) ) goto buffer_error;
+
+		ret = pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs,
+					    tmp.pfr_ctx.smpl_entries, &uaddr);
+		if (ret<0) goto buffer_error;
+
		tmp.pfr_ctx.smpl_vaddr = uaddr;
	}
	/* initialization of context's flags */
-	ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK;
-	ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
-	ctx->ctx_fl_system  = (ctx_flags & PFM_FL_SYSTEMWIDE) ? 1: 0;
-	ctx->ctx_fl_frozen  = 0;
+	ctx->ctx_fl_inherit  = ctx_flags & PFM_FL_INHERIT_MASK;
+	ctx->ctx_fl_noblock  = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0;
+	ctx->ctx_fl_system   = (ctx_flags & PFM_FL_SYSTEM_WIDE) ? 1: 0;
+	ctx->ctx_fl_exclintr = (ctx_flags & PFM_FL_EXCL_INTR) ? 1: 0;
+	ctx->ctx_fl_frozen   = 0;
+
+	/*
+	 * Keep track of the pmds we want to sample
+	 * XXX: may be we don't need to save/restore the DEAR/IEAR pmds
+	 * but we do need the BTB for sure. This is because of a hardware
+	 * buffer of 1 only for non-BTB pmds.
+	 */
+	ctx->ctx_used_pmds[0] = tmp.pfr_ctx.smpl_regs;
+	ctx->ctx_used_pmcs[0] = 1;	/* always save/restore PMC[0] */

	sema_init(&ctx->ctx_restart_sem, 0);	/* init this semaphore to locked */

-	if (copy_to_user(req, &tmp, sizeof(tmp))) goto buffer_error;

-	DBprintk((" context=%p, pid=%d notify_sig %d notify_pid=%d\n",(void *)ctx, task->pid, ctx->ctx_notify_sig, ctx->ctx_notify_pid));
-	DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+	if (copy_to_user(req, &tmp, sizeof(tmp))) {
+		ret = -EFAULT;
+		goto buffer_error;
+	}
+
+	DBprintk((" context=%p, pid=%d notify_sig %d notify_task=%p\n",(void *)ctx, current->pid, ctx->ctx_notify_sig, ctx->ctx_notify_task));
+	DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, current->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system));
+
+	/*
+	 * when no notification is required, we can make this visible at the last moment
+	 */
+	if (pid == 0) current->thread.pfm_context = ctx;
+
+	/*
+	 * by default, we always include interrupts for system wide
+	 * DCR.pp is set by default to zero by kernel in cpu_init()
+	 */
+	if (ctx->ctx_fl_system) {
+		if (ctx->ctx_fl_exclintr == 0) {
+			unsigned long dcr = ia64_get_dcr();
+
+			ia64_set_dcr(dcr|IA64_DCR_PP);
+			/*
+			 * keep track of the kernel default value
+			 */
+			pfs_info.pfs_dfl_dcr = dcr;

-	/* link with task */
-	task->thread.pfm_context = ctx;
+			DBprintk((" dcr.pp is set\n"));
+		}
+	}

	return 0;

buffer_error:
-	vfree(ctx);
-
+	pfm_context_free(ctx);
+error:
+	/*
+	 * undo session reservation
+	 */
+	if (ctx_flags & PFM_FL_SYSTEM_WIDE) {
+		pfs_info.pfs_sys_session = 0;
+	} else {
+		pfs_info.pfs_proc_sessions--;
+	}
	return ret;
}

@@ -656,8 +808,20 @@

		/* upper part is ignored on rval */
		ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
+
+		/*
+		 * we must reset BTB index (clears pmd16.full to make
+		 * sure we do not report the same branches twice.
+		 * The non-blocking case in handled in update_counters()
+		 */
+		if (cnum == ctx->ctx_btb_counter) {
+			DBprintk(("reseting PMD16\n"));
+			ia64_set_pmd(16, 0);
+		}
	}
+	/* just in case ! */
+	ctx->ctx_ovfl_regs = 0;
}

static int
@@ -695,20 +859,23 @@
	} else if (PMC_IS_BTB(&tmp.pfr_reg.reg_value)) {
		ctx->ctx_btb_counter = cnum;
	}
-
+#if 0
	if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
		ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
+#endif
	}
-
+	/* keep track of what we use */
+	CTX_USED_PMC(ctx, cnum);
	ia64_set_pmc(cnum, tmp.pfr_reg.reg_value);
-	DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags));
+
+	DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x used_pmcs=0%lx\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags, ctx->ctx_used_pmcs[0]));

	}
	/*
	 * we have to set this here event hough we haven't necessarily started monitoring
	 * because we may be context switched out
	 */
-	th->flags |= IA64_THREAD_PM_VALID;
+	if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID;

	return 0;
}

@@ -741,25 +908,32 @@
		ctx->ctx_pmds[k].val = tmp.pfr_reg.reg_value & ~pmu_conf.perf_ovfl_val;
		ctx->ctx_pmds[k].smpl_rval = tmp.pfr_reg.reg_smpl_reset;
		ctx->ctx_pmds[k].ovfl_rval = tmp.pfr_reg.reg_ovfl_reset;
+
+		if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY)
+			ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY;
	}
+	/* keep track of what we use */
+	CTX_USED_PMD(ctx, cnum);

	/* writes to unimplemented part is ignored, so this is safe */
	ia64_set_pmd(cnum, tmp.pfr_reg.reg_value);

	/* to go away */
	ia64_srlz_d();
-	DBprintk((" setting PMD[%ld]: pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx\n",
+	DBprintk((" setting PMD[%ld]: ovfl_notify=%d pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx used_pmds=0%lx\n",
		cnum,
+		PMD_OVFL_NOTIFY(ctx, cnum - PMU_FIRST_COUNTER),
		ctx->ctx_pmds[k].val,
		ctx->ctx_pmds[k].ovfl_rval,
		ctx->ctx_pmds[k].smpl_rval,
-		ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val));
+		ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val,
+		ctx->ctx_used_pmds[0]));
	}
	/*
	 * we have to set this here event hough we haven't necessarily started monitoring
	 * because we may be context switched out
	 */
-	th->flags |= IA64_THREAD_PM_VALID;
+	if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID;

	return 0;
}

@@ -783,6 +957,8 @@
	/* XXX: ctx locking may be required here */

	for (i = 0; i < count; i++, req++) {
+		unsigned long reg_val = ~0, ctx_val = ~0;
+
		if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT;

		if (!PMD_IS_IMPL(tmp.pfr_reg.reg_num)) return -EINVAL;
@@ -791,23 +967,25 @@
			if (ta == current){
				val = ia64_get_pmd(tmp.pfr_reg.reg_num);
			} else {
-				val = th->pmd[tmp.pfr_reg.reg_num];
+				val = reg_val = th->pmd[tmp.pfr_reg.reg_num];
			}
			val &= pmu_conf.perf_ovfl_val;
			/*
			 * lower part of .val may not be zero, so we must be an addition because of
			 * residual count (see update_counters).
			 */
-			val += ctx->ctx_pmds[tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER].val;
+			val += ctx_val = ctx->ctx_pmds[tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER].val;
		} else {
			/* for now */
			if (ta != current) return -EINVAL;

+			ia64_srlz_d();
			val = ia64_get_pmd(tmp.pfr_reg.reg_num);
		}
		tmp.pfr_reg.reg_value = val;

-		DBprintk((" reading PMD[%ld]=0x%lx\n", tmp.pfr_reg.reg_num, val));
+		DBprintk((" reading PMD[%ld]=0x%lx reg=0x%lx ctx_val=0x%lx pmc=0x%lx\n",
+			tmp.pfr_reg.reg_num, val, reg_val, ctx_val, ia64_get_pmc(tmp.pfr_reg.reg_num)));

		if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT;
	}
@@ -822,7 +1000,7 @@
	void *sem = &ctx->ctx_restart_sem;

	if (task == current) {
-		DBprintk((" restartig self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));
+		DBprintk((" restarting self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen));

		pfm_reset_regs(ctx);

@@ -871,6 +1049,23 @@
	return 0;
}

+/*
+ * system-wide mode: propagate activation/desactivation throughout the tasklist
+ *
+ * XXX: does not work for SMP, of course
+ */
+static void
+pfm_process_tasklist(int cmd)
+{
+	struct task_struct *p;
+	struct pt_regs *regs;
+
+	for_each_task(p) {
+		regs = (struct pt_regs *)((unsigned long)p + IA64_STK_OFFSET);
+		regs--;
+		ia64_psr(regs)->pp = cmd;
+	}
+}

static int
do_perfmonctl (struct task_struct *task, int cmd, int flags, perfmon_req_t *req, int count, struct pt_regs *regs)
@@ -881,19 +1076,26 @@

	memset(&tmp, 0, sizeof(tmp));

+	if (ctx == NULL && cmd != PFM_CREATE_CONTEXT && cmd < PFM_DEBUG_BASE) {
+		DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
+		return -EINVAL;
+	}
+
	switch (cmd) {
		case PFM_CREATE_CONTEXT:
			/* a context has already been defined */
			if (ctx) return -EBUSY;

-			/* may be a temporary limitation */
+			/*
+			 * cannot directly create a context in another process
+			 */
			if (task != current) return -EINVAL;

			if (req == NULL || count != 1) return -EINVAL;

			if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;

-			return pfm_context_create(task, flags, req);
+			return pfm_context_create(flags, req);

		case PFM_WRITE_PMCS:
			/* we don't quite support this right now */
@@ -901,10 +1103,6 @@

			if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;

-			if (!ctx) {
-				DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid));
-				return -EINVAL;
-			}
			return pfm_write_pmcs(task, req, count);

		case PFM_WRITE_PMDS:
@@ -913,45 +1111,41 @@

			if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;

-			if (!ctx) {
-				DBprintk((" PFM_WRITE_PMDS: no context for task %d\n", task->pid));
-				return -EINVAL;
-			}
			return pfm_write_pmds(task, req, count);

		case PFM_START:
			/* we don't quite support this right now */
			if (task != current) return -EINVAL;

-			if (!ctx) {
-				DBprintk((" PFM_START: no context for task %d\n", task->pid));
-				return -EINVAL;
-			}
+			if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER());

			SET_PMU_OWNER(current);

			/* will start monitoring right after rfi */
			ia64_psr(regs)->up = 1;
+			ia64_psr(regs)->pp = 1;
+
+			if (ctx->ctx_fl_system) {
+				pfm_process_tasklist(1);
+				pfs_info.pfs_pp = 1;
+			}

			/*
			 * mark the state as valid.
			 * this will trigger save/restore at context switch
			 */
-			th->flags |= IA64_THREAD_PM_VALID;
+			if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID;

			ia64_set_pmc(0, 0);
			ia64_srlz_d();

-		break;
+			break;

		case PFM_ENABLE:
			/* we don't quite support this right now */
			if (task != current) return -EINVAL;

-			if (!ctx) {
-				DBprintk((" PFM_ENABLE: no context for task %d\n", task->pid));
-				return -EINVAL;
-			}
+			if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER());

			/* reset all registers to stable quiet state */
			ia64_reset_pmu();
@@ -969,7 +1163,7 @@
			 * mark the state as valid.
			 * this will trigger save/restore at context switch
			 */
-			th->flags |= IA64_THREAD_PM_VALID;
+			if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID;

			/* simply unfreeze */
			ia64_set_pmc(0, 0);
@@ -983,54 +1177,41 @@
			/* simply freeze */
			ia64_set_pmc(0, 1);
			ia64_srlz_d();
+			/*
+			 * XXX: cannot really toggle IA64_THREAD_PM_VALID
+			 * but context is still considered valid, so any
+			 * read request would return something valid. Same
+			 * thing when this task terminates (pfm_flush_regs()).
+			 */
			break;

		case PFM_READ_PMDS:
			if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;
			if (!access_ok(VERIFY_WRITE, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT;

-			if (!ctx) {
-				DBprintk((" PFM_READ_PMDS: no context for task %d\n", task->pid));
-				return -EINVAL;
-			}
			return pfm_read_pmds(task, req, count);

		case PFM_STOP:
			/* we don't quite support this right now */
			if (task != current) return -EINVAL;

-			ia64_set_pmc(0, 1);
-			ia64_srlz_d();
-
+			/* simply stop monitors, not PMU */
			ia64_psr(regs)->up = 0;
+			ia64_psr(regs)->pp = 0;

-			th->flags &= ~IA64_THREAD_PM_VALID;
-
-			SET_PMU_OWNER(NULL);
-
-			/* we probably will need some more cleanup here */
-			break;
-
-		case PFM_DEBUG_ON:
-			printk(" debugging on\n");
-			pfm_debug = 1;
-			break;
+			if (ctx->ctx_fl_system) {
+				pfm_process_tasklist(0);
+				pfs_info.pfs_pp = 0;
+			}

-		case PFM_DEBUG_OFF:
-			printk(" debugging off\n");
-			pfm_debug = 0;
			break;

		case PFM_RESTART: /* temporary, will most likely end up as a PFM_ENABLE */

-			if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
+			if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system==0) {
				printk(" PFM_RESTART not monitoring\n");
				return -EINVAL;
			}
-			if (!ctx) {
-				printk(" PFM_RESTART no ctx for %d\n", task->pid);
-				return -EINVAL;
-			}
			if (CTX_OVFL_NOBLOCK(ctx) == 0 && ctx->ctx_fl_frozen==0) {
				printk("task %d without pmu_frozen set\n", task->pid);
				return -EINVAL;
@@ -1038,6 +1219,37 @@

			return pfm_do_restart(task);	/* we only look at first entry */

+		case PFM_DESTROY_CONTEXT:
+			/* we don't quite support this right now */
+			if (task != current) return -EINVAL;
+
+			/* first stop monitors */
+			ia64_psr(regs)->up = 0;
+			ia64_psr(regs)->pp = 0;
+
+			/* then freeze PMU */
+			ia64_set_pmc(0, 1);
+			ia64_srlz_d();
+
+			/* don't save/restore on context switch */
+			if (ctx->ctx_fl_system ==0) task->thread.flags &= ~IA64_THREAD_PM_VALID;
+
+			SET_PMU_OWNER(NULL);
+
+			/* now free context and related state */
+			pfm_context_exit(task);
+			break;
+
+		case PFM_DEBUG_ON:
+			printk("perfmon debugging on\n");
+			pfm_debug = 1;
+			break;
+
+		case PFM_DEBUG_OFF:
+			printk("perfmon debugging off\n");
+			pfm_debug = 0;
+			break;
+
		default:
			DBprintk((" UNknown command 0x%x\n", cmd));
			return -EINVAL;
@@ -1074,11 +1286,8 @@
	/* XXX: pid interface is going away in favor of pfm context */
	if (pid != current->pid) {
		read_lock(&tasklist_lock);
-		{
-			child = find_task_by_pid(pid);
-			if (child)
-				get_task_struct(child);
-		}
+
+		child = find_task_by_pid(pid);

		if (!child) goto abort_call;

@@ -1101,93 +1310,44 @@
	return ret;
}

-
-/*
- * This function is invoked on the exit path of the kernel. Therefore it must make sure
- * it does does modify the caller's input registers (in0-in7) in case of entry by system call
- * which can be restarted. That's why it's declared as a system call and all 8 possible args
- * are declared even though not used.
- */
 #if __GNUC__ >= 3
void asmlinkage
-pfm_overflow_notify(void)
+pfm_block_on_overflow(void)
 #else
void asmlinkage
-pfm_overflow_notify(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
+pfm_block_on_overflow(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7)
 #endif
{
-	struct task_struct *task;
	struct thread_struct *th = &current->thread;
	pfm_context_t *ctx = current->thread.pfm_context;
-	struct siginfo si;
	int ret;

	/*
-	 * do some sanity checks first
-	 */
-	if (!ctx) {
-		printk("perfmon: process %d has no PFM context\n", current->pid);
-		return;
-	}
-	if (ctx->ctx_notify_pid < 2) {
-		printk("perfmon: process %d invalid notify_pid=%d\n", current->pid, ctx->ctx_notify_pid);
-		return;
-	}
-
-	DBprintk((" current=%d ctx=%p bv=0%lx\n", current->pid, (void *)ctx, ctx->ctx_ovfl_regs));
-	/*
	 * NO matter what notify_pid is,
	 * we clear overflow, won't notify again
	 */
-	th->pfm_pend_notify = 0;
+	th->pfm_must_block = 0;

	/*
-
* When measuring in kernel mode and non-blocking fashion, it is possible= to - * get an overflow while executing this code. Therefore the state of pend= _notify - * and ovfl_regs can be altered. The important point is not to loose any = notification. - * It is fine to get called for nothing. To make sure we do collect as mu= ch state as - * possible, update_counters() always uses |=3D to add bit to the ovfl_re= gs field. - * - * In certain cases, it is possible to come here, with ovfl_regs =3D 0; - * - * XXX: pend_notify and ovfl_regs could be merged maybe ! + * do some sanity checks first */ - if (ctx->ctx_ovfl_regs =3D 0) { - printk("perfmon: spurious overflow notification from pid %d\n", current-= >pid); + if (!ctx) { + printk("perfmon: process %d has no PFM context\n", current->pid); return; } - read_lock(&tasklist_lock); - - task =3D find_task_by_pid(ctx->ctx_notify_pid); - - if (task) { - si.si_signo =3D ctx->ctx_notify_sig; - si.si_errno =3D 0; - si.si_code =3D PROF_OVFL; /* goes to user */ - si.si_addr =3D NULL; - si.si_pid =3D current->pid; /* who is sending */ - si.si_pfm_ovfl =3D ctx->ctx_ovfl_regs; - - DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task)); - - /* must be done with tasklist_lock locked */ - ret =3D send_sig_info(ctx->ctx_notify_sig, &si, task); - if (ret !=3D 0) { - DBprintk((" send_sig_info(process %d, SIGPROF)=3D%d\n", ctx->ctx_notify= _pid, ret)); - task =3D NULL; /* will cause return */ - } - } else { - printk("perfmon: notify_pid %d not found\n", ctx->ctx_notify_pid); + if (ctx->ctx_notify_task =3D 0) { + printk("perfmon: process %d has no task to notify\n", current->pid); + return; } =20 - read_unlock(&tasklist_lock); + DBprintk((" current=3D%d task=3D%d\n", current->pid, ctx->ctx_notify_task= ->pid)); =20 - /* now that we have released the lock handle error condition */ - if (!task || CTX_OVFL_NOBLOCK(ctx)) { - /* we clear all pending overflow bits in noblock mode */ - ctx->ctx_ovfl_regs =3D 0; + /* should not happen */ + 
if (CTX_OVFL_NOBLOCK(ctx)) { + printk("perfmon: process %d non-blocking ctx should not be here\n", curr= ent->pid); return; } + DBprintk((" CPU%d %d before sleep\n", smp_processor_id(), current->pid)); =20 /* @@ -1211,9 +1371,6 @@ =20 pfm_reset_regs(ctx); =20 - /* now we can clear this mask */ - ctx->ctx_ovfl_regs =3D 0; - /* * Unlock sampling buffer and reset index atomically * XXX: not really needed when blocking @@ -1232,84 +1389,14 @@ } } =20 -static void -perfmon_softint(unsigned long ignored) -{ - notification_info_t *info; - int my_cpu =3D smp_processor_id(); - struct task_struct *task; - struct siginfo si; - - info =3D notify_info+my_cpu; - - DBprintk((" CPU%d current=3D%d to_pid=3D%d from_pid=3D%d bv=3D0x%lx\n", \ - smp_processor_id(), current->pid, info->to_pid, info->from_pid, info->bi= tvect)); - - /* assumption check */ - if (info->from_pid =3D info->to_pid) { - DBprintk((" Tasklet assumption error: from=3D%d tor=3D%d\n", info->from_= pid, info->to_pid)); - return; - } - - if (notification_is_invalid(info)) { - DBprintk((" invalid notification information\n")); - return; - } - - /* sanity check */ - if (info->to_pid =3D 1) { - DBprintk((" cannot notify init\n")); - return; - } - /* - * XXX: needs way more checks here to make sure we send to a task we have= control over - */ - read_lock(&tasklist_lock); - - task =3D find_task_by_pid(info->to_pid); - - DBprintk((" after find %p\n", (void *)task)); - - if (task) { - int ret; - - si.si_signo =3D SIGPROF; - si.si_errno =3D 0; - si.si_code =3D PROF_OVFL; /* goes to user */ - si.si_addr =3D NULL; - si.si_pid =3D info->from_pid; /* who is sending */ - si.si_pfm_ovfl =3D info->bitvect; - - DBprintk((" SIGPROF to %d @ %p\n", task->pid, (void *)task)); - - /* must be done with tasklist_lock locked */ - ret =3D send_sig_info(SIGPROF, &si, task); - if (ret !=3D 0) - DBprintk((" send_sig_info(process %d, SIGPROF)=3D%d\n", info->to_pid, r= et)); - - /* invalidate notification */ - info->to_pid =3D info->from_pid =3D 
0; - info->bitvect =3D 0; - } - - read_unlock(&tasklist_lock); - - DBprintk((" after unlock %p\n", (void *)task)); - - if (!task) { - printk("perfmon: CPU%d cannot find process %d\n", smp_processor_id(), in= fo->to_pid); - } -} - /* * main overflow processing routine. * it can be called from the interrupt path or explicitely during the cont= ext switch code * Return: - * 0 : do not unfreeze the PMU - * 1 : PMU can be unfrozen + * new value of pmc[0]. if 0x0 then unfreeze, else keep frozen */ -static unsigned long -update_counters (struct task_struct *ta, u64 pmc0, struct pt_regs *regs) +unsigned long +update_counters (struct task_struct *task, u64 pmc0, struct pt_regs *regs) { unsigned long mask, i, cnum; struct thread_struct *th; @@ -1317,7 +1404,9 @@ unsigned long bv =3D 0; int my_cpu =3D smp_processor_id(); int ret =3D 1, buffer_is_full =3D 0; - int ovfl_is_smpl, can_notify, need_reset_pmd16=3D0; + int ovfl_has_long_recovery, can_notify, need_reset_pmd16=3D0; + struct siginfo si; + /* * It is never safe to access the task for which the overflow interrupt i= s destinated * using the current variable as the interrupt may occur in the middle of= a context switch @@ -1331,23 +1420,23 @@ * valid one, i.e. the one that caused the interrupt. 
	 */

-	if (ta == NULL) {
+	if (task == NULL) {
		DBprintk((" owners[%d]=NULL\n", my_cpu));
		return 0x1;
	}
-	th  = &ta->thread;
+	th  = &task->thread;
	ctx = th->pfm_context;

	/*
	 * XXX: debug test
	 * Don't think this could happen given upfront tests
	 */
-	if ((th->flags & IA64_THREAD_PM_VALID) == 0) {
-		printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", ta->pid);
+	if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) {
+		printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", task->pid);
		return 0x1;
	}
	if (!ctx) {
-		printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", ta->pid);
+		printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", task->pid);
		return 0;
	}

@@ -1355,16 +1444,21 @@
	 * sanity test. Should never happen
	 */
	if ((pmc0 & 0x1 ) == 0) {
-		printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", ta->pid, pmc0);
+		printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", task->pid, pmc0);
		return 0x0;
	}

	mask = pmc0 >> PMU_FIRST_COUNTER;

-	DBprintk(("pmc0=0x%lx pid=%d\n", pmc0, ta->pid));
-
-	DBprintk(("ctx is in %s mode\n", CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK"));
+	DBprintk(("pmc0=0x%lx pid=%d owner=%d iip=0x%lx, ctx is in %s mode used_pmds=0x%lx used_pmcs=0x%lx\n",
+		pmc0, task->pid, PMU_OWNER()->pid, regs->cr_iip,
+		CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK",
+		ctx->ctx_used_pmds[0],
+		ctx->ctx_used_pmcs[0]));

+	/*
+	 * XXX: need to record sample only when an EAR/BTB has overflowed
+	 */
	if (CTX_HAS_SMPL(ctx)) {
		pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf;
		unsigned long *e, m, idx=0;
		int j;

		idx = ia64_fetch_and_add(1, &psb->psb_index);
-		DBprintk((" trying to record index=%ld entries=%ld\n", idx, psb->psb_entries));
+		DBprintk((" recording index=%ld entries=%ld\n", idx, psb->psb_entries));

		/*
		 * XXX: there is a small chance that we could run out on index before resetting
		 * but index is unsigned long, so it will take some time.....
+		 * We use > instead of == because fetch_and_add() is off by one (see below)
+		 *
+		 * This case can happen in non-blocking mode or with multiple processes.
+		 * For non-blocking, we need to reload and continue.
		 */
		if (idx > psb->psb_entries) {
			buffer_is_full = 1;
@@ -1388,7 +1486,7 @@

		h = (perfmon_smpl_entry_t *)(((char *)psb->psb_addr) + idx*(psb->psb_entry_size));

-		h->pid  = ta->pid;
+		h->pid  = task->pid;
		h->cpu  = my_cpu;
		h->rate = 0;
		h->ip   = regs ? regs->cr_iip : 0x0; /* where did the fault happened */
@@ -1398,6 +1496,7 @@
		h->stamp = perfmon_get_stamp();

		e = (unsigned long *)(h+1);
+
		/*
		 * selectively store PMDs in increasing index number
		 */
@@ -1406,35 +1505,66 @@
			if (PMD_IS_COUNTER(j))
				*e = ctx->ctx_pmds[j-PMU_FIRST_COUNTER].val + (ia64_get_pmd(j) & pmu_conf.perf_ovfl_val);
-			else
+			else {
				*e = ia64_get_pmd(j); /* slow */
+			}
			DBprintk((" e=%p pmd%d =0x%lx\n", (void *)e, j, *e));
			e++;
		}
	}
-	/* make the new entry visible to user, needs to be atomic */
+	/*
+	 * make the new entry visible to user, needs to be atomic
+	 */
	ia64_fetch_and_add(1, &psb->psb_hdr->hdr_count);

	DBprintk((" index=%ld entries=%ld hdr_count=%ld\n", idx, psb->psb_entries, psb->psb_hdr->hdr_count));
-
-	/* sampling buffer full ? */
+	/*
+	 * sampling buffer full ?
+	 */
		if (idx == (psb->psb_entries-1)) {
-			bv = mask;
+			/*
+			 * will cause notification, cannot be 0
+			 */
+			bv = mask << PMU_FIRST_COUNTER;
+
			buffer_is_full = 1;

			DBprintk((" sampling buffer full must notify bv=0x%lx\n", bv));

-			if (!CTX_OVFL_NOBLOCK(ctx)) goto buffer_full;
+			/*
+			 * we do not reload here, when context is blocking
+			 */
+			if (!CTX_OVFL_NOBLOCK(ctx)) goto no_reload;
+
			/*
			 * here, we have a full buffer but we are in non-blocking mode
-			 * so we need to reloads overflowed PMDs with sampling reset values
-			 * and restart
+			 * so we need to reload overflowed PMDs with sampling reset values
+			 * and restart right away.
			 */
		}
+		/* FALL THROUGH */
	}
reload_pmds:
-	ovfl_is_smpl = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
-	can_notify   = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_pid;
+
+	/*
+	 * in the case of a non-blocking context, we reload
+	 * with the ovfl_rval when no user notification is taking place (short recovery)
+	 * otherwise when the buffer is full (which requires user interaction) then we use
+	 * smpl_rval which is the long recovery path (disturbance introduced by user execution).
+	 *
+	 * XXX: implies that when buffer is full then there is always notification.
+	 */
+	ovfl_has_long_recovery = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full;
+
+	/*
+	 * XXX: CTX_HAS_SMPL() should really be something like CTX_HAS_SMPL() and is activated, i.e.,
+	 * one of the PMCs is configured for EAR/BTB.
+	 *
+	 * When sampling, we can only notify when the sampling buffer is full.
+	 */
+	can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_task;
+
+	DBprintk((" ovfl_has_long_recovery=%d can_notify=%d\n", ovfl_has_long_recovery, can_notify));

	for (i = 0, cnum = PMU_FIRST_COUNTER; mask ; cnum++, i++, mask >>= 1) {

@@ -1456,7 +1586,7 @@
		DBprintk((" pmod[%ld].val=0x%lx pmd=0x%lx\n", i, ctx->ctx_pmds[i].val, ia64_get_pmd(cnum)&pmu_conf.perf_ovfl_val));

		if (can_notify && PMD_OVFL_NOTIFY(ctx, i)) {
-			DBprintk((" CPU%d should notify process %d with signal %d\n", my_cpu, ctx->ctx_notify_pid, ctx->ctx_notify_sig));
+			DBprintk((" CPU%d should notify task %p with signal %d\n", my_cpu, ctx->ctx_notify_task, ctx->ctx_notify_sig));
			bv |= 1 << i;
		} else {
			DBprintk((" CPU%d PMD[%ld] overflow, no notification\n", my_cpu, cnum));
@@ -1467,93 +1597,150 @@
		 */

		/* writes to upper part are ignored, so this is safe */
-		if (ovfl_is_smpl) {
-			DBprintk((" CPU%d PMD[%ld] reloaded with smpl_val=%lx\n", my_cpu, cnum, ctx->ctx_pmds[i].smpl_rval));
+		if (ovfl_has_long_recovery) {
+			DBprintk((" CPU%d PMD[%ld] reload with smpl_val=%lx\n", my_cpu, cnum, ctx->ctx_pmds[i].smpl_rval));
			ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval);
		} else {
-			DBprintk((" CPU%d PMD[%ld] reloaded with ovfl_val=%lx\n", my_cpu, cnum, ctx->ctx_pmds[i].smpl_rval));
+			DBprintk((" CPU%d PMD[%ld] reload with ovfl_val=%lx\n", my_cpu, cnum, ctx->ctx_pmds[i].smpl_rval));
			ia64_set_pmd(cnum, ctx->ctx_pmds[i].ovfl_rval);
		}
		if (cnum == ctx->ctx_btb_counter) need_reset_pmd16=1;
	}
	/*
-	 * In case of BTB, overflow
-	 * we need to reset the BTB index.
+	 * In case of BTB overflow we need to reset the BTB index.
	 */
	if (need_reset_pmd16) {
		DBprintk(("reset PMD16\n"));
		ia64_set_pmd(16, 0);
	}
-buffer_full:
-	/* see pfm_overflow_notify() on details for why we use |= here */
-	ctx->ctx_ovfl_regs |= bv;

-	/* nobody to notify, return and unfreeze */
+no_reload:
+
+	/*
+	 * some counters overflowed, but they did not require
+	 * user notification, so after having reloaded them above
+	 * we simply restart
+	 */
	if (!bv) return 0x0;

+	ctx->ctx_ovfl_regs = bv; /* keep track of what to reset when unblocking */
+	/*
+	 * Now we know that:
+	 *	- we have some counters which overflowed (contained in bv)
+	 *	- someone has asked to be notified on overflow.
+	 */
+
+
+	/*
+	 * If the notification task is still present, then notify_task is non
+	 * null. It is cleared by that task if it ever exits before we do.
+	 */

-	if (ctx->ctx_notify_pid == ta->pid) {
-		struct siginfo si;
+	if (ctx->ctx_notify_task) {

		si.si_errno = 0;
		si.si_addr  = NULL;
-		si.si_pid   = ta->pid; /* who is sending */
+		si.si_pid   = task->pid; /* who is sending */

		si.si_signo = ctx->ctx_notify_sig; /* is SIGPROF */
		si.si_code  = PROF_OVFL; /* goes to user */
		si.si_pfm_ovfl = bv;


+
		/*
-		 * in this case, we don't stop the task, we let it go on. It will
-		 * necessarily go to the signal handler (if any) when it goes back to
-		 * user mode.
+		 * when the target of the signal is not ourself, we have to be more
+		 * careful. The notify_task may be cleared by the target task itself
+		 * in release_thread(). We must ensure mutual exclusion here such that
+		 * the signal is delivered (even to a dying task) safely.
		 */
-		DBprintk((" sending %d notification to self %d\n", si.si_signo, ta->pid));
-
-		/* this call is safe in an interrupt handler */
-		ret = send_sig_info(ctx->ctx_notify_sig, &si, ta);
-		if (ret != 0)
-			printk(" send_sig_info(process %d, SIGPROF)=%d\n", ta->pid, ret);
-		/*
-		 * no matter if we block or not, we keep PMU frozen and do not unfreeze on ctxsw
-		 */
-		ctx->ctx_fl_frozen = 1;
+		if (ctx->ctx_notify_task != current) {
+			/*
+			 * grab the notification lock for this task
+			 */
+			spin_lock(&ctx->ctx_notify_lock);

-	} else {
-#if 0
		/*
-		 * The tasklet is guaranteed to be scheduled for this CPU only
+			 * now notify_task cannot be modified until we're done
+			 * if NULL, then it got modified while we were in the handler
		 */
-		notify_info[my_cpu].to_pid   = ctx->notify_pid;
-		notify_info[my_cpu].from_pid = ta->pid; /* for debug only */
-		notify_info[my_cpu].bitvect  = bv;
-		/* tasklet is inserted and active */
-		tasklet_schedule(&pfm_tasklet);
-#endif
+			if (ctx->ctx_notify_task == NULL) {
+				spin_unlock(&ctx->ctx_notify_lock);
+				goto lost_notify;
+			}
		/*
-		 * stored the vector of overflowed registers for use in notification
-		 * mark that a notification/blocking is pending (arm the trap)
+			 * required by send_sig_info() to make sure the target
+			 * task does not disappear on us.
		 */
-		th->pfm_pend_notify = 1;
+			read_lock(&tasklist_lock);
+		}
+		/*
+		 * in this case, we don't stop the task, we let it go on. It will
+		 * necessarily go to the signal handler (if any) when it goes back to
+		 * user mode.
+		 */
+		DBprintk((" %d sending %d notification to %d\n", task->pid, si.si_signo, ctx->ctx_notify_task->pid));
+
+
+		/*
+		 * this call is safe in an interrupt handler, so does read_lock() on tasklist_lock
+		 */
+		ret = send_sig_info(ctx->ctx_notify_sig, &si, ctx->ctx_notify_task);
+		if (ret != 0) printk(" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_task->pid, ret);
+		/*
+		 * now undo the protections in order
+		 */
+		if (ctx->ctx_notify_task != current) {
+			read_unlock(&tasklist_lock);
+			spin_unlock(&ctx->ctx_notify_lock);
+		}

		/*
-		 * if we do block, then keep PMU frozen until restart
+		 * if we block, set the pfm_must_block bit
+		 * when in block mode, we can effectively block only when the notified
+		 * task is not self, otherwise we would deadlock.
+		 * in this configuration, the notification is sent, the task will not
+		 * block on the way back to user mode, but the PMU will be kept frozen
+		 * until PFM_RESTART.
+		 * Note that here there is still a race condition with notify_task
+		 * possibly being nullified behind our back, but this is fine because
+		 * it can only be changed to NULL which, by construction, can only be
+		 * done when notify_task != current. So if it was already different
+		 * before, changing it to NULL will still maintain this invariant.
+		 * Of course, when it is equal to current it cannot change at this point.
		 */
-		if (!CTX_OVFL_NOBLOCK(ctx)) ctx->ctx_fl_frozen = 1;
+		if (!CTX_OVFL_NOBLOCK(ctx) && ctx->ctx_notify_task != current) {
+			th->pfm_must_block = 1; /* will cause blocking */
+		}
+	} else {
+lost_notify:
+		DBprintk((" notification task has disappeared !\n"));
+		/*
+		 * for a non-blocking context, we make sure we do not fall into the pfm_overflow_notify()
+		 * trap. Also in the case of a blocking context with a lost notify process, we do not
+		 * want to block either (even though it is interruptible). In this case, the PMU will be kept
+		 * frozen and the process will run to completion without monitoring enabled.
+		 *
+		 * Of course, we cannot lose the notify process when self-monitoring.
+		 */
+		th->pfm_must_block = 0;

-		DBprintk((" process %d notify ovfl_regs=0x%lx\n", ta->pid, bv));
	}
	/*
-	 * keep PMU frozen (and overflowed bits cleared) when we have to stop,
-	 * otherwise return a resume 'value' for PMC[0]
-	 *
-	 * XXX: maybe that's enough to get rid of ctx_fl_frozen ?
+	 * if we block, we keep the PMU frozen. If non-blocking we restart.
+	 * in the case of non-blocking where the notify process is lost, we also
+	 * restart.
	 */
-	DBprintk((" will return pmc0=0x%x\n", ctx->ctx_fl_frozen ? 0x1 : 0x0));
+	if (!CTX_OVFL_NOBLOCK(ctx))
+		ctx->ctx_fl_frozen = 1;
+	else
+		ctx->ctx_fl_frozen = 0;
+
+	DBprintk((" reload pmc0=0x%x must_block=%ld\n",
+		ctx->ctx_fl_frozen ? 0x1 : 0x0, th->pfm_must_block));
+
	return ctx->ctx_fl_frozen ? 0x1 : 0x0;
}

@@ -1595,10 +1782,17 @@
	u64 pmc0 = ia64_get_pmc(0);
	int i;

-	p += sprintf(p, "PMC[0]=%lx\nPerfmon debug: %s\n", pmc0, pfm_debug ? "On" : "Off");
+	p += sprintf(p, "CPU%d.pmc[0]=%lx\nPerfmon debug: %s\n", smp_processor_id(), pmc0, pfm_debug ? "On" : "Off");
+	p += sprintf(p, "proc_sessions=%lu sys_sessions=%lu\n",
+		pfs_info.pfs_proc_sessions,
+		pfs_info.pfs_sys_session);
+
	for(i=0; i < NR_CPUS; i++) {
-		if (cpu_is_online(i))
-			p += sprintf(p, "CPU%d.PMU %d\n", i, pmu_owners[i].owner ? pmu_owners[i].owner->pid: 0);
+		if (cpu_is_online(i)) {
+			p += sprintf(p, "CPU%d.pmu_owner: %-6d\n",
+				i,
+				pmu_owners[i].owner ? pmu_owners[i].owner->pid: -1);
+		}
	}
	return p - page;
}
@@ -1648,8 +1842,8 @@
	}
	pmu_conf.perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1;
	pmu_conf.max_counters  = pm_info.pal_perf_mon_info_s.generic;
-	pmu_conf.num_pmds      = find_num_pm_regs(pmu_conf.impl_regs);
-	pmu_conf.num_pmcs      = find_num_pm_regs(&pmu_conf.impl_regs[4]);
+	pmu_conf.num_pmcs      = find_num_pm_regs(pmu_conf.impl_regs);
+	pmu_conf.num_pmds      = find_num_pm_regs(&pmu_conf.impl_regs[4]);

	printk("perfmon: %d bits counters (max value 0x%lx)\n", pm_info.pal_perf_mon_info_s.width, pmu_conf.perf_ovfl_val);
	printk("perfmon: %ld PMC/PMD pairs, %ld PMCs, %ld PMDs\n", pmu_conf.max_counters, pmu_conf.num_pmcs, pmu_conf.num_pmds);
@@ -1681,21 +1875,19 @@
	ia64_srlz_d();
}

-/*
- * XXX: for system wide this function MUST never be called
- */
void
pfm_save_regs (struct task_struct *ta)
{
	struct task_struct *owner;
+	pfm_context_t *ctx;
	struct thread_struct *t;
	u64 pmc0, psr;
+	unsigned long mask;
	int i;

-	if (ta == NULL) {
-		panic(__FUNCTION__" task is NULL\n");
-	}
-	t = &ta->thread;
+	t   = &ta->thread;
+	ctx = ta->thread.pfm_context;
+
	/*
	 * We must make sure that we don't loose any potential overflow
	 * interrupt while saving PMU context. In this code, external
@@ -1715,7 +1907,7 @@
	 * in kernel.
	 * By now, we could still have an overflow interrupt in-flight.
	 */
-	__asm__ __volatile__ ("rum psr.up;;"::: "memory");
+	__asm__ __volatile__ ("rsm psr.up|psr.pp;;"::: "memory");

	/*
	 * Mark the PMU as not owned
@@ -1744,7 +1936,6 @@
	 * next process does not start with monitoring on if not requested
	 */
	ia64_set_pmc(0, 1);
-	ia64_srlz_d();

	/*
	 * Check for overflow bits and proceed manually if needed
@@ -1755,94 +1946,111 @@
	 * next time the task exits from the kernel.
	 */
	if (pmc0 & ~0x1) {
-		if (owner != ta) printk(__FUNCTION__" owner=%p task=%p\n", (void *)owner, (void *)ta);
-		printk(__FUNCTION__" Warning: pmc[0]=0x%lx explicit call\n", pmc0);
-
-		pmc0 = update_counters(owner, pmc0, NULL);
+		update_counters(owner, pmc0, NULL);
		/* we will save the updated version of pmc0 */
	}
-
	/*
	 * restore PSR for context switch to save
	 */
	__asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(psr): "memory");

+	/*
+	 * we do not save registers if we can do lazy
+	 */
+	if (PFM_CAN_DO_LAZY()) {
+		SET_PMU_OWNER(owner);
+		return;
+	}

	/*
	 * XXX needs further optimization.
	 * Also must take holes into account
	 */
-	for (i=0; i< pmu_conf.num_pmds; i++) {
-		t->pmd[i] = ia64_get_pmd(i);
+	mask = ctx->ctx_used_pmds[0];
+	for (i=0; mask; i++, mask>>=1) {
+		if (mask & 0x1) t->pmd[i] = ia64_get_pmd(i);
	}

	/* skip PMC[0], we handle it separately */
-	for (i=1; i< pmu_conf.num_pmcs; i++) {
-		t->pmc[i] = ia64_get_pmc(i);
+	mask = ctx->ctx_used_pmcs[0]>>1;
+	for (i=1; mask; i++, mask>>=1) {
+		if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i);
	}
-
	/*
	 * Throughout this code we could have gotten an overflow interrupt. It is transformed
	 * into a spurious interrupt as soon as we give up pmu ownership.
	 */
}

-void
-pfm_load_regs (struct task_struct *ta)
+static void
+pfm_lazy_save_regs (struct task_struct *ta)
{
-	struct thread_struct *t = &ta->thread;
-	pfm_context_t *ctx = ta->thread.pfm_context;
+	pfm_context_t *ctx;
+	struct thread_struct *t;
+	unsigned long mask;
	int i;

+	DBprintk((" on [%d] by [%d]\n", ta->pid, current->pid));
+
+	t   = &ta->thread;
+	ctx = ta->thread.pfm_context;
	/*
	 * XXX needs further optimization.
	 * Also must take holes into account
	 */
-	for (i=0; i< pmu_conf.num_pmds; i++) {
-		ia64_set_pmd(i, t->pmd[i]);
+	mask = ctx->ctx_used_pmds[0];
+	for (i=0; mask; i++, mask>>=1) {
+		if (mask & 0x1) t->pmd[i] = ia64_get_pmd(i);
	}
-
-	/* skip PMC[0] to avoid side effects */
-	for (i=1; i< pmu_conf.num_pmcs; i++) {
-		ia64_set_pmc(i, t->pmc[i]);
+
+	/* skip PMC[0], we handle it separately */
+	mask = ctx->ctx_used_pmcs[0]>>1;
+	for (i=1; mask; i++, mask>>=1) {
+		if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i);
	}
+	SET_PMU_OWNER(NULL);
+}
+
+void
+pfm_load_regs (struct task_struct *ta)
+{
+	struct thread_struct *t = &ta->thread;
+	pfm_context_t *ctx = ta->thread.pfm_context;
+	struct task_struct *owner;
+	unsigned long mask;
+	int i;
+
+	owner = PMU_OWNER();
+	if (owner == ta) goto skip_restore;
+	if (owner) pfm_lazy_save_regs(owner);

-	/*
-	 * we first restore ownership of the PMU to the 'soon to be current'
-	 * context. This way, if, as soon as we unfreeze the PMU at the end
-	 * of this function, we get an interrupt, we attribute it to the correct
-	 * task
-	 */
	SET_PMU_OWNER(ta);

-#if 0
-	/*
-	 * check if we had pending overflow before context switching out
-	 * If so, we invoke the handler manually, i.e. simulate interrupt.
-	 *
-	 * XXX: given that we do not use the tasklet anymore to stop, we can
-	 * move this back to the pfm_save_regs() routine.
-	 */
-	if (t->pmc[0] & ~0x1) {
-		/* freeze set in pfm_save_regs() */
-		DBprintk((" pmc[0]=0x%lx manual interrupt\n", t->pmc[0]));
-		update_counters(ta, t->pmc[0], NULL);
+	mask = ctx->ctx_used_pmds[0];
+	for (i=0; mask; i++, mask>>=1) {
+		if (mask & 0x1) ia64_set_pmd(i, t->pmd[i]);
	}
-#endif

+	/* skip PMC[0] to avoid side effects */
+	mask = ctx->ctx_used_pmcs[0]>>1;
+	for (i=1; mask; i++, mask>>=1) {
+		if (mask & 0x1) ia64_set_pmc(i, t->pmc[i]);
+	}
+skip_restore:
	/*
	 * unfreeze only when possible
	 */
	if (ctx->ctx_fl_frozen == 0) {
		ia64_set_pmc(0, 0);
		ia64_srlz_d();
+		/* place where we potentially (kernel level) start monitoring again */
	}
}


/*
 * This function is called when a thread exits (from exit_thread()).
- * This is a simplified pfm_save_regs() that simply flushes hthe current
+ * This is a simplified pfm_save_regs() that simply flushes the current
 * register state into the save area taking into account any pending
 * overflow. This time no notification is sent because the taks is dying
 * anyway. The inline processing of overflows avoids loosing some counts.
@@ -1933,12 +2141,20 @@ /* collect latest results */ ctx->ctx_pmds[i].val +=3D ia64_get_pmd(j) & pmu_conf.perf_ovfl_val; =20 + /* + * now everything is in ctx_pmds[] and we need + * to clear the saved context from save_regs() such that + * pfm_read_pmds() gets the correct value + */ + ta->thread.pmd[j] =3D 0; + /* take care of overflow inline */ if (mask & 0x1) { ctx->ctx_pmds[i].val +=3D 1 + pmu_conf.perf_ovfl_val; DBprintk((" PMD[%d] overflowed pmd=3D0x%lx pmds.val=3D0x%lx\n", j, ia64_get_pmd(j), ctx->ctx_pmds[i].val)); } + mask >>=3D1; } } =20 @@ -1977,7 +2193,7 @@ =20 /* clears all PMD registers */ for(i=3D0;i< pmu_conf.num_pmds; i++) { - if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0); + if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0); } ia64_srlz_d(); } @@ -1986,7 +2202,7 @@ * task is the newly created task */ int -pfm_inherit(struct task_struct *task) +pfm_inherit(struct task_struct *task, struct pt_regs *regs) { pfm_context_t *ctx =3D current->thread.pfm_context; pfm_context_t *nctx; @@ -1994,12 +2210,22 @@ int i, cnum; =20 /* + * bypass completely for system wide + */ + if (pfs_info.pfs_sys_session) { + DBprintk((" enabling psr.pp for %d\n", task->pid)); + ia64_psr(regs)->pp =3D pfs_info.pfs_pp; + return 0; + } + + /* * takes care of easiest case first */ if (CTX_INHERIT_MODE(ctx) =3D PFM_FL_INHERIT_NONE) { DBprintk((" removing PFM context for %d\n", task->pid)); task->thread.pfm_context =3D NULL; - task->thread.pfm_pend_notify =3D 0; + task->thread.pfm_must_block =3D 0; + atomic_set(&task->thread.pfm_notifiers_check, 0); /* copy_thread() clears IA64_THREAD_PM_VALID */ return 0; } @@ -2009,9 +2235,11 @@ /* copy content */ *nctx =3D *ctx; =20 - if (ctx->ctx_fl_inherit =3D PFM_FL_INHERIT_ONCE) { + if (CTX_INHERIT_MODE(ctx) =3D PFM_FL_INHERIT_ONCE) { nctx->ctx_fl_inherit =3D PFM_FL_INHERIT_NONE; + atomic_set(&task->thread.pfm_notifiers_check, 0); DBprintk((" downgrading to INHERIT_NONE for %d\n", task->pid)); + pfs_info.pfs_proc_sessions++; } =20 /* initialize counters in new 
context */ @@ -2033,7 +2261,7 @@ sema_init(&nctx->ctx_restart_sem, 0); /* reset this semaphore to locked */ =20 /* clear pending notification */ - th->pfm_pend_notify =3D 0; + th->pfm_must_block =3D 0; =20 /* link with new task */ th->pfm_context =3D nctx; @@ -2052,7 +2280,10 @@ return 0; } =20 -/* called from exit_thread() */ +/*=20 + * called from release_thread(), at this point this task is not in the=20 + * tasklist anymore + */ void pfm_context_exit(struct task_struct *task) { @@ -2068,16 +2299,126 @@ pfm_smpl_buffer_desc_t *psb =3D ctx->ctx_smpl_buf; =20 /* if only user left, then remove */ - DBprintk((" pid %d: task %d sampling psb->refcnt=3D%d\n", current->pid, = task->pid, psb->psb_refcnt.counter)); + DBprintk((" [%d] [%d] psb->refcnt=3D%d\n", current->pid, task->pid, psb-= >psb_refcnt.counter)); =20 if (atomic_dec_and_test(&psb->psb_refcnt) ) { rvfree(psb->psb_hdr, psb->psb_size); vfree(psb); - DBprintk((" pid %d: cleaning task %d sampling buffer\n", current->pid, = task->pid )); + DBprintk((" [%d] cleaning [%d] sampling buffer\n", current->pid, task->= pid )); + } + } + DBprintk((" [%d] cleaning [%d] pfm_context @%p\n", current->pid, task->pi= d, (void *)ctx)); + + /* + * To avoid getting the notified task scan the entire process list + * when it exits because it would have pfm_notifiers_check set, we=20 + * decrease it by 1 to inform the task, that one less task is going + * to send it notification. each new notifer increases this field by + * 1 in pfm_context_create(). Of course, there is race condition between + * decreasing the value and the notified task exiting. The danger comes + * from the fact that we have a direct pointer to its task structure + * thereby bypassing the tasklist. We must make sure that if we have=20 + * notify_task!=3D NULL, the target task is still somewhat present. It may + * already be detached from the tasklist but that's okay. 
Note that it is + * okay if we 'miss the deadline' and the task scans the list for nothing, + * it will affect performance but not correctness. The correctness is ens= ured + * by using the notify_lock whic prevents the notify_task from changing o= n us. + * Once holdhing this lock, if we see notify_task!=3D NULL, then it will = stay like + * that until we release the lock. If it is NULL already then we came too= late. + */ + spin_lock(&ctx->ctx_notify_lock); + + if (ctx->ctx_notify_task) { + DBprintk((" [%d] [%d] atomic_sub on [%d] notifiers=3D%u\n", current->pid= , task->pid, + ctx->ctx_notify_task->pid,=20 + atomic_read(&ctx->ctx_notify_task->thread.pfm_notifiers_check))); + + atomic_sub(1, &ctx->ctx_notify_task->thread.pfm_notifiers_check); + } + + spin_unlock(&ctx->ctx_notify_lock); + + if (ctx->ctx_fl_system) { + /* + * if included interrupts (true by default), then reset + * to get default value + */ + if (ctx->ctx_fl_exclintr =3D 0) { + /* + * reload kernel default DCR value + */ + ia64_set_dcr(pfs_info.pfs_dfl_dcr); + DBprintk((" restored dcr to 0x%lx\n", pfs_info.pfs_dfl_dcr)); } + /*=20 + * free system wide session slot + */ + pfs_info.pfs_sys_session =3D 0; + } else { + pfs_info.pfs_proc_sessions--; } - DBprintk((" pid %d: task %d pfm_context is freed @%p\n", current->pid, ta= sk->pid, (void *)ctx)); + pfm_context_free(ctx); + /*=20 + * clean pfm state in thread structure, + */ + task->thread.pfm_context =3D NULL; + task->thread.pfm_must_block =3D 0; + /* pfm_notifiers is cleaned in pfm_cleanup_notifiers() */ + +} + +void +pfm_cleanup_notifiers(struct task_struct *task) +{ + struct task_struct *p; + pfm_context_t *ctx; + + DBprintk((" [%d] called\n", task->pid)); + + read_lock(&tasklist_lock); + + for_each_task(p) { + /* + * It is safe to do the 2-step test here, because thread.ctx + * is cleaned up only in release_thread() and at that point + * the task has been detached from the tasklist which is an + * operation which uses the write_lock() on the 
tasklist_lock + * so it cannot run concurrently to this loop. So we have the + * guarantee that if we find p and it has a perfmon ctx then + * it is going to stay like this for the entire execution of this + * loop. + */ + ctx =3D p->thread.pfm_context; + + DBprintk((" [%d] scanning task [%d] ctx=3D%p\n", task->pid, p->pid, ctx)= ); + + if (ctx && ctx->ctx_notify_task =3D=3D task) { + DBprintk((" trying for notifier %d in %d\n", task->pid, p->pid)); + /* + * the spinlock is required to take care of a race condition + * with the send_sig_info() call. We must make sure that=20 + * either the send_sig_info() completes using a valid task, + * or the notify_task is cleared before the send_sig_info() + * can pick up a stale value. Note that by the time this + * function is executed the 'task' is already detached from the + * tasklist. The problem is that the notifiers have a direct + * pointer to it. It is okay to send a signal to a task in this + * stage, it simply will have no effect. But it is better than sending + * to a completely destroyed task or worse to a new task using the same + * task_struct address.
+ */ + spin_lock(&ctx->ctx_notify_lock); + + ctx->ctx_notify_task =3D NULL; + + spin_unlock(&ctx->ctx_notify_lock); + + DBprintk((" done for notifier %d in %d\n", task->pid, p->pid)); + } + } + read_unlock(&tasklist_lock); + } =20 #else /* !CONFIG_PERFMON */ diff -urN linux-2.4.13/arch/ia64/kernel/process.c linux-2.4.13-lia/arch/ia6= 4/kernel/process.c --- linux-2.4.13/arch/ia64/kernel/process.c Wed Oct 10 16:31:44 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/process.c Wed Oct 24 18:14:43 2001 @@ -63,7 +63,8 @@ { unsigned long ip =3D regs->cr_iip + ia64_psr(regs)->ri; =20 - printk("\npsr : %016lx ifs : %016lx ip : [<%016lx>] %s\n", + printk("\nPid: %d, comm: %20s\n", current->pid, current->comm); + printk("psr : %016lx ifs : %016lx ip : [<%016lx>] %s\n", regs->cr_ipsr, regs->cr_ifs, ip, print_tainted()); printk("unat: %016lx pfs : %016lx rsc : %016lx\n", regs->ar_unat, regs->ar_pfs, regs->ar_rsc); @@ -201,7 +202,7 @@ { unsigned long rbs, child_rbs, rbs_size, stack_offset, stack_top, stack_us= ed; struct switch_stack *child_stack, *stack; - extern char ia64_ret_from_clone; + extern char ia64_ret_from_clone, ia32_ret_from_clone; struct pt_regs *child_ptregs; int retval =3D 0; =20 @@ -250,7 +251,10 @@ child_ptregs->r12 =3D (unsigned long) (child_ptregs + 1); /* kernel sp */ child_ptregs->r13 =3D (unsigned long) p; /* set `current' pointer */ } - child_stack->b0 =3D (unsigned long) &ia64_ret_from_clone; + if (IS_IA32_PROCESS(regs)) + child_stack->b0 =3D (unsigned long) &ia32_ret_from_clone; + else + child_stack->b0 =3D (unsigned long) &ia64_ret_from_clone; child_stack->ar_bspstore =3D child_rbs + rbs_size; =20 /* copy parts of thread_struct: */ @@ -285,9 +289,8 @@ ia32_save_state(p); #endif #ifdef CONFIG_PERFMON - p->thread.pfm_pend_notify =3D 0; if (p->thread.pfm_context) - retval =3D pfm_inherit(p); + retval =3D pfm_inherit(p, child_ptregs); #endif return retval; } @@ -441,11 +444,24 @@ } =20 #ifdef CONFIG_PERFMON +/* + * By the time we get here, the task is detached 
from the tasklist. This i= s important + because it means that no other tasks can ever find it as a notified ta= sk, therefore + there is no race condition between this code and let's say a pfm_contex= t_create(). + Conversely, the pfm_cleanup_notifiers() cannot try to access a task's p= fm context if + this other task is in the middle of its own pfm_context_exit() because = it would already + be out of the task list. Note that this case is very unlikely between a= direct child + and its parents (if it is the notified process) because of the way the = exit is notified + via SIGCHLD. + */ void release_thread (struct task_struct *task) { if (task->thread.pfm_context) pfm_context_exit(task); + + if (atomic_read(&task->thread.pfm_notifiers_check) > 0) + pfm_cleanup_notifiers(task); } #endif =20 @@ -516,6 +532,29 @@ } =20 void +cpu_halt (void) +{ + pal_power_mgmt_info_u_t power_info[8]; + unsigned long min_power; + int i, min_power_state; + + if (ia64_pal_halt_info(power_info) !=3D 0) + return; + + min_power_state =3D 0; + min_power =3D power_info[0].pal_power_mgmt_info_s.power_consumption; + for (i =3D 1; i < 8; ++i) + if (power_info[i].pal_power_mgmt_info_s.im + && power_info[i].pal_power_mgmt_info_s.power_consumption < min_power= ) { + min_power =3D power_info[i].pal_power_mgmt_info_s.power_consumption; + min_power_state =3D i; + } + + while (1) + ia64_pal_halt(min_power_state); +} + +void machine_restart (char *restart_cmd) { (*efi.reset_system)(EFI_RESET_WARM, 0, 0, 0); @@ -524,6 +563,7 @@ void machine_halt (void) { + cpu_halt(); } =20 void @@ -531,4 +571,5 @@ { if (pm_power_off) pm_power_off(); + machine_halt(); } diff -urN linux-2.4.13/arch/ia64/kernel/ptrace.c linux-2.4.13-lia/arch/ia64= /kernel/ptrace.c --- linux-2.4.13/arch/ia64/kernel/ptrace.c Mon Sep 24 15:06:13 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/ptrace.c Wed Oct 10 17:43:07 2001 @@ -2,7 +2,7 @@ * Kernel support for the ptrace() and syscall tracing interfaces.
* * Copyright (C) 1999-2001 Hewlett-Packard Co - * Copyright (C) 1999-2001 David Mosberger-Tang + * David Mosberger-Tang * * Derived from the x86 and Alpha versions. Most of the code in here * could actually be factored into a common set of routines. @@ -794,11 +794,14 @@ * * Make sure the single step bit is not set. */ -void ptrace_disable(struct task_struct *child) +void +ptrace_disable (struct task_struct *child) { + struct ia64_psr *child_psr =3D ia64_psr(ia64_task_regs(child)); + + /* make sure the single step/taken-branch trap bits are not set: */ - ia64_psr(pt)->ss =3D 0; - ia64_psr(pt)->tb =3D 0; + child_psr->ss =3D 0; + child_psr->tb =3D 0; =20 /* Turn off flag indicating that the KRBS is sync'd with child's VM: */ child->thread.flags &=3D ~IA64_THREAD_KRBS_SYNCED; @@ -809,7 +812,7 @@ long arg4, long arg5, long arg6, long arg7, long stack) { struct pt_regs *pt, *regs =3D (struct pt_regs *) &stack; - unsigned long flags, urbs_end; + unsigned long urbs_end; struct task_struct *child; struct switch_stack *sw; long ret; @@ -855,6 +858,19 @@ if (child->p_pptr !=3D current) goto out_tsk; =20 + if (request !=3D PTRACE_KILL) { + if (child->state !=3D TASK_STOPPED) + goto out_tsk; + +#ifdef CONFIG_SMP + while (child->has_cpu) { + if (child->state !=3D TASK_STOPPED) + goto out_tsk; + barrier(); + } +#endif + } + pt =3D ia64_task_regs(child); sw =3D (struct switch_stack *) (child->thread.ksp + 16); =20 @@ -925,7 +941,7 @@ child->ptrace &=3D ~PT_TRACESYS; child->exit_code =3D data; =20 - /* make sure the single step/take-branch tra bits are not set: */ + /* make sure the single step/taken-branch trap bits are not set: */ ia64_psr(pt)->ss =3D 0; ia64_psr(pt)->tb =3D 0; =20 diff -urN linux-2.4.13/arch/ia64/kernel/sal.c linux-2.4.13-lia/arch/ia64/ke= rnel/sal.c --- linux-2.4.13/arch/ia64/kernel/sal.c Thu Jan 4 12:50:17 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/sal.c Thu Oct 4 00:21:39 2001 @@ -1,8 +1,8 @@ /* * System Abstraction Layer (SAL) interface routines.
* - * Copyright (C) 1998, 1999 Hewlett-Packard Co - * Copyright (C) 1998, 1999 David Mosberger-Tang + * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co + * David Mosberger-Tang * Copyright (C) 1999 VA Linux Systems * Copyright (C) 1999 Walt Drummond */ @@ -18,8 +18,6 @@ #include #include =20 -#define SAL_DEBUG - spinlock_t sal_lock =3D SPIN_LOCK_UNLOCKED; =20 static struct { @@ -122,10 +120,8 @@ switch (*p) { case SAL_DESC_ENTRY_POINT: ep =3D (struct ia64_sal_desc_entry_point *) p; -#ifdef SAL_DEBUG - printk("sal[%d] - entry: pal_proc=3D0x%lx, sal_proc=3D0x%lx\n", - i, ep->pal_proc, ep->sal_proc); -#endif + printk("SAL: entry: pal_proc=3D0x%lx, sal_proc=3D0x%lx\n", + ep->pal_proc, ep->sal_proc); ia64_pal_handler_init(__va(ep->pal_proc)); ia64_sal_handler_init(__va(ep->sal_proc), __va(ep->gp)); break; @@ -138,17 +134,12 @@ #ifdef CONFIG_SMP { struct ia64_sal_desc_ap_wakeup *ap =3D (void *) p; -# ifdef SAL_DEBUG - printk("sal[%d] - wakeup type %x, 0x%lx\n", - i, ap->mechanism, ap->vector); -# endif + switch (ap->mechanism) { case IA64_SAL_AP_EXTERNAL_INT: ap_wakeup_vector =3D ap->vector; -# ifdef SAL_DEBUG printk("SAL: AP wakeup using external interrupt " "vector 0x%lx\n", ap_wakeup_vector); -# endif break; =20 default: @@ -163,21 +154,13 @@ struct ia64_sal_desc_platform_feature *pf =3D (void *) p; printk("SAL: Platform features "); =20 -#ifdef CONFIG_IA64_HAVE_IRQREDIR - /* - * Early versions of SAL say we don't have - * IRQ redirection, even though we do... 
- */ - pf->feature_mask |=3D (1 << 1); -#endif - if (pf->feature_mask & (1 << 0)) printk("BusLock "); =20 if (pf->feature_mask & (1 << 1)) { printk("IRQ_Redirection "); #ifdef CONFIG_SMP - if (no_int_routing)=20 + if (no_int_routing) smp_int_redirect &=3D ~SMP_IRQ_REDIRECTION; else smp_int_redirect |=3D SMP_IRQ_REDIRECTION; diff -urN linux-2.4.13/arch/ia64/kernel/setup.c linux-2.4.13-lia/arch/ia64/= kernel/setup.c --- linux-2.4.13/arch/ia64/kernel/setup.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/setup.c Thu Oct 4 00:21:39 2001 @@ -534,10 +534,13 @@ /* * Initialize default control register to defer all speculative faults. = The * kernel MUST NOT depend on a particular setting of these bits (in other= words, - * the kernel must have recovery code for all speculative accesses). + * the kernel must have recovery code for all speculative accesses). Tur= n on + * dcr.lc as per recommendation by the architecture team. Most IA-32 apps + * shouldn't be affected by this (moral: keep your ia32 locks aligned and= you'll + * be fine). */ ia64_set_dcr( IA64_DCR_DM | IA64_DCR_DP | IA64_DCR_DK | IA64_DCR_DX | IA= 64_DCR_DR - | IA64_DCR_DA | IA64_DCR_DD); + | IA64_DCR_DA | IA64_DCR_DD | IA64_DCR_LC); #ifndef CONFIG_SMP ia64_set_fpu_owner(0); #endif diff -urN linux-2.4.13/arch/ia64/kernel/sigframe.h linux-2.4.13-lia/arch/ia= 64/kernel/sigframe.h --- linux-2.4.13/arch/ia64/kernel/sigframe.h Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/sigframe.h Thu Oct 4 00:21:52 2001 @@ -1,3 +1,9 @@ +struct sigscratch { + unsigned long scratch_unat; /* ar.unat for the general registers saved in= pt */ + unsigned long pad; + struct pt_regs pt; +}; + struct sigframe { /* * Place signal handler args where user-level unwinder can find them easi= ly. @@ -7,10 +13,11 @@ unsigned long arg0; /* signum */ unsigned long arg1; /* siginfo pointer */ unsigned long arg2; /* sigcontext pointer */ + /* + * End of architected state. 
+ */ =20 - unsigned long rbs_base; /* base of new register backing store (or NULL) = */ void *handler; /* pointer to the plabel of the signal handler */ - struct siginfo info; struct sigcontext sc; }; diff -urN linux-2.4.13/arch/ia64/kernel/signal.c linux-2.4.13-lia/arch/ia64= /kernel/signal.c --- linux-2.4.13/arch/ia64/kernel/signal.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/signal.c Thu Oct 4 00:21:52 2001 @@ -2,7 +2,7 @@ * Architecture-specific signal handling support. * * Copyright (C) 1999-2001 Hewlett-Packard Co - * Copyright (C) 1999-2001 David Mosberger-Tang + * David Mosberger-Tang * * Derived from i386 and Alpha versions. */ @@ -39,12 +39,6 @@ # define GET_SIGSET(k,u) __get_user((k)->sig[0], &(u)->sig[0]) #endif =20 -struct sigscratch { - unsigned long scratch_unat; /* ar.unat for the general registers saved in= pt */ - unsigned long pad; - struct pt_regs pt; -}; - extern long ia64_do_signal (sigset_t *, struct sigscratch *, long); /* for= ward decl */ =20 long @@ -55,6 +49,10 @@ /* XXX: Don't preclude handling different sized sigset_t's. */ if (sigsetsize !=3D sizeof(sigset_t)) return -EINVAL; + + if (!access_ok(VERIFY_READ, uset, sigsetsize)) + return -EFAULT; + if (GET_SIGSET(&set, uset)) return -EFAULT; =20 @@ -73,15 +71,9 @@ * pre-set the correct error code here to ensure that the right values * get saved in sigcontext by ia64_do_signal. */ -#ifdef CONFIG_IA32_SUPPORT - if (IS_IA32_PROCESS(&scr->pt)) { - scr->pt.r8 =3D -EINTR; - } else -#endif - { - scr->pt.r8 =3D EINTR; - scr->pt.r10 =3D -1; - } + scr->pt.r8 =3D EINTR; + scr->pt.r10 =3D -1; + while (1) { current->state =3D TASK_INTERRUPTIBLE; schedule(); @@ -139,10 +131,9 @@ struct ia64_psr *psr =3D ia64_psr(&scr->pt); =20 __copy_from_user(current->thread.fph, &sc->sc_fr[32], 96*16); - if (!psr->dfh) { - psr->mfh =3D 0; + psr->mfh =3D 0; /* drop signal handler's fph contents... 
*/ + if (!psr->dfh) __ia64_load_fpu(current->thread.fph); - } } return err; } @@ -380,7 +371,8 @@ err =3D __put_user(sig, &frame->arg0); err |=3D __put_user(&frame->info, &frame->arg1); err |=3D __put_user(&frame->sc, &frame->arg2); - err |=3D __put_user(new_rbs, &frame->rbs_base); + err |=3D __put_user(new_rbs, &frame->sc.sc_rbs_base); + err |=3D __put_user(0, &frame->sc.sc_loadrs); /* initialize to zero */ err |=3D __put_user(ka->sa.sa_handler, &frame->handler); =20 err |=3D copy_siginfo_to_user(&frame->info, info); @@ -460,6 +452,7 @@ long ia64_do_signal (sigset_t *oldset, struct sigscratch *scr, long in_syscall) { + struct signal_struct *sig; struct k_sigaction *ka; siginfo_t info; long restart =3D in_syscall; @@ -571,8 +564,8 @@ case SIGSTOP: current->state =3D TASK_STOPPED; current->exit_code =3D signr; - if (!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags - & SA_NOCLDSTOP)) + sig =3D current->p_pptr->sig; + if (sig && !(sig->action[SIGCHLD-1].sa.sa_flags & SA_NOCLDSTOP)) notify_parent(current, SIGCHLD); schedule(); continue; diff -urN linux-2.4.13/arch/ia64/kernel/smp.c linux-2.4.13-lia/arch/ia64/ke= rnel/smp.c --- linux-2.4.13/arch/ia64/kernel/smp.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/smp.c Wed Oct 10 18:50:56 2001 @@ -48,6 +48,7 @@ #include #include #include +#include =20 /* The 'big kernel lock' */ spinlock_t kernel_flag =3D SPIN_LOCK_UNLOCKED; @@ -70,20 +71,18 @@ =20 #define IPI_CALL_FUNC 0 #define IPI_CPU_STOP 1 -#ifndef CONFIG_ITANIUM_PTCG -# define IPI_FLUSH_TLB 2 -#endif /*!CONFIG_ITANIUM_PTCG */ =20 static void stop_this_cpu (void) { + extern void cpu_halt (void); /* * Remove this CPU: */ clear_bit(smp_processor_id(), &cpu_online_map); max_xtp(); __cli(); - for (;;); + cpu_halt(); } =20 void @@ -136,49 +135,6 @@ stop_this_cpu(); break; =20 -#ifndef CONFIG_ITANIUM_PTCG - case IPI_FLUSH_TLB: - { - extern unsigned long flush_start, flush_end, flush_nbits, flush_rid; - extern atomic_t flush_cpu_count; - unsigned long 
saved_rid =3D ia64_get_rr(flush_start); - unsigned long end =3D flush_end; - unsigned long start =3D flush_start; - unsigned long nbits =3D flush_nbits; - - /* - * Current CPU may be running with different RID so we need to - * reload the RID of flushed address. Purging the translation - * also needs ALAT invalidation; we do not need "invala" here - * since it is done in ia64_leave_kernel. - */ - ia64_srlz_d(); - if (saved_rid !=3D flush_rid) { - ia64_set_rr(flush_start, flush_rid); - ia64_srlz_d(); - } - - do { - /* - * Purge local TLB entries. - */ - __asm__ __volatile__ ("ptc.l %0,%1" :: - "r"(start), "r"(nbits<<2) : "memory"); - start +=3D (1UL << nbits); - } while (start < end); - - ia64_insn_group_barrier(); - ia64_srlz_i(); /* srlz.i implies srlz.d */ - - if (saved_rid !=3D flush_rid) { - ia64_set_rr(flush_start, saved_rid); - ia64_srlz_d(); - } - atomic_dec(&flush_cpu_count); - break; - } -#endif /* !CONFIG_ITANIUM_PTCG */ - default: printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n", this_cpu, which); break; @@ -228,30 +184,6 @@ platform_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0); } =20 -#ifndef CONFIG_ITANIUM_PTCG - -void -smp_send_flush_tlb (void) -{ - send_IPI_allbutself(IPI_FLUSH_TLB); -} - -void -smp_resend_flush_tlb (void) -{ - int i; - - /* - * Really need a null IPI but since this rarely should happen & since thi= s code - * will go away, lets not add one. 
- */ - for (i =3D 0; i < smp_num_cpus; ++i) - if (i !=3D smp_processor_id()) - smp_send_reschedule(i); -} - -#endif /* !CONFIG_ITANIUM_PTCG */ - void smp_flush_tlb_all (void) { @@ -277,10 +209,6 @@ { struct call_data_struct data; int cpus =3D 1; -#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \ - || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_S= PECIFIC)) - unsigned long timeout; -#endif =20 if (cpuid =3D=3D smp_processor_id()) { printk(__FUNCTION__" trying to call self\n"); @@ -295,26 +223,15 @@ atomic_set(&data.finished, 0); =20 spin_lock_bh(&call_lock); - call_data =3D &data; - -#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \ - || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_S= PECIFIC)) - resend: - send_IPI_single(cpuid, IPI_CALL_FUNC); =20 - /* Wait for response */ - timeout =3D jiffies + HZ; - while ((atomic_read(&data.started) !=3D cpus) && time_before(jiffies, tim= eout)) - barrier(); - if (atomic_read(&data.started) !=3D cpus) - goto resend; -#else + call_data =3D &data; + mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */ send_IPI_single(cpuid, IPI_CALL_FUNC); =20 /* Wait for response */ while (atomic_read(&data.started) !=3D cpus) barrier(); -#endif + if (wait) while (atomic_read(&data.finished) !=3D cpus) barrier(); @@ -348,10 +265,6 @@ { struct call_data_struct data; int cpus =3D smp_num_cpus-1; -#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \ - || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_S= PECIFIC)) - unsigned long timeout; -#endif =20 if (!cpus) return 0; @@ -364,27 +277,14 @@ atomic_set(&data.finished, 0); =20 spin_lock_bh(&call_lock); - call_data =3D &data; - -#if (defined(CONFIG_ITANIUM_B0_SPECIFIC) \ - || defined(CONFIG_ITANIUM_B1_SPECIFIC) || defined(CONFIG_ITANIUM_B2_S= PECIFIC)) - resend: - /* Send a message to all other CPUs and wait for them to respond */ - send_IPI_allbutself(IPI_CALL_FUNC); =20 - /* Wait for response */ - timeout =3D jiffies + HZ; - while
((atomic_read(&data.started) !=3D cpus) && time_before(jiffies, tim= eout)) - barrier(); - if (atomic_read(&data.started) !=3D cpus) - goto resend; -#else + call_data =3D &data; + mb(); /* ensure store to call_data precedes setting of IPI_CALL_FUNC */ send_IPI_allbutself(IPI_CALL_FUNC); =20 /* Wait for response */ while (atomic_read(&data.started) !=3D cpus) barrier(); -#endif =20 if (wait) while (atomic_read(&data.finished) !=3D cpus) diff -urN linux-2.4.13/arch/ia64/kernel/smpboot.c linux-2.4.13-lia/arch/ia6= 4/kernel/smpboot.c --- linux-2.4.13/arch/ia64/kernel/smpboot.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/smpboot.c Thu Oct 4 00:21:39 2001 @@ -33,6 +33,7 @@ #include #include #include +#include #include #include #include @@ -42,6 +43,8 @@ #include #include =20 +#define SMP_DEBUG 0 + #if SMP_DEBUG #define Dprintk(x...) printk(x) #else @@ -310,7 +313,7 @@ } =20 =20 -void __init +static void __init smp_callin (void) { int cpuid, phys_id; @@ -324,8 +327,7 @@ phys_id =3D hard_smp_processor_id(); =20 if (test_and_set_bit(cpuid, &cpu_online_map)) { - printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n",=20 - phys_id, cpuid); + printk("huh, phys CPU#0x%x, CPU#0x%x already present??\n", phys_id, cpui= d); BUG(); } =20 @@ -341,6 +343,12 @@ * Get our bogomips. */ ia64_init_itm(); + +#ifdef CONFIG_IA64_MCA + ia64_mca_cmc_vector_setup(); /* Setup vector on AP & enable */ + ia64_mca_check_errors(); /* For post-failure MCA error logging */ +#endif + #ifdef CONFIG_PERFMON perfmon_init_percpu(); #endif @@ -364,14 +372,15 @@ { extern int cpu_idle (void); =20 + Dprintk("start_secondary: starting CPU 0x%x\n", hard_smp_processor_id()); efi_map_pal_code(); cpu_init(); smp_callin(); - Dprintk("CPU %d is set to go. \n", smp_processor_id()); + Dprintk("CPU %d is set to go.\n", smp_processor_id()); while (!atomic_read(&smp_commenced)) ; =20 - Dprintk("CPU %d is starting idle. 
\n", smp_processor_id()); + Dprintk("CPU %d is starting idle.\n", smp_processor_id()); return cpu_idle(); } =20 @@ -415,7 +424,7 @@ unhash_process(idle); init_tasks[cpu] =3D idle; =20 - Dprintk("Sending Wakeup Vector to AP 0x%x/0x%x.\n", cpu, sapicid); + Dprintk("Sending wakeup vector %u to AP 0x%x/0x%x.\n", ap_wakeup_vector, = cpu, sapicid); =20 platform_send_ipi(cpu, ap_wakeup_vector, IA64_IPI_DM_INT, 0); =20 @@ -424,7 +433,6 @@ */ Dprintk("Waiting on callin_map ..."); for (timeout =3D 0; timeout < 100000; timeout++) { - Dprintk("."); if (test_bit(cpu, &cpu_callin_map)) break; /* It has booted */ udelay(100); diff -urN linux-2.4.13/arch/ia64/kernel/sys_ia64.c linux-2.4.13-lia/arch/ia= 64/kernel/sys_ia64.c --- linux-2.4.13/arch/ia64/kernel/sys_ia64.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/sys_ia64.c Thu Oct 4 00:21:39 2001 @@ -19,24 +19,29 @@ #include #include =20 -#define COLOR_ALIGN(addr) (((addr) + SHMLBA - 1) & ~(SHMLBA - 1)) - unsigned long arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned lo= ng len, unsigned long pgoff, unsigned long flags) { - struct vm_area_struct * vmm; long map_shared =3D (flags & MAP_SHARED); + unsigned long align_mask =3D PAGE_SIZE - 1; + struct vm_area_struct * vmm; =20 if (len > RGN_MAP_LIMIT) return -ENOMEM; if (!addr) addr =3D TASK_UNMAPPED_BASE; =20 - if (map_shared) - addr =3D COLOR_ALIGN(addr); - else - addr =3D PAGE_ALIGN(addr); + if (map_shared && (TASK_SIZE > 0xfffffffful)) + /* + * For 64-bit tasks, align shared segments to 1MB to avoid potential + * performance penalty due to virtual aliasing (see ASDM). For 32-bit + * tasks, we prefer to avoid exhausting the address space too quickly by + * limiting alignment to a single page. + */ + align_mask =3D SHMLBA - 1; + + addr =3D (addr + align_mask) & ~align_mask; =20 for (vmm =3D find_vma(current->mm, addr); ; vmm =3D vmm->vm_next) { /* At this point: (!vmm || addr < vmm->vm_end). 
*/ @@ -46,9 +51,7 @@ return -ENOMEM; if (!vmm || addr + len <=3D vmm->vm_start) return addr; - addr =3D vmm->vm_end; - if (map_shared) - addr =3D COLOR_ALIGN(addr); + addr =3D (vmm->vm_end + align_mask) & ~align_mask; } } =20 @@ -184,8 +187,10 @@ if (!file) return -EBADF; =20 - if (!file->f_op || !file->f_op->mmap) - return -ENODEV; + if (!file->f_op || !file->f_op->mmap) { + addr =3D -ENODEV; + goto out; + } } =20 /* @@ -194,22 +199,26 @@ */ len =3D PAGE_ALIGN(len); if (len =3D=3D 0) - return addr; + goto out; =20 /* don't permit mappings into unmapped space or the virtual page table of= a region: */ roff =3D rgn_offset(addr); - if ((len | roff | (roff + len)) >=3D RGN_MAP_LIMIT) - return -EINVAL; + if ((len | roff | (roff + len)) >=3D RGN_MAP_LIMIT) { + addr =3D -EINVAL; + goto out; + } =20 /* don't permit mappings that would cross a region boundary: */ - if (rgn_index(addr) !=3D rgn_index(addr + len)) - return -EINVAL; + if (rgn_index(addr) !=3D rgn_index(addr + len)) { + addr =3D -EINVAL; + goto out; + } =20 down_write(&current->mm->mmap_sem); addr =3D do_mmap_pgoff(file, addr, len, prot, flags, pgoff); up_write(&current->mm->mmap_sem); =20 - if (file) +out: if (file) fput(file); return addr; } diff -urN linux-2.4.13/arch/ia64/kernel/time.c linux-2.4.13-lia/arch/ia64/k= ernel/time.c --- linux-2.4.13/arch/ia64/kernel/time.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/time.c Thu Oct 4 00:21:39 2001 @@ -145,6 +145,9 @@ tv->tv_usec =3D usec; } =20 +/* XXX there should be a cleaner way for declaring an alias... */ +asm (".global get_fast_time; get_fast_time =3D do_gettimeofday"); + static void timer_interrupt(int irq, void *dev_id, struct pt_regs *regs) { diff -urN linux-2.4.13/arch/ia64/kernel/traps.c linux-2.4.13-lia/arch/ia64/= kernel/traps.c --- linux-2.4.13/arch/ia64/kernel/traps.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/traps.c Wed Oct 24 18:15:16 2001 @@ -1,20 +1,19 @@ /* * Architecture-specific trap handling.
* - * Copyright (C) 1998-2000 Hewlett-Packard Co - * Copyright (C) 1998-2000 David Mosberger-Tang + * Copyright (C) 1998-2001 Hewlett-Packard Co + * David Mosberger-Tang * * 05/12/00 grao : added isr in siginfo for SIGFPE */ =20 /* - * The fpu_fault() handler needs to be able to access and update all - * floating point registers. Those saved in pt_regs can be accessed - * through that structure, but those not saved, will be accessed - * directly. To make this work, we need to ensure that the compiler - * does not end up using a preserved floating point register on its - * own. The following achieves this by declaring preserved registers - * that are not marked as "fixed" as global register variables. + * fp_emulate() needs to be able to access and update all floating point r= egisters. Those + * saved in pt_regs can be accessed through that structure, but those not = saved, will be + * accessed directly. To make this work, we need to ensure that the compi= ler does not end + * up using a preserved floating point register on its own. The following= achieves this + * by declaring preserved registers that are not marked as "fixed" as glob= al register + * variables. 
*/ register double f2 asm ("f2"); register double f3 asm ("f3"); register double f4 asm ("f4"); register double f5 asm ("f5"); @@ -33,13 +32,17 @@ #include #include #include +#include /* For unblank_screen() */ =20 +#include #include #include #include =20 #include =20 +extern spinlock_t timerlist_lock; + static fpswa_interface_t *fpswa_interface; =20 void __init @@ -51,30 +54,74 @@ fpswa_interface =3D __va(ia64_boot_param->fpswa); } =20 +/* + * Unlock any spinlocks which will prevent us from getting the message out= (timerlist_lock + * is acquired through the console unblank code) + */ void -die_if_kernel (char *str, struct pt_regs *regs, long err) +bust_spinlocks (int yes) { - if (user_mode(regs)) { -#if 0 - /* XXX for debugging only */ - printk ("!!die_if_kernel: %s(%d): %s %ld\n", - current->comm, current->pid, str, err); - show_regs(regs); + spin_lock_init(&timerlist_lock); + if (yes) { + oops_in_progress =3D 1; +#ifdef CONFIG_SMP + global_irq_lock =3D 0; /* Many serial drivers do __global_cli() */ #endif - return; + } else { + int loglevel_save =3D console_loglevel; +#ifdef CONFIG_VT + unblank_screen(); +#endif + oops_in_progress =3D 0; + /* + * OK, the message is on the console. Now we call printk() without + * oops_in_progress set so that printk will give klogd a poke. Hold onto + * your hats... 
+ */ + console_loglevel =3D 15; /* NMI oopser may have shut the console up */ + printk(" "); + console_loglevel =3D loglevel_save; } +} =20 - printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err); - - show_regs(regs); +void +die (const char *str, struct pt_regs *regs, long err) +{ + static struct { + spinlock_t lock; + int lock_owner; + int lock_owner_depth; + } die =3D { + lock: SPIN_LOCK_UNLOCKED, + lock_owner: -1, + lock_owner_depth: 0 + }; =20 - if (current->thread.flags & IA64_KERNEL_DEATH) { - printk("die_if_kernel recursion detected.\n"); - sti(); - while (1); + if (die.lock_owner !=3D smp_processor_id()) { + console_verbose(); + spin_lock_irq(&die.lock); + die.lock_owner =3D smp_processor_id(); + die.lock_owner_depth =3D 0; + bust_spinlocks(1); } - current->thread.flags |=3D IA64_KERNEL_DEATH; - do_exit(SIGSEGV); + + if (++die.lock_owner_depth < 3) { + printk("%s[%d]: %s %ld\n", current->comm, current->pid, str, err); + show_regs(regs); + } else + printk(KERN_ERR "Recursive die() failure, output suppressed\n"); + + bust_spinlocks(0); + die.lock_owner =3D -1; + spin_unlock_irq(&die.lock); + do_exit(SIGSEGV); +} + +void +die_if_kernel (char *str, struct pt_regs *regs, long err) +{ + if (!user_mode(regs)) + die(str, regs, err); } =20 void @@ -169,14 +216,12 @@ } =20 /* - * disabled_fph_fault() is called when a user-level process attempts - * to access one of the registers f32..f127 when it doesn't own the - * fp-high register partition. When this happens, we save the current - * fph partition in the task_struct of the fpu-owner (if necessary) - * and then load the fp-high partition of the current task (if - * necessary). Note that the kernel has access to fph by the time we - * get here, as the IVT's "Diabled FP-Register" handler takes care of - * clearing psr.dfh. + * disabled_fph_fault() is called when a user-level process attempts to ac= cess f32..f127 + * and it doesn't own the fp-high register partition. 
When this happens, = we save the + current fph partition in the task_struct of the fpu-owner (if necessary= ) and then load + the fp-high partition of the current task (if necessary). Note that th= e kernel has + access to fph by the time we get here, as the IVT's "Disabled FP-Regist= er" handler takes + care of clearing psr.dfh. */ static inline void disabled_fph_fault (struct pt_regs *regs) @@ -277,7 +322,7 @@ =20 if (jiffies - last_time > 5*HZ) fpu_swa_count =3D 0; - if (++fpu_swa_count < 5) { + if ((++fpu_swa_count < 5) && !(current->thread.flags & IA64_THREAD_FPEMU_= NOPRINT)) { last_time =3D jiffies; printk(KERN_WARNING "%s(%d): floating-point assist fault at ip %016lx\n", current->comm, current->pid, regs->cr_iip + ia64_psr(regs)->ri); @@ -478,12 +523,12 @@ case 32: /* fp fault */ case 33: /* fp trap */ result =3D handle_fpu_swa((vector =3D=3D 32) ? 1 : 0, regs, isr); - if (result < 0) { + if ((result < 0) || (current->thread.flags & IA64_THREAD_FPEMU_SIGFPE)) { siginfo.si_signo =3D SIGFPE; siginfo.si_errno =3D 0; siginfo.si_code =3D FPE_FLTINV; siginfo.si_addr =3D (void *) (regs->cr_iip + ia64_psr(regs)->ri); - force_sig(SIGFPE, current); + force_sig_info(SIGFPE, &siginfo, current); } return; =20 @@ -510,6 +555,10 @@ break; =20 case 46: +#ifdef CONFIG_IA32_SUPPORT + if (ia32_intercept(regs, isr) =3D=3D 0) + return; +#endif printk("Unexpected IA-32 intercept trap (Trap 46)\n"); printk(" iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx, iim - 0x%lx\n", regs->cr_iip, ifa, isr, iim); diff -urN linux-2.4.13/arch/ia64/kernel/unaligned.c linux-2.4.13-lia/arch/i= a64/kernel/unaligned.c --- linux-2.4.13/arch/ia64/kernel/unaligned.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/unaligned.c Wed Oct 24 18:15:29 2001 @@ -5,6 +5,8 @@ * Copyright (C) 1999-2000 Stephane Eranian * Copyright (C) 2001 David Mosberger-Tang * + * 2001/10/11 Fix unaligned access to rotating registers in s/w pipelined = loops.
+ * 2001/08/13 Correct size of extended floats (float_fsz) from 16 to 10 by= tes. * 2001/01/17 Add support emulation of unaligned kernel accesses. */ #include @@ -282,9 +284,19 @@ unsigned long rnats, nat_mask; unsigned long on_kbs; long sof =3D (regs->cr_ifs) & 0x7f; + long sor =3D 8 * ((regs->cr_ifs >> 14) & 0xf); + long rrb_gr =3D (regs->cr_ifs >> 18) & 0x7f; + long ridx; + + if ((r1 - 32) >=3D sor) + ridx =3D -sof + (r1 - 32); + else if ((r1 - 32) < (sor - rrb_gr)) + ridx =3D -sof + (r1 - 32) + rrb_gr; + else + ridx =3D -sof + (r1 - 32) - (sor - rrb_gr); =20 - DPRINT("r%lu, sw.bspstore=3D%lx pt.bspstore=3D%lx sof=3D%ld sol=3D%ld\n", - r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) &= 0x7f); + DPRINT("r%lu, sw.bspstore=3D%lx pt.bspstore=3D%lx sof=3D%ld sol=3D%ld rid= x=3D%ld\n", + r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) &= 0x7f, ridx); =20 if ((r1 - 32) >=3D sof) { /* this should never happen, as the "rsvd register fault" has higher pri= ority */ @@ -293,7 +305,7 @@ } =20 on_kbs =3D ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore); - addr =3D ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1= - 32)); + addr =3D ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, ridx); if (addr >=3D kbs) { /* the register is on the kernel backing store: easy...
*/ rnat_addr =3D ia64_rse_rnat_addr(addr); @@ -318,12 +330,12 @@ return; } =20 - bspstore =3D (unsigned long *) regs->ar_bspstore; + bspstore =3D (unsigned long *)regs->ar_bspstore; ubs_end =3D ia64_rse_skip_regs(bspstore, on_kbs); bsp =3D ia64_rse_skip_regs(ubs_end, -sof); - addr =3D ia64_rse_skip_regs(bsp, r1 - 32); + addr =3D ia64_rse_skip_regs(bsp, ridx + sof); =20 - DPRINT("ubs_end=3D%p bsp=3D%p addr=3D%px\n", (void *) ubs_end, (void *) b= sp, (void *) addr); + DPRINT("ubs_end=3D%p bsp=3D%p addr=3D%p\n", (void *) ubs_end, (void *) bs= p, (void *) addr); =20 ia64_poke(current, sw, (unsigned long) ubs_end, (unsigned long) addr, val= ); =20 @@ -353,9 +365,19 @@ unsigned long rnats, nat_mask; unsigned long on_kbs; long sof =3D (regs->cr_ifs) & 0x7f; + long sor =3D 8 * ((regs->cr_ifs >> 14) & 0xf); + long rrb_gr =3D (regs->cr_ifs >> 18) & 0x7f; + long ridx; + + if ((r1 - 32) >=3D sor) + ridx =3D -sof + (r1 - 32); + else if ((r1 - 32) < (sor - rrb_gr)) + ridx =3D -sof + (r1 - 32) + rrb_gr; + else + ridx =3D -sof + (r1 - 32) - (sor - rrb_gr); =20 - DPRINT("r%lu, sw.bspstore=3D%lx pt.bspstore=3D%lx sof=3D%ld sol=3D%ld\n", - r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) &= 0x7f); + DPRINT("r%lu, sw.bspstore=3D%lx pt.bspstore=3D%lx sof=3D%ld sol=3D%ld rid= x=3D%ld\n", + r1, sw->ar_bspstore, regs->ar_bspstore, sof, (regs->cr_ifs >> 7) &= 0x7f, ridx); =20 if ((r1 - 32) >=3D sof) { /* this should never happen, as the "rsvd register fault" has higher pri= ority */ @@ -364,7 +386,7 @@ } =20 on_kbs =3D ia64_rse_num_regs(kbs, (unsigned long *) sw->ar_bspstore); - addr =3D ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, -sof + (r1= - 32)); + addr =3D ia64_rse_skip_regs((unsigned long *) sw->ar_bspstore, ridx); if (addr >=3D kbs) { /* the register is on the kernel backing store: easy...
*/ *val =3D *addr; @@ -390,7 +412,7 @@ bspstore =3D (unsigned long *)regs->ar_bspstore; ubs_end =3D ia64_rse_skip_regs(bspstore, on_kbs); bsp =3D ia64_rse_skip_regs(ubs_end, -sof); - addr =3D ia64_rse_skip_regs(bsp, r1 - 32); + addr =3D ia64_rse_skip_regs(bsp, ridx + sof); =20 DPRINT("ubs_end=3D%p bsp=3D%p addr=3D%p\n", (void *) ubs_end, (void *) bs= p, (void *) addr); =20 @@ -908,7 +930,7 @@ * floating point operations sizes in bytes */ static const unsigned char float_fsz[4]=3D{ - 16, /* extended precision (e) */ + 10, /* extended precision (e) */ 8, /* integer (8) */ 4, /* single precision (s) */ 8 /* double precision (d) */ @@ -978,11 +1000,11 @@ unsigned long len =3D float_fsz[ld.x6_sz]; =20 /* - * fr0 & fr1 don't need to be checked because Illegal Instruction - * faults have higher priority than unaligned faults. + * fr0 & fr1 don't need to be checked because Illegal Instruction faults = have + * higher priority than unaligned faults. * - * r0 cannot be found as the base as it would never generate an - * unaligned reference. + * r0 cannot be found as the base as it would never generate an unaligned + * reference. */ =20 /* @@ -996,8 +1018,10 @@ * invalidate the ALAT entry and execute updates, if any. */ if (ld.x6_op !=3D 0x2) { - /* this assumes little-endian byte-order: */ - + /* + * This assumes little-endian byte-order. Note that there is no "ldfpe" + * instruction: + */ if (copy_from_user(&fpr_init[0], (void *) ifa, len) || copy_from_user(&fpr_init[1], (void *) (ifa + len), len)) return -1; @@ -1337,7 +1361,7 @@ =20 /* * IMPORTANT: - * Notice that the swictch statement DOES not cover all possible instruct= ions + * Notice that the switch statement DOES not cover all possible instructi= ons * that DO generate unaligned references. This is made on purpose because= for some * instructions it DOES NOT make sense to try and emulate the access. Som= etimes it * is WRONG to try and emulate. 
Here is a list of instruction we don't em= ulate i.e., diff -urN linux-2.4.13/arch/ia64/kernel/unwind.c linux-2.4.13-lia/arch/ia64= /kernel/unwind.c --- linux-2.4.13/arch/ia64/kernel/unwind.c Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/kernel/unwind.c Thu Oct 4 00:21:39 2001 @@ -504,7 +504,7 @@ return 0; } =20 -inline int +int unw_access_pr (struct unw_frame_info *info, unsigned long *val, int write) { unsigned long *addr; diff -urN linux-2.4.13/arch/ia64/lib/clear_page.S linux-2.4.13-lia/arch/ia6= 4/lib/clear_page.S --- linux-2.4.13/arch/ia64/lib/clear_page.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/lib/clear_page.S Thu Oct 4 00:21:39 2001 @@ -47,5 +47,5 @@ br.cloop.dptk.few 1b ;; mov ar.lc =3D r2 // restore lc - br.ret.sptk.few rp + br.ret.sptk.many rp END(clear_page) diff -urN linux-2.4.13/arch/ia64/lib/clear_user.S linux-2.4.13-lia/arch/ia6= 4/lib/clear_user.S --- linux-2.4.13/arch/ia64/lib/clear_user.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/lib/clear_user.S Thu Oct 4 00:21:39 2001 @@ -8,7 +8,7 @@ * r8: number of bytes that didn't get cleared due to a fault * * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co - * Copyright (C) 1999 Stephane Eranian + * Stephane Eranian */ =20 #include @@ -62,11 +62,11 @@ ;; // avoid WAW on CFM adds tmp=3D-1,len // br.ctop is repeat/until mov ret0=3Dlen // return value is length at this point -(p6) br.ret.spnt.few rp +(p6) br.ret.spnt.many rp ;; cmp.lt p6,p0=16,len // if len > 16 then long memset mov ar.lc=3Dtmp // initialize lc for small count -(p6) br.cond.dptk.few long_do_clear +(p6) br.cond.dptk .long_do_clear ;; // WAR on ar.lc // // worst case 16 iterations, avg 8 iterations @@ -79,7 +79,7 @@ 1: EX( .Lexit1, st1 [buf]=3Dr0,1 ) adds len=3D-1,len // countdown length using len - br.cloop.dptk.few 1b + br.cloop.dptk 1b ;; // avoid RAW on ar.lc // // .Lexit4: comes from byte by byte loop @@ -87,7 +87,7 @@ .Lexit1: mov ret0=3Dlen // faster than using ar.lc mov ar.lc=3Dsaved_lc - 
br.ret.sptk.few rp // end of short clear_user + br.ret.sptk.many rp // end of short clear_user =20 =20 // @@ -98,7 +98,7 @@ // instead of ret0 is due to the fact that the exception code // changes the values of r8. // -long_do_clear: +.long_do_clear: tbit.nz p6,p0=3Dbuf,0 // odd alignment (for long_do_clear) ;; EX( .Lexit3, (p6) st1 [buf]=3Dr0,1 ) // 1-byte aligned @@ -119,7 +119,7 @@ ;; cmp.eq p6,p0=3Dr0,cnt adds tmp=3D-1,cnt -(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left +(p6) br.cond.dpnt .dotail // we have less than 16 bytes left ;; adds buf2=3D8,buf // setup second base pointer mov ar.lc=3Dtmp @@ -148,7 +148,7 @@ ;; // needed to get len correct when error st8 [buf2]=3Dr0,16 adds len=3D-16,len - br.cloop.dptk.few 2b + br.cloop.dptk 2b ;; mov ar.lc=3Dsaved_lc // @@ -178,7 +178,7 @@ ;; EX( .Lexit2, (p7) st1 [buf]=3Dr0 ) // only 1 byte left mov ret0=3Dr0 // success - br.ret.dptk.few rp // end of most likely path + br.ret.sptk.many rp // end of most likely path =20 // // Outlined error handling code @@ -205,5 +205,5 @@ .Lexit3: mov ret0=3Dlen mov ar.lc=3Dsaved_lc - br.ret.dptk.few rp + br.ret.sptk.many rp END(__do_clear_user) diff -urN linux-2.4.13/arch/ia64/lib/copy_page.S linux-2.4.13-lia/arch/ia64= /lib/copy_page.S --- linux-2.4.13/arch/ia64/lib/copy_page.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/lib/copy_page.S Thu Oct 4 00:21:39 2001 @@ -90,5 +90,5 @@ mov pr=3Dsaved_pr,0xffffffffffff0000 // restore predicates mov ar.pfs=3Dsaved_pfs mov ar.lc=3Dsaved_lc - br.ret.sptk.few rp + br.ret.sptk.many rp END(copy_page) diff -urN linux-2.4.13/arch/ia64/lib/copy_user.S linux-2.4.13-lia/arch/ia64= /lib/copy_user.S --- linux-2.4.13/arch/ia64/lib/copy_user.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/lib/copy_user.S Thu Oct 4 00:21:39 2001 @@ -19,8 +19,8 @@ * ret0 0 in case of success. The number of bytes NOT copied in * case of error. 
* - * Copyright (C) 2000 Hewlett-Packard Co - * Copyright (C) 2000 Stephane Eranian + * Copyright (C) 2000-2001 Hewlett-Packard Co + * Stephane Eranian * * Fixme: * - handle the case where we have more than 16 bytes and the alignment @@ -85,7 +85,7 @@ cmp.eq p8,p0=3Dr0,len // check for zero length .save ar.lc, saved_lc mov saved_lc=3Dar.lc // preserve ar.lc (slow) -(p8) br.ret.spnt.few rp // empty mempcy() +(p8) br.ret.spnt.many rp // empty mempcy() ;; add enddst=3Ddst,len // first byte after end of source add endsrc=3Dsrc,len // first byte after end of destination @@ -103,26 +103,26 @@ cmp.lt p10,p7=3DCOPY_BREAK,len // if len > COPY_BREAK then long copy =20 xor tmp=3Dsrc,dst // same alignment test prepare -(p10) br.cond.dptk.few long_copy_user +(p10) br.cond.dptk .long_copy_user ;; // RAW pr.rot/p16 ? // // Now we do the byte by byte loop with software pipeline // // p7 is necessarily false by now 1: - EX(failure_in_pipe1,(p16) ld1 val1[0]=3D[src1],1) - EX(failure_out,(EPI) st1 [dst1]=3Dval1[PIPE_DEPTH-1],1) + EX(.failure_in_pipe1,(p16) ld1 val1[0]=3D[src1],1) + EX(.failure_out,(EPI) st1 [dst1]=3Dval1[PIPE_DEPTH-1],1) br.ctop.dptk.few 1b ;; mov ar.lc=3Dsaved_lc mov pr=3Dsaved_pr,0xffffffffffff0000 mov ar.pfs=3Dsaved_pfs // restore ar.ec - br.ret.sptk.few rp // end of short memcpy + br.ret.sptk.many rp // end of short memcpy =20 // // Not 8-byte aligned // -diff_align_copy_user: +.diff_align_copy_user: // At this point we know we have more than 16 bytes to copy // and also that src and dest do _not_ have the same alignment. and src2=3D0x7,src1 // src offset @@ -153,7 +153,7 @@ // We know src1 is not 8-byte aligned in this case. 
// cmp.eq p14,p15=3Dr0,dst2 -(p15) br.cond.spnt.few 1f +(p15) br.cond.spnt 1f ;; sub t1=3D8,src2 mov t2=3Dsrc2 @@ -163,7 +163,7 @@ ;; sub lshiftd,rshift ;; - br.cond.spnt.few word_copy_user + br.cond.spnt .word_copy_user ;; 1: cmp.leu p14,p15=3Dsrc2,dst2 @@ -192,15 +192,15 @@ mov ar.lc=3Dcnt ;; 2: - EX(failure_in_pipe2,(p16) ld1 val1[0]=3D[src1],1) - EX(failure_out,(EPI) st1 [dst1]=3Dval1[PIPE_DEPTH-1],1) + EX(.failure_in_pipe2,(p16) ld1 val1[0]=3D[src1],1) + EX(.failure_out,(EPI) st1 [dst1]=3Dval1[PIPE_DEPTH-1],1) br.ctop.dptk.few 2b ;; clrrrb ;; -word_copy_user: +.word_copy_user: cmp.gtu p9,p0=16,len1 -(p9) br.cond.spnt.few 4f // if (16 > len1) skip 8-byte copy +(p9) br.cond.spnt 4f // if (16 > len1) skip 8-byte copy ;; shr.u cnt=3Dlen1,3 // number of 64-bit words ;; @@ -232,24 +232,24 @@ #define EPI_1 p[PIPE_DEPTH-2] #define SWITCH(pred, shift) cmp.eq pred,p0=3Dshift,rshift #define CASE(pred, shift) \ - (pred) br.cond.spnt.few copy_user_bit##shift + (pred) br.cond.spnt .copy_user_bit##shift #define BODY(rshift) \ -copy_user_bit##rshift: \ +.copy_user_bit##rshift: \ 1: \ - EX(failure_out,(EPI) st8 [dst1]=3Dtmp,8); \ + EX(.failure_out,(EPI) st8 [dst1]=3Dtmp,8); \ (EPI_1) shrp tmp=3Dval1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \ EX(3f,(p16) ld8 val1[0]=3D[src1],8); \ - br.ctop.dptk.few 1b; \ + br.ctop.dptk 1b; \ ;; \ - br.cond.sptk.few .diff_align_do_tail; \ + br.cond.sptk.many .diff_align_do_tail; \ 2: \ (EPI) st8 [dst1]=3Dtmp,8; \ (EPI_1) shrp tmp=3Dval1[PIPE_DEPTH-3],val1[PIPE_DEPTH-2],rshift; \ 3: \ (p16) mov val1[0]=3Dr0; \ - br.ctop.dptk.few 2b; \ + br.ctop.dptk 2b; \ ;; \ - br.cond.sptk.few failure_in2 + br.cond.sptk.many .failure_in2 =20 // // Since the instruction 'shrp' requires a fixed 128-bit value @@ -301,25 +301,25 @@ mov ar.lc=3Dlen1 ;; 5: - EX(failure_in_pipe1,(p16) ld1 val1[0]=3D[src1],1) - EX(failure_out,(EPI) st1 [dst1]=3Dval1[PIPE_DEPTH-1],1) + EX(.failure_in_pipe1,(p16) ld1 val1[0]=3D[src1],1) + EX(.failure_out,(EPI) st1 
[dst1]=3Dval1[PIPE_DEPTH-1],1) br.ctop.dptk.few 5b ;; mov ar.lc=3Dsaved_lc mov pr=3Dsaved_pr,0xffffffffffff0000 mov ar.pfs=3Dsaved_pfs - br.ret.dptk.few rp + br.ret.sptk.many rp =20 // // Beginning of long mempcy (i.e. > 16 bytes) // -long_copy_user: +.long_copy_user: tbit.nz p6,p7=3Dsrc1,0 // odd alignement and tmp=3D7,tmp ;; cmp.eq p10,p8=3Dr0,tmp mov len1=3Dlen // copy because of rotation -(p8) br.cond.dpnt.few diff_align_copy_user +(p8) br.cond.dpnt .diff_align_copy_user ;; // At this point we know we have more than 16 bytes to copy // and also that both src and dest have the same alignment @@ -327,11 +327,11 @@ // forward slowly until we reach 16byte alignment: no need to // worry about reaching the end of buffer. // - EX(failure_in1,(p6) ld1 val1[0]=3D[src1],1) // 1-byte aligned + EX(.failure_in1,(p6) ld1 val1[0]=3D[src1],1) // 1-byte aligned (p6) adds len1=3D-1,len1;; tbit.nz p7,p0=3Dsrc1,1 ;; - EX(failure_in1,(p7) ld2 val1[1]=3D[src1],2) // 2-byte aligned + EX(.failure_in1,(p7) ld2 val1[1]=3D[src1],2) // 2-byte aligned (p7) adds len1=3D-2,len1;; tbit.nz p8,p0=3Dsrc1,2 ;; @@ -339,28 +339,28 @@ // Stop bit not required after ld4 because if we fail on ld4 // we have never executed the ld1, therefore st1 is not executed. // - EX(failure_in1,(p8) ld4 val2[0]=3D[src1],4) // 4-byte aligned + EX(.failure_in1,(p8) ld4 val2[0]=3D[src1],4) // 4-byte aligned ;; - EX(failure_out,(p6) st1 [dst1]=3Dval1[0],1) + EX(.failure_out,(p6) st1 [dst1]=3Dval1[0],1) tbit.nz p9,p0=3Dsrc1,3 ;; // // Stop bit not required after ld8 because if we fail on ld8 // we have never executed the ld2, therefore st2 is not executed. 
// - EX(failure_in1,(p9) ld8 val2[1]=3D[src1],8) // 8-byte aligned - EX(failure_out,(p7) st2 [dst1]=3Dval1[1],2) + EX(.failure_in1,(p9) ld8 val2[1]=3D[src1],8) // 8-byte aligned + EX(.failure_out,(p7) st2 [dst1]=3Dval1[1],2) (p8) adds len1=3D-4,len1 ;; - EX(failure_out, (p8) st4 [dst1]=3Dval2[0],4) + EX(.failure_out, (p8) st4 [dst1]=3Dval2[0],4) (p9) adds len1=3D-8,len1;; shr.u cnt=3Dlen1,4 // number of 128-bit (2x64bit) words ;; - EX(failure_out, (p9) st8 [dst1]=3Dval2[1],8) + EX(.failure_out, (p9) st8 [dst1]=3Dval2[1],8) tbit.nz p6,p0=3Dlen1,3 cmp.eq p7,p0=3Dr0,cnt adds tmp=3D-1,cnt // br.ctop is repeat/until -(p7) br.cond.dpnt.few .dotail // we have less than 16 bytes left +(p7) br.cond.dpnt .dotail // we have less than 16 bytes left ;; adds src2=3D8,src1 adds dst2=3D8,dst1 @@ -370,12 +370,12 @@ // 16bytes/iteration // 2: - EX(failure_in3,(p16) ld8 val1[0]=3D[src1],16) + EX(.failure_in3,(p16) ld8 val1[0]=3D[src1],16) (p16) ld8 val2[0]=3D[src2],16 =20 - EX(failure_out, (EPI) st8 [dst1]=3Dval1[PIPE_DEPTH-1],16) + EX(.failure_out, (EPI) st8 [dst1]=3Dval1[PIPE_DEPTH-1],16) (EPI) st8 [dst2]=3Dval2[PIPE_DEPTH-1],16 - br.ctop.dptk.few 2b + br.ctop.dptk 2b ;; // RAW on src1 when fall through from loop // // Tail correction based on len only @@ -384,29 +384,28 @@ // is 16 byte aligned AND we have less than 16 bytes to copy. 
// .dotail: - EX(failure_in1,(p6) ld8 val1[0]=3D[src1],8) // at least 8 bytes + EX(.failure_in1,(p6) ld8 val1[0]=3D[src1],8) // at least 8 bytes tbit.nz p7,p0=3Dlen1,2 ;; - EX(failure_in1,(p7) ld4 val1[1]=3D[src1],4) // at least 4 bytes + EX(.failure_in1,(p7) ld4 val1[1]=3D[src1],4) // at least 4 bytes tbit.nz p8,p0=3Dlen1,1 ;; - EX(failure_in1,(p8) ld2 val2[0]=3D[src1],2) // at least 2 bytes + EX(.failure_in1,(p8) ld2 val2[0]=3D[src1],2) // at least 2 bytes tbit.nz p9,p0=3Dlen1,0 ;; - EX(failure_out, (p6) st8 [dst1]=3Dval1[0],8) + EX(.failure_out, (p6) st8 [dst1]=3Dval1[0],8) ;; - EX(failure_in1,(p9) ld1 val2[1]=3D[src1]) // only 1 byte left + EX(.failure_in1,(p9) ld1 val2[1]=3D[src1]) // only 1 byte left mov ar.lc=3Dsaved_lc ;; - EX(failure_out,(p7) st4 [dst1]=3Dval1[1],4) + EX(.failure_out,(p7) st4 [dst1]=3Dval1[1],4) mov pr=3Dsaved_pr,0xffffffffffff0000 ;; - EX(failure_out, (p8) st2 [dst1]=3Dval2[0],2) + EX(.failure_out, (p8) st2 [dst1]=3Dval2[0],2) mov ar.pfs=3Dsaved_pfs ;; - EX(failure_out, (p9) st1 [dst1]=3Dval2[1]) - br.ret.dptk.few rp - + EX(.failure_out, (p9) st1 [dst1]=3Dval2[1]) + br.ret.sptk.many rp =20 =20 // @@ -433,32 +432,32 @@ // pipeline going. We can't really do this inline because // p16 is always reset to 1 when lc > 0. // -failure_in_pipe1: +.failure_in_pipe1: sub ret0=3Dendsrc,src1 // number of bytes to zero, i.e. not copied 1: (p16) mov val1[0]=3Dr0 (EPI) st1 [dst1]=3Dval1[PIPE_DEPTH-1],1 - br.ctop.dptk.few 1b + br.ctop.dptk 1b ;; mov pr=3Dsaved_pr,0xffffffffffff0000 mov ar.lc=3Dsaved_lc mov ar.pfs=3Dsaved_pfs - br.ret.dptk.few rp + br.ret.sptk.many rp =20 // // This is the case where the byte by byte copy fails on the load // when we copy the head. We need to finish the pipeline and copy // zeros for the rest of the destination. Since this happens // at the top we still need to fill the body and tail. -failure_in_pipe2: +.failure_in_pipe2: sub ret0=3Dendsrc,src1 // number of bytes to zero, i.e. 
not copied 2: (p16) mov val1[0]=3Dr0 (EPI) st1 [dst1]=3Dval1[PIPE_DEPTH-1],1 - br.ctop.dptk.few 2b + br.ctop.dptk 2b ;; sub len=3Denddst,dst1,1 // precompute len - br.cond.dptk.few failure_in1bis + br.cond.dptk.many .failure_in1bis ;; =20 // @@ -533,9 +532,7 @@ // This means that we are in a situation similar the a fault in the // head part. That's nice! // -failure_in1: -// sub ret0=3Denddst,dst1 // number of bytes to zero, i.e. not copied -// sub len=3Denddst,dst1,1 +.failure_in1: sub ret0=3Dendsrc,src1 // number of bytes to zero, i.e. not copied sub len=3Dendsrc,src1,1 // @@ -546,18 +543,17 @@ // calling side. // ;; -failure_in1bis: // from (failure_in3) +.failure_in1bis: // from (.failure_in3) mov ar.lc=3Dlen // Continue with a stupid byte store. ;; 5: st1 [dst1]=3Dr0,1 - br.cloop.dptk.few 5b + br.cloop.dptk 5b ;; -skip_loop: mov pr=3Dsaved_pr,0xffffffffffff0000 mov ar.lc=3Dsaved_lc mov ar.pfs=3Dsaved_pfs - br.ret.dptk.few rp + br.ret.sptk.many rp =20 // // Here we simply restart the loop but instead @@ -569,7 +565,7 @@ // we MUST use src1/endsrc here and not dst1/enddst because // of the pipeline effect. // -failure_in3: +.failure_in3: sub ret0=3Dendsrc,src1 // number of bytes to zero, i.e. not copied ;; 2: @@ -577,36 +573,36 @@ (p16) mov val2[0]=3Dr0 (EPI) st8 [dst1]=3Dval1[PIPE_DEPTH-1],16 (EPI) st8 [dst2]=3Dval2[PIPE_DEPTH-1],16 - br.ctop.dptk.few 2b + br.ctop.dptk 2b ;; cmp.ne p6,p0=3Ddst1,enddst // Do we need to finish the tail ? sub len=3Denddst,dst1,1 // precompute len -(p6) br.cond.dptk.few failure_in1bis +(p6) br.cond.dptk .failure_in1bis ;; mov pr=3Dsaved_pr,0xffffffffffff0000 mov ar.lc=3Dsaved_lc mov ar.pfs=3Dsaved_pfs - br.ret.dptk.few rp + br.ret.sptk.many rp =20 -failure_in2: +.failure_in2: sub ret0=3Dendsrc,src1 cmp.ne p6,p0=3Ddst1,enddst // Do we need to finish the tail ? 
sub len=3Denddst,dst1,1 // precompute len -(p6) br.cond.dptk.few failure_in1bis +(p6) br.cond.dptk .failure_in1bis ;; mov pr=3Dsaved_pr,0xffffffffffff0000 mov ar.lc=3Dsaved_lc mov ar.pfs=3Dsaved_pfs - br.ret.dptk.few rp + br.ret.sptk.many rp =20 // // handling of failures on stores: that's the easy part // -failure_out: +.failure_out: sub ret0=3Denddst,dst1 mov pr=3Dsaved_pr,0xffffffffffff0000 mov ar.lc=3Dsaved_lc =20 mov ar.pfs=3Dsaved_pfs - br.ret.dptk.few rp + br.ret.sptk.many rp END(__copy_user) diff -urN linux-2.4.13/arch/ia64/lib/do_csum.S linux-2.4.13-lia/arch/ia64/l= ib/do_csum.S --- linux-2.4.13/arch/ia64/lib/do_csum.S Tue Jul 31 10:30:08 2001 +++ linux-2.4.13-lia/arch/ia64/lib/do_csum.S Thu Oct 4 00:21:39 2001 @@ -16,7 +16,6 @@ * back-to-back 8-byte words per loop. Clean up the initialization * for the loop. Support the cases where load latency =3D 1 or 2. * Set CONFIG_IA64_LOAD_LATENCY to 1 or 2 (default). - * */ =20 #include @@ -130,7 +129,7 @@ ;; // avoid WAW on CFM mov tmp3=3D0x7 // a temporary mask/value add tmp1=3Dbuf,len // last byte's address -(p6) br.ret.spnt.few rp // return if true (hope we can avoid that) +(p6) br.ret.spnt.many rp // return if true (hope we can avoid that) =20 and firstoff=3D7,buf // how many bytes off for first1 element tbit.nz p15,p0=3Dbuf,0 // is buf an odd address ? @@ -181,9 +180,9 @@ cmp.ltu p6,p0=3Dresult1[0],word1[0] // check the carry ;; (p6) adds result1[0]=3D1,result1[0] -(p8) br.cond.dptk.few do_csum_exit // if (within an 8-byte word) +(p8) br.cond.dptk .do_csum_exit // if (within an 8-byte word) ;; -(p11) br.cond.dptk.few do_csum16 // if (count is even) +(p11) br.cond.dptk .do_csum16 // if (count is even) ;; // Here count is odd. 
ld8 word1[1]=3D[first1],8 // load an 8-byte word @@ -196,14 +195,14 @@ ;; (p6) adds result1[0]=3D1,result1[0] ;; -(p9) br.cond.sptk.few do_csum_exit // if (count =3D 1) exit +(p9) br.cond.sptk .do_csum_exit // if (count =3D 1) exit // Fall through to caluculate the checksum, feeding result1[0] as // the initial value in result1[0]. ;; // // Calculate the checksum loading two 8-byte words per loop. // -do_csum16: +.do_csum16: mov saved_lc=3Dar.lc shr.u count=3Dcount,1 // we do 16 bytes per loop ;; @@ -225,7 +224,7 @@ ;; add first2=3D8,first1 ;; -(p9) br.cond.sptk.few do_csum_exit +(p9) br.cond.sptk .do_csum_exit ;; nop.m 0 nop.i 0 @@ -241,7 +240,7 @@ 2: (p16) ld8 word1[0]=3D[first1],16 (p16) ld8 word2[0]=3D[first2],16 - br.ctop.sptk.few 1b + br.ctop.sptk 1b ;; // Since len is a 32-bit value, carry cannot be larger than // a 64-bit value. @@ -263,7 +262,7 @@ ;; (p6) adds result1[0]=3D1,result1[0] ;; -do_csum_exit: +.do_csum_exit: movl tmp3=3D0xffffffff ;; // XXX Fixme @@ -299,7 +298,7 @@ ;; mov ar.lc=3Dsaved_lc (p15) shr.u ret0=3Dret0,64-16 // + shift back to position =3D swap bytes - br.ret.sptk.few rp + br.ret.sptk.many rp =20 // I (Jun Nakajima) wrote an equivalent code (see below), but it was // not much better than the original. 
So keep the original there so that @@ -331,6 +330,6 @@ //(p15) mux1 ret0=3Dret0,@rev // reverse word // ;; //(p15) shr.u ret0=3Dret0,64-16 // + shift back to position =3D swap bytes -// br.ret.sptk.few rp +// br.ret.sptk.many rp =20 END(do_csum) diff -urN linux-2.4.13/arch/ia64/lib/idiv32.S linux-2.4.13-lia/arch/ia64/li= b/idiv32.S --- linux-2.4.13/arch/ia64/lib/idiv32.S Mon Oct 9 17:54:56 2000 +++ linux-2.4.13-lia/arch/ia64/lib/idiv32.S Thu Oct 4 00:21:39 2001 @@ -79,5 +79,5 @@ ;; #endif getf.sig r8 =3D f6 // transfer result to result register - br.ret.sptk rp + br.ret.sptk.many rp END(NAME) diff -urN linux-2.4.13/arch/ia64/lib/idiv64.S linux-2.4.13-lia/arch/ia64/li= b/idiv64.S --- linux-2.4.13/arch/ia64/lib/idiv64.S Thu Apr 5 12:51:47 2001 +++ linux-2.4.13-lia/arch/ia64/lib/idiv64.S Thu Oct 4 00:21:39 2001 @@ -89,5 +89,5 @@ #endif getf.sig r8 =3D f17 // transfer result to result register ldf.fill f17 =3D [sp] - br.ret.sptk rp + br.ret.sptk.many rp END(NAME) diff -urN linux-2.4.13/arch/ia64/lib/memcpy.S linux-2.4.13-lia/arch/ia64/li= b/memcpy.S --- linux-2.4.13/arch/ia64/lib/memcpy.S Thu Apr 5 12:51:47 2001 +++ linux-2.4.13-lia/arch/ia64/lib/memcpy.S Thu Oct 4 00:21:39 2001 @@ -9,20 +9,14 @@ * Output: * no return value * - * Copyright (C) 2000 Hewlett-Packard Co - * Copyright (C) 2000 Stephane Eranian - * Copyright (C) 2000 David Mosberger-Tang + * Copyright (C) 2000-2001 Hewlett-Packard Co + * Stephane Eranian + * David Mosberger-Tang */ #include =20 #include =20 -#if defined(CONFIG_ITANIUM_B0_SPECIFIC) || defined(CONFIG_ITANIUM_B1_SPECI= FIC) -# define BRP(args...) nop.b 0 -#else -# define BRP(args...) 
brp.loop.imp args -#endif - GLOBAL_ENTRY(bcopy) .regstk 3,0,0,0 mov r8=3Din0 @@ -103,8 +97,8 @@ cmp.ne p6,p0=3Dt0,r0 =20 mov src=3Din1 // copy because of rotation -(p7) br.cond.spnt.few memcpy_short -(p6) br.cond.spnt.few memcpy_long +(p7) br.cond.spnt.few .memcpy_short +(p6) br.cond.spnt.few .memcpy_long ;; nop.m 0 ;; @@ -119,7 +113,7 @@ 1: { .mib (p[0]) ld8 val[0]=3D[src],8 nop.i 0 - BRP(1b, 2f) + brp.loop.imp 1b, 2f } 2: { .mfb (p[N-1])st8 [dst]=3Dval[N-1],8 @@ -139,14 +133,14 @@ * issues, we want to avoid read-modify-write of entire words. */ .align 32 -memcpy_short: +.memcpy_short: adds cnt=3D-1,in2 // br.ctop is repeat/until mov ar.ec=3DMEM_LAT - BRP(1f, 2f) + brp.loop.imp 1f, 2f ;; mov ar.lc=3Dcnt ;; - nop.m 0 =09 + nop.m 0 ;; nop.m 0 nop.i 0 @@ -163,7 +157,7 @@ 1: { .mib (p[0]) ld1 val[0]=3D[src],1 nop.i 0 - BRP(1b, 2f) + brp.loop.imp 1b, 2f } ;; 2: { .mfb (p[MEM_LAT-1])st1 [dst]=3Dval[MEM_LAT-1],1 @@ -202,7 +196,7 @@ =20 #define LOG_LOOP_SIZE 6 =20 -memcpy_long: +.memcpy_long: alloc t3=3Dar.pfs,3,Nrot,0,Nrot // resize register frame and t0=3D-8,src // t0 =3D src & ~7 and t2=3D7,src // t2 =3D src & 7 @@ -247,7 +241,7 @@ mov t4=3Dip } ;; and src2=3D-8,src // align source pointer - adds t4=3Dmemcpy_loops-1b,t4 + adds t4=3D.memcpy_loops-1b,t4 mov ar.ec=3DN =20 and t0=3D7,src // t0 =3D src & 7 @@ -266,7 +260,7 @@ mov pr=3Dcnt,0x38 // set (p5,p4,p3) to # of bytes last-word bytes to co= py mov ar.lc=3Dt2 ;; - nop.m 0 =09 + nop.m 0 ;; nop.m 0 nop.i 0 @@ -278,7 +272,7 @@ br.sptk.few b6 ;; =20 -memcpy_tail: +.memcpy_tail: // At this point, (p5,p4,p3) are set to the number of bytes left to copy = (which is // less than 8) and t0 contains the last few bytes of the src buffer: (p5) st4 [dst]=3Dt0,4 @@ -300,7 +294,7 @@ 1: { .mib \ (p[0]) ld8 val[0]=3D[src2],8; \ (p[MEM_LAT+3]) shrp w[0]=3Dval[MEM_LAT+3],val[MEM_LAT+4-index],shift; \ - BRP(1b, 2f) \ + brp.loop.imp 1b, 2f \ }; \ 2: { .mfb \ (p[MEM_LAT+4]) st8 [dst]=3Dw[1],8; \ @@ -311,8 +305,8 @@ ld8 val[N-1]=3D[src_end]; 
/* load last word (may be same as val[N]) */ \ ;; \ shrp t0=3Dval[N-1],val[N-index],shift; \ - br memcpy_tail -memcpy_loops: + br .memcpy_tail +.memcpy_loops: COPY(0, 1) /* no point special casing this---it doesn't go any faster wit= hout shrp */ COPY(8, 0) COPY(16, 0) diff -urN linux-2.4.13/arch/ia64/lib/memset.S linux-2.4.13-lia/arch/ia64/li= b/memset.S --- linux-2.4.13/arch/ia64/lib/memset.S Thu Apr 5 12:51:47 2001 +++ linux-2.4.13-lia/arch/ia64/lib/memset.S Thu Oct 4 00:21:40 2001 @@ -43,11 +43,11 @@ =20 adds tmp=3D-1,len // br.ctop is repeat/until tbit.nz p6,p0=3Dbuf,0 // odd alignment -(p8) br.ret.spnt.few rp +(p8) br.ret.spnt.many rp =20 cmp.lt p7,p0=16,len // if len > 16 then long memset mux1 val=3Dval,@brcst // prepare value -(p7) br.cond.dptk.few long_memset +(p7) br.cond.dptk .long_memset ;; mov ar.lc=3Dtmp // initialize lc for small count ;; // avoid RAW and WAW on ar.lc @@ -57,11 +57,11 @@ ;; // avoid RAW on ar.lc mov ar.lc=3Dsaved_lc mov ar.pfs=3Dsaved_pfs - br.ret.sptk.few rp // end of short memset + br.ret.sptk.many rp // end of short memset =20 // at this point we know we have more than 16 bytes to copy // so we focus on alignment -long_memset: +.long_memset: (p6) st1 [buf]=3Dval,1 // 1-byte aligned (p6) adds len=3D-1,len;; // sync because buf is modified tbit.nz p6,p0=3Dbuf,1 @@ -80,7 +80,7 @@ ;; cmp.eq p6,p0=3Dr0,cnt adds tmp=3D-1,cnt -(p6) br.cond.dpnt.few .dotail // we have less than 16 bytes left +(p6) br.cond.dpnt .dotail // we have less than 16 bytes left ;; adds buf2=3D8,buf // setup second base pointer mov ar.lc=3Dtmp @@ -104,5 +104,5 @@ mov ar.lc=3Dsaved_lc ;; (p6) st1 [buf]=3Dval // only 1 byte left - br.ret.dptk.few rp + br.ret.sptk.many rp END(memset) diff -urN linux-2.4.13/arch/ia64/lib/strlen.S linux-2.4.13-lia/arch/ia64/li= b/strlen.S --- linux-2.4.13/arch/ia64/lib/strlen.S Thu Apr 5 12:51:47 2001 +++ linux-2.4.13-lia/arch/ia64/lib/strlen.S Thu Oct 4 00:21:40 2001 @@ -11,7 +11,7 @@ * does not count the \0 * * Copyright (C) 1999, 
2001 Hewlett-Packard Co - * Copyright (C) 1999 Stephane Eranian + * Stephane Eranian * * 09/24/99 S.Eranian add speculation recovery code */ @@ -116,7 +116,7 @@ ld8.s w[0]=3D[src],8 // speculatively load next to next cmp.eq.and p6,p0=3D8,val1 // p6 =3D p6 and val1=3D8 cmp.eq.and p6,p0=3D8,val2 // p6 =3D p6 and mask=3D8 -(p6) br.wtop.dptk.few 1b // loop until p6 =3D 0 +(p6) br.wtop.dptk 1b // loop until p6 =3D 0 ;; // // We must return try the recovery code iff @@ -127,14 +127,14 @@ // cmp.eq p8,p9=3D8,val1 // p6 =3D val1 had zero (disambiguate) tnat.nz p6,p7=3Dval1 // test NaT on val1 -(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT +(p6) br.cond.spnt .recover // jump to recovery if val1 is NaT ;; // // if we come here p7 is true, i.e., initialized for // cmp // cmp.eq.and p7,p0=3D8,val1// val1=3D8? tnat.nz.and p7,p0=3Dval2 // test NaT if val2 -(p7) br.cond.spnt.few recover// jump to recovery if val2 is NaT +(p7) br.cond.spnt .recover // jump to recovery if val2 is NaT ;; (p8) mov val1=3Dval2 // the other test got us out of the loop (p8) adds src=3D-16,src // correct position when 3 ahead @@ -146,7 +146,7 @@ ;; sub ret0=3Dret0,tmp // adjust mov ar.pfs=3Dsaved_pfs // because of ar.ec, restore no matter what - br.ret.sptk.few rp // end of normal execution + br.ret.sptk.many rp // end of normal execution =20 // // Outlined recovery code when speculation failed @@ -165,7 +165,7 @@ // - today we restart from the beginning of the string instead // of trying to continue where we left off. // -recover: +.recover: ld8 val=3D[base],8 // will fail if unrecoverable fault ;; or val=3Dval,mask // remask first bytes @@ -180,7 +180,7 @@ czx1.r val1=3Dval // search 0 byte from right ;; cmp.eq p6,p0=3D8,val1 // val1=3D8 ? 
-(p6) br.wtop.dptk.few 2b // loop until p6 =3D 0 +(p6) br.wtop.dptk 2b // loop until p6 =3D 0 ;; // (avoid WAW on p63) sub ret0=3Dbase,orig // distance from base sub tmp=3D8,val1 @@ -188,5 +188,5 @@ ;; sub ret0=3Dret0,tmp // length=3Dnow - back -1 mov ar.pfs=3Dsaved_pfs // because of ar.ec, restore no matter what - br.ret.sptk.few rp // end of successful recovery code + br.ret.sptk.many rp // end of successful recovery code END(strlen) diff -urN linux-2.4.13/arch/ia64/lib/strlen_user.S linux-2.4.13-lia/arch/ia= 64/lib/strlen_user.S --- linux-2.4.13/arch/ia64/lib/strlen_user.S Thu Apr 5 12:51:47 2001 +++ linux-2.4.13-lia/arch/ia64/lib/strlen_user.S Thu Oct 4 00:21:40 2001 @@ -8,8 +8,8 @@ * ret0 0 in case of fault, strlen(buffer)+1 otherwise * * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co - * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang - * Copyright (C) 1998, 1999 Stephane Eranian + * David Mosberger-Tang + * Stephane Eranian * * 01/19/99 S.Eranian heavily enhanced version (see details below) * 09/24/99 S.Eranian added speculation recovery code @@ -108,7 +108,7 @@ mov ar.ec=3Dr0 // clear epilogue counter (saved in ar.pfs) ;; add base=3D-16,src // keep track of aligned base - chk.s v[1], recover // if already NaT, then directly skip to recover + chk.s v[1], .recover // if already NaT, then directly skip to recover or v[1]=3Dv[1],mask // now we have a safe initial byte pattern ;; 1: @@ -130,14 +130,14 @@ // cmp.eq p8,p9=3D8,val1 // p6 =3D val1 had zero (disambiguate) tnat.nz p6,p7=3Dval1 // test NaT on val1 -(p6) br.cond.spnt.few recover// jump to recovery if val1 is NaT +(p6) br.cond.spnt .recover // jump to recovery if val1 is NaT ;; // // if we come here p7 is true, i.e., initialized for // cmp // cmp.eq.and p7,p0=3D8,val1// val1=3D8?
 	tnat.nz.and p7,p0=val2		// test NaT if val2
-(p7)	br.cond.spnt.few recover	// jump to recovery if val2 is NaT
+(p7)	br.cond.spnt .recover		// jump to recovery if val2 is NaT
 	;;
 (p8)	mov val1=val2			// val2 contains the value
 (p8)	adds src=-16,src		// correct position when 3 ahead
@@ -149,7 +149,7 @@
 	;;
 	sub ret0=ret0,tmp		// length=now - back -1
 	mov ar.pfs=saved_pfs		// because of ar.ec, restore no matter what
-	br.ret.sptk.few rp		// end of normal execution
+	br.ret.sptk.many rp		// end of normal execution
 
 	//
 	// Outlined recovery code when speculation failed
@@ -162,7 +162,7 @@
 	// - today we restart from the beginning of the string instead
 	//   of trying to continue where we left off.
 	//
-recover:
+.recover:
 	EX(.Lexit1, ld8 val=[base],8)	// load the initial bytes
 	;;
 	or val=val,mask			// remask first bytes
@@ -185,7 +185,7 @@
 	;;
 	sub ret0=ret0,tmp		// length=now - back -1
 	mov ar.pfs=saved_pfs		// because of ar.ec, restore no matter what
-	br.ret.sptk.few rp		// end of successful recovery code
+	br.ret.sptk.many rp		// end of successful recovery code
 
 	//
 	// We failed even on the normal load (called from exception handler)
@@ -194,5 +194,5 @@
 	mov ret0=0
 	mov pr=saved_pr,0xffffffffffff0000
 	mov ar.pfs=saved_pfs		// because of ar.ec, restore no matter what
-	br.ret.sptk.few rp
+	br.ret.sptk.many rp
 END(__strlen_user)
diff -urN linux-2.4.13/arch/ia64/lib/strncpy_from_user.S linux-2.4.13-lia/arch/ia64/lib/strncpy_from_user.S
--- linux-2.4.13/arch/ia64/lib/strncpy_from_user.S	Thu Apr  5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strncpy_from_user.S	Thu Oct  4 00:21:40 2001
@@ -40,5 +40,5 @@
 (p6)	mov r8=in2		// buffer filled up---return buffer length
 (p7)	sub r8=in1,r9,1		// return string length (excluding NUL character)
 [.Lexit:]
-	br.ret.sptk.few rp
+	br.ret.sptk.many rp
 END(__strncpy_from_user)
diff -urN linux-2.4.13/arch/ia64/lib/strnlen_user.S linux-2.4.13-lia/arch/ia64/lib/strnlen_user.S
--- linux-2.4.13/arch/ia64/lib/strnlen_user.S	Thu Apr  5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/lib/strnlen_user.S	Thu Oct  4 00:21:40 2001
@@ -33,7 +33,7 @@
 	add r9=1,r9
 	;;
 	cmp.eq p6,p0=r8,r0
-(p6)	br.dpnt.few .Lexit
+(p6)	br.cond.dpnt .Lexit
 	br.cloop.dptk.few .Loop1
 
 	add r9=1,in1	// NUL not found---return N+1
@@ -41,5 +41,5 @@
 .Lexit:
 	mov r8=r9
 	mov ar.lc=r16	// restore ar.lc
-	br.ret.sptk.few rp
+	br.ret.sptk.many rp
 END(__strnlen_user)
diff -urN linux-2.4.13/arch/ia64/mm/fault.c linux-2.4.13-lia/arch/ia64/mm/fault.c
--- linux-2.4.13/arch/ia64/mm/fault.c	Thu Apr  5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/mm/fault.c	Thu Oct  4 00:21:40 2001
@@ -1,8 +1,8 @@
 /*
  * MMU fault handling support.
  *
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang
+ * Copyright (C) 1998-2001 Hewlett-Packard Co
+ *	David Mosberger-Tang
  */
 #include
 #include
@@ -16,7 +16,7 @@
 #include
 #include
 
-extern void die_if_kernel (char *, struct pt_regs *, long);
+extern void die (char *, struct pt_regs *, long);
 
 /*
  * This routine is analogous to expand_stack() but instead grows the
@@ -46,16 +46,15 @@
 void
 ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *regs)
 {
+	int signal = SIGSEGV, code = SEGV_MAPERR;
+	struct vm_area_struct *vma, *prev_vma;
 	struct mm_struct *mm = current->mm;
 	struct exception_fixup fix;
-	struct vm_area_struct *vma, *prev_vma;
 	struct siginfo si;
-	int signal = SIGSEGV;
 	unsigned long mask;
 
 	/*
-	 * If we're in an interrupt or have no user
-	 * context, we must not take the fault..
+	 * If we're in an interrupt or have no user context, we must not take the fault..
 	 */
 	if (in_interrupt() || !mm)
 		goto no_context;
@@ -71,6 +70,8 @@
 		goto check_expansion;
 
   good_area:
+	code = SEGV_ACCERR;
+
 	/* OK, we've got a good vm_area for this memory area.  Check the access permissions: */
 
 #	define VM_READ_BIT	0
@@ -89,12 +90,13 @@
 	if ((vma->vm_flags & mask) != mask)
 		goto bad_area;
 
+  survive:
 	/*
 	 * If for any reason at all we couldn't handle the fault, make
 	 * sure we exit gracefully rather than endlessly redo the
 	 * fault.
	 */
-	switch (handle_mm_fault(mm, vma, address, mask) != 0) {
+	switch (handle_mm_fault(mm, vma, address, mask)) {
 	case 1:
 		++current->min_flt;
 		break;
@@ -147,7 +149,7 @@
 	if (user_mode(regs)) {
 		si.si_signo = signal;
 		si.si_errno = 0;
-		si.si_code = SI_KERNEL;
+		si.si_code = code;
 		si.si_addr = (void *) address;
 		force_sig_info(signal, &si, current);
 		return;
@@ -174,17 +176,29 @@
 	}
 
 	/*
-	 * Oops. The kernel tried to access some bad page. We'll have
-	 * to terminate things with extreme prejudice.
+	 * Oops. The kernel tried to access some bad page. We'll have to terminate things
+	 * with extreme prejudice.
 	 */
-	printk(KERN_ALERT "Unable to handle kernel paging request at "
-	       "virtual address %016lx\n", address);
-	die_if_kernel("Oops", regs, isr);
+	bust_spinlocks(1);
+
+	if (address < PAGE_SIZE)
+		printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+	else
+		printk(KERN_ALERT "Unable to handle kernel paging request at "
+		       "virtual address %016lx\n", address);
+	die("Oops", regs, isr);
+	bust_spinlocks(0);
 	do_exit(SIGKILL);
 	return;
 
   out_of_memory:
 	up_read(&mm->mmap_sem);
+	if (current->pid == 1) {
+		current->policy |= SCHED_YIELD;
+		schedule();
+		down_read(&mm->mmap_sem);
+		goto survive;
+	}
 	printk("VM: killing process %s\n", current->comm);
 	if (user_mode(regs))
 		do_exit(SIGKILL);
diff -urN linux-2.4.13/arch/ia64/mm/init.c linux-2.4.13-lia/arch/ia64/mm/init.c
--- linux-2.4.13/arch/ia64/mm/init.c	Mon Sep 24 15:06:13 2001
+++ linux-2.4.13-lia/arch/ia64/mm/init.c	Wed Oct 10 17:43:54 2001
@@ -167,13 +167,40 @@
 }
 
 void
-show_mem (void)
+show_mem(void)
 {
 	int i, total = 0, reserved = 0;
 	int shared = 0, cached = 0;
 
 	printk("Mem-info:\n");
 	show_free_areas();
+
+#ifdef CONFIG_DISCONTIGMEM
+	{
+		pg_data_t *pgdat = pgdat_list;
+
+		printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
+		do {
+			printk("Node ID: %d\n", pgdat->node_id);
+			for(i = 0; i < pgdat->node_size; i++) {
+				if (PageReserved(pgdat->node_mem_map+i))
+					reserved++;
+				else if (PageSwapCache(pgdat->node_mem_map+i))
+					cached++;
+				else if (page_count(pgdat->node_mem_map + i))
+					shared += page_count(pgdat->node_mem_map + i) - 1;
+			}
+			printk("\t%d pages of RAM\n", pgdat->node_size);
+			printk("\t%d reserved pages\n", reserved);
+			printk("\t%d pages shared\n", shared);
+			printk("\t%d pages swap cached\n", cached);
+			pgdat = pgdat->node_next;
+		} while (pgdat);
+		printk("Total of %ld pages in page table cache\n", pgtable_cache_size);
+		show_buffers();
+		printk("%d free buffer pages\n", nr_free_buffer_pages());
+	}
+#else /* !CONFIG_DISCONTIGMEM */
 	printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
 	i = max_mapnr;
 	while (i-- > 0) {
@@ -191,6 +218,7 @@
 	printk("%d pages swap cached\n", cached);
 	printk("%ld pages in page table cache\n", pgtable_cache_size);
 	show_buffers();
+#endif /* !CONFIG_DISCONTIGMEM */
 }
 
 /*
diff -urN linux-2.4.13/arch/ia64/mm/tlb.c linux-2.4.13-lia/arch/ia64/mm/tlb.c
--- linux-2.4.13/arch/ia64/mm/tlb.c	Tue Jul 31 10:30:08 2001
+++ linux-2.4.13-lia/arch/ia64/mm/tlb.c	Wed Oct 10 17:45:07 2001
@@ -2,7 +2,7 @@
  * TLB support routines.
  *
  * Copyright (C) 1998-2001 Hewlett-Packard Co
- * Copyright (C) 1998-2001 David Mosberger-Tang
+ *	David Mosberger-Tang
  *
  * 08/02/00 A. Mallick
  *		Modified RID allocation for SMP
@@ -41,89 +41,6 @@
 };
 
 /*
- * Seralize usage of ptc.g
- */
-spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED;	/* see */
-
-#if defined(CONFIG_SMP) && !defined(CONFIG_ITANIUM_PTCG)
-
-#include
-
-unsigned long flush_end, flush_start, flush_nbits, flush_rid;
-atomic_t flush_cpu_count;
-
-/*
- * flush_tlb_no_ptcg is called with ptcg_lock locked
- */
-static inline void
-flush_tlb_no_ptcg (unsigned long start, unsigned long end, unsigned long nbits)
-{
-	extern void smp_send_flush_tlb (void);
-	unsigned long saved_tpr = 0;
-	unsigned long flags;
-
-	/*
-	 * Some times this is called with interrupts disabled and causes
-	 * dead-lock; to avoid this we enable interrupt and raise the TPR
-	 * to enable ONLY IPI.
-	 */
-	__save_flags(flags);
-	if (!(flags & IA64_PSR_I)) {
-		saved_tpr = ia64_get_tpr();
-		ia64_srlz_d();
-		ia64_set_tpr(IA64_IPI_VECTOR - 16);
-		ia64_srlz_d();
-		local_irq_enable();
-	}
-
-	spin_lock(&ptcg_lock);
-	flush_rid = ia64_get_rr(start);
-	ia64_srlz_d();
-	flush_start = start;
-	flush_end = end;
-	flush_nbits = nbits;
-	atomic_set(&flush_cpu_count, smp_num_cpus - 1);
-	smp_send_flush_tlb();
-	/*
-	 * Purge local TLB entries. ALAT invalidation is done in ia64_leave_kernel.
-	 */
-	do {
-		asm volatile ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
-		start += (1UL << nbits);
-	} while (start < end);
-
-	ia64_srlz_i();			/* srlz.i implies srlz.d */
-
-	/*
-	 * Wait for other CPUs to finish purging entries.
-	 */
-#if defined(CONFIG_ITANIUM_BSTEP_SPECIFIC)
-	{
-		extern void smp_resend_flush_tlb (void);
-		unsigned long start = ia64_get_itc();
-
-		while (atomic_read(&flush_cpu_count) > 0) {
-			if ((ia64_get_itc() - start) > 400000UL) {
-				smp_resend_flush_tlb();
-				start = ia64_get_itc();
-			}
-		}
-	}
-#else
-	while (atomic_read(&flush_cpu_count)) {
-		/* Nothing */
-	}
-#endif
-	if (!(flags & IA64_PSR_I)) {
-		local_irq_disable();
-		ia64_set_tpr(saved_tpr);
-		ia64_srlz_d();
-	}
-}
-
-#endif /* CONFIG_SMP && !CONFIG_ITANIUM_PTCG */
-
-/*
  * Acquire the ia64_ctx.lock before calling this function!
  */
 void
@@ -162,6 +79,26 @@
 	flush_tlb_all();
 }
 
+static void
+ia64_global_tlb_purge (unsigned long start, unsigned long end, unsigned long nbits)
+{
+	static spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED;
+
+	/* HW requires global serialization of ptc.ga. */
+	spin_lock(&ptcg_lock);
+	{
+		do {
+			/*
+			 * Flush ALAT entries also.
+			 */
+			asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2)
+				      : "memory");
+			start += (1UL << nbits);
+		} while (start < end);
+	}
+	spin_unlock(&ptcg_lock);
+}
+
 void
 __flush_tlb_all (void)
 {
@@ -222,23 +159,15 @@
 	}
 	start &= ~((1UL << nbits) - 1);
 
-#if defined(CONFIG_SMP) && !defined(CONFIG_ITANIUM_PTCG)
-	flush_tlb_no_ptcg(start, end, nbits);
-#else
-	spin_lock(&ptcg_lock);
-	do {
 # ifdef CONFIG_SMP
-		/*
-		 * Flush ALAT entries also.
-		 */
-		asm volatile ("ptc.ga %0,%1;;srlz.i;;" :: "r"(start), "r"(nbits<<2) : "memory");
+	platform_global_tlb_purge(start, end, nbits);
 # else
+	do {
 		asm volatile ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
-# endif
 		start += (1UL << nbits);
 	} while (start < end);
-#endif /* CONFIG_SMP && !defined(CONFIG_ITANIUM_PTCG) */
-	spin_unlock(&ptcg_lock);
+# endif
+
 	ia64_insn_group_barrier();
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
 	ia64_insn_group_barrier();
diff -urN linux-2.4.13/arch/ia64/sn/sn1/llsc4.c linux-2.4.13-lia/arch/ia64/sn/sn1/llsc4.c
--- linux-2.4.13/arch/ia64/sn/sn1/llsc4.c	Thu Apr  5 12:51:47 2001
+++ linux-2.4.13-lia/arch/ia64/sn/sn1/llsc4.c	Thu Oct  4 00:21:40 2001
@@ -35,16 +35,6 @@
 static int inttest=0;
 #endif
 
-#ifdef IA64_SEMFIX_INSN
-#undef IA64_SEMFIX_INSN
-#endif
-#ifdef IA64_SEMFIX
-#undef IA64_SEMFIX
-#endif
-# define IA64_SEMFIX_INSN
-# define IA64_SEMFIX	""
-
-
 /*
  * Test parameter table for AUTOTEST
  */
@@ -192,7 +182,6 @@
 	printk ("  llscfail   \t%s\tForce a failure to test the trigger & error messages\n", fail_enabled ? "on" : "off");
 	printk ("  llscselt   \t%s\tSelective triger on failures\n", selective_trigger ? "on" : "off");
 	printk ("  llscblkadr \t%s\tDump data block addresses\n", dump_block_addrs_opt ? "on" : "off");
-	printk ("  SEMFIX: %s\n", IA64_SEMFIX);
 	printk ("\n");
 }
 __setup("autotest", autotest_enable);
diff -urN linux-2.4.13/arch/ia64/tools/print_offsets.c linux-2.4.13-lia/arch/ia64/tools/print_offsets.c
--- linux-2.4.13/arch/ia64/tools/print_offsets.c	Tue Jul 31 10:30:09 2001
+++ linux-2.4.13-lia/arch/ia64/tools/print_offsets.c	Thu Oct  4 00:21:52 2001
@@ -57,11 +57,8 @@
     { "IA64_TASK_PROCESSOR_OFFSET", offsetof (struct task_struct, processor) },
     { "IA64_TASK_THREAD_OFFSET", offsetof (struct task_struct, thread) },
     { "IA64_TASK_THREAD_KSP_OFFSET", offsetof (struct task_struct, thread.ksp) },
-#ifdef CONFIG_IA32_SUPPORT
-    { "IA64_TASK_THREAD_SIGMASK_OFFSET",offsetof (struct task_struct, thread.un.sigmask) },
-#endif
 #ifdef CONFIG_PERFMON
-    { "IA64_TASK_PFM_NOTIFY_OFFSET", offsetof(struct task_struct, thread.pfm_pend_notify) },
+    { "IA64_TASK_PFM_MUST_BLOCK_OFFSET",offsetof(struct task_struct, thread.pfm_must_block) },
 #endif
     { "IA64_TASK_PID_OFFSET", offsetof (struct task_struct, pid) },
     { "IA64_TASK_MM_OFFSET", offsetof (struct task_struct, mm) },
@@ -165,17 +162,18 @@
     { "IA64_SIGCONTEXT_FR6_OFFSET", offsetof (struct sigcontext, sc_fr[6]) },
     { "IA64_SIGCONTEXT_PR_OFFSET", offsetof (struct sigcontext, sc_pr) },
     { "IA64_SIGCONTEXT_R12_OFFSET", offsetof (struct sigcontext, sc_gr[12]) },
+    { "IA64_SIGCONTEXT_RBS_BASE_OFFSET",offsetof (struct sigcontext, sc_rbs_base) },
+    { "IA64_SIGCONTEXT_LOADRS_OFFSET", offsetof (struct sigcontext, sc_loadrs) },
     { "IA64_SIGFRAME_ARG0_OFFSET", offsetof (struct sigframe, arg0) },
     { "IA64_SIGFRAME_ARG1_OFFSET", offsetof (struct sigframe, arg1) },
     { "IA64_SIGFRAME_ARG2_OFFSET", offsetof (struct sigframe, arg2) },
-    { "IA64_SIGFRAME_RBS_BASE_OFFSET",  offsetof (struct sigframe, rbs_base) },
     { "IA64_SIGFRAME_HANDLER_OFFSET", offsetof (struct sigframe, handler) },
     { "IA64_SIGFRAME_SIGCONTEXT_OFFSET", offsetof (struct sigframe, sc) },
     { "IA64_CLONE_VFORK", CLONE_VFORK },
     { "IA64_CLONE_VM", CLONE_VM },
     { "IA64_CPU_IRQ_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.irq_count) },
     { "IA64_CPU_BH_COUNT_OFFSET", offsetof (struct cpuinfo_ia64, irq_stat.f.bh_count) },
-    { "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET", offsetof (struct cpuinfo_ia64, phys_stacked_size_p8) },
+    { "IA64_CPU_PHYS_STACKED_SIZE_P8_OFFSET",offsetof (struct cpuinfo_ia64, phys_stacked_size_p8)},
 };
 
 static const char *tabs = "\t\t\t\t\t\t\t\t\t\t";
diff -urN linux-2.4.13/arch/parisc/kernel/traps.c linux-2.4.13-lia/arch/parisc/kernel/traps.c
--- linux-2.4.13/arch/parisc/kernel/traps.c	Wed Oct 10 16:31:44 2001
+++ linux-2.4.13-lia/arch/parisc/kernel/traps.c	Wed Oct 24 18:17:29 2001
@@ -43,7 +43,6 @@
 
 static inline void console_verbose(void)
 {
-	extern int console_loglevel;
 	console_loglevel = 15;
 }
 
diff -urN linux-2.4.13/drivers/acpi/acpiconf.c linux-2.4.13-lia/drivers/acpi/acpiconf.c
--- linux-2.4.13/drivers/acpi/acpiconf.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/acpiconf.c	Wed Oct 10 17:47:00 2001
@@ -0,0 +1,593 @@
+/*
+ * acpiconf.c - ACPI based kernel configuration
+ *
+ * Copyright (C) 2000-2001 Intel Corp.
+ * Copyright (C) 2000-2001 J.I. Lee
+ *
+ * Revision History:
+ *	9/15/2000	J.I.
+ *		Major revision: for new ACPI initialization requirements
+ *	11/15/2000	J.I.
+ *		Major revision: ACPI 2.0 tables support
+ *	04/23/2001	J.I.
+ *		Rewrote functions to support multiple _PRTs of child P2Ps
+ *		under root pci bus
+ */
+
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "acpi.h"
+#include "osconf.h"
+#include "acpiconf.h"
+
+
+static int acpi_cf_initialized __initdata = 0;
+
+acpi_status __init
+acpi_cf_init (
+	void	*rsdp
+	)
+{
+	acpi_status status;
+
+	acpi_os_bind_osd(ACPI_CF_PHASE_BOOTTIME);
+
+	status = acpi_initialize_subsystem ();
+	if (ACPI_FAILURE(status)) {
+		printk ("Acpi cfg:initialize_subsystem error=0x%x\n", status);
+		return status;
+	}
+	dprintk(("Acpi cfg:initialize_subsystem pass\n"));
+
+	status = acpi_load_tables ();
+	if (ACPI_FAILURE(status)) {
+		printk ("Acpi cfg:load firmware tables error=0x%x\n", status);
+		acpi_terminate();
+		return status;
+	}
+	dprintk(("Acpi cfg:load firmware tables pass\n"));
+
+	status = acpi_enable_subsystem (ACPI_FULL_INITIALIZATION);
+	if (ACPI_FAILURE(status)) {
+		printk ("Acpi cfg:enable_subsystem error=0x%x\n", status);
+		acpi_terminate();
+		return status;
+	}
+	dprintk(("Acpi cfg:enable_subsystem pass\n"));
+
+	acpi_cf_initialized++;
+
+	return AE_OK;
+}
+
+
+acpi_status __init
+acpi_cf_terminate ( void )
+{
+	acpi_status status;
+
+	if (! ACPI_CF_INITIALIZED()) {
+		acpi_os_bind_osd(ACPI_CF_PHASE_RUNTIME);
+		return AE_ERROR;
+	}
+
+	status = acpi_disable ();
+	if (ACPI_FAILURE(status)) {
+		printk ("Acpi cfg:disable fail=0x%x\n", status);
+		/* fall thru...*/
+	}
+
+	status = acpi_terminate ();
+	if (ACPI_FAILURE(status)) {
+		printk ("Acpi cfg:acpi terminate error=0x%x\n", status);
+		/* fall thru...*/
+	}
+
+	acpi_cf_cleanup();
+	acpi_os_bind_osd(ACPI_CF_PHASE_RUNTIME);
+
+	acpi_cf_initialized--;
+
+	return status;
+}
+
+
+acpi_status __init
+acpi_cf_get_pci_vectors (
+	struct pci_vector_struct	**vectors,
+	int				*num_pci_vectors
+	)
+{
+	acpi_status status;
+	void *prts;
+
+	if (! ACPI_CF_INITIALIZED()) {
+		status = acpi_cf_init((void *)efi.acpi);
+		if (ACPI_FAILURE (status))
+			return status;
+	}
+
+	*vectors = NULL;
+	*num_pci_vectors = 0;
+
+	status = acpi_cf_get_prt (&prts);
+	if (ACPI_FAILURE (status)) {
+		printk("Acpi cfg: get prt fail\n");
+		return status;
+	}
+
+	status = acpi_cf_convert_prt_to_vectors (prts, vectors, num_pci_vectors);
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+	if (ACPI_SUCCESS(status)) {
+		acpi_cf_print_pci_vectors (*vectors, *num_pci_vectors);
+	}
+#endif
+	printk("Acpi cfg: get PCI interrupt vectors %s\n",
+		(ACPI_SUCCESS(status))?"pass":"fail");
+
+	return status;
+}
+
+
+static pci_routing_table *pci_routing_tables[PCI_MAX_BUS] __initdata = {NULL};
+
+
+typedef struct _acpi_rpb {
+	NATIVE_UINT	rpb_busnum;
+	NATIVE_UINT	lastbusnum;
+	acpi_handle	rpb_handle;
+} acpi_rpb_t;
+
+
+static acpi_status __init
+acpi_cf_evaluate_method (
+	acpi_handle	handle,
+	UINT8		*method_name,
+	NATIVE_UINT	*nuint
+	)
+{
+	UINT32 tnuint = 0;
+	acpi_status status;
+
+	acpi_buffer ret_buf;
+	acpi_object *ext_obj;
+	UINT8 buf[PATHNAME_MAX];
+
+
+	ret_buf.length = PATHNAME_MAX;
+	ret_buf.pointer = (void *) buf;
+
+	status = acpi_evaluate_object(handle, method_name, NULL, &ret_buf);
+	if (ACPI_FAILURE(status)) {
+		if (status == AE_NOT_FOUND) {
+			printk("Acpi cfg: no %s found\n", method_name);
+		} else {
+			printk("Acpi cfg: %s fail=0x%x\n", method_name, status);
+		}
+	} else {
+		ext_obj = (acpi_object *) ret_buf.pointer;
+
+		switch (ext_obj->type) {
+		case ACPI_TYPE_INTEGER:
+			tnuint = (NATIVE_UINT) ext_obj->integer.value;
+			break;
+		default:
+			printk("Acpi cfg: %s obj type incorrect\n", method_name);
+			status = AE_TYPE;
+			break;
+		}
+	}
+
+	*nuint = tnuint;
+	return (status);
+}
+
+
+static acpi_status __init
+acpi_cf_evaluate_PRT (
+	acpi_handle		handle,
+	pci_routing_table	**prt
+	)
+{
+	acpi_buffer acpi_buffer;
+	acpi_status status;
+
+	acpi_buffer.length = 0;
+	acpi_buffer.pointer = NULL;
+
+	status = acpi_get_irq_routing_table (handle, &acpi_buffer);
+
+	switch (status) {
+	case AE_BUFFER_OVERFLOW:
+		dprintk(("Acpi cfg: _PRT found. need %d bytes\n",
+			acpi_buffer.length));
+		break;	/* found */
+	default:
+		printk("Acpi cfg: _PRT fail=0x%x\n", status);
+	case AE_NOT_FOUND:
+		return status;
+	}
+
+	*prt = (pci_routing_table *) acpi_os_callocate (acpi_buffer.length);
+	if (!*prt) {
+		printk("Acpi cfg: callocate %d bytes for _PRT fail\n",
+			acpi_buffer.length);
+		return AE_NO_MEMORY;
+	}
+	acpi_buffer.pointer = (void *) *prt;
+
+	status = acpi_get_irq_routing_table (handle, &acpi_buffer);
+	if (ACPI_FAILURE(status)) {
+		printk("Acpi cfg: _PRT fail=0x%x.\n", status);
+		acpi_os_free(prt);
+	}
+
+	return status;
+}
+
+static acpi_status __init
+acpi_cf_get_root_pci_callback (
+	acpi_handle	handle,
+	UINT32		Level,
+	void		*context,
+	void		**retval
+	)
+{
+	NATIVE_UINT busnum = 0;
+	acpi_status status;
+	acpi_rpb_t rpb;
+	pci_routing_table *prt;
+
+	UINT8 path_name[PATHNAME_MAX];
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+	acpi_buffer ret_buf;
+
+	ret_buf.length = PATHNAME_MAX;
+	ret_buf.pointer = (void *) path_name;
+
+	status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+	memset(path_name, 0, sizeof (path_name));
+#endif
+
+	/*
+	 * get bus number of this pci root bridge
+	 */
+	status = acpi_cf_evaluate_method(handle, METHOD_NAME__BBN, &busnum);
+	if (ACPI_FAILURE(status)) {
+		printk("Acpi cfg:%s evaluate _BBN fail=0x%x\n",
+			path_name, status);
+		return (status);
+	}
+	printk("Acpi cfg:%s ROOT PCI bus %ld\n", path_name, busnum);
+
+	/*
+	 * evaluate root pci bridge's _CRS for Bus number range for child P2P
+	 * (bus min/max/len) - not yet.
+	 */
+
+	/*
+	 * get immediate _PRT of this root pci bridge if any
+	 */
+	status = acpi_cf_evaluate_PRT (handle, &prt);
+	switch(status) {
+	case AE_NOT_FOUND:
+		break;
+	default:
+		if (ACPI_FAILURE(status)) {
+			printk("Acpi cfg:%s _PRT fail=0x%x\n",
+				path_name, status);
+			return status;
+		}
+		dprintk(("Acpi cfg:%s bus %ld got _PRT\n", path_name, busnum));
+		acpi_cf_add_to_pci_routing_tables (busnum, prt);
+		break;
+	}
+
+
+	/*
+	 * walk down this root pci bridge to get _PRTs if any
+	 */
+	rpb.rpb_busnum = rpb.lastbusnum = busnum;
+	rpb.rpb_handle = handle;
+	status = acpi_walk_namespace ( ACPI_TYPE_DEVICE,
+					handle,
+					ACPI_UINT32_MAX,
+					acpi_cf_get_prt_callback,
+					&rpb,
+					NULL );
+	if (ACPI_FAILURE(status))
+		printk("Acpi cfg:%s walk namespace for _PRT error=0x%x\n",
+			path_name, status);
+
+	return (status);
+}
+
+
+/*
+ * handle _PRTs of immediate P2Ps of root pci.
+ */
+static acpi_status __init
+acpi_cf_associate_prt_to_bus (
+	acpi_handle	handle,
+	acpi_rpb_t	*rpb,
+	NATIVE_UINT	*retbusnum,
+	NATIVE_UINT	depth
+	)
+{
+	acpi_status status;
+	UINT32 segbus;
+	NATIVE_UINT devfn;
+	UINT8 bn;
+
+	UINT8 path_name[PATHNAME_MAX];
+	acpi_pci_id pci_id;
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+	acpi_buffer ret_buf;
+
+	ret_buf.length = PATHNAME_MAX;
+	ret_buf.pointer = (void *) path_name;
+
+	status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+	memset(path_name, 0, sizeof (path_name));
+#endif
+
+	/*
+	 * get devfn from _ADR
+	 */
+	status = acpi_cf_evaluate_method(handle, METHOD_NAME__ADR, &devfn);
+	if (ACPI_FAILURE(status)) {
+		*retbusnum = rpb->rpb_busnum + 1;
+		printk("Acpi cfg:%s _ADR fail=0x%x. Set busnum to %ld\n",
+			path_name, status, *retbusnum);
+		return AE_OK;
+	}
+	dprintk(("Acpi cfg:%s _ADR =0x%x\n", path_name, (UINT32)devfn));
+
+
+	/*
+	 * access pci config space for bus number
+	 * segbus = from rpb, devfn = from _ADR
+	 */
+	pci_id.segment = 0;
+	pci_id.bus = (u16)(rpb->rpb_busnum & 0xffffffff);
+	pci_id.device = (u16)((devfn >> 16) & 0xffff);
+	pci_id.function = (u16)(devfn & 0xffff);
+
+	status = acpi_os_read_pci_configuration(&pci_id, PCI_PRIMARY_BUS,
+						&bn, 8);
+	if (ACPI_FAILURE(status)) {
+		*retbusnum = rpb->rpb_busnum + 1;
+		printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
+			path_name, status, segbus, (UINT32)devfn,
+			PCI_PRIMARY_BUS);
+		printk("Acpi cfg:%s Set busnum to %ld\n",
+			path_name, *retbusnum);
+		return AE_OK;
+	}
+	dprintk(("Acpi cfg:%s pribus %d\n", path_name, bn));
+
+
+	status = acpi_os_read_pci_configuration(&pci_id, PCI_SECONDARY_BUS,
+						&bn, 8);
+	if (ACPI_FAILURE(status)) {
+		*retbusnum = rpb->rpb_busnum + 1;
+		printk("Acpi cfg:%s pci read fail=0x%x. b:df:a=%x:%x:%x\n",
+			path_name, status, segbus, (UINT32)devfn,
+			PCI_SECONDARY_BUS);
+		printk("Acpi cfg:%s Set busnum to %ld\n",
+			path_name, *retbusnum);
+		return AE_OK;
+	}
+	dprintk(("Acpi cfg:%s busnum %d\n", path_name, bn));
+
+	*retbusnum = (NATIVE_UINT)bn;
+	return AE_OK;
+}
+
+
+static acpi_status __init
+acpi_cf_get_prt (
+	void **prts
+	)
+{
+	acpi_status status;
+
+	status = acpi_get_devices ( PCI_ROOT_HID_STRING,
+				acpi_cf_get_root_pci_callback,
+				NULL,
+				NULL );
+
+	if (ACPI_FAILURE(status)) {
+		printk("Acpi cfg:get_device PCI ROOT HID error=0x%x\n", status);
+	}
+
+	*prts = (void *)pci_routing_tables;
+
+	return status;
+}
+
+static acpi_status __init
+acpi_cf_get_prt_callback (
+	acpi_handle	handle,
+	UINT32		Level,
+	void		*context,
+	void		**retval
+	)
+{
+	pci_routing_table *prt;
+	NATIVE_UINT busnum = 0;
+	NATIVE_UINT temp = 0x0F;
+	acpi_status status;
+
+	UINT8 path_name[PATHNAME_MAX];
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+	acpi_buffer ret_buf;
+
+	ret_buf.length = PATHNAME_MAX;
+	ret_buf.pointer = (void *) path_name;
+
+	status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &ret_buf);
+#else
+	memset(path_name, 0, sizeof (path_name));
+#endif
+
+	status = acpi_cf_evaluate_PRT (handle, &prt);
+	switch(status) {
+	case AE_NOT_FOUND:
+		return AE_OK;
+	default:
+		if (ACPI_FAILURE(status)) {
+			printk("Acpi cfg:%s _PRT fail=0x%x\n",
+				path_name, status);
+			return status;
+		}
+	}
+
+	/*
+	 * evaluate _STA in case this device does not exist
+	 */
+	status = acpi_cf_evaluate_method(handle, METHOD_NAME__STA, &temp);
+	switch(status) {
+	case AE_NOT_FOUND:
+		break;
+	default:
+		if (ACPI_FAILURE(status)) {
+			printk("Acpi cfg:%s _STA fail=0x%x\n",
+				path_name, status);
+			return status;
+		}
+		if (!(temp & ACPI_STA_DEVICE_PRESENT)) {
+			dprintk(("Acpi cfg:%s not exist. _PRT discarded\n",
+				path_name));
+			acpi_os_free(prt);
+			return AE_OK;
+		}
+		break;
+	}
+
+	/*
+	 * associate a bus number to this _PRT since
+	 * this _PRT is not on root pci bridge
+	 */
+	acpi_cf_associate_prt_to_bus(handle, context, &busnum, 0);
+
+	printk("Acpi cfg:%s busnum %ld got _PRT\n", path_name, busnum);
+	acpi_cf_add_to_pci_routing_tables (busnum, prt);
+
+	return AE_OK;
+}
+
+
+static void __init
+acpi_cf_add_to_pci_routing_tables (
+	NATIVE_UINT		busnum,
+	pci_routing_table	*prt
+	)
+{
+	if ( busnum >= PCI_MAX_BUS ) {
+		printk("Acpi cfg:invalid pci bus number %ld\n", busnum);
+		acpi_os_free(prt);
+		return;
+	}
+
+	if (pci_routing_tables[busnum]) {
+		printk("Acpi cfg:duplicate PRT for pci bus %ld. overiding...\n", busnum);
+		acpi_os_free(pci_routing_tables[busnum]);
+	}
+
+	pci_routing_tables[busnum] = prt;
+}
+
+
+#define DUMPVECTOR(pv) printk("PCI bus=0x%x id=0x%x pin=0x%x irq=0x%x\n", pv->bus, pv->pci_id, pv->pin, pv->irq);
+
+static acpi_status __init
+acpi_cf_convert_prt_to_vectors (
+	void				*prts,
+	struct pci_vector_struct	**vectors,
+	int				*num_pci_vectors
+	)
+{
+	struct pci_vector_struct *pvec;
+	pci_routing_table **pprts, *prt, *prtf;
+	int nvec = 0;
+	int i;
+
+
+	pprts = (pci_routing_table **)prts;
+
+	for ( i = 0; i < PCI_MAX_BUS; i++) {
+		prt = *pprts++;
+		if (prt) {
+			for ( ; prt->length > 0; nvec++) {
+				prt = (pci_routing_table *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
+			}
+		}
+	}
+
+	*num_pci_vectors = nvec;
+	*vectors = acpi_os_callocate (sizeof(struct pci_vector_struct) * nvec);
+	if (!*vectors) {
+		printk("Acpi cfg: callocate for pci_vector error\n");
+		return AE_NO_MEMORY;
+	}
+
+	pvec = *vectors;
+	pprts = (pci_routing_table **)prts;
+
+	for ( i = 0; i < PCI_MAX_BUS; i++) {
+		prt = prtf = *pprts++;
+		if (prt) {
+			for ( ; prt->length > 0; pvec++) {
+				pvec->bus = (UINT16)i;
+				pvec->pci_id = prt->address;
+				pvec->pin = (UINT8)prt->pin;
+				pvec->irq = (UINT8)prt->source_index;
+
+				prt = (pci_routing_table *) ((NATIVE_UINT)prt + (NATIVE_UINT)prt->length);
+			}
+			acpi_os_free((void *)prtf);
+		}
+	}
+
+	return AE_OK;
+}
+
+
+void __init
+acpi_cf_cleanup ( void )
+{
+	/* nothing to free, pci_vectors are used by the kernel */
+}
+
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+void __init
+acpi_cf_print_pci_vectors (
+	struct pci_vector_struct	*vectors,
+	int				num_pci_vectors
+	)
+{
+	struct pci_vector_struct *pvec;
+	int i;
+
+	printk("number of PCI interrupt vectors = %d\n", num_pci_vectors);
+
+	pvec = vectors;
+	for (i = 0; i < num_pci_vectors; i++) {
+		DUMPVECTOR(pvec);
+		pvec++;
+	}
+}
+#endif
diff -urN linux-2.4.13/drivers/acpi/acpiconf.h linux-2.4.13-lia/drivers/acpi/acpiconf.h
--- linux-2.4.13/drivers/acpi/acpiconf.h	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/acpiconf.h	Fri Oct 12 09:03:25 2001
@@ -0,0 +1,63 @@
+/*
+ * acpiconf.h - ACPI based kernel configuration
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee
+ */
+
+#include
+
+#define PCI_MAX_BUS		0x100
+#define ACPI_STA_DEVICE_PRESENT	0x01
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_DEBUG
+#define ACPI_CF_INITIALIZED()	(acpi_cf_initialized > 0)
+#undef dprintk
+#define dprintk(a) printk a
+#else
+#define ACPI_CF_INITIALIZED()	1
+#undef dprintk
+#define dprintk(a)
+#endif
+
+
+extern
+void __init
+acpi_os_bind_osd(int acpi_phase);
+
+
+static
+acpi_status __init
+acpi_cf_get_prt (void **prts);
+
+
+static
+acpi_status __init
+acpi_cf_get_prt_callback (
+	acpi_handle	handle,
+	UINT32		level,
+	void		*context,
+	void		**retval
+	);
+
+
+static
+void __init
+acpi_cf_add_to_pci_routing_tables (
+	NATIVE_UINT		busnum,
+	pci_routing_table	*prt
+	);
+
+
+static
+acpi_status __init
+acpi_cf_convert_prt_to_vectors (
+	void				*prts,
+	struct pci_vector_struct	**vectors,
+	int				*num_pci_vectors
+	);
+
+
+void __init
+acpi_cf_cleanup ( void );
+
diff -urN linux-2.4.13/drivers/acpi/hardware/hwacpi.c linux-2.4.13-lia/drivers/acpi/hardware/hwacpi.c
--- linux-2.4.13/drivers/acpi/hardware/hwacpi.c	Mon Sep 24 15:06:41 2001
+++ linux-2.4.13-lia/drivers/acpi/hardware/hwacpi.c	Thu Oct  4 00:21:40 2001
@@ -196,6 +196,7 @@
 {
 
 	acpi_status status = AE_NO_HARDWARE_RESPONSE;
+	u32 retries = 20;
 
 
 	FUNCTION_TRACE ("Hw_set_mode");
@@ -220,11 +221,14 @@
 
 	/* Give the platform some time to react */
 
-	acpi_os_stall (5000);
+	while (retries-- > 0) {
+		acpi_os_stall (5000);
 
-	if (acpi_hw_get_mode () == mode) {
-		ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
-		status = AE_OK;
+		if (acpi_hw_get_mode () == mode) {
+			ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "Mode %X successfully enabled\n", mode));
+			status = AE_OK;
+			break;
+		}
 	}
 
 	return_ACPI_STATUS (status);
diff -urN linux-2.4.13/drivers/acpi/include/actypes.h linux-2.4.13-lia/drivers/acpi/include/actypes.h
--- linux-2.4.13/drivers/acpi/include/actypes.h	Mon Sep 24 15:06:42 2001
+++ linux-2.4.13-lia/drivers/acpi/include/actypes.h	Thu Oct  4 00:21:40 2001
@@ -60,6 +60,7 @@
 typedef int				INT32;
 typedef unsigned int			UINT32;
 typedef COMPILER_DEPENDENT_UINT64	UINT64;
+typedef long				INT64;
 
 typedef UINT64				NATIVE_UINT;
 typedef INT64				NATIVE_INT;
diff -urN linux-2.4.13/drivers/acpi/include/acutils.h linux-2.4.13-lia/drivers/acpi/include/acutils.h
--- linux-2.4.13/drivers/acpi/include/acutils.h	Mon Sep 24 15:06:42 2001
+++ linux-2.4.13-lia/drivers/acpi/include/acutils.h	Wed Oct 24 18:17:40 2001
@@ -383,6 +383,7 @@
 /* Method name strings */
 
 #define METHOD_NAME__HID	"_HID"
+#define METHOD_NAME__CID	"_CID"
 #define METHOD_NAME__UID	"_UID"
 #define METHOD_NAME__ADR	"_ADR"
 #define METHOD_NAME__STA	"_STA"
@@ -396,6 +397,11 @@
 	NATIVE_CHAR		*object_name,
 	acpi_namespace_node	*device_node,
 	acpi_integer		*address);
+
+acpi_status
+acpi_ut_execute_CID (
+	acpi_namespace_node	*device_node,
+	ACPI_DEVICE_ID		*cid);
 
 acpi_status
 acpi_ut_execute_HID (
diff -urN linux-2.4.13/drivers/acpi/include/platform/acgcc.h linux-2.4.13-lia/drivers/acpi/include/platform/acgcc.h
--- linux-2.4.13/drivers/acpi/include/platform/acgcc.h	Wed Oct 24 10:17:44 2001
+++ linux-2.4.13-lia/drivers/acpi/include/platform/acgcc.h	Wed Oct 24 18:17:50 2001
@@ -42,11 +42,32 @@
 
 /*! [Begin] no source code translation */
 
+#include
+
+#include
 #include
 
 #define halt()		ia64_pal_halt_light()	/* PAL_HALT[_LIGHT] */
 #define safe_halt()	ia64_pal_halt(1)	/* PAL_HALT */
 
+static inline void
+wbinvd (void)
+{
+	unsigned long flags, vector, position = 0;
+	long status;
+
+	do {
+		ia64_clear_ic(flags);
+		status = ia64_pal_cache_flush(0x3, (PAL_CACHE_FLUSH_INVALIDATE
+						    | PAL_CACHE_FLUSH_CHK_INTRS),
+					      &position, &vector);
+		local_irq_restore(flags);
+		if (status == 1) {
+			ia64_eoi();
+			hw_resend_irq(NULL, vector);
+		}
+	} while (status == 1);
+}
 
 #define ACPI_ACQUIRE_GLOBAL_LOCK(GLptr, Acq) \
 	do { \
diff -urN linux-2.4.13/drivers/acpi/namespace/nsxfobj.c linux-2.4.13-lia/drivers/acpi/namespace/nsxfobj.c
--- linux-2.4.13/drivers/acpi/namespace/nsxfobj.c	Mon Sep 24 15:06:43 2001
+++ linux-2.4.13-lia/drivers/acpi/namespace/nsxfobj.c	Wed Oct 24 18:18:06 2001
@@ -588,6 +588,7 @@
 	acpi_namespace_node	*node;
 	u32			flags;
 	ACPI_DEVICE_ID		device_id;
+	ACPI_DEVICE_ID		compatible_id;
 	ACPI_GET_DEVICES_INFO	*info;
 
 
@@ -628,7 +629,17 @@
 		}
 
 		if (STRNCMP (device_id.buffer, info->hid, sizeof (device_id.buffer)) != 0) {
-			return (AE_OK);
+			status = acpi_ut_execute_CID (node, &compatible_id);
+			if (status == AE_NOT_FOUND) {
+				return (AE_OK);
+			}
+			else if (ACPI_FAILURE (status)) {
+				return (AE_CTRL_DEPTH);
+			}
+
+			if (STRNCMP (compatible_id.buffer, info->hid, sizeof (compatible_id.buffer)) != 0) {
+				return (AE_OK);
+			}
 		}
 	}
 
diff -urN linux-2.4.13/drivers/acpi/os.c linux-2.4.13-lia/drivers/acpi/os.c
--- linux-2.4.13/drivers/acpi/os.c	Mon Sep 24 15:06:43 2001
+++ linux-2.4.13-lia/drivers/acpi/os.c	Thu Oct  4 00:21:40 2001
@@ -31,6 +31,8 @@
  *	- Fixed improper kernel_thread parameters
  */
 
+#include
+
 #include
 #include
 #include
@@ -48,7 +50,8 @@
 
 #ifdef _IA64
 #include
-#endif
+#include
+#endif
 
 #define _COMPONENT	ACPI_OS_SERVICES
 	MODULE_NAME	("os")
@@ -61,6 +64,33 @@
 
 
 /*****************************************************************************
+ *                            Function Binding
+ *****************************************************************************/
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+#include "osconf.h"
+
+struct acpi_osd acpi_osd_rt = {
+	/* these are runtime osd entries that differ from boottime entries */
+	acpi_os_allocate_rt,
+	acpi_os_callocate_rt,
+	acpi_os_free_rt,
+	acpi_os_queue_for_execution_rt,
+	acpi_os_read_pci_configuration_rt,
+	acpi_os_write_pci_configuration_rt,
+	acpi_os_stall_rt
+};
+#else
+#define acpi_os_allocate_rt			acpi_os_allocate
+#define acpi_os_callocate_rt			acpi_os_callocate
+#define acpi_os_free_rt				acpi_os_free
+#define acpi_os_queue_for_execution_rt		acpi_os_queue_for_execution
+#define acpi_os_read_pci_configuration_rt	acpi_os_read_pci_configuration
+#define acpi_os_write_pci_configuration_rt	acpi_os_write_pci_configuration
+#define acpi_os_stall_rt			acpi_os_stall
+#endif
+
+/*****************************************************************************
 *                            Debugger Stuff
 *****************************************************************************/
 
@@ -137,13 +167,13 @@
 }
 
 void *
-acpi_os_allocate(u32 size)
+acpi_os_allocate_rt(u32 size)
 {
 	return kmalloc(size, GFP_KERNEL);
 }
 
 void *
-acpi_os_callocate(u32 size)
+acpi_os_callocate_rt(u32 size)
 {
 	void *ptr = acpi_os_allocate(size);
 	if (ptr)
@@ -153,7 +183,7 @@
 }
 
 void
-acpi_os_free(void *ptr)
+acpi_os_free_rt(void *ptr)
 {
 	kfree(ptr);
 }
@@ -233,12 +263,105 @@
 	(*acpi_irq_handler)(acpi_irq_context);
 }
 
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+struct irqaction acpiirqaction;
+/*
+ * codes from request_irq and free_irq.
+ */
 acpi_status
 acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
 {
-#ifdef _IA64
+	struct irqaction *act;
+	int retval;
+
+	if (irq >= NR_IRQS) {
+		printk("ACPI: install SCI handler fail: invalid irq%d\n", irq);
+		return AE_ERROR;
+	}
+
+	if (!handler) {
+		printk("ACPI: install SCI handler fail: invalid handler\n");
+		return AE_ERROR;
+	}
+
+	act = & acpiirqaction;
 	irq = isa_irq_to_vector(irq);
-#endif /*_IA64*/
+
 	acpi_irq_irq = irq;
 	acpi_irq_handler = handler;
 	acpi_irq_context = context;
+
+	act->handler = acpi_irq;
+	act->flags = SA_INTERRUPT | SA_SHIRQ;
+	act->mask = 0;
+	act->name = "acpi";
+	act->next = NULL;
+	act->dev_id = acpi_irq;
+
+	retval = setup_irq(irq, act);
+	if (retval) {
+		printk("ACPI: install SCI handler fail: setup_irq\n");
+		acpi_irq_handler = NULL;
+		return AE_ERROR;
+	}
+	printk("ACPI: install SCI %d handler pass\n", irq);
+
+	return AE_OK;
+}
+
+acpi_status
+acpi_os_remove_interrupt_handler(u32 irq, OSD_HANDLER handler)
+{
+	irq_desc_t *desc;
+	struct irqaction **p;
+	unsigned long flags;
+
+	if (!acpi_irq_handler)
+		return AE_OK;
+
+	irq = isa_irq_to_vector(irq);
+	if (irq != acpi_irq_irq) return AE_ERROR;
+
+	acpi_irq_handler = NULL;
+
+	desc = irq_desc(irq);
+	spin_lock_irqsave(&desc->lock,flags);
+	p = &desc->action;
+	for (;;) {
+		struct irqaction * action = *p;
+		if (action) {
+			struct irqaction **pp = p;
+			p = &action->next;
+			if (action->dev_id != acpi_irq)
+				continue;
+
+			/* Found it - now remove it from the list of entries */
+			*pp = action->next;
+			if (!desc->action) {
+				desc->status |= IRQ_DISABLED;
+				desc->handler->shutdown(irq);
+			}
+			spin_unlock_irqrestore(&desc->lock,flags);
+
+#ifdef CONFIG_SMP
+			/* Wait to make sure it's not being used on another CPU */
+			while (desc->status & IRQ_INPROGRESS)
+				barrier();
+#endif
+			return AE_OK;
+		}
+		printk("ACPI: Trying to free free IRQ%d\n",irq);
+		spin_unlock_irqrestore(&desc->lock,flags);
+		return AE_OK;
+	}
+
+	return AE_OK;
+}
+
+#else
+acpi_status
+acpi_os_install_interrupt_handler(u32 irq, OSD_HANDLER handler, void *context)
+{
+	acpi_irq_irq = irq;
+	acpi_irq_handler = handler;
+	acpi_irq_context = context;
@@ -267,6 +390,7 @@
 
 	return AE_OK;
 }
+#endif
 
 /*
  * Running in interpreter thread context, safe to sleep
@@ -280,7 +404,7 @@
 }
 
 void
-acpi_os_stall(u32 us)
+acpi_os_stall_rt(u32 us)
 {
 	if (us > 10000) {
 		mdelay(us / 1000);
@@ -322,7 +446,7 @@
 acpi_status
 acpi_os_write_port(
 	ACPI_IO_ADDRESS	port,
-	u32		value,
+	NATIVE_UINT	value,
 	u32		width)
 {
 	switch (width)
@@ -375,7 +499,7 @@
 acpi_status
 acpi_os_write_memory(
 	ACPI_PHYSICAL_ADDRESS	phys_addr,
-	u32			value,
+	NATIVE_UINT		value,
 	u32			width)
 {
 	switch (width)
@@ -468,7 +592,7 @@
 #else /*CONFIG_ACPI_PCI*/
 
 acpi_status
-acpi_os_read_pci_configuration (
+acpi_os_read_pci_configuration_rt (
 	acpi_pci_id	*pci_id,
 	u32		reg,
 	void		*value,
@@ -502,10 +626,10 @@
 }
 
 acpi_status
-acpi_os_write_pci_configuration (
+acpi_os_write_pci_configuration_rt (
 	acpi_pci_id	*pci_id,
 	u32		reg,
-	u32		value,
+	NATIVE_UINT	value,
 	u32		width)
 {
 	int devfn = PCI_DEVFN(pci_id->device, pci_id->function);
@@ -620,6 +744,22 @@
 		acpi_os_free(dpc);
 	}
 }
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG
+/*
+ * Queue for interpreter thread
+ */
+
+acpi_status
+acpi_os_queue_for_execution_rt(
+	u32			priority,
+	OSD_EXECUTION_CALLBACK	callback,
+	void			*context)
+{
+	(*callback)(context);
+	return AE_OK;
+}
+#endif
 
 acpi_status
 acpi_os_queue_for_execution(
diff -urN linux-2.4.13/drivers/acpi/osconf.c linux-2.4.13-lia/drivers/acpi/osconf.c
--- linux-2.4.13/drivers/acpi/osconf.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/osconf.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,286 @@
+/*
+ * osconf.c - ACPI OS-dependent functions for Kernel Boot/Configuration time
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "acpi.h"
+#include "osconf.h"
+
+
+static void * __init acpi_os_allocate_bt(u32 size);
+static void * __init acpi_os_callocate_bt(u32 size);
+static void __init acpi_os_free_bt(void *ptr);
+static void __init acpi_os_stall_bt(u32 us);
+
+static acpi_status __init
+acpi_os_queue_for_execution_bt(
+	u32			priority,
+	OSD_EXECUTION_CALLBACK	callback,
+	void			*context
+	);
+
+static acpi_status __init
+acpi_os_read_pci_configuration_bt( acpi_pci_id *pci_id, u32 reg, void *value, u32 width);
+
+static acpi_status __init
+acpi_os_write_pci_configuration_bt( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width);
+
+
+extern struct acpi_osd acpi_osd_rt;
+static struct acpi_osd acpi_osd_bt __initdata = {
+	/* these are boottime osd entries that differ from runtime entries */
+	acpi_os_allocate_bt,
+	acpi_os_callocate_bt,
+	acpi_os_free_bt,
+	acpi_os_queue_for_execution_bt,
+	acpi_os_read_pci_configuration_bt,
+	acpi_os_write_pci_configuration_bt,
+	acpi_os_stall_bt
+};
+static struct acpi_osd *acpi_osd = &acpi_osd_rt;
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+static void __init
+acpi_cf_bm_statistics( void );
+#endif
+
+void __init
+acpi_os_bind_osd(int acpi_phase)
+{
+	switch (acpi_phase) {
+	case ACPI_CF_PHASE_BOOTTIME:
+		acpi_osd = &acpi_osd_bt;
+		printk("Acpi cfg:bind to Boot time Acpi OSD\n");
+		break;
+	case ACPI_CF_PHASE_RUNTIME:
+	default:
+		acpi_osd = &acpi_osd_rt;
+		printk("Acpi cfg:bind to Run time Acpi OSD\n");
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+		acpi_cf_bm_statistics();
+#endif
+		break;
+	}
+}
+
+void *
+acpi_os_allocate(u32 size)
+{
+	return acpi_osd->allocate(size);
+}
+
+void *
+acpi_os_callocate(u32 size)
+{
+	return acpi_osd->callocate(size);
+}
+
+void
+acpi_os_free(void *ptr)
+{
+	acpi_osd->free(ptr);
+	return;
+}
+
+void
+acpi_os_stall(u32 us)
+{
+	acpi_osd->stall(us);
+	return;
+}
+
+acpi_status
+acpi_os_read_pci_configuration( acpi_pci_id *pci_id, u32 reg, void *value, u32 width)
+{
+	return acpi_osd->read_pci_configuration(pci_id, reg, value, width);
+}
+
+
+acpi_status
+acpi_os_write_pci_configuration( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width)
+{
+	return acpi_osd->write_pci_configuration(pci_id, reg, value, width);
+}
+
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+/*
+ * Let's profile bootmem usage to see how much we consume.  J.I.
+ */
+static unsigned long bm_alloc_size __initdata = 0;
+static unsigned long bm_alloc_size_max __initdata = 0;
+static unsigned long bm_alloc_count_max __initdata = 0;
+static unsigned long bm_free_count_max __initdata = 0;
+
+static void __init
+acpi_cf_bm_checkin(void *ptr, u32 size)
+{
+	bm_alloc_count_max++;
+	bm_alloc_size += size;
+	if (bm_alloc_size > bm_alloc_size_max)
+		bm_alloc_size_max = bm_alloc_size;
+};
+
+static void __init
+acpi_cf_bm_checkout(void *ptr, u32 size)
+{
+	bm_free_count_max++;
+	bm_alloc_size -= size;
+};
+
+static void __init
+acpi_cf_bm_statistics( void )
+{
+	printk("Acpi cfg:bm_alloc_size_max =%ld bytes\n", bm_alloc_size_max);
+	printk("Acpi cfg:bm_alloc_count_max=%ld\n", bm_alloc_count_max);
+	printk("Acpi cfg:bm_free_count_max =%ld\n", bm_free_count_max);
+}
+#endif
+
+
+static void * __init
+acpi_os_allocate_bt(u32 size)
+{
+	void *ptr;
+
+	size += sizeof(unsigned long);
+	ptr = alloc_bootmem(size);
+
+	if (ptr) {
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+		acpi_cf_bm_checkin(ptr, size);
+#endif
+		*((unsigned long *)ptr) = (unsigned long)size;
+		ptr += sizeof(unsigned long);
+	}
+
+	return ptr;
+}
+
+static void * __init
+acpi_os_callocate_bt(u32 size)
+{
+	void *ptr = acpi_os_allocate_bt(size);
+
+	return ptr;
+}
+
+static void __init
+acpi_os_free_bt(void *ptr)
+{
+	unsigned long size;
+
+	ptr -= sizeof(size);
+	size = *((unsigned long *)ptr);
+
+#ifdef CONFIG_ACPI_KERNEL_CONFIG_BM_PROFILE
+	acpi_cf_bm_checkout(ptr, (unsigned long)size);
+#endif
+	//if (size)
+	free_bootmem (__pa((unsigned long)ptr), (u32)size);
+}
+
+
+static void __init
+acpi_os_stall_bt(u32 us)
+{
+	unsigned long start = ia64_get_itc();
+	unsigned long cycles = us*733;	/* XXX: 733 or 800 */
+	while (ia64_get_itc() - start < cycles)
+		/* skip */;
+}
+
+
+static acpi_status __init
+acpi_os_queue_for_execution_bt(
+	u32	priority,
+	OSD_EXECUTION_CALLBACK	callback,
+	void	*context)
+{
+	/*
+	 * run callback immediately
+	 */
+	(*callback)(context);
+	return AE_OK;
+}
+
+
+static acpi_status __init
+acpi_os_read_pci_configuration_bt (
+	acpi_pci_id	*pci_id,
+	u32	reg,
+	void	*value,
+	u32	width)
+{
+	unsigned int devfn;
+	s64 status;
+	u64 lval;
+
+	devfn = PCI_DEVFN(pci_id->device, pci_id->function);
+
+	switch (width)
+	{
+	case 8:
+		status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 1, &lval);
+		*(u8*)value = (u8)lval;
+		break;
+	case 16:
+		status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 2, &lval);
+		*(u16*)value = (u16)lval;
+		break;
+	case 32:
+		status = ia64_sal_pci_config_read(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 4, &lval);
+		*(u32*)value = (u32)lval;
+		break;
+	default:
+		BUG();
+	}
+
+	return status;
+}
+
+
+static acpi_status __init
+acpi_os_write_pci_configuration_bt (
+	acpi_pci_id	*pci_id,
+	u32	reg,
+	NATIVE_UINT	value,
+	u32	width)
+{
+	unsigned int devfn;
+	s64 status;
+
+	devfn = PCI_DEVFN(pci_id->device, pci_id->function);
+
+	switch (width)
+	{
+	case 8:
+		status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 1, value);
+		break;
+	case 16:
+		status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 2, value);
+		break;
+	case 32:
+		status = ia64_sal_pci_config_write(PCI_CONFIG_ADDRESS((pci_id->bus), devfn, reg), 4, value);
+		break;
+	default:
+		BUG();
+	}
+
+	return status;
+}
diff -urN linux-2.4.13/drivers/acpi/osconf.h linux-2.4.13-lia/drivers/acpi/osconf.h
--- linux-2.4.13/drivers/acpi/osconf.h	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/acpi/osconf.h	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,57 @@
+/*
+ * osconf.h - ACPI OS-dependent headers for Kernel Boot/Configuration time
+ *
+ * Copyright (C) 2000 Intel Corp.
+ * Copyright (C) 2000 J.I. Lee
+ */
+
+
+struct acpi_osd {
+	void * (*allocate)(u32 size);
+	void * (*callocate)(u32 size);
+	void (*free)(void *ptr);
+	acpi_status (*queue_for_exec)(u32 pri, OSD_EXECUTION_CALLBACK cb, void *context);
+	acpi_status (*read_pci_configuration)(acpi_pci_id *pci_id, u32 reg, void *value, u32 width);
+	acpi_status (*write_pci_configuration)(acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width);
+	void (*stall)(u32 us);
+};
+
+
+#define PCI_CONFIG_ADDRESS(bus, devfn, where) \
+	(((u64) bus << 16) | ((u64) (devfn & 0xff) << 8) | (where & 0xff))
+
+#define ACPI_CF_PHASE_BOOTTIME	0x00
+#define ACPI_CF_PHASE_RUNTIME	0x01
+
+
+/* acpi_osd functions */
+void * acpi_os_allocate(u32 size);
+void * acpi_os_callocate(u32 size);
+void acpi_os_free(void *ptr);
+void acpi_os_stall(u32 us);
+
+acpi_status
+acpi_os_read_pci_configuration( acpi_pci_id *pci_id, u32 reg, void *value, u32 width );
+
+acpi_status
+acpi_os_write_pci_configuration( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width );
+
+
+/* acpi_osd_rt functions */
+extern void * acpi_os_allocate_rt(u32 size);
+extern void * acpi_os_callocate_rt(u32 size);
+extern void acpi_os_free_rt(void *ptr);
+extern void acpi_os_stall_rt(u32 us);
+
+extern acpi_status
+acpi_os_queue_for_execution_rt(
+	u32	priority,
+	OSD_EXECUTION_CALLBACK	callback,
+	void	*context
+	);
+
+extern acpi_status
+acpi_os_read_pci_configuration_rt( acpi_pci_id *pci_id, u32 reg, void *value, u32 width );
+
+extern acpi_status
+acpi_os_write_pci_configuration_rt( acpi_pci_id *pci_id, u32 reg, NATIVE_UINT value, u32 width );
diff -urN linux-2.4.13/drivers/acpi/ospm/include/ec.h linux-2.4.13-lia/drivers/acpi/ospm/include/ec.h
--- linux-2.4.13/drivers/acpi/ospm/include/ec.h	Mon Sep 24 15:06:44 2001
+++ linux-2.4.13-lia/drivers/acpi/ospm/include/ec.h	Thu Oct  4 00:21:40 2001
@@ -167,14 +167,14 @@
 acpi_status
 ec_io_read (
 	EC_CONTEXT	*ec,
-	u32	io_port,
+	ACPI_IO_ADDRESS	io_port,
 	u8	*data,
 	EC_EVENT	wait_event);
 
 acpi_status
 ec_io_write (
 	EC_CONTEXT	*ec,
-	u32	io_port,
+	ACPI_IO_ADDRESS	io_port,
 	u8	data,
 	EC_EVENT	wait_event);
 
diff -urN linux-2.4.13/drivers/acpi/ospm/system/sm_osl.c linux-2.4.13-lia/drivers/acpi/ospm/system/sm_osl.c
--- linux-2.4.13/drivers/acpi/ospm/system/sm_osl.c	Mon Sep 24 15:06:44 2001
+++ linux-2.4.13-lia/drivers/acpi/ospm/system/sm_osl.c	Thu Oct  4 00:21:40 2001
@@ -33,7 +33,9 @@
 #include
 #include
 #include
+#ifndef __ia64__
 #include
+#endif
 #include
 
 #include
@@ -278,6 +280,7 @@
 	int	*eof,
 	void	*context)
 {
+#ifndef _IA64
 	char	*str = page;
 	int	len;
 	u32	sec,min,hr;
@@ -351,6 +354,9 @@
 	*start = page;
 
 	return len;
+#else
+	return 0;
+#endif
 }
 
 static int get_date_field(char **str, u32 *value)
@@ -381,6 +387,7 @@
 	unsigned long	count,
 	void		*data)
 {
+#ifndef _IA64
 	char	buf[30];
 	char	*str = buf;
 	u32	sec,min,hr;
@@ -520,6 +527,9 @@
 	error = 0;
 out:
 	return error ? error : count;
+#else
+	return 0;
+#endif
 }
 
 static int
diff -urN linux-2.4.13/drivers/acpi/utilities/uteval.c linux-2.4.13-lia/drivers/acpi/utilities/uteval.c
--- linux-2.4.13/drivers/acpi/utilities/uteval.c	Mon Sep 24 15:06:47 2001
+++ linux-2.4.13-lia/drivers/acpi/utilities/uteval.c	Wed Oct 24 18:18:19 2001
@@ -115,6 +115,93 @@
 
 /*******************************************************************************
 *
+ * FUNCTION:    Acpi_ut_execute_CID
+ *
+ * PARAMETERS:  Device_node         - Node for the device
+ *              *Cid                - Where the CID is returned
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Executes the _CID control method that returns the compatible
+ *
+ * NOTE: Internal function, no parameter validation
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ut_execute_CID (
+	acpi_namespace_node	*device_node,
+	ACPI_DEVICE_ID		*cid)
+{
+	acpi_operand_object	*obj_desc;
+	acpi_status		status;
+
+
+	FUNCTION_TRACE ("Ut_execute_CID");
+
+
+	/* Execute the method */
+
+	status = acpi_ns_evaluate_relative (device_node,
+			 METHOD_NAME__CID, NULL, &obj_desc);
+	if (ACPI_FAILURE (status)) {
+		if (status == AE_NOT_FOUND) {
+			ACPI_DEBUG_PRINT ((ACPI_DB_INFO, "_CID on %4.4s was not found\n",
+				&device_node->name));
+		}
+
+		else {
+			ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "_CID on %4.4s failed %s\n",
+				&device_node->name, acpi_format_exception (status)));
+		}
+
+		return_ACPI_STATUS (status);
+	}
+
+	/* Did we get a return object? */
+
+	if (!obj_desc) {
+		ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "No object was returned from _CID\n"));
+		return_ACPI_STATUS (AE_TYPE);
+	}
+
+	/*
+	 * A _CID can return either a Number (32 bit compressed EISA ID) or
+	 * a string
+	 */
+	if ((obj_desc->common.type != ACPI_TYPE_INTEGER) &&
+		(obj_desc->common.type != ACPI_TYPE_STRING)) {
+		status = AE_TYPE;
+		ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+			"Type returned from _CID not a number or string: %s(%X) \n",
+			acpi_ut_get_type_name (obj_desc->common.type), obj_desc->common.type));
+	}
+
+	else {
+		if (obj_desc->common.type == ACPI_TYPE_INTEGER) {
+			/* Convert the Numeric CID to string */
+
+			acpi_ex_eisa_id_to_string ((u32) obj_desc->integer.value, cid->buffer);
+		}
+
+		else {
+			/* Copy the String CID from the returned object */
+
+			STRNCPY(cid->buffer, obj_desc->string.pointer, sizeof(cid->buffer));
+		}
+	}
+
+
+	/* On exit, we must delete the return object */
+
+	acpi_ut_remove_reference (obj_desc);
+
+	return_ACPI_STATUS (status);
+}
+
+/*******************************************************************************
 *
 * FUNCTION:    Acpi_ut_execute_HID
 *
 * PARAMETERS:  Device_node         - Node for the device
diff -urN linux-2.4.13/drivers/char/Config.in linux-2.4.13-lia/drivers/char/Config.in
--- linux-2.4.13/drivers/char/Config.in	Wed Oct 24 10:17:45 2001
+++ linux-2.4.13-lia/drivers/char/Config.in	Wed Oct 24 10:21:08 2001
@@ -207,6 +207,9 @@
 dep_tristate '/dev/agpgart (AGP Support)' CONFIG_AGP $CONFIG_DRM_AGP
 if [ "$CONFIG_AGP" != "n" ]; then
    bool '  Intel 440LX/BX/GX and I815/I830M/I840/I850 support' CONFIG_AGP_INTEL
+   if [ "$CONFIG_IA64" != "n" ]; then
+      bool '  Intel 460GX support' CONFIG_AGP_I460
+   fi
    bool '  Intel I810/I815/I830M (on-board) support' CONFIG_AGP_I810
    bool '  VIA chipset support' CONFIG_AGP_VIA
    bool '  AMD Irongate, 761, and 762 support' CONFIG_AGP_AMD
@@ -215,7 +218,17 @@
    bool '  Serverworks LE/HE support' CONFIG_AGP_SWORKS
 fi
 
-source drivers/char/drm/Config.in
+bool 'Direct Rendering Manager (XFree86 DRI support)' CONFIG_DRM
+
+if [ "$CONFIG_DRM" = "y" ]; then
+   bool '  Build drivers for new (XFree 4.1) DRM' CONFIG_DRM_NEW
+   if [ "$CONFIG_DRM_NEW" = "y" ]; then
+      source drivers/char/drm/Config.in
+   else
+      define_bool CONFIG_DRM_OLD y
+      source drivers/char/drm-4.0/Config.in
+   fi
+fi
 
 if [ "$CONFIG_HOTPLUG" = "y" -a "$CONFIG_PCMCIA" != "n" ]; then
    source drivers/char/pcmcia/Config.in
diff -urN linux-2.4.13/drivers/char/Makefile linux-2.4.13-lia/drivers/char/Makefile
--- linux-2.4.13/drivers/char/Makefile	Wed Oct 24 10:17:45 2001
+++ linux-2.4.13-lia/drivers/char/Makefile	Wed Oct 24 10:21:08 2001
@@ -25,7 +25,7 @@
 		misc.o pty.o random.o selection.o serial.o \
 		sonypi.o tty_io.o tty_ioctl.o generic_serial.o
 
-mod-subdirs := joystick ftape drm pcmcia
+mod-subdirs := joystick ftape drm pcmcia drm-4.0
 
 list-multi :=
 
@@ -138,6 +138,7 @@
 
 obj-$(CONFIG_MAGIC_SYSRQ) += sysrq.o
 obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o
+obj-$(CONFIG_SIM_SERIAL) += simserial.o
 obj-$(CONFIG_ROCKETPORT) += rocket.o
 obj-$(CONFIG_MOXA_SMARTIO) += mxser.o
 obj-$(CONFIG_MOXA_INTELLIO) += moxa.o
@@ -198,7 +199,8 @@
 obj-$(CONFIG_QIC02_TAPE) += tpqic02.o
 
 subdir-$(CONFIG_FTAPE) += ftape
-subdir-$(CONFIG_DRM) += drm
+subdir-$(CONFIG_DRM_NEW) += drm
+subdir-$(CONFIG_DRM_OLD) += drm-4.0
 subdir-$(CONFIG_PCMCIA) += pcmcia
 subdir-$(CONFIG_AGP) += agp
 
diff -urN linux-2.4.13/drivers/char/agp/agp.h linux-2.4.13-lia/drivers/char/agp/agp.h
--- linux-2.4.13/drivers/char/agp/agp.h	Wed Oct 10 16:31:46 2001
+++ linux-2.4.13-lia/drivers/char/agp/agp.h	Wed Oct 10 16:33:17 2001
@@ -84,8 +84,8 @@
 	void *dev_private_data;
 	struct pci_dev *dev;
 	gatt_mask *masks;
-	unsigned long *gatt_table;
-	unsigned long *gatt_table_real;
+	u32 *gatt_table;
+	u32 *gatt_table_real;
 	unsigned long scratch_page;
 	unsigned long gart_bus_addr;
 	unsigned long gatt_bus_addr;
@@ -111,6 +111,7 @@
 	void (*cleanup) (void);
 	void (*tlb_flush) (agp_memory *);
 	unsigned long (*mask_memory) (unsigned long, int);
+	unsigned long (*unmask_memory) (unsigned long);
 	void (*cache_flush) (void);
 	int (*create_gatt_table) (void);
 	int (*free_gatt_table) (void);
@@ -150,6 +151,10 @@
 #define A_IDXFIX() (A_SIZE_FIX(agp_bridge.aperture_sizes) + i)
 #define MAXKEY (4096 * 32)
 
+#ifndef max
+#define max(a,b) (((a)>(b))?(a):(b))
+#endif
+
 #define AGPGART_MODULE_NAME "agpgart"
 #define PFX AGPGART_MODULE_NAME ": "
 
@@ -209,6 +214,9 @@
 #ifndef PCI_DEVICE_ID_INTEL_82443GX_1
 #define PCI_DEVICE_ID_INTEL_82443GX_1 0x71a1
 #endif
+#ifndef PCI_DEVICE_ID_INTEL_460GX
+#define PCI_DEVICE_ID_INTEL_460GX 0x84ea
+#endif
 #ifndef PCI_DEVICE_ID_AMD_IRONGATE_0
 #define PCI_DEVICE_ID_AMD_IRONGATE_0 0x7006
 #endif
@@ -250,6 +258,15 @@
 #define INTEL_AGPCTRL 0xb0
 #define INTEL_NBXCFG 0x50
 #define INTEL_ERRSTS 0x91
+
+/* Intel 460GX Registers */
+#define INTEL_I460_APBASE 0x10
+#define INTEL_I460_BAPBASE 0x98
+#define INTEL_I460_GXBCTL 0xa0
+#define INTEL_I460_AGPSIZ 0xa2
+#define INTEL_I460_ATTBASE 0xfe200000
+#define INTEL_I460_GATT_VALID (1UL << 24)
+#define INTEL_I460_GATT_COHERENT (1UL << 25)
 
 /* intel i840 registers */
 #define INTEL_I840_MCHCFG 0x50
diff -urN linux-2.4.13/drivers/char/agp/agpgart_be.c linux-2.4.13-lia/drivers/char/agp/agpgart_be.c
--- linux-2.4.13/drivers/char/agp/agpgart_be.c	Wed Oct 10 16:31:46 2001
+++ linux-2.4.13-lia/drivers/char/agp/agpgart_be.c	Wed Oct 10 16:33:17 2001
@@ -22,6 +22,7 @@
  * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
  * OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  *
+ * 460GX support by Chris Ahna
 */
 #include
 #include
@@ -43,6 +44,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 
 #include
 #include "agp.h"
@@ -60,7 +64,7 @@
 EXPORT_SYMBOL(agp_backend_release);
 
 static void flush_cache(void);
-
+
 static struct agp_bridge_data agp_bridge;
 static int agp_try_unsupported __initdata = 0;
 
@@ -205,19 +209,56 @@
 		agp_bridge.free_by_type(curr);
 		return;
 	}
-	if (curr->page_count != 0) {
-		for (i = 0; i < curr->page_count; i++) {
-			curr->memory[i] &= ~(0x00000fff);
-			agp_bridge.agp_destroy_page((unsigned long)
-					phys_to_virt(curr->memory[i]));
+	if(agp_bridge.cant_use_aperture == 0) {
+		if (curr->page_count != 0) {
+			for (i = 0; i < curr->page_count; i++) {
+				curr->memory[i] = agp_bridge.unmask_memory(
+							curr->memory[i]);
+				agp_bridge.agp_destroy_page((unsigned long)
+						phys_to_virt(curr->memory[i]));
+			}
 		}
+	} else {
+		vfree(curr->vmptr);
 	}
+
 	agp_free_key(curr->key);
 	vfree(curr->memory);
 	kfree(curr);
 	MOD_DEC_USE_COUNT;
 }
 
+#define IN_VMALLOC(_x) (((_x) >= VMALLOC_START) && ((_x) < VMALLOC_END))
+
+/*
+ * Look up and return the pte corresponding to addr.  We only do this for
+ * agp_ioremap'ed addresses.
+ */
+static pte_t * agp_lookup_pte(unsigned long addr) {
+
+	pgd_t *dir;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	if(!IN_VMALLOC(addr))
+		return NULL;
+
+	dir = pgd_offset_k(addr);
+	pmd = pmd_offset(dir, addr);
+
+	if(pmd) {
+		pte = pte_offset(pmd, addr);
+
+		if(pte) {
+			return pte;
+		} else {
+			return NULL;
+		}
+	} else {
+		return NULL;
+	}
+}
+
 #define ENTRIES_PER_PAGE (PAGE_SIZE / sizeof(unsigned long))
 
 agp_memory *agp_allocate_memory(size_t page_count, u32 type)
@@ -247,24 +288,60 @@
 	scratch_pages = (page_count + ENTRIES_PER_PAGE - 1) / ENTRIES_PER_PAGE;
 
 	new = agp_create_memory(scratch_pages);
-
 	if (new == NULL) {
 		MOD_DEC_USE_COUNT;
 		return NULL;
 	}
 
-	for (i = 0; i < page_count; i++) {
-		new->memory[i] = agp_bridge.agp_alloc_page();
-
-		if (new->memory[i] == 0) {
-			/* Free this structure */
-			agp_free_memory(new);
+	if(agp_bridge.cant_use_aperture == 0) {
+		for (i = 0; i < page_count; i++) {
+			new->memory[i] = agp_bridge.agp_alloc_page();
+
+			if (new->memory[i] == 0) {
+				/* Free this structure */
+				agp_free_memory(new);
+				return NULL;
+			}
+			new->memory[i] =
+				agp_bridge.mask_memory(
+					virt_to_phys((void *) new->memory[i]),
+					type);
+			new->page_count++;
+		}
+	} else {
+		void *vmblock;
+		unsigned long vaddr, paddr;
+		pte_t *pte;
+
+		vmblock = __vmalloc(page_count << PAGE_SHIFT, GFP_KERNEL,
+#ifdef __ia64__
+				    pgprot_writecombine(PAGE_KERNEL));
+#else
+				    PAGE_KERNEL);
+#endif
+		if(vmblock == NULL) {
+			MOD_DEC_USE_COUNT;
 			return NULL;
 		}
-		new->memory[i] =
-		    agp_bridge.mask_memory(
-				virt_to_phys((void *) new->memory[i]),
-				type);
-		new->page_count++;
+
+		new->vmptr = vmblock;
+		vaddr = (unsigned long) vmblock;
+
+		for(i = 0; i < page_count; i++, vaddr += PAGE_SIZE) {
+			pte = agp_lookup_pte(vaddr);
+			if(pte == NULL) {
+				MOD_DEC_USE_COUNT;
+				return NULL;
+			}
+#ifdef __ia64__
+			paddr = pte_val(*pte) & _PFN_MASK;
+#else
+			paddr = pte_val(*pte) & PAGE_MASK;
#endif
+			new->memory[i] = agp_bridge.mask_memory(paddr, type);
+		}
+
+		new->page_count = page_count;
 	}
 
 	return new;
@@ -353,12 +430,13 @@
 		curr->is_flushed = TRUE;
 	}
 	ret_val = agp_bridge.insert_memory(curr, pg_start, curr->type);
-
+
 	if (ret_val != 0) {
 		return ret_val;
 	}
 	curr->is_bound = TRUE;
 	curr->pg_start = pg_start;
+
 	return 0;
 }
 
@@ -377,6 +455,7 @@
 	if (ret_val != 0) {
 		return ret_val;
 	}
+
 	curr->is_bound = FALSE;
 	curr->pg_start = 0;
 	return 0;
@@ -387,9 +466,9 @@
 /*
 * Driver routines - start
 * Currently this module supports the following chipsets:
- * i810, i815, 440lx, 440bx, 440gx, i840, i850, via vp3, via mvp3,
- * via kx133, via kt133, amd irongate, amd 761, amd 762, ALi M1541,
- * and generic support for the SiS chipsets.
+ * i810, 440lx, 440bx, 440gx, 460gx, i840, i850, via vp3, via mvp3, via kx133,
+ * via kt133, amd irongate, ALi M1541, and generic support for the SiS
+ * chipsets.
 */
 
 /* Generic Agp routines - Start */
@@ -614,7 +693,7 @@
 	for (page = virt_to_page(table); page <= virt_to_page(table_end); page++)
 		set_bit(PG_reserved, &page->flags);
 
-	agp_bridge.gatt_table_real = (unsigned long *) table;
+	agp_bridge.gatt_table_real = (u32 *) table;
 	CACHE_FLUSH();
 	agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
 					(PAGE_SIZE * (1 << page_order)));
@@ -832,6 +911,11 @@
 	agp_bridge.agp_enable(mode);
 }
 
+static unsigned long agp_generic_unmask_memory(unsigned long addr)
+{
+	return addr & ~(0x00000fff);
+}
+
 /* End - Generic Agp routines */
 
 #ifdef CONFIG_AGP_I810
@@ -1096,6 +1180,7 @@
 	agp_bridge.cleanup = intel_i810_cleanup;
 	agp_bridge.tlb_flush = intel_i810_tlbflush;
 	agp_bridge.mask_memory = intel_i810_mask_memory;
+	agp_bridge.unmask_memory = agp_generic_unmask_memory;
 	agp_bridge.agp_enable = intel_i810_agp_enable;
 	agp_bridge.cache_flush = global_cache_flush;
 	agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1399,6 +1484,633 @@
 
 #endif /* CONFIG_AGP_I810 */
 
+#ifdef CONFIG_AGP_I460
+
+/* BIOS configures the chipset so that one of two apbase registers are used */
+static u8 intel_i460_dynamic_apbase = 0x10;
+
+/* 460 supports multiple GART page sizes, so GART pageshift is dynamic */
+static u8 intel_i460_pageshift = 12;
+
+/* Keep track of which is larger, chipset or kernel page size. */
+static u32 intel_i460_cpk = 1;
+
+/* Structure for tracking partial use of 4MB GART pages */
+static u32 **i460_pg_detail = NULL;
+static u32 *i460_pg_count = NULL;
+
+#define I460_CPAGES_PER_KPAGE (PAGE_SIZE >> intel_i460_pageshift)
+#define I460_KPAGES_PER_CPAGE ((1 << intel_i460_pageshift) >> PAGE_SHIFT)
+
+#define I460_SRAM_IO_DISABLE (1 << 4)
+#define I460_BAPBASE_ENABLE (1 << 3)
+#define I460_AGPSIZ_MASK 0x7
+#define I460_4M_PS (1 << 1)
+
+#define log2(x) ffz(~(x))
+
+static int intel_i460_fetch_size(void)
+{
+	int i;
+	u8 temp;
+	aper_size_info_8 *values;
+
+	/* Determine the GART page size */
+	pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &temp);
+	intel_i460_pageshift = (temp & I460_4M_PS) ? 22 : 12;
+
+	values = A_SIZE_8(agp_bridge.aperture_sizes);
+
+	pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+
+	/* Exit now if the IO drivers for the GART SRAMS are turned off */
+	if(temp & I460_SRAM_IO_DISABLE) {
+		printk("[agpgart] GART SRAMS disabled on 460GX chipset\n");
+		printk("[agpgart] AGPGART operation not possible\n");
+		return 0;
+	}
+
+	/* Make sure we don't try to create an 2 ^ 23 entry GATT */
+	if((intel_i460_pageshift == 0) && ((temp & I460_AGPSIZ_MASK) == 4)) {
+		printk("[agpgart] We can't have a 32GB aperture with 4KB"
+		       " GART pages\n");
+		return 0;
+	}
+
+	/* Determine the proper APBASE register */
+	if(temp & I460_BAPBASE_ENABLE)
+		intel_i460_dynamic_apbase = INTEL_I460_BAPBASE;
+	else intel_i460_dynamic_apbase = INTEL_I460_APBASE;
+
+	for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+
+		/*
+		 * Dynamically calculate the proper num_entries and page_order
+		 * values for the define aperture sizes. Take care not to
+		 * shift off the end of values[i].size.
+		 */
+		values[i].num_entries = (values[i].size << 8) >>
+					(intel_i460_pageshift - 12);
+		values[i].page_order = log2((sizeof(u32)*values[i].num_entries)
+					    >> PAGE_SHIFT);
+	}
+
+	for (i = 0; i < agp_bridge.num_aperture_sizes; i++) {
+		/* Neglect control bits when matching up size_value */
+		if ((temp & I460_AGPSIZ_MASK) == values[i].size_value) {
+			agp_bridge.previous_size =
+				agp_bridge.current_size = (void *) (values + i);
+			agp_bridge.aperture_size_idx = i;
+			return values[i].size;
+		}
+	}
+
+	return 0;
+}
+
+/* There isn't anything to do here since 460 has no GART TLB. */
+static void intel_i460_tlb_flush(agp_memory * mem)
+{
+	return;
+}
+
+/*
+ * This utility function is needed to prevent corruption of the control bits
+ * which are stored along with the aperture size in 460's AGPSIZ register
+ */
+static void intel_i460_write_agpsiz(u8 size_value)
+{
+	u8 temp;
+
+	pci_read_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ, &temp);
+	pci_write_config_byte(agp_bridge.dev, INTEL_I460_AGPSIZ,
+			      ((temp & ~I460_AGPSIZ_MASK) | size_value));
+}
+
+static void intel_i460_cleanup(void)
+{
+	aper_size_info_8 *previous_size;
+
+	previous_size = A_SIZE_8(agp_bridge.previous_size);
+	intel_i460_write_agpsiz(previous_size->size_value);
+
+	if(intel_i460_cpk == 0)
+	{
+		vfree(i460_pg_detail);
+		vfree(i460_pg_count);
+	}
+}
+
+
+/* Control bits for Out-Of-GART coherency and Burst Write Combining */
+#define I460_GXBCTL_OOG (1UL << 0)
+#define I460_GXBCTL_BWC (1UL << 2)
+
+static int intel_i460_configure(void)
+{
+	union {
+		u32 small[2];
+		u64 large;
+	} temp;
+	u8 scratch;
+	int i;
+
+	aper_size_info_8 *current_size;
+
+	temp.large = 0;
+
+	current_size = A_SIZE_8(agp_bridge.current_size);
+	intel_i460_write_agpsiz(current_size->size_value);
+
+	/*
+	 * Do the necessary rigmarole to read all eight bytes of APBASE.
+	 * This has to be done since the AGP aperture can be above 4GB on
+	 * 460 based systems.
+	 */
+	pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase,
+			      &(temp.small[0]));
+	pci_read_config_dword(agp_bridge.dev, intel_i460_dynamic_apbase + 4,
+			      &(temp.small[1]));
+
+	/* Clear BAR control bits */
+	agp_bridge.gart_bus_addr = temp.large & ~((1UL << 3) - 1);
+
+	pci_read_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL, &scratch);
+	pci_write_config_byte(agp_bridge.dev, INTEL_I460_GXBCTL,
+			      (scratch & 0x02) | I460_GXBCTL_OOG | I460_GXBCTL_BWC);
+
+	/*
+	 * Initialize partial allocation trackers if a GART page is bigger than
+	 * a kernel page.
+	 */
+	if(I460_CPAGES_PER_KPAGE >= 1) {
+		intel_i460_cpk = 1;
+	} else {
+		intel_i460_cpk = 0;
+
+		i460_pg_detail = (void *) vmalloc(sizeof(*i460_pg_detail) *
+						  current_size->num_entries);
+		i460_pg_count = (void *) vmalloc(sizeof(*i460_pg_count) *
+						 current_size->num_entries);
+
+		for (i = 0; i < current_size->num_entries; i++) {
+			i460_pg_count[i] = 0;
+			i460_pg_detail[i] = NULL;
+		}
+	}
+
+	return 0;
+}
+
+static int intel_i460_create_gatt_table(void) {
+
+	char *table;
+	int i;
+	int page_order;
+	int num_entries;
+	void *temp;
+	unsigned int read_back;
+
+	/*
+	 * Load up the fixed address of the GART SRAMS which hold our
+	 * GATT table.
+	 */
+	table = (char *) __va(INTEL_I460_ATTBASE);
+
+	temp = agp_bridge.current_size;
+	page_order = A_SIZE_8(temp)->page_order;
+	num_entries = A_SIZE_8(temp)->num_entries;
+
+	agp_bridge.gatt_table_real = (u32 *) table;
+	agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
+					(PAGE_SIZE * (1 << page_order)));
+	agp_bridge.gatt_bus_addr = virt_to_phys(agp_bridge.gatt_table_real);
+
+	for (i = 0; i < num_entries; i++) {
+		agp_bridge.gatt_table[i] = 0;
+	}
+
+	/*
+	 * The 460 spec says we have to read the last location written to
+	 * make sure that all writes have taken effect
+	 */
+	read_back = agp_bridge.gatt_table[i - 1];
+
+	return 0;
+}
+
+static int intel_i460_free_gatt_table(void)
+{
+	int num_entries;
+	int i;
+	void *temp;
+	unsigned int read_back;
+
+	temp = agp_bridge.current_size;
+
+	num_entries = A_SIZE_8(temp)->num_entries;
+
+	for (i = 0; i < num_entries; i++) {
+		agp_bridge.gatt_table[i] = 0;
+	}
+
+	/*
+	 * The 460 spec says we have to read the last location written to
+	 * make sure that all writes have taken effect
+	 */
+	read_back = agp_bridge.gatt_table[i - 1];
+
+	iounmap(agp_bridge.gatt_table);
+
+	return 0;
+}
+
+/* These functions are called when PAGE_SIZE exceeds the GART page size */
+
+static int intel_i460_insert_memory_cpk(agp_memory * mem,
+					off_t pg_start, int type)
+{
+	int i, j, k, num_entries;
+	void *temp;
+	unsigned int hold;
+	unsigned int read_back;
+
+	/*
+	 * The rest of the kernel will compute page offsets in terms of
+	 * PAGE_SIZE.
+	 */
+	pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+	temp = agp_bridge.current_size;
+	num_entries = A_SIZE_8(temp)->num_entries;
+
+	if((pg_start + I460_CPAGES_PER_KPAGE * mem->page_count) > num_entries) {
+		printk("[agpgart] Looks like we're out of AGP memory\n");
+		return -EINVAL;
+	}
+
+	j = pg_start;
+	while (j < (pg_start + I460_CPAGES_PER_KPAGE * mem->page_count)) {
+		if (!PGE_EMPTY(agp_bridge.gatt_table[j])) {
+			return -EBUSY;
+		}
+		j++;
+	}
+
+	if (mem->is_flushed == FALSE) {
+		CACHE_FLUSH();
+		mem->is_flushed = TRUE;
+	}
+
+	for (i = 0, j = pg_start; i < mem->page_count; i++) {
+
+		hold = (unsigned int) (mem->memory[i]);
+
+		for (k = 0; k < I460_CPAGES_PER_KPAGE; k++, j++, hold++)
+			agp_bridge.gatt_table[j] = hold;
+	}
+
+	/*
+	 * The 460 spec says we have to read the last location written to
+	 * make sure that all writes have taken effect
+	 */
+	read_back = agp_bridge.gatt_table[j - 1];
+
+	return 0;
+}
+
+static int intel_i460_remove_memory_cpk(agp_memory * mem, off_t pg_start,
+					int type)
+{
+	int i;
+	unsigned int read_back;
+
+	pg_start = I460_CPAGES_PER_KPAGE * pg_start;
+
+	for (i = pg_start; i < (pg_start + I460_CPAGES_PER_KPAGE *
+				mem->page_count); i++)
+		agp_bridge.gatt_table[i] = 0;
+
+	/*
+	 * The 460 spec says we have to read the last location written to
+	 * make sure that all writes have taken effect
+	 */
+	read_back = agp_bridge.gatt_table[i - 1];
+
+	return 0;
+}
+
+/*
+ * These functions are called when the GART page size exceeds PAGE_SIZE.
+ *
+ * This situation is interesting since AGP memory allocations that are
+ * smaller than a single GART page are possible.  The structures i460_pg_count
+ * and i460_pg_detail track partial allocation of the large GART pages to
+ * work around this issue.
+ *
+ * i460_pg_count[pg_num] tracks the number of kernel pages in use within
+ * GART page pg_num.  i460_pg_detail[pg_num] is an array containing a
+ * pseudo-GART entry for each of the aforementioned kernel pages.  The whole
+ * of i460_pg_detail is equivalent to a giant GATT with page size equal to
+ * that of the kernel.
+ */
+
+static void *intel_i460_alloc_large_page(int pg_num)
+{
+	int i;
+	void *bp, *bp_end;
+	struct page *page;
+
+	i460_pg_detail[pg_num] = (void *) vmalloc(sizeof(u32) *
+						  I460_KPAGES_PER_CPAGE);
+	if(i460_pg_detail[pg_num] == NULL) {
+		printk("[agpgart] Out of memory, we're in trouble...\n");
+		return NULL;
+	}
+
+	for(i = 0; i < I460_KPAGES_PER_CPAGE; i++)
+		i460_pg_detail[pg_num][i] = 0;
+
+	bp = (void *) __get_free_pages(GFP_KERNEL,
+				       intel_i460_pageshift - PAGE_SHIFT);
+	if(bp == NULL) {
+		printk("[agpgart] Couldn't alloc 4M GART page...\n");
+		return NULL;
+	}
+
+	bp_end = bp + ((PAGE_SIZE *
+			(1 << (intel_i460_pageshift - PAGE_SHIFT))) - 1);
+
+	for (page = virt_to_page(bp); page <= virt_to_page(bp_end); page++)
+	{
+		atomic_inc(&page->count);
+		set_bit(PG_locked, &page->flags);
+		atomic_inc(&agp_bridge.current_memory_agp);
+	}
+
+	return bp;
+}
+
+static void intel_i460_free_large_page(int pg_num, unsigned long addr)
+{
+	struct page *page;
+	void *bp, *bp_end;
+
+	bp = (void *) __va(addr);
+	bp_end = bp + (PAGE_SIZE *
+		       (1 << (intel_i460_pageshift - PAGE_SHIFT)));
+
+	vfree(i460_pg_detail[pg_num]);
+	i460_pg_detail[pg_num] = NULL;
+
+	for (page = virt_to_page(bp); page < virt_to_page(bp_end); page++)
+	{
+		atomic_dec(&page->count);
+		clear_bit(PG_locked, &page->flags);
+		wake_up(&page->wait);
+		atomic_dec(&agp_bridge.current_memory_agp);
+	}
+
+	free_pages((unsigned long) bp, intel_i460_pageshift - PAGE_SHIFT);
+}
+
+static int intel_i460_insert_memory_kpc(agp_memory * mem,
+					off_t pg_start, int type)
+{
+	int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+	int num_entries;
+	void *temp;
+	unsigned int read_back;
+
+	temp = agp_bridge.current_size;
+	num_entries = A_SIZE_8(temp)->num_entries;
+
+	/* Figure out what pg_start means in terms of our large GART pages */
+	start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+	start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+	end_pg = (pg_start + mem->page_count - 1) /
+		 I460_KPAGES_PER_CPAGE;
+	end_offset = (pg_start + mem->page_count - 1) %
+		     I460_KPAGES_PER_CPAGE;
+
+	if(end_pg > num_entries)
+	{
+		printk("[agpgart] Looks like we're out of AGP memory\n");
+		return -EINVAL;
+	}
+
+	/* Check if the requested region of the aperture is free */
+	for(pg = start_pg; pg <= end_pg; pg++)
+	{
+		/* Allocate new GART pages if necessary */
+		if(i460_pg_detail[pg] == NULL) {
+			temp = intel_i460_alloc_large_page(pg);
+			if(temp == NULL)
+				return -ENOMEM;
+			agp_bridge.gatt_table[pg] = agp_bridge.mask_memory(
+						(unsigned long) temp, 0);
+			read_back = agp_bridge.gatt_table[pg];
+		}
+
+		for(idx = ((pg == start_pg) ? start_offset : 0);
+		    idx < ((pg == end_pg) ? (end_offset + 1)
+					  : I460_KPAGES_PER_CPAGE);
+		    idx++)
+		{
+			if(i460_pg_detail[pg][idx] != 0)
+				return -EBUSY;
+		}
+	}
+
+	if (mem->is_flushed == FALSE) {
+		CACHE_FLUSH();
+		mem->is_flushed = TRUE;
+	}
+
+	for(pg = start_pg, i = 0; pg <= end_pg; pg++)
+	{
+		for(idx = ((pg == start_pg) ? start_offset : 0);
+		    idx < ((pg == end_pg) ? (end_offset + 1)
+					  : I460_KPAGES_PER_CPAGE);
+		    idx++, i++)
+		{
+			i460_pg_detail[pg][idx] = agp_bridge.gatt_table[pg] +
+						  ((idx * PAGE_SIZE) >> 12);
+			i460_pg_count[pg]++;
+
+			/* Finally we fill in mem->memory... */
+			mem->memory[i] = ((unsigned long) (0xffffff &
+					  i460_pg_detail[pg][idx])) << 12;
+		}
+	}
+
+	return 0;
+}
+
+static int intel_i460_remove_memory_kpc(agp_memory * mem,
+					off_t pg_start, int type)
+{
+	int i, pg, start_pg, end_pg, start_offset, end_offset, idx;
+	int num_entries;
+	void *temp;
+	unsigned int read_back;
+	unsigned long addr;
+
+	temp = agp_bridge.current_size;
+	num_entries = A_SIZE_8(temp)->num_entries;
+
+	/* Figure out what pg_start means in terms of our large GART pages */
+	start_pg = pg_start / I460_KPAGES_PER_CPAGE;
+	start_offset = pg_start % I460_KPAGES_PER_CPAGE;
+	end_pg = (pg_start + mem->page_count - 1) /
+		 I460_KPAGES_PER_CPAGE;
+	end_offset = (pg_start + mem->page_count - 1) %
+		     I460_KPAGES_PER_CPAGE;
+
+	for(i = 0, pg = start_pg; pg <= end_pg; pg++)
+	{
+		for(idx = ((pg == start_pg) ? start_offset : 0);
+		    idx < ((pg == end_pg) ? (end_offset + 1)
+					  : I460_KPAGES_PER_CPAGE);
+		    idx++, i++)
+		{
+			mem->memory[i] = 0;
+			i460_pg_detail[pg][idx] = 0;
+			i460_pg_count[pg]--;
+		}
+
+		/* Free GART pages if they are unused */
+		if(i460_pg_count[pg] == 0) {
+			addr = (0xffffffUL & (unsigned long)
+				(agp_bridge.gatt_table[pg])) << 12;
+
+			agp_bridge.gatt_table[pg] = 0;
+			read_back = agp_bridge.gatt_table[pg];
+
+			intel_i460_free_large_page(pg, addr);
+		}
+	}
+
+	return 0;
+}
+
+/* Dummy routines to call the appropriate {cpk,kpc} function */
+
+static int intel_i460_insert_memory(agp_memory * mem,
+				    off_t pg_start, int type)
+{
+	if(intel_i460_cpk)
+		return intel_i460_insert_memory_cpk(mem, pg_start, type);
+	else
+		return intel_i460_insert_memory_kpc(mem, pg_start, type);
+}
+
+static int intel_i460_remove_memory(agp_memory * mem,
+				    off_t pg_start, int type)
+{
+	if(intel_i460_cpk)
+		return intel_i460_remove_memory_cpk(mem, pg_start, type);
+	else
+		return intel_i460_remove_memory_kpc(mem, pg_start, type);
+}
+
+/*
+ * If the kernel page size is smaller than the chipset page size, we don't
* want to allocate memory until we know where it is to be bound in the + * aperture (a multi-kernel-page alloc might fit inside of an already + * allocated GART page). Consequently, don't allocate or free anything + * if i460_cpk (meaning chipset pages per kernel page) isn't set. + * + * Let's just hope nobody counts on the allocated AGP memory being there + * before bind time (I don't think current drivers do)... + */=20 +static unsigned long intel_i460_alloc_page(void) +{ + if(intel_i460_cpk) + return agp_generic_alloc_page(); + + /* Returning NULL would cause problems */ + return ((unsigned long) ~0UL); +} + +static void intel_i460_destroy_page(unsigned long page) +{ + if(intel_i460_cpk) + agp_generic_destroy_page(page); +} + +static gatt_mask intel_i460_masks[] +{ + {=20 + INTEL_I460_GATT_VALID,=20 + 0 + } +}; + +static unsigned long intel_i460_mask_memory(unsigned long addr, int type) = +{ + /* Make sure the returned address is a valid GATT entry */ + return (agp_bridge.masks[0].mask | (((addr &=20 + ~((1 << intel_i460_pageshift) - 1)) & 0xffffff000) >> 12)); +} + +static unsigned long intel_i460_unmask_memory(unsigned long addr) +{ + /* Turn a GATT entry into a physical address */ + return ((addr & 0xffffff) << 12); +} + +static aper_size_info_8 intel_i460_sizes[3] +{ + /*=20 + * The 32GB aperture is only available with a 4M GART page size. + * Due to the dynamic GART page size, we can't figure out page_order + * or num_entries until runtime. 
+	 */
+	{32768, 0, 0, 4},
+	{1024, 0, 0, 2},
+	{256, 0, 0, 1}
+};
+
+static int __init intel_i460_setup (struct pci_dev *pdev)
+{
+
+	agp_bridge.masks = intel_i460_masks;
+	agp_bridge.num_of_masks = 1;
+	agp_bridge.aperture_sizes = (void *) intel_i460_sizes;
+	agp_bridge.size_type = U8_APER_SIZE;
+	agp_bridge.num_aperture_sizes = 3;
+	agp_bridge.dev_private_data = NULL;
+	agp_bridge.needs_scratch_page = FALSE;
+	agp_bridge.configure = intel_i460_configure;
+	agp_bridge.fetch_size = intel_i460_fetch_size;
+	agp_bridge.cleanup = intel_i460_cleanup;
+	agp_bridge.tlb_flush = intel_i460_tlb_flush;
+	agp_bridge.mask_memory = intel_i460_mask_memory;
+	agp_bridge.unmask_memory = intel_i460_unmask_memory;
+	agp_bridge.agp_enable = agp_generic_agp_enable;
+	agp_bridge.cache_flush = global_cache_flush;
+	agp_bridge.create_gatt_table = intel_i460_create_gatt_table;
+	agp_bridge.free_gatt_table = intel_i460_free_gatt_table;
+	agp_bridge.insert_memory = intel_i460_insert_memory;
+	agp_bridge.remove_memory = intel_i460_remove_memory;
+	agp_bridge.alloc_by_type = agp_generic_alloc_by_type;
+	agp_bridge.free_by_type = agp_generic_free_by_type;
+	agp_bridge.agp_alloc_page = intel_i460_alloc_page;
+	agp_bridge.agp_destroy_page = intel_i460_destroy_page;
+#if 0
+	agp_bridge.suspend = ??;
+	agp_bridge.resume = ??;
+#endif
+	agp_bridge.cant_use_aperture = 1;
+
+	return 0;
+
+	(void) pdev; /* unused */
+}
+
+#endif /* CONFIG_AGP_I460 */
+
 #ifdef CONFIG_AGP_INTEL
 
 static int intel_fetch_size(void)
@@ -1579,6 +2291,7 @@
 	agp_bridge.cleanup = intel_cleanup;
 	agp_bridge.tlb_flush = intel_tlbflush;
 	agp_bridge.mask_memory = intel_mask_memory;
+	agp_bridge.unmask_memory = agp_generic_unmask_memory;
 	agp_bridge.agp_enable = agp_generic_agp_enable;
 	agp_bridge.cache_flush = global_cache_flush;
 	agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1612,6 +2325,7 @@
 	agp_bridge.cleanup = intel_cleanup;
 	agp_bridge.tlb_flush = intel_tlbflush;
 	agp_bridge.mask_memory = intel_mask_memory;
+	agp_bridge.unmask_memory = agp_generic_unmask_memory;
 	agp_bridge.agp_enable = agp_generic_agp_enable;
 	agp_bridge.cache_flush = global_cache_flush;
 	agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1645,6 +2359,7 @@
 	agp_bridge.cleanup = intel_cleanup;
 	agp_bridge.tlb_flush = intel_tlbflush;
 	agp_bridge.mask_memory = intel_mask_memory;
+	agp_bridge.unmask_memory = agp_generic_unmask_memory;
 	agp_bridge.agp_enable = agp_generic_agp_enable;
 	agp_bridge.cache_flush = global_cache_flush;
 	agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1765,6 +2480,7 @@
 	agp_bridge.cleanup = via_cleanup;
 	agp_bridge.tlb_flush = via_tlbflush;
 	agp_bridge.mask_memory = via_mask_memory;
+	agp_bridge.unmask_memory = agp_generic_unmask_memory;
 	agp_bridge.agp_enable = agp_generic_agp_enable;
 	agp_bridge.cache_flush = global_cache_flush;
 	agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1879,6 +2595,7 @@
 	agp_bridge.cleanup = sis_cleanup;
 	agp_bridge.tlb_flush = sis_tlbflush;
 	agp_bridge.mask_memory = sis_mask_memory;
+	agp_bridge.unmask_memory = agp_generic_unmask_memory;
 	agp_bridge.agp_enable = agp_generic_agp_enable;
 	agp_bridge.cache_flush = global_cache_flush;
 	agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -1901,8 +2618,8 @@
 #ifdef CONFIG_AGP_AMD
 
 typedef struct _amd_page_map {
-	unsigned long *real;
-	unsigned long *remapped;
+	u32 *real;
+	u32 *remapped;
 } amd_page_map;
 
 static struct _amd_irongate_private {
@@ -1915,7 +2632,7 @@
 {
 	int i;
 
-	page_map->real = (unsigned long *) __get_free_page(GFP_KERNEL);
+	page_map->real = (u32 *) __get_free_page(GFP_KERNEL);
 	if (page_map->real == NULL) {
 		return -ENOMEM;
 	}
@@ -2170,7 +2887,7 @@
 			       off_t pg_start, int type)
 {
 	int i, j, num_entries;
-	unsigned long *cur_gatt;
+	u32 *cur_gatt;
 	unsigned long addr;
 
 	num_entries = A_SIZE_LVL2(agp_bridge.current_size)->num_entries;
@@ -2210,7 +2927,7 @@
 			    int type)
 {
 	int i;
-	unsigned long *cur_gatt;
+	u32 *cur_gatt;
 	unsigned long addr;
 
 	if (type != 0 || mem->type != 0) {
@@ -2257,6 +2974,7 @@
 	agp_bridge.cleanup = amd_irongate_cleanup;
 	agp_bridge.tlb_flush = amd_irongate_tlbflush;
 	agp_bridge.mask_memory = amd_irongate_mask_memory;
+	agp_bridge.unmask_memory = agp_generic_unmask_memory;
 	agp_bridge.agp_enable = agp_generic_agp_enable;
 	agp_bridge.cache_flush = global_cache_flush;
 	agp_bridge.create_gatt_table = amd_create_gatt_table;
@@ -2505,6 +3223,7 @@
 	agp_bridge.cleanup = ali_cleanup;
 	agp_bridge.tlb_flush = ali_tlbflush;
 	agp_bridge.mask_memory = ali_mask_memory;
+	agp_bridge.unmask_memory = agp_generic_unmask_memory;
 	agp_bridge.agp_enable = agp_generic_agp_enable;
 	agp_bridge.cache_flush = ali_cache_flush;
 	agp_bridge.create_gatt_table = agp_generic_create_gatt_table;
@@ -3287,6 +4006,15 @@
 
 #endif /* CONFIG_AGP_INTEL */
 
+#ifdef CONFIG_AGP_I460
+	{ PCI_DEVICE_ID_INTEL_460GX,
+	  PCI_VENDOR_ID_INTEL,
+	  INTEL_460GX,
+	  "Intel",
+	  "460GX",
+	  intel_i460_setup },
+#endif
+
 #ifdef CONFIG_AGP_SIS
 	{ PCI_DEVICE_ID_SI_630,
 	  PCI_VENDOR_ID_SI,
@@ -3455,6 +4183,18 @@
 	return -ENODEV;
 }
 
+static int agp_check_supported_device(struct pci_dev *dev) {
+
+	int i;
+
+	for(i = 0; i < ARRAY_SIZE (agp_bridge_info); i++) {
+		if(dev->vendor == agp_bridge_info[i].vendor_id &&
+		   dev->device == agp_bridge_info[i].device_id)
+			return 1;
+	}
+
+	return 0;
+}
 
 /* Supported Device Scanning routine */
 
@@ -3464,8 +4204,14 @@
 	u8 cap_ptr = 0x00;
 	u32 cap_id, scratch;
 
-	if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, NULL)) == NULL)
-		return -ENODEV;
+	/*
+	 * Some systems have multiple host bridges (i.e. BigSur), so
+	 * we can't just use the first one we find.
+	 */
+	do {
+		if ((dev = pci_find_class(PCI_CLASS_BRIDGE_HOST << 8, dev)) == NULL)
+			return -ENODEV;
+	} while(!agp_check_supported_device(dev));
 
 	agp_bridge.dev = dev;
 
diff -urN linux-2.4.13/drivers/char/drm/Config.in linux-2.4.13-lia/drivers/char/drm/Config.in
--- linux-2.4.13/drivers/char/drm/Config.in	Wed Aug  8 09:42:10 2001
+++ linux-2.4.13-lia/drivers/char/drm/Config.in	Thu Oct  4 00:21:40 2001
@@ -5,12 +5,9 @@
 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
 #
 
-bool 'Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)' CONFIG_DRM
-if [ "$CONFIG_DRM" != "n" ]; then
-   tristate '  3dfx Banshee/Voodoo3+' CONFIG_DRM_TDFX
-   tristate '  3dlabs GMX 2000' CONFIG_DRM_GAMMA
-   tristate '  ATI Rage 128' CONFIG_DRM_R128
-   dep_tristate '  ATI Radeon' CONFIG_DRM_RADEON $CONFIG_AGP
-   dep_tristate '  Intel I810' CONFIG_DRM_I810 $CONFIG_AGP
-   dep_tristate '  Matrox g200/g400' CONFIG_DRM_MGA $CONFIG_AGP
-fi
+tristate '  3dfx Banshee/Voodoo3+' CONFIG_DRM_TDFX
+tristate '  3dlabs GMX 2000' CONFIG_DRM_GAMMA
+tristate '  ATI Rage 128' CONFIG_DRM_R128
+dep_tristate '  ATI Radeon' CONFIG_DRM_RADEON $CONFIG_AGP
+dep_tristate '  Intel I810' CONFIG_DRM_I810 $CONFIG_AGP
+dep_tristate '  Matrox g200/g400' CONFIG_DRM_MGA $CONFIG_AGP
diff -urN linux-2.4.13/drivers/char/drm/ati_pcigart.h linux-2.4.13-lia/drivers/char/drm/ati_pcigart.h
--- linux-2.4.13/drivers/char/drm/ati_pcigart.h	Mon Sep 24 15:06:57 2001
+++ linux-2.4.13-lia/drivers/char/drm/ati_pcigart.h	Thu Oct  4 00:21:40 2001
@@ -30,7 +30,10 @@
 #define __NO_VERSION__
 #include "drmP.h"
 
-#if PAGE_SIZE == 8192
+#if PAGE_SIZE == 16384
+# define ATI_PCIGART_TABLE_ORDER	1
+# define ATI_PCIGART_TABLE_PAGES	(1 << 1)
+#elif PAGE_SIZE == 8192
 # define ATI_PCIGART_TABLE_ORDER	2
 # define ATI_PCIGART_TABLE_PAGES	(1 << 2)
 #elif PAGE_SIZE == 4096
@@ -103,6 +106,7 @@
 		goto done;
 	}
 
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
 	if ( !dev->pdev ) {
 		DRM_ERROR( "PCI device unknown!\n" );
 		goto done;
@@ -117,6 +121,9 @@
 		address = 0;
 		goto done;
 	}
+#else
+	bus_address = virt_to_bus( (void *)address );
+#endif
 
 	pci_gart = (u32 *)address;
 
@@ -126,6 +133,7 @@
 	memset( pci_gart, 0, ATI_MAX_PCIGART_PAGES * sizeof(u32) );
 
 	for ( i = 0 ; i < pages ; i++ ) {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
 		/* we need to support large memory configurations */
 		entry->busaddr[i] = pci_map_single(dev->pdev,
 					page_address( entry->pagelist[i] ),
@@ -139,7 +147,9 @@
 			goto done;
 		}
 		page_base = (u32) entry->busaddr[i];
-
+#else
+		page_base = page_to_bus( entry->pagelist[i] );
+#endif
 		for (j = 0; j < (PAGE_SIZE / ATI_PCIGART_PAGE_SIZE); j++) {
 			*pci_gart++ = cpu_to_le32( page_base );
 			page_base += ATI_PCIGART_PAGE_SIZE;
@@ -164,6 +174,7 @@
 				unsigned long addr,
 				dma_addr_t bus_addr)
 {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
 	drm_sg_mem_t *entry = dev->sg;
 	unsigned long pages;
 	int i;
@@ -188,6 +199,8 @@
 					       PAGE_SIZE, PCI_DMA_TODEVICE);
 		}
 	}
+
+#endif
 
 	if ( addr ) {
 		DRM(ati_free_pcigart_table)( addr );
diff -urN linux-2.4.13/drivers/char/drm/drmP.h linux-2.4.13-lia/drivers/char/drm/drmP.h
--- linux-2.4.13/drivers/char/drm/drmP.h	Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drmP.h	Thu Oct  4 00:21:52 2001
@@ -366,13 +366,13 @@
 	if (len > DRM_PROC_LIMIT) { ret; *eof = 1; return len - offset; }
 
 /* Mapping helper macros */
-#define DRM_IOREMAP(map)						\
-	(map)->handle = DRM(ioremap)( (map)->offset, (map)->size )
+#define DRM_IOREMAP(map, dev)						\
+	(map)->handle = DRM(ioremap)( (map)->offset, (map)->size, (dev) )
 
-#define DRM_IOREMAPFREE(map)						\
+#define DRM_IOREMAPFREE(map, dev)					\
 	do {								\
 		if ( (map)->handle && (map)->size )			\
-			DRM(ioremapfree)( (map)->handle, (map)->size );	\
+			DRM(ioremapfree)( (map)->handle, (map)->size, (dev) ); \
 	} while (0)
 
 #define DRM_FIND_MAP(_map, _o)						\
@@ -826,8 +826,8 @@
 extern unsigned long DRM(alloc_pages)(int order, int area);
 extern void	     DRM(free_pages)(unsigned long address, int order,
 				     int area);
-extern void	     *DRM(ioremap)(unsigned long offset, unsigned long size);
-extern void	     DRM(ioremapfree)(void *pt, unsigned long size);
+extern void	     *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev);
+extern void	     DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev);
 
 #if __REALLY_HAVE_AGP
 extern agp_memory *DRM(alloc_agp)(int pages, u32 type);
diff -urN linux-2.4.13/drivers/char/drm/drm_agpsupport.h linux-2.4.13-lia/drivers/char/drm/drm_agpsupport.h
--- linux-2.4.13/drivers/char/drm/drm_agpsupport.h	Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_agpsupport.h	Thu Oct  4 00:21:40 2001
@@ -275,6 +275,7 @@
 	case INTEL_I815:	head->chipset = "Intel i815";	 break;
 	case INTEL_I840:	head->chipset = "Intel i840";	 break;
 	case INTEL_I850:	head->chipset = "Intel i850";	 break;
+	case INTEL_460GX:	head->chipset = "Intel 460GX";	 break;
 #endif
 
 	case VIA_GENERIC:	head->chipset = "VIA";		 break;
diff -urN linux-2.4.13/drivers/char/drm/drm_bufs.h linux-2.4.13-lia/drivers/char/drm/drm_bufs.h
--- linux-2.4.13/drivers/char/drm/drm_bufs.h	Fri Aug 10 18:14:41 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_bufs.h	Thu Oct  4 00:21:40 2001
@@ -107,7 +107,7 @@
 	switch ( map->type ) {
 	case _DRM_REGISTERS:
 	case _DRM_FRAME_BUFFER:
-#if !defined(__sparc__) && !defined(__alpha__)
+#if !defined(__sparc__) && !defined(__alpha__) && !defined(__ia64__)
 		if ( map->offset + map->size < map->offset ||
 		     map->offset < virt_to_phys(high_memory) ) {
 			DRM(free)( map, sizeof(*map), DRM_MEM_MAPS );
@@ -124,7 +124,7 @@
 					     MTRR_TYPE_WRCOMB, 1 );
 		}
 #endif
-		map->handle = DRM(ioremap)( map->offset, map->size );
+		map->handle = DRM(ioremap)( map->offset, map->size, dev );
 		break;
 
 	case _DRM_SHM:
@@ -249,7 +249,7 @@
 			DRM_DEBUG("mtrr_del = %d\n", retcode);
 		}
 #endif
-		DRM(ioremapfree)(map->handle, map->size);
+		DRM(ioremapfree)(map->handle, map->size, dev);
 		break;
 	case _DRM_SHM:
 		vfree(map->handle);
diff -urN linux-2.4.13/drivers/char/drm/drm_drv.h linux-2.4.13-lia/drivers/char/drm/drm_drv.h
--- linux-2.4.13/drivers/char/drm/drm_drv.h	Wed Oct 24 10:17:46 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_drv.h	Wed Oct 24 10:21:09 2001
@@ -439,7 +439,7 @@
 				DRM_DEBUG( "mtrr_del=%d\n", retcode );
 			}
 #endif
-			DRM(ioremapfree)( map->handle, map->size );
+			DRM(ioremapfree)( map->handle, map->size, dev );
 			break;
 		case _DRM_SHM:
 			vfree(map->handle);
diff -urN linux-2.4.13/drivers/char/drm/drm_memory.h linux-2.4.13-lia/drivers/char/drm/drm_memory.h
--- linux-2.4.13/drivers/char/drm/drm_memory.h	Fri Aug 10 18:14:41 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_memory.h	Thu Oct  4 00:21:40 2001
@@ -306,9 +306,14 @@
 	}
 }
 
-void *DRM(ioremap)(unsigned long offset, unsigned long size)
+void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev)
 {
 	void *pt;
+#if __REALLY_HAVE_AGP
+	drm_map_t *map = NULL;
+	drm_map_list_t *r_list;
+	struct list_head *list;
+#endif
 
 	if (!size) {
 		DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
@@ -316,12 +321,51 @@
 		return NULL;
 	}
 
+#if __REALLY_HAVE_AGP
+	if(dev->agp->cant_use_aperture == 0)
+		goto standard_ioremap;
+
+	list_for_each(list, &dev->maplist->head) {
+		r_list = (drm_map_list_t *)list;
+		map = r_list->map;
+		if (!map) continue;
+		if (map->offset <= offset &&
+		    (map->offset + map->size) >= (offset + size))
+			break;
+	}
+
+	if(map && map->type == _DRM_AGP) {
+		struct drm_agp_mem *agpmem;
+
+		for(agpmem = dev->agp->memory; agpmem;
+		    agpmem = agpmem->next) {
+			if(agpmem->bound <= offset &&
+			   (agpmem->bound + (agpmem->pages
+				<< PAGE_SHIFT)) >= (offset + size))
+				break;
+		}
+
+		if(agpmem == NULL)
+			goto ioremap_failure;
+
+		pt = agpmem->memory->vmptr + (offset - agpmem->bound);
+		goto ioremap_success;
+	}
+
+standard_ioremap:
+#endif
 	if (!(pt = ioremap(offset, size))) {
+#if __REALLY_HAVE_AGP
+ioremap_failure:
+#endif
 		spin_lock(&DRM(mem_lock));
 		++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count;
 		spin_unlock(&DRM(mem_lock));
 		return NULL;
 	}
+#if __REALLY_HAVE_AGP
+ioremap_success:
+#endif
 	spin_lock(&DRM(mem_lock));
 	++DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count;
 	DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_allocated += size;
@@ -329,7 +373,7 @@
 	return pt;
 }
 
-void DRM(ioremapfree)(void *pt, unsigned long size)
+void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev)
 {
 	int alloc_count;
 	int free_count;
@@ -337,7 +381,11 @@
 	if (!pt)
 		DRM_MEM_ERROR(DRM_MEM_MAPPINGS,
 			      "Attempt to free NULL pointer\n");
+#if __REALLY_HAVE_AGP
+	else if(dev->agp->cant_use_aperture == 0)
+#else
 	else
+#endif
 		iounmap(pt);
 
 	spin_lock(&DRM(mem_lock));
diff -urN linux-2.4.13/drivers/char/drm/drm_scatter.h linux-2.4.13-lia/drivers/char/drm/drm_scatter.h
--- linux-2.4.13/drivers/char/drm/drm_scatter.h	Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_scatter.h	Thu Oct  4 00:21:40 2001
@@ -47,9 +47,11 @@
 
 	vfree( entry->virtual );
 
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
 	DRM(free)( entry->busaddr,
 		   entry->pages * sizeof(*entry->busaddr),
 		   DRM_MEM_PAGES );
+#endif
 	DRM(free)( entry->pagelist,
 		   entry->pages * sizeof(*entry->pagelist),
 		   DRM_MEM_PAGES );
@@ -97,6 +99,7 @@
 		return -ENOMEM;
 	}
 
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
 	entry->busaddr = DRM(alloc)( pages * sizeof(*entry->busaddr),
 				     DRM_MEM_PAGES );
 	if ( !entry->busaddr ) {
@@ -109,12 +112,15 @@
 		return -ENOMEM;
 	}
 	memset( (void *)entry->busaddr, 0, pages * sizeof(*entry->busaddr) );
+#endif
 
 	entry->virtual = vmalloc_32( pages << PAGE_SHIFT );
 	if ( !entry->virtual ) {
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
 		DRM(free)( entry->busaddr,
 			   entry->pages * sizeof(*entry->busaddr),
 			   DRM_MEM_PAGES );
+#endif
 		DRM(free)( entry->pagelist,
 			   entry->pages * sizeof(*entry->pagelist),
 			   DRM_MEM_PAGES );
diff -urN linux-2.4.13/drivers/char/drm/drm_vm.h linux-2.4.13-lia/drivers/char/drm/drm_vm.h
--- linux-2.4.13/drivers/char/drm/drm_vm.h	Wed Oct 24 10:17:48 2001
+++ linux-2.4.13-lia/drivers/char/drm/drm_vm.h	Wed Oct 24 10:21:09 2001
@@ -89,7 +89,7 @@
 
 	if (map && map->type == _DRM_AGP) {
 		unsigned long offset = address - vma->vm_start;
-		unsigned long baddr = VM_OFFSET(vma) + offset;
+		unsigned long baddr = VM_OFFSET(vma) + offset, paddr;
 		struct drm_agp_mem *agpmem;
 		struct page *page;
 
@@ -115,8 +115,19 @@
 		 * Get the page, inc the use count, and return it
 		 */
 		offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
-		agpmem->memory->memory[offset] &= dev->agp->page_mask;
-		page = virt_to_page(__va(agpmem->memory->memory[offset]));
+
+		/*
+		 * This is bad. What we really want to do here is unmask
+		 * the GART table entry held in the agp_memory structure.
+		 * There isn't a convenient way to call agp_bridge.unmask_
+		 * memory from here, so hard code it for now.
+		 */
+#if defined(__ia64__)
+		paddr = (agpmem->memory->memory[offset] & 0xffffff) << 12;
+#else
+		paddr = agpmem->memory->memory[offset] & dev->agp->page_mask;
+#endif
+		page = virt_to_page(__va(paddr));
 		get_page(page);
 
 		DRM_DEBUG("baddr = 0x%lx page = 0x%p, offset = 0x%lx\n",
@@ -255,7 +266,7 @@
 			DRM_DEBUG("mtrr_del = %d\n", retcode);
 		}
 #endif
-		DRM(ioremapfree)(map->handle, map->size);
+		DRM(ioremapfree)(map->handle, map->size, dev);
 		break;
 	case _DRM_SHM:
 		vfree(map->handle);
@@ -502,15 +513,21 @@
 
 	switch (map->type) {
 	case _DRM_AGP:
-#if defined(__alpha__)
-		/*
-		 * On Alpha we can't talk to bus dma address from the
-		 * CPU, so for memory of type DRM_AGP, we'll deal with
-		 * sorting out the real physical pages and mappings
-		 * in nopage()
-		 */
-		vma->vm_ops = &DRM(vm_ops);
-		break;
+#if __REALLY_HAVE_AGP
+		if(dev->agp->cant_use_aperture == 1) {
+			/*
+			 * On some systems we can't talk to bus dma address from
+			 * the CPU, so for memory of type DRM_AGP, we'll deal
+			 * with sorting out the real physical pages and mappings
+			 * in nopage()
+			 */
+			vma->vm_ops = &DRM(vm_ops);
+#if defined(__ia64__)
+			vma->vm_page_prot =
+				pgprot_writecombine(vma->vm_page_prot);
+#endif
+			goto mapswitch_out;
+		}
 #endif
 		/* fall through to _DRM_FRAME_BUFFER... */
 
 	case _DRM_FRAME_BUFFER:
@@ -522,8 +539,7 @@
 			pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
 		}
 #elif defined(__ia64__)
-		if (map->type != _DRM_AGP)
-			vma->vm_page_prot =
+		vma->vm_page_prot =
 			pgprot_writecombine(vma->vm_page_prot);
 #elif defined(__powerpc__)
 		pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE | _PAGE_GUARDED;
@@ -572,6 +588,9 @@
 	default:
 		return -EINVAL;	/* This should never happen. */
 	}
+#if __REALLY_HAVE_AGP
+mapswitch_out:
+#endif
 	vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
 
 #if LINUX_VERSION_CODE < 0x020203 /* KERNEL_VERSION(2,2,3) */
diff -urN linux-2.4.13/drivers/char/drm/i810_dma.c linux-2.4.13-lia/drivers/char/drm/i810_dma.c
--- linux-2.4.13/drivers/char/drm/i810_dma.c	Wed Aug  8 09:42:15 2001
+++ linux-2.4.13-lia/drivers/char/drm/i810_dma.c	Thu Oct  4 00:21:40 2001
@@ -315,7 +315,7 @@
 
 	if(dev_priv->ring.virtual_start) {
 		DRM(ioremapfree)((void *) dev_priv->ring.virtual_start,
-				 dev_priv->ring.Size);
+				 dev_priv->ring.Size, dev);
 	}
 	if(dev_priv->hw_status_page != 0UL) {
 		i810_free_page(dev, dev_priv->hw_status_page);
@@ -329,7 +329,8 @@
 		for (i = 0; i < dma->buf_count; i++) {
 			drm_buf_t *buf = dma->buflist[ i ];
 			drm_i810_buf_priv_t *buf_priv = buf->dev_private;
-			DRM(ioremapfree)(buf_priv->kernel_virtual, buf->total);
+			DRM(ioremapfree)(buf_priv->kernel_virtual,
+					 buf->total, dev);
 		}
 	}
 	return 0;
@@ -402,7 +403,7 @@
 		*buf_priv->in_use = I810_BUF_FREE;
 
 		buf_priv->kernel_virtual = DRM(ioremap)(buf->bus_address,
-							buf->total);
+							buf->total, dev);
 	}
 	return 0;
 }
@@ -458,7 +459,7 @@
 
 	dev_priv->ring.virtual_start = DRM(ioremap)(dev->agp->base +
						    init->ring_start,
-						    init->ring_size);
+						    init->ring_size, dev);
 
 	if (dev_priv->ring.virtual_start == NULL) {
 		dev->dev_private = (void *) dev_priv;
diff -urN linux-2.4.13/drivers/char/drm/mga_dma.c linux-2.4.13-lia/drivers/char/drm/mga_dma.c
--- linux-2.4.13/drivers/char/drm/mga_dma.c	Wed Aug  8 09:42:15 2001
+++ linux-2.4.13-lia/drivers/char/drm/mga_dma.c	Thu Oct  4 00:21:40 2001
@@ -557,9 +557,9 @@
 		(drm_mga_sarea_t *)((u8 *)dev_priv->sarea->handle +
 				    init->sarea_priv_offset);
 
-	DRM_IOREMAP( dev_priv->warp );
-	DRM_IOREMAP( dev_priv->primary );
-	DRM_IOREMAP( dev_priv->buffers );
+	DRM_IOREMAP( dev_priv->warp, dev );
+	DRM_IOREMAP( dev_priv->primary, dev );
+	DRM_IOREMAP( dev_priv->buffers, dev );
 
 	if(!dev_priv->warp->handle ||
 	   !dev_priv->primary->handle ||
@@ -647,9 +647,9 @@
 	if ( dev->dev_private ) {
 		drm_mga_private_t *dev_priv = dev->dev_private;
 
-		DRM_IOREMAPFREE( dev_priv->warp );
-		DRM_IOREMAPFREE( dev_priv->primary );
-		DRM_IOREMAPFREE( dev_priv->buffers );
+		DRM_IOREMAPFREE( dev_priv->warp, dev );
+		DRM_IOREMAPFREE( dev_priv->primary, dev );
+		DRM_IOREMAPFREE( dev_priv->buffers, dev );
 
 		if ( dev_priv->head != NULL ) {
 			mga_freelist_cleanup( dev );
diff -urN linux-2.4.13/drivers/char/drm/r128_cce.c linux-2.4.13-lia/drivers/char/drm/r128_cce.c
--- linux-2.4.13/drivers/char/drm/r128_cce.c	Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/r128_cce.c	Thu Oct  4 00:21:52 2001
@@ -216,7 +216,22 @@
 	int i;
 
 	for ( i = 0 ; i < dev_priv->usec_timeout ; i++ ) {
+#ifndef CONFIG_AGP_I460
 		if ( GET_RING_HEAD( &dev_priv->ring ) == dev_priv->ring.tail ) {
+#else
+		/*
+		 * XXX - this is (I think) a 460GX specific hack
+		 *
+		 * When doing texturing, ring.tail sometimes gets ahead of
+		 * PM4_BUFFER_DL_WPTR by 2; consequently, the card processes
+		 * its whole quota of instructions and *ring.head is still 2
+		 * short of ring.tail. Work around this for now in lieu of
+		 * a better solution.
+		 */
+		if ( GET_RING_HEAD( &dev_priv->ring ) == dev_priv->ring.tail ||
+		     ( dev_priv->ring.tail -
+		       GET_RING_HEAD( &dev_priv->ring ) ) == 2 ) {
+#endif
 			int pm4stat = R128_READ( R128_PM4_STAT );
 			if ( ( (pm4stat & R128_PM4_FIFOCNT_MASK) >
 			       dev_priv->cce_fifo_size ) &&
@@ -341,8 +356,27 @@
 	SET_RING_HEAD( &dev_priv->ring, 0 );
 
 	if ( !dev_priv->is_pci ) {
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+		/*
+		 * XXX - This is a 460GX specific hack
+		 *
+		 * We have to hack this right now. 460GX isn't claiming PCI
+		 * writes from the card into the AGP aperture. Because of this,
+		 * we have to get space outside of the aperture for RPTR_ADDR.
+		 */
+		if( dev->agp->agp_info.chipset == INTEL_460GX ) {
+			unsigned long alt_rh_off;
+
+			alt_rh_off = __get_free_page(GFP_KERNEL | GFP_DMA);
+			atomic_inc(&virt_to_page(alt_rh_off)->count);
+			set_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+
+			dev_priv->ring.head = (__volatile__ u32 *) alt_rh_off;
+			SET_RING_HEAD( &dev_priv->ring, 0 );
+		}
+#endif
 		R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
-			    dev_priv->ring_rptr->offset );
+			    __pa( dev_priv->ring.head ) );
 	} else {
 		drm_sg_mem_t *entry = dev->sg;
 		unsigned long tmp_ofs, page_ofs;
@@ -350,11 +384,20 @@
 		tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
 		page_ofs = tmp_ofs >> PAGE_SHIFT;
 
+#if defined(__alpha__) && (LINUX_VERSION_CODE >= 0x020400)
 		R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
 			    entry->busaddr[page_ofs]);
 		DRM_DEBUG( "ring rptr: offset=0x%08x handle=0x%08lx\n",
 			   entry->busaddr[page_ofs],
 			   entry->handle + tmp_ofs );
+#else
+		R128_WRITE( R128_PM4_BUFFER_DL_RPTR_ADDR,
+			    page_to_bus(entry->pagelist[page_ofs]));
+
+		DRM_DEBUG( "ring rptr: offset=0x%08lx handle=0x%08lx\n",
+			   page_to_bus(entry->pagelist[page_ofs]),
+			   entry->handle + tmp_ofs );
+#endif
 	}
 
 	/* Set watermark control */
@@ -550,9 +593,9 @@
 			   init->sarea_priv_offset);
 
 	if ( !dev_priv->is_pci ) {
-		DRM_IOREMAP( dev_priv->cce_ring );
-		DRM_IOREMAP( dev_priv->ring_rptr );
-		DRM_IOREMAP( dev_priv->buffers );
+		DRM_IOREMAP( dev_priv->cce_ring, dev );
+		DRM_IOREMAP( dev_priv->ring_rptr, dev );
+		DRM_IOREMAP( dev_priv->buffers, dev );
 		if(!dev_priv->cce_ring->handle ||
 		   !dev_priv->ring_rptr->handle ||
 		   !dev_priv->buffers->handle) {
@@ -624,9 +667,9 @@
 		drm_r128_private_t *dev_priv = dev->dev_private;
 
 		if ( !dev_priv->is_pci ) {
-			DRM_IOREMAPFREE( dev_priv->cce_ring );
-			DRM_IOREMAPFREE( dev_priv->ring_rptr );
-			DRM_IOREMAPFREE( dev_priv->buffers );
+			DRM_IOREMAPFREE( dev_priv->cce_ring, dev );
+			DRM_IOREMAPFREE( dev_priv->ring_rptr, dev );
+			DRM_IOREMAPFREE( dev_priv->buffers, dev );
 		} else {
 			if (!DRM(ati_pcigart_cleanup)( dev,
 						       dev_priv->phys_pci_gart,
@@ -634,6 +677,21 @@
 				DRM_ERROR( "failed to cleanup PCI GART!\n" );
 		}
 
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+		/*
+		 * Free the page we grabbed for RPTR_ADDR
+		 */
+		if( !dev_priv->is_pci && dev->agp->agp_info.chipset == INTEL_460GX ) {
+			unsigned long alt_rh_off =
+				(unsigned long) dev_priv->ring.head;
+
+			atomic_dec(&virt_to_page(alt_rh_off)->count);
+			clear_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+			wake_up(&virt_to_page(alt_rh_off)->wait);
+			free_page(alt_rh_off);
+		}
+#endif
+
 		DRM(free)( dev->dev_private, sizeof(drm_r128_private_t),
 			   DRM_MEM_DRIVER );
 		dev->dev_private = NULL;
diff -urN linux-2.4.13/drivers/char/drm/radeon_cp.c linux-2.4.13-lia/drivers/char/drm/radeon_cp.c
--- linux-2.4.13/drivers/char/drm/radeon_cp.c	Mon Sep 24 15:06:58 2001
+++ linux-2.4.13-lia/drivers/char/drm/radeon_cp.c	Thu Oct  4 00:21:52 2001
@@ -612,8 +612,27 @@
 	dev_priv->ring.tail = cur_read_ptr;
 
 	if ( !dev_priv->is_pci ) {
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+		/*
+		 * XXX - This is a 460GX specific hack
+		 *
+		 * We have to hack this right now. 460GX isn't claiming PCI
+		 * writes from the card into the AGP aperture. Because of this,
+		 * we have to get space outside of the aperture for RPTR_ADDR.
+ */ + if( dev->agp->agp_info.chipset =3D INTEL_460GX ) { + unsigned long alt_rh_off; + + alt_rh_off =3D __get_free_page(GFP_KERNEL | GFP_DMA); + atomic_inc(&virt_to_page(alt_rh_off)->count); + set_bit(PG_locked, &virt_to_page(alt_rh_off)->flags); + + dev_priv->ring.head =3D (__volatile__ u32 *) alt_rh_off; + *dev_priv->ring.head =3D cur_read_ptr; + } +#endif RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR, - dev_priv->ring_rptr->offset ); + __pa( dev_priv->ring.head ) ); } else { drm_sg_mem_t *entry =3D dev->sg; unsigned long tmp_ofs, page_ofs; @@ -621,11 +640,19 @@ tmp_ofs =3D dev_priv->ring_rptr->offset - dev->sg->handle; page_ofs =3D tmp_ofs >> PAGE_SHIFT; =20 +#if defined(__alpha__) && (LINUX_VERSION_CODE >=3D 0x020400) + RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR, + entry->busaddr[page_ofs]); + DRM_DEBUG( "ring rptr: offset=3D0x%08x handle=3D0x%08lx\n", + entry->busaddr[page_ofs], + entry->handle + tmp_ofs ); +#else RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR, entry->busaddr[page_ofs]); DRM_DEBUG( "ring rptr: offset=3D0x%08x handle=3D0x%08lx\n", entry->busaddr[page_ofs], entry->handle + tmp_ofs ); +#endif } =20 /* Set ring buffer size */ @@ -836,9 +863,9 @@ init->sarea_priv_offset); =20 if ( !dev_priv->is_pci ) { - DRM_IOREMAP( dev_priv->cp_ring ); - DRM_IOREMAP( dev_priv->ring_rptr ); - DRM_IOREMAP( dev_priv->buffers ); + DRM_IOREMAP( dev_priv->cp_ring, dev ); + DRM_IOREMAP( dev_priv->ring_rptr, dev ); + DRM_IOREMAP( dev_priv->buffers, dev ); if(!dev_priv->cp_ring->handle || !dev_priv->ring_rptr->handle || !dev_priv->buffers->handle) { @@ -983,9 +1010,9 @@ drm_radeon_private_t *dev_priv =3D dev->dev_private; =20 if ( !dev_priv->is_pci ) { - DRM_IOREMAPFREE( dev_priv->cp_ring ); - DRM_IOREMAPFREE( dev_priv->ring_rptr ); - DRM_IOREMAPFREE( dev_priv->buffers ); + DRM_IOREMAPFREE( dev_priv->cp_ring, dev ); + DRM_IOREMAPFREE( dev_priv->ring_rptr, dev ); + DRM_IOREMAPFREE( dev_priv->buffers, dev ); } else { if (!DRM(ati_pcigart_cleanup)( dev, dev_priv->phys_pci_gart, @@ -993,6 +1020,21 
@@
 			DRM_ERROR( "failed to cleanup PCI GART!\n" );
 	}
 
+#if defined(CONFIG_AGP_I460) && defined(__ia64__)
+	/*
+	 * Free the page we grabbed for RPTR_ADDR
+	 */
+	if( !dev_priv->is_pci && dev->agp->agp_info.chipset == INTEL_460GX ) {
+		unsigned long alt_rh_off =
+			(unsigned long) dev_priv->ring.head;
+
+		atomic_dec(&virt_to_page(alt_rh_off)->count);
+		clear_bit(PG_locked, &virt_to_page(alt_rh_off)->flags);
+		wake_up(&virt_to_page(alt_rh_off)->wait);
+		free_page(alt_rh_off);
+	}
+#endif
+	
 	DRM(free)( dev->dev_private, sizeof(drm_radeon_private_t),
 		   DRM_MEM_DRIVER );
 	dev->dev_private = NULL;
diff -urN linux-2.4.13/drivers/char/drm-4.0/Config.in linux-2.4.13-lia/drivers/char/drm-4.0/Config.in
--- linux-2.4.13/drivers/char/drm-4.0/Config.in	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/Config.in	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,13 @@
+#
+# drm device configuration
+#
+# This driver provides support for the
+# Direct Rendering Infrastructure (DRI) in XFree86 4.x.
+#
+
+tristate	' 3dfx Banshee/Voodoo3+'	CONFIG_DRM40_TDFX
+tristate	' 3dlabs GMX 2000'	CONFIG_DRM40_GAMMA
+dep_tristate	' ATI Rage 128'	CONFIG_DRM40_R128 $CONFIG_AGP
+dep_tristate	' ATI Radeon'	CONFIG_DRM40_RADEON $CONFIG_AGP
+dep_tristate	' Intel I810'	CONFIG_DRM40_I810 $CONFIG_AGP
+dep_tristate	' Matrox g200/g400'	CONFIG_DRM40_MGA $CONFIG_AGP
diff -urN linux-2.4.13/drivers/char/drm-4.0/Makefile linux-2.4.13-lia/drivers/char/drm-4.0/Makefile
--- linux-2.4.13/drivers/char/drm-4.0/Makefile	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/Makefile	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,104 @@
+#
+# Makefile for the drm device driver.  This driver provides support for
+# the Direct Rendering Infrastructure (DRI) in XFree86 4.x.
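[The 460GX RPTR_ADDR workaround in the radeon_cp.c hunks above allocates a DMA-able page outside the AGP aperture, pins it, points the card at its physical address, and later unpins and frees it. A minimal user-space sketch of that lifecycle, with posix_memalign standing in for __get_free_page and a plain counter standing in for the page->count / PG_locked pinning; all names here are illustrative, not the 2.4 kernel API:]

```c
#define _POSIX_C_SOURCE 200112L
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Models the alternate read-pointer page: in the patch, __get_free_page()
 * allocates it, atomic_inc(&page->count) and PG_locked pin it, and
 * __pa() yields the address programmed into RADEON_CP_RB_RPTR_ADDR. */
struct rptr_page {
	volatile uint32_t *head;	/* card-visible read pointer */
	void *raw;
	int pinned;			/* stand-in for page->count/PG_locked */
};

static int rptr_alloc(struct rptr_page *p, uint32_t cur_read_ptr)
{
	long pagesz = sysconf(_SC_PAGESIZE);

	if (posix_memalign(&p->raw, (size_t)pagesz, (size_t)pagesz) != 0)
		return -1;
	p->head = (volatile uint32_t *)p->raw;
	p->pinned = 1;			/* kernel: atomic_inc + set_bit(PG_locked) */
	*p->head = cur_read_ptr;	/* seed with the current read pointer */
	return 0;
}

static void rptr_free(struct rptr_page *p)
{
	p->pinned = 0;			/* kernel: atomic_dec + clear_bit + wake_up */
	free(p->raw);
	p->raw = NULL;
}
```

In the real driver the card then DMAs its read pointer into *head, and the driver polls that page instead of the in-aperture ring_rptr mapping.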
+#
+
+O_TARGET	:= drm.o
+
+export-objs	:= gamma_drv.o tdfx_drv.o r128_drv.o ffb_drv.o mga_drv.o \
+		   i810_drv.o
+
+# lib-objs are included in every module so that radical changes to the
+# architecture of the DRM support library can be made at a later time.
+#
+# The downside is that each module is larger, and a system that uses
+# more than one module (i.e., a dual-head system) will use more memory
+# (but a system that uses exactly one module will use the same amount of
+# memory).
+#
+# The upside is that if the DRM support library ever becomes insufficient
+# for new families of cards, a new library can be implemented for those new
+# cards without impacting the drivers for the old cards.  This is significant,
+# because testing architectural changes to old cards may be impossible, and
+# may delay the implementation of a better architecture.  We've traded slight
+# memory waste (in the dual-head case) for greatly improved long-term
+# maintainability.
+#
+# NOTE: lib-objs will be eliminated in future versions, thereby
+# eliminating the need to compile the .o files into every module, but
+# for now we still need them.
+#
+
+lib-objs	:= init.o memory.o proc.o auth.o context.o drawable.o bufs.o
+lib-objs	+= lists.o lock.o ioctl.o fops.o vm.o dma.o ctxbitmap.o
+
+ifeq ($(CONFIG_AGP),y)
+  lib-objs	+= agpsupport.o
+else
+  ifeq ($(CONFIG_AGP),m)
+    lib-objs	+= agpsupport.o
+  endif
+endif
+
+list-multi	:= gamma.o tdfx.o r128.o ffb.o mga.o i810.o
+gamma-objs	:= gamma_drv.o gamma_dma.o
+tdfx-objs	:= tdfx_drv.o tdfx_context.o
+r128-objs	:= r128_drv.o r128_cce.o r128_context.o r128_bufs.o r128_state.o
+ffb-objs	:= ffb_drv.o ffb_context.o
+mga-objs	:= mga_drv.o mga_dma.o mga_context.o mga_bufs.o mga_state.o
+i810-objs	:= i810_drv.o i810_dma.o i810_context.o i810_bufs.o
+radeon-objs	:= radeon_drv.o radeon_cp.o radeon_context.o radeon_bufs.o radeon_state.o
+
+obj-$(CONFIG_DRM40_GAMMA)	+= gamma.o
+obj-$(CONFIG_DRM40_TDFX)	+= tdfx.o
+obj-$(CONFIG_DRM40_R128)	+= r128.o
+obj-$(CONFIG_DRM40_RADEON)	+= radeon.o
+obj-$(CONFIG_DRM40_FFB)	+= ffb.o
+obj-$(CONFIG_DRM40_MGA)	+= mga.o
+obj-$(CONFIG_DRM40_I810)	+= i810.o
+
+
+# When linking into the kernel, link the library just once.
+# If making modules, we include the library into each module
+
+lib-objs-mod	:= $(patsubst %.o,%-mod.o,$(lib-objs))
+
+ifdef MAKING_MODULES
+  lib	= drmlib-mod.a
+else
+  obj-y	+= drmlib.a
+endif
+
+include $(TOPDIR)/Rules.make
+
+$(patsubst %.o,%.c,$(lib-objs-mod)):
+	@ln -sf $(subst -mod,,$@) $@
+
+drmlib-mod.a: $(lib-objs-mod)
+	rm -f $@
+	$(AR) $(EXTRA_ARFLAGS) rcs $@ $(lib-objs-mod)
+
+drmlib.a: $(lib-objs)
+	rm -f $@
+	$(AR) $(EXTRA_ARFLAGS) rcs $@ $(lib-objs)
+
+gamma.o: $(gamma-objs) $(lib)
+	$(LD) -r -o $@ $(gamma-objs) $(lib)
+
+tdfx.o: $(tdfx-objs) $(lib)
+	$(LD) -r -o $@ $(tdfx-objs) $(lib)
+
+mga.o: $(mga-objs) $(lib)
+	$(LD) -r -o $@ $(mga-objs) $(lib)
+
+i810.o: $(i810-objs) $(lib)
+	$(LD) -r -o $@ $(i810-objs) $(lib)
+
+r128.o: $(r128-objs) $(lib)
+	$(LD) -r -o $@ $(r128-objs) $(lib)
+
+radeon.o: $(radeon-objs) $(lib)
+	$(LD) -r -o $@ $(radeon-objs) $(lib)
+
+ffb.o: $(ffb-objs) $(lib)
+	$(LD) -r -o $@ $(ffb-objs) $(lib)
diff -urN linux-2.4.13/drivers/char/drm-4.0/README.drm linux-2.4.13-lia/drivers/char/drm-4.0/README.drm
--- linux-2.4.13/drivers/char/drm-4.0/README.drm	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/README.drm	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,46 @@
+************************************************************
+* For the very latest on DRI development, please see:      *
+* http://dri.sourceforge.net/                              *
+************************************************************
+
+The Direct Rendering Manager (drm) is a device-independent kernel-level
+device driver that provides support for the XFree86 Direct Rendering
+Infrastructure (DRI).
+
+The DRM supports the Direct Rendering Infrastructure (DRI) in four major
+ways:
+
+    1. The DRM provides synchronized access to the graphics hardware via
+       the use of an optimized two-tiered lock.
+
+    2. The DRM enforces the DRI security policy for access to the graphics
+       hardware by only allowing authenticated X11 clients access to
+       restricted regions of memory.
+
+    3. The DRM provides a generic DMA engine, complete with multiple
+       queues and the ability to detect the need for an OpenGL context
+       switch.
+
+    4. The DRM is extensible via the use of small device-specific modules
+       that rely extensively on the API exported by the DRM module.
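[The "optimized two-tiered lock" of item 1 above is, in the shipped DRM, a compare-and-swap on a lock word in shared memory, with a fall-back to a kernel ioctl that sleeps on contention. A hedged single-process sketch of the fast path only; the LOCK_HELD bit and the names are illustrative, loosely modeled on the DRM headers rather than taken from this README:]

```c
#include <stdatomic.h>
#include <stdint.h>

#define LOCK_HELD 0x80000000u	/* high bit marks the lock as held (illustrative) */

/* Fast path: CAS our context id into the shared lock word.  On failure
 * a real client would fall back to the DRM lock ioctl and sleep in the
 * kernel until the current holder releases the lock. */
static int try_lock(_Atomic uint32_t *lw, uint32_t context)
{
	uint32_t expected = 0;	/* only an unheld lock can be taken here */

	return atomic_compare_exchange_strong(lw, &expected,
					      LOCK_HELD | context);
}

static void unlock(_Atomic uint32_t *lw)
{
	atomic_store(lw, 0);
}
```

The point of the two tiers is that the uncontended case never enters the kernel; only contention pays the syscall cost.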
+ + +Documentation on the DRI is available from: + http://precisioninsight.com/piinsights.html + +For specific information about kernel-level support, see: + + The Direct Rendering Manager, Kernel Support for the Direct Rendering + Infrastructure + http://precisioninsight.com/dr/drm.html + + Hardware Locking for the Direct Rendering Infrastructure + http://precisioninsight.com/dr/locking.html + + A Security Analysis of the Direct Rendering Infrastructure + http://precisioninsight.com/dr/security.html + +************************************************************ +* For the very latest on DRI development, please see: * +* http://dri.sourceforge.net/ * +************************************************************ diff -urN linux-2.4.13/drivers/char/drm-4.0/agpsupport.c linux-2.4.13-lia/d= rivers/char/drm-4.0/agpsupport.c --- linux-2.4.13/drivers/char/drm-4.0/agpsupport.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/agpsupport.c Thu Oct 4 00:21:40 = 2001 @@ -0,0 +1,349 @@ +/* agpsupport.c -- DRM support for AGP/GART backend -*- linux-c -*- + * Created: Mon Dec 13 09:56:45 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. 
+ *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Author: Rickard E. (Rik) Faith + * + */ + +#define __NO_VERSION__ +#include "drmP.h" +#include +#include +#if LINUX_VERSION_CODE < 0x020400 +#include "agpsupport-pre24.h" +#else +#define DRM_AGP_GET (drm_agp_t *)inter_module_get("drm_agp") +#define DRM_AGP_PUT inter_module_put("drm_agp") +#endif + +static const drm_agp_t *drm_agp =3D NULL; + +int drm_agp_info(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + agp_kern_info *kern; + drm_agp_info_t info; + + if (!dev->agp->acquired || !drm_agp->copy_info) return -EINVAL; + + kern =3D &dev->agp->agp_info; + info.agp_version_major =3D kern->version.major; + info.agp_version_minor =3D kern->version.minor; + info.mode =3D kern->mode; + info.aperture_base =3D kern->aper_base; + info.aperture_size =3D kern->aper_size * 1024 * 1024; + info.memory_allowed =3D kern->max_memory << PAGE_SHIFT; + info.memory_used =3D kern->current_memory << PAGE_SHIFT; + info.id_vendor =3D kern->device->vendor; + info.id_device =3D kern->device->device; + + if (copy_to_user((drm_agp_info_t *)arg, &info, sizeof(info))) + return -EFAULT; + return 0; +} + +int drm_agp_acquire(struct inode *inode, struct file *filp, unsigned int c= md, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + int retcode; + + if (dev->agp->acquired || !drm_agp->acquire) return -EINVAL; + if ((retcode =3D 
drm_agp->acquire())) return retcode; + dev->agp->acquired =3D 1; + return 0; +} + +int drm_agp_release(struct inode *inode, struct file *filp, unsigned int c= md, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + + if (!dev->agp->acquired || !drm_agp->release) return -EINVAL; + drm_agp->release(); + dev->agp->acquired =3D 0; + return 0; +=09 +} + +void _drm_agp_release(void) +{ + if (drm_agp->release) drm_agp->release(); +} + +int drm_agp_enable(struct inode *inode, struct file *filp, unsigned int cm= d, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_agp_mode_t mode; + + if (!dev->agp->acquired || !drm_agp->enable) return -EINVAL; + + if (copy_from_user(&mode, (drm_agp_mode_t *)arg, sizeof(mode))) + return -EFAULT; +=09 + dev->agp->mode =3D mode.mode; + drm_agp->enable(mode.mode); + dev->agp->base =3D dev->agp->agp_info.aper_base; + dev->agp->enabled =3D 1; + return 0; +} + +int drm_agp_alloc(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_agp_buffer_t request; + drm_agp_mem_t *entry; + agp_memory *memory; + unsigned long pages; + u32 type; + if (!dev->agp->acquired) return -EINVAL; + if (copy_from_user(&request, (drm_agp_buffer_t *)arg, sizeof(request))) + return -EFAULT; + if (!(entry =3D drm_alloc(sizeof(*entry), DRM_MEM_AGPLISTS))) + return -ENOMEM; + =20 + memset(entry, 0, sizeof(*entry)); + + pages =3D (request.size + PAGE_SIZE - 1) / PAGE_SIZE; + type =3D (u32) request.type; + + if (!(memory =3D drm_alloc_agp(pages, type))) { + drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS); + return -ENOMEM; + } +=09 + entry->handle =3D (unsigned long)memory->memory; + entry->memory =3D memory; + entry->bound =3D 0; + entry->pages =3D pages; + entry->prev =3D NULL; + entry->next =3D dev->agp->memory; + if (dev->agp->memory) 
dev->agp->memory->prev =3D entry; + dev->agp->memory =3D entry; + + request.handle =3D entry->handle; + request.physical =3D memory->physical; + + if (copy_to_user((drm_agp_buffer_t *)arg, &request, sizeof(request))) { + dev->agp->memory =3D entry->next; + dev->agp->memory->prev =3D NULL; + drm_free_agp(memory, pages); + drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS); + return -EFAULT; + } + return 0; +} + +static drm_agp_mem_t *drm_agp_lookup_entry(drm_device_t *dev, + unsigned long handle) +{ + drm_agp_mem_t *entry; + + for (entry =3D dev->agp->memory; entry; entry =3D entry->next) { + if (entry->handle =3D handle) return entry; + } + return NULL; +} + +int drm_agp_unbind(struct inode *inode, struct file *filp, unsigned int cm= d, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_agp_binding_t request; + drm_agp_mem_t *entry; + + if (!dev->agp->acquired) return -EINVAL; + if (copy_from_user(&request, (drm_agp_binding_t *)arg, sizeof(request))) + return -EFAULT; + if (!(entry =3D drm_agp_lookup_entry(dev, request.handle))) + return -EINVAL; + if (!entry->bound) return -EINVAL; + return drm_unbind_agp(entry->memory); +} + +int drm_agp_bind(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_agp_binding_t request; + drm_agp_mem_t *entry; + int retcode; + int page; +=09 + if (!dev->agp->acquired || !drm_agp->bind_memory) return -EINVAL; + if (copy_from_user(&request, (drm_agp_binding_t *)arg, sizeof(request))) + return -EFAULT; + if (!(entry =3D drm_agp_lookup_entry(dev, request.handle))) + return -EINVAL; + if (entry->bound) return -EINVAL; + page =3D (request.offset + PAGE_SIZE - 1) / PAGE_SIZE; + if ((retcode =3D drm_bind_agp(entry->memory, page))) return retcode; + entry->bound =3D dev->agp->base + (page << PAGE_SHIFT); + DRM_DEBUG("base =3D 0x%lx entry->bound =3D 0x%lx\n",=20 + 
dev->agp->base, entry->bound); + return 0; +} + +int drm_agp_free(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_agp_buffer_t request; + drm_agp_mem_t *entry; +=09 + if (!dev->agp->acquired) return -EINVAL; + if (copy_from_user(&request, (drm_agp_buffer_t *)arg, sizeof(request))) + return -EFAULT; + if (!(entry =3D drm_agp_lookup_entry(dev, request.handle))) + return -EINVAL; + if (entry->bound) drm_unbind_agp(entry->memory); + =20 + if (entry->prev) entry->prev->next =3D entry->next; + else dev->agp->memory =3D entry->next; + if (entry->next) entry->next->prev =3D entry->prev; + drm_free_agp(entry->memory, entry->pages); + drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS); + return 0; +} + +drm_agp_head_t *drm_agp_init(void) +{ + drm_agp_head_t *head =3D NULL; + + drm_agp =3D DRM_AGP_GET; + if (drm_agp) { + if (!(head =3D drm_alloc(sizeof(*head), DRM_MEM_AGPLISTS))) + return NULL; + memset((void *)head, 0, sizeof(*head)); + drm_agp->copy_info(&head->agp_info); + if (head->agp_info.chipset =3D NOT_SUPPORTED) { + drm_free(head, sizeof(*head), DRM_MEM_AGPLISTS); + return NULL; + } + head->memory =3D NULL; + switch (head->agp_info.chipset) { + case INTEL_GENERIC: head->chipset =3D "Intel"; break; + case INTEL_LX: head->chipset =3D "Intel 440LX"; break; + case INTEL_BX: head->chipset =3D "Intel 440BX"; break; + case INTEL_GX: head->chipset =3D "Intel 440GX"; break; + case INTEL_I810: head->chipset =3D "Intel i810"; break; + +#if LINUX_VERSION_CODE >=3D 0x020400 + case INTEL_I840: head->chipset =3D "Intel i840"; break; +#endif + case INTEL_460GX: head->chipset =3D "Intel 460GX"; break; + + case VIA_GENERIC: head->chipset =3D "VIA"; break; + case VIA_VP3: head->chipset =3D "VIA VP3"; break; + case VIA_MVP3: head->chipset =3D "VIA MVP3"; break; + +#if LINUX_VERSION_CODE >=3D 0x020400 + case VIA_MVP4: head->chipset =3D "VIA MVP4"; break; + case 
VIA_APOLLO_KX133: head->chipset =3D "VIA Apollo KX133";=20 + break; + case VIA_APOLLO_KT133: head->chipset =3D "VIA Apollo KT133";=20 + break; +#endif + + case VIA_APOLLO_PRO: head->chipset =3D "VIA Apollo Pro"; + break; + case SIS_GENERIC: head->chipset =3D "SiS"; break; + case AMD_GENERIC: head->chipset =3D "AMD"; break; + case AMD_IRONGATE: head->chipset =3D "AMD Irongate"; break; + case ALI_GENERIC: head->chipset =3D "ALi"; break; + case ALI_M1541: head->chipset =3D "ALi M1541"; break; + case ALI_M1621: head->chipset =3D "ALi M1621"; break; + case ALI_M1631: head->chipset =3D "ALi M1631"; break; + case ALI_M1632: head->chipset =3D "ALi M1632"; break; + case ALI_M1641: head->chipset =3D "ALi M1641"; break; + case ALI_M1647: head->chipset =3D "ALi M1647"; break; + case ALI_M1651: head->chipset =3D "ALi M1651"; break; + case SVWRKS_GENERIC: head->chipset =3D "Serverworks Generic"; + break; + case SVWRKS_HE: head->chipset =3D "Serverworks HE"; break; + case SVWRKS_LE: head->chipset =3D "Serverworks LE"; break; + + default: head->chipset =3D "Unknown"; break; + } +#if LINUX_VERSION_CODE <=3D 0x020408 + head->cant_use_aperture =3D 0; + head->page_mask =3D ~(0xfff); +#else + head->cant_use_aperture =3D head->agp_info.cant_use_aperture; + head->page_mask =3D head->agp_info.page_mask; +#endif + + DRM_INFO("AGP %d.%d on %s @ 0x%08lx %ZuMB\n", + head->agp_info.version.major, + head->agp_info.version.minor, + head->chipset, + head->agp_info.aper_base, + head->agp_info.aper_size); + } + return head; +} + +void drm_agp_uninit(void) +{ + DRM_AGP_PUT; + drm_agp =3D NULL; +} + +agp_memory *drm_agp_allocate_memory(size_t pages, u32 type) +{ + if (!drm_agp->allocate_memory) return NULL; + return drm_agp->allocate_memory(pages, type); +} + +int drm_agp_free_memory(agp_memory *handle) +{ + if (!handle || !drm_agp->free_memory) return 0; + drm_agp->free_memory(handle); + return 1; +} + +int drm_agp_bind_memory(agp_memory *handle, off_t start) +{ + if (!handle || 
!drm_agp->bind_memory) return -EINVAL; + return drm_agp->bind_memory(handle, start); +} + +int drm_agp_unbind_memory(agp_memory *handle) +{ + if (!handle || !drm_agp->unbind_memory) return -EINVAL; + return drm_agp->unbind_memory(handle); +} diff -urN linux-2.4.13/drivers/char/drm-4.0/auth.c linux-2.4.13-lia/drivers= /char/drm-4.0/auth.c --- linux-2.4.13/drivers/char/drm-4.0/auth.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/auth.c Thu Oct 4 00:21:40 2001 @@ -0,0 +1,162 @@ +/* auth.c -- IOCTLs for authentication -*- linux-c -*- + * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + * + * Authors: + * Rickard E. 
(Rik) Faith + * + */ + +#define __NO_VERSION__ +#include "drmP.h" + +static int drm_hash_magic(drm_magic_t magic) +{ + return magic & (DRM_HASH_SIZE-1); +} + +static drm_file_t *drm_find_file(drm_device_t *dev, drm_magic_t magic) +{ + drm_file_t *retval =3D NULL; + drm_magic_entry_t *pt; + int hash =3D drm_hash_magic(magic); + + down(&dev->struct_sem); + for (pt =3D dev->magiclist[hash].head; pt; pt =3D pt->next) { + if (pt->magic =3D magic) { + retval =3D pt->priv; + break; + } + } + up(&dev->struct_sem); + return retval; +} + +int drm_add_magic(drm_device_t *dev, drm_file_t *priv, drm_magic_t magic) +{ + int hash; + drm_magic_entry_t *entry; +=09 + DRM_DEBUG("%d\n", magic); +=09 + hash =3D drm_hash_magic(magic); + entry =3D drm_alloc(sizeof(*entry), DRM_MEM_MAGIC); + if (!entry) return -ENOMEM; + entry->magic =3D magic; + entry->priv =3D priv; + entry->next =3D NULL; + + down(&dev->struct_sem); + if (dev->magiclist[hash].tail) { + dev->magiclist[hash].tail->next =3D entry; + dev->magiclist[hash].tail =3D entry; + } else { + dev->magiclist[hash].head =3D entry; + dev->magiclist[hash].tail =3D entry; + } + up(&dev->struct_sem); +=09 + return 0; +} + +int drm_remove_magic(drm_device_t *dev, drm_magic_t magic) +{ + drm_magic_entry_t *prev =3D NULL; + drm_magic_entry_t *pt; + int hash; +=09 + DRM_DEBUG("%d\n", magic); + hash =3D drm_hash_magic(magic); +=09 + down(&dev->struct_sem); + for (pt =3D dev->magiclist[hash].head; pt; prev =3D pt, pt =3D pt->next) { + if (pt->magic =3D magic) { + if (dev->magiclist[hash].head =3D pt) { + dev->magiclist[hash].head =3D pt->next; + } + if (dev->magiclist[hash].tail =3D pt) { + dev->magiclist[hash].tail =3D prev; + } + if (prev) { + prev->next =3D pt->next; + } + up(&dev->struct_sem); + return 0; + } + } + up(&dev->struct_sem); + + drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC); +=09 + return -EINVAL; +} + +int drm_getmagic(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + static drm_magic_t sequence 
=3D 0; + static spinlock_t lock =3D SPIN_LOCK_UNLOCKED; + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_auth_t auth; + + /* Find unique magic */ + if (priv->magic) { + auth.magic =3D priv->magic; + } else { + do { + spin_lock(&lock); + if (!sequence) ++sequence; /* reserve 0 */ + auth.magic =3D sequence++; + spin_unlock(&lock); + } while (drm_find_file(dev, auth.magic)); + priv->magic =3D auth.magic; + drm_add_magic(dev, priv, auth.magic); + } +=09 + DRM_DEBUG("%u\n", auth.magic); + if (copy_to_user((drm_auth_t *)arg, &auth, sizeof(auth))) + return -EFAULT; + return 0; +} + +int drm_authmagic(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_auth_t auth; + drm_file_t *file; + + if (copy_from_user(&auth, (drm_auth_t *)arg, sizeof(auth))) + return -EFAULT; + DRM_DEBUG("%u\n", auth.magic); + if ((file =3D drm_find_file(dev, auth.magic))) { + file->authenticated =3D 1; + drm_remove_magic(dev, auth.magic); + return 0; + } + return -EINVAL; +} diff -urN linux-2.4.13/drivers/char/drm-4.0/bufs.c linux-2.4.13-lia/drivers= /char/drm-4.0/bufs.c --- linux-2.4.13/drivers/char/drm-4.0/bufs.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/bufs.c Thu Oct 4 00:21:40 2001 @@ -0,0 +1,543 @@ +/* bufs.c -- IOCTLs to manage buffers -*- linux-c -*- + * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com + * + * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. 
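[The drm_getmagic/drm_authmagic pair above implements the DRI authentication handshake: a client asks the kernel for a unique magic cookie, passes it to the X server out of band, and the server authenticates it back through the kernel. A toy single-process model of that flow; it skips the drm_hash_magic bucketing and locking of the real code, and all names are illustrative:]

```c
#include <stdint.h>

struct client {
	uint32_t magic;
	int authenticated;
};

static uint32_t next_magic = 1;		/* 0 is reserved, as in drm_getmagic */

/* Hand a client its (cached) unique magic cookie. */
static uint32_t get_magic(struct client *c)
{
	if (!c->magic)
		c->magic = next_magic++;
	return c->magic;
}

/* The server presents a cookie it received from a client; if some
 * client owns it, that client becomes authenticated. */
static int auth_magic(struct client *clients, int n, uint32_t magic)
{
	for (int i = 0; i < n; i++) {
		if (clients[i].magic == magic) {
			clients[i].authenticated = 1;
			return 0;
		}
	}
	return -1;	/* -EINVAL in the kernel */
}
```

Only clients the X server has vouched for this way get past the `authenticated` check on the restricted ioctls.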
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: + * Rickard E. (Rik) Faith + * + */ + +#define __NO_VERSION__ +#include +#include "drmP.h" +#include "linux/un.h" + + /* Compute order. Can be made faster. 
*/ +int drm_order(unsigned long size) +{ + int order; + unsigned long tmp; + + for (order =3D 0, tmp =3D size; tmp >>=3D 1; ++order); + if (size & ~(1 << order)) ++order; + return order; +} + +int drm_addmap(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_map_t *map; +=09 + if (!(filp->f_mode & 3)) return -EACCES; /* Require read/write */ + + map =3D drm_alloc(sizeof(*map), DRM_MEM_MAPS); + if (!map) return -ENOMEM; + if (copy_from_user(map, (drm_map_t *)arg, sizeof(*map))) { + drm_free(map, sizeof(*map), DRM_MEM_MAPS); + return -EFAULT; + } + + DRM_DEBUG("offset =3D 0x%08lx, size =3D 0x%08lx, type =3D %d\n", + map->offset, map->size, map->type); + if ((map->offset & (~PAGE_MASK)) || (map->size & (~PAGE_MASK))) { + drm_free(map, sizeof(*map), DRM_MEM_MAPS); + return -EINVAL; + } + map->mtrr =3D -1; + map->handle =3D 0; + + switch (map->type) { + case _DRM_REGISTERS: + case _DRM_FRAME_BUFFER: +#if !defined(__sparc__) && !defined(__ia64__) + if (map->offset + map->size < map->offset + || map->offset < virt_to_phys(high_memory)) { + drm_free(map, sizeof(*map), DRM_MEM_MAPS); + return -EINVAL; + } +#endif +#ifdef CONFIG_MTRR + if (map->type =3D _DRM_FRAME_BUFFER + || (map->flags & _DRM_WRITE_COMBINING)) { + map->mtrr =3D mtrr_add(map->offset, map->size, + MTRR_TYPE_WRCOMB, 1); + } +#endif + map->handle =3D drm_ioremap(map->offset, map->size, dev); + break; + =09 + + case _DRM_SHM: + map->handle =3D (void *)drm_alloc_pages(drm_order(map->size) + - PAGE_SHIFT, + DRM_MEM_SAREA); + DRM_DEBUG("%ld %d %p\n", map->size, drm_order(map->size), + map->handle); + if (!map->handle) { + drm_free(map, sizeof(*map), DRM_MEM_MAPS); + return -ENOMEM; + } + map->offset =3D (unsigned long)map->handle; + if (map->flags & _DRM_CONTAINS_LOCK) { + dev->lock.hw_lock =3D map->handle; /* Pointer to lock */ + } + break; +#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) + 
case _DRM_AGP: + map->offset =3D map->offset + dev->agp->base; + break; +#endif + default: + drm_free(map, sizeof(*map), DRM_MEM_MAPS); + return -EINVAL; + } + + down(&dev->struct_sem); + if (dev->maplist) { + ++dev->map_count; + dev->maplist =3D drm_realloc(dev->maplist, + (dev->map_count-1) + * sizeof(*dev->maplist), + dev->map_count + * sizeof(*dev->maplist), + DRM_MEM_MAPS); + } else { + dev->map_count =3D 1; + dev->maplist =3D drm_alloc(dev->map_count*sizeof(*dev->maplist), + DRM_MEM_MAPS); + } + dev->maplist[dev->map_count-1] =3D map; + up(&dev->struct_sem); + + if (copy_to_user((drm_map_t *)arg, map, sizeof(*map))) + return -EFAULT; + if (map->type !=3D _DRM_SHM) { + if (copy_to_user(&((drm_map_t *)arg)->handle, + &map->offset, + sizeof(map->offset))) + return -EFAULT; + } =09 + return 0; +} + +int drm_addbufs(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_desc_t request; + int count; + int order; + int size; + int total; + int page_order; + drm_buf_entry_t *entry; + unsigned long page; + drm_buf_t *buf; + int alignment; + unsigned long offset; + int i; + int byte_count; + int page_count; + + if (!dma) return -EINVAL; + + if (copy_from_user(&request, + (drm_buf_desc_t *)arg, + sizeof(request))) + return -EFAULT; + + count =3D request.count; + order =3D drm_order(request.size); + size =3D 1 << order; +=09 + DRM_DEBUG("count =3D %d, size =3D %d (%d), order =3D %d, queue_count =3D = %d\n", + request.count, request.size, size, order, dev->queue_count); + + if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL; + if (dev->queue_count) return -EBUSY; /* Not while in use */ + + alignment =3D (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size; + page_order =3D order - PAGE_SHIFT > 0 ? 
order - PAGE_SHIFT : 0;
+	total	   = PAGE_SIZE << page_order;
+
+	spin_lock(&dev->count_lock);
+	if (dev->buf_use) {
+		spin_unlock(&dev->count_lock);
+		return -EBUSY;
+	}
+	atomic_inc(&dev->buf_alloc);
+	spin_unlock(&dev->count_lock);
+
+	down(&dev->struct_sem);
+	entry = &dma->bufs[order];
+	if (entry->buf_count) {
+		up(&dev->struct_sem);
+		atomic_dec(&dev->buf_alloc);
+		return -ENOMEM;	/* May only call once for each order */
+	}
+
+	if(count < 0 || count > 4096)
+	{
+		up(&dev->struct_sem);
+		return -EINVAL;
+	}
+
+	entry->buflist = drm_alloc(count * sizeof(*entry->buflist),
+				   DRM_MEM_BUFS);
+	if (!entry->buflist) {
+		up(&dev->struct_sem);
+		atomic_dec(&dev->buf_alloc);
+		return -ENOMEM;
+	}
+	memset(entry->buflist, 0, count * sizeof(*entry->buflist));
+
+	entry->seglist = drm_alloc(count * sizeof(*entry->seglist),
+				   DRM_MEM_SEGS);
+	if (!entry->seglist) {
+		drm_free(entry->buflist,
+			 count * sizeof(*entry->buflist),
+			 DRM_MEM_BUFS);
+		up(&dev->struct_sem);
+		atomic_dec(&dev->buf_alloc);
+		return -ENOMEM;
+	}
+	memset(entry->seglist, 0, count * sizeof(*entry->seglist));
+
+	dma->pagelist = drm_realloc(dma->pagelist,
+				    dma->page_count * sizeof(*dma->pagelist),
+				    (dma->page_count + (count << page_order))
+				    * sizeof(*dma->pagelist),
+				    DRM_MEM_PAGES);
+	DRM_DEBUG("pagelist: %d entries\n",
+		  dma->page_count + (count << page_order));
+
+
+	entry->buf_size	  = size;
+	entry->page_order = page_order;
+	byte_count	  = 0;
+	page_count	  = 0;
+	while (entry->buf_count < count) {
+		if (!(page = drm_alloc_pages(page_order, DRM_MEM_DMA))) break;
+		entry->seglist[entry->seg_count++] = page;
+		for (i = 0; i < (1 << page_order); i++) {
+			DRM_DEBUG("page %d @ 0x%08lx\n",
+				  dma->page_count + page_count,
+				  page + PAGE_SIZE * i);
+			dma->pagelist[dma->page_count + page_count++]
+				= page + PAGE_SIZE * i;
+		}
+		for (offset = 0;
+		     offset + size <= total && entry->buf_count < count;
+		     offset += alignment, ++entry->buf_count) {
+			buf	     = &entry->buflist[entry->buf_count];
+			buf->idx     = dma->buf_count + entry->buf_count;
+			buf->total   = alignment;
+			buf->order   = order;
+			buf->used    = 0;
+			buf->offset  = (dma->byte_count + byte_count + offset);
+			buf->address = (void *)(page + offset);
+			buf->next    = NULL;
+			buf->waiting = 0;
+			buf->pending = 0;
+			init_waitqueue_head(&buf->dma_wait);
+			buf->pid     = 0;
+#if DRM_DMA_HISTOGRAM
+			buf->time_queued     = 0;
+			buf->time_dispatched = 0;
+			buf->time_completed  = 0;
+			buf->time_freed	     = 0;
+#endif
+			DRM_DEBUG("buffer %d @ %p\n",
+				  entry->buf_count, buf->address);
+		}
+		byte_count += PAGE_SIZE << page_order;
+	}
+
+	dma->buflist = drm_realloc(dma->buflist,
+				   dma->buf_count * sizeof(*dma->buflist),
+				   (dma->buf_count + entry->buf_count)
+				   * sizeof(*dma->buflist),
+				   DRM_MEM_BUFS);
+	for (i = dma->buf_count; i < dma->buf_count + entry->buf_count; i++)
+		dma->buflist[i] = &entry->buflist[i - dma->buf_count];
+
+	dma->buf_count	+= entry->buf_count;
+	dma->seg_count	+= entry->seg_count;
+	dma->page_count += entry->seg_count << page_order;
+	dma->byte_count += PAGE_SIZE * (entry->seg_count << page_order);
+
+	drm_freelist_create(&entry->freelist, entry->buf_count);
+	for (i = 0; i < entry->buf_count; i++) {
+		drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]);
+	}
+
+	up(&dev->struct_sem);
+
+	request.count = entry->buf_count;
+	request.size  = size;
+
+	if (copy_to_user((drm_buf_desc_t *)arg,
+			 &request,
+			 sizeof(request)))
+		return -EFAULT;
+
+	atomic_dec(&dev->buf_alloc);
+	return 0;
+}
+
+int drm_infobufs(struct inode *inode, struct file *filp, unsigned int cmd,
+		 unsigned long arg)
+{
+	drm_file_t	 *priv	 = filp->private_data;
+	drm_device_t	 *dev	 = priv->dev;
+	drm_device_dma_t *dma	 = dev->dma;
+	drm_buf_info_t	 request;
+	int		 i;
+	int		 count;
+
+	if (!dma) return -EINVAL;
+
+	spin_lock(&dev->count_lock);
+	if (atomic_read(&dev->buf_alloc)) {
+		spin_unlock(&dev->count_lock);
+		return -EBUSY;
+	}
+	++dev->buf_use;		/* Can't allocate more after this call */
+	spin_unlock(&dev->count_lock);
+
+	if (copy_from_user(&request,
+			   (drm_buf_info_t *)arg,
+			   sizeof(request)))
+		return -EFAULT;
+
+	for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+		if (dma->bufs[i].buf_count) ++count;
+	}
+
+	DRM_DEBUG("count = %d\n", count);
+
+	if (request.count >= count) {
+		for (i = 0, count = 0; i < DRM_MAX_ORDER+1; i++) {
+			if (dma->bufs[i].buf_count) {
+				if (copy_to_user(&request.list[count].count,
+						 &dma->bufs[i].buf_count,
+						 sizeof(dma->bufs[0]
+							.buf_count)) ||
+				    copy_to_user(&request.list[count].size,
+						 &dma->bufs[i].buf_size,
+						 sizeof(dma->bufs[0].buf_size)) ||
+				    copy_to_user(&request.list[count].low_mark,
+						 &dma->bufs[i]
+						 .freelist.low_mark,
+						 sizeof(dma->bufs[0]
+							.freelist.low_mark)) ||
+				    copy_to_user(&request.list[count]
+						 .high_mark,
+						 &dma->bufs[i]
+						 .freelist.high_mark,
+						 sizeof(dma->bufs[0]
+							.freelist.high_mark)))
+					return -EFAULT;
+
+				DRM_DEBUG("%d %d %d %d %d\n",
+					  i,
+					  dma->bufs[i].buf_count,
+					  dma->bufs[i].buf_size,
+					  dma->bufs[i].freelist.low_mark,
+					  dma->bufs[i].freelist.high_mark);
+				++count;
+			}
+		}
+	}
+	request.count = count;
+
+	if (copy_to_user((drm_buf_info_t *)arg,
+			 &request,
+			 sizeof(request)))
+		return -EFAULT;
+
+	return 0;
+}
+
+int drm_markbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+		 unsigned long arg)
+{
+	drm_file_t	 *priv	 = filp->private_data;
+	drm_device_t	 *dev	 = priv->dev;
+	drm_device_dma_t *dma	 = dev->dma;
+	drm_buf_desc_t	 request;
+	int		 order;
+	drm_buf_entry_t	 *entry;
+
+	if (!dma) return -EINVAL;
+
+	if (copy_from_user(&request,
+			   (drm_buf_desc_t *)arg,
+			   sizeof(request)))
+		return -EFAULT;
+
+	DRM_DEBUG("%d, %d, %d\n",
+		  request.size, request.low_mark, request.high_mark);
+	order = drm_order(request.size);
+	if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL;
+	entry = &dma->bufs[order];
+
+	if (request.low_mark < 0 || request.low_mark > entry->buf_count)
+		return -EINVAL;
+	if (request.high_mark < 0 || request.high_mark > entry->buf_count)
+		return -EINVAL;
+
+	entry->freelist.low_mark  = request.low_mark;
+	entry->freelist.high_mark = request.high_mark;
+
+	return 0;
+}
+
+int drm_freebufs(struct inode *inode, struct file *filp, unsigned int cmd,
+		 unsigned long arg)
+{
+	drm_file_t	 *priv	 = filp->private_data;
+	drm_device_t	 *dev	 = priv->dev;
+	drm_device_dma_t *dma	 = dev->dma;
+	drm_buf_free_t	 request;
+	int		 i;
+	int		 idx;
+	drm_buf_t	 *buf;
+
+	if (!dma) return -EINVAL;
+
+	if (copy_from_user(&request,
+			   (drm_buf_free_t *)arg,
+			   sizeof(request)))
+		return -EFAULT;
+
+	DRM_DEBUG("%d\n", request.count);
+	for (i = 0; i < request.count; i++) {
+		if (copy_from_user(&idx,
+				   &request.list[i],
+				   sizeof(idx)))
+			return -EFAULT;
+		if (idx < 0 || idx >= dma->buf_count) {
+			DRM_ERROR("Index %d (of %d max)\n",
+				  idx, dma->buf_count - 1);
+			return -EINVAL;
+		}
+		buf = dma->buflist[idx];
+		if (buf->pid != current->pid) {
+			DRM_ERROR("Process %d freeing buffer owned by %d\n",
+				  current->pid, buf->pid);
+			return -EINVAL;
+		}
+		drm_free_buffer(dev, buf);
+	}
+
+	return 0;
+}
+
+int drm_mapbufs(struct inode *inode, struct file *filp, unsigned int cmd,
+		unsigned long arg)
+{
+	drm_file_t	 *priv	 = filp->private_data;
+	drm_device_t	 *dev	 = priv->dev;
+	drm_device_dma_t *dma	 = dev->dma;
+	int		 retcode = 0;
+	const int	 zero	 = 0;
+	unsigned long	 virtual;
+	unsigned long	 address;
+	drm_buf_map_t	 request;
+	int		 i;
+
+	if (!dma) return -EINVAL;
+
+	DRM_DEBUG("\n");
+
+	spin_lock(&dev->count_lock);
+	if (atomic_read(&dev->buf_alloc)) {
+		spin_unlock(&dev->count_lock);
+		return -EBUSY;
+	}
+	++dev->buf_use;		/* Can't allocate more after this call */
+	spin_unlock(&dev->count_lock);
+
+	if (copy_from_user(&request,
+			   (drm_buf_map_t *)arg,
+			   sizeof(request)))
+		return -EFAULT;
+
+	if (request.count >= dma->buf_count) {
+		down_write(&current->mm->mmap_sem);
+		virtual = do_mmap(filp, 0, dma->byte_count,
+				  PROT_READ|PROT_WRITE, MAP_SHARED, 0);
+		up_write(&current->mm->mmap_sem);
+		if (virtual > -1024UL) {
+			/* Real error */
+			retcode = (signed long)virtual;
+			goto done;
+		}
+		request.virtual = (void *)virtual;
+
+		for (i = 0; i < dma->buf_count; i++) {
+			if (copy_to_user(&request.list[i].idx,
+					 &dma->buflist[i]->idx,
+					 sizeof(request.list[0].idx))) {
+				retcode = -EFAULT;
+				goto done;
+			}
+			if (copy_to_user(&request.list[i].total,
+					 &dma->buflist[i]->total,
+					 sizeof(request.list[0].total))) {
+				retcode = -EFAULT;
+				goto done;
+			}
+			if (copy_to_user(&request.list[i].used,
+					 &zero,
+					 sizeof(zero))) {
+				retcode = -EFAULT;
+				goto done;
+			}
+			address = virtual + dma->buflist[i]->offset;
+			if (copy_to_user(&request.list[i].address,
+					 &address,
+					 sizeof(address))) {
+				retcode = -EFAULT;
+				goto done;
+			}
+		}
+	}
+done:
+	request.count = dma->buf_count;
+	DRM_DEBUG("%d buffers, retcode = %d\n", request.count, retcode);
+
+	if (copy_to_user((drm_buf_map_t *)arg,
+			 &request,
+			 sizeof(request)))
+		return -EFAULT;
+
+	return retcode;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/context.c linux-2.4.13-lia/drivers/char/drm-4.0/context.c
--- linux-2.4.13/drivers/char/drm-4.0/context.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/context.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,321 @@
+/* context.c -- IOCTLs for contexts and DMA queues -*- linux-c -*-
+ * Created: Tue Feb  2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Rickard E. (Rik) Faith
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+static int drm_init_queue(drm_device_t *dev, drm_queue_t *q, drm_ctx_t *ctx)
+{
+	DRM_DEBUG("\n");
+
+	if (atomic_read(&q->use_count) != 1
+	    || atomic_read(&q->finalization)
+	    || atomic_read(&q->block_count)) {
+		DRM_ERROR("New queue is already in use: u%d f%d b%d\n",
+			  atomic_read(&q->use_count),
+			  atomic_read(&q->finalization),
+			  atomic_read(&q->block_count));
+	}
+
+	atomic_set(&q->finalization,  0);
+	atomic_set(&q->block_count,   0);
+	atomic_set(&q->block_read,    0);
+	atomic_set(&q->block_write,   0);
+	atomic_set(&q->total_queued,  0);
+	atomic_set(&q->total_flushed, 0);
+	atomic_set(&q->total_locks,   0);
+
+	init_waitqueue_head(&q->write_queue);
+	init_waitqueue_head(&q->read_queue);
+	init_waitqueue_head(&q->flush_queue);
+
+	q->flags = ctx->flags;
+
+	drm_waitlist_create(&q->waitlist, dev->dma->buf_count);
+
+	return 0;
+}
+
+
+/* drm_alloc_queue:
+PRE: 1) dev->queuelist[0..dev->queue_count] is allocated and will not
+	disappear (so all deallocation must be done after IOCTLs are off)
+     2) dev->queue_count < dev->queue_slots
+     3) dev->queuelist[i].use_count = 0 and
+	dev->queuelist[i].finalization = 0 if i not in use
+POST: 1) dev->queuelist[i].use_count = 1
+      2) dev->queue_count < dev->queue_slots */
+
+static int drm_alloc_queue(drm_device_t *dev)
+{
+	int	    i;
+	drm_queue_t *queue;
+	int	    oldslots;
+	int	    newslots;
+				/* Check for a free queue */
+	for (i = 0; i < dev->queue_count; i++) {
+		atomic_inc(&dev->queuelist[i]->use_count);
+		if (atomic_read(&dev->queuelist[i]->use_count) == 1
+		    && !atomic_read(&dev->queuelist[i]->finalization)) {
+			DRM_DEBUG("%d (free)\n", i);
+			return i;
+		}
+		atomic_dec(&dev->queuelist[i]->use_count);
+	}
+				/* Allocate a new queue */
+
+	queue = drm_alloc(sizeof(*queue), DRM_MEM_QUEUES);
+	if(queue == NULL)
+		return -ENOMEM;
+
+	memset(queue, 0, sizeof(*queue));
+	down(&dev->struct_sem);
+	atomic_set(&queue->use_count, 1);
+
+	++dev->queue_count;
+	if (dev->queue_count >= dev->queue_slots) {
+		oldslots = dev->queue_slots * sizeof(*dev->queuelist);
+		if (!dev->queue_slots) dev->queue_slots = 1;
+		dev->queue_slots *= 2;
+		newslots = dev->queue_slots * sizeof(*dev->queuelist);
+
+		dev->queuelist = drm_realloc(dev->queuelist,
+					     oldslots,
+					     newslots,
+					     DRM_MEM_QUEUES);
+		if (!dev->queuelist) {
+			up(&dev->struct_sem);
+			DRM_DEBUG("out of memory\n");
+			return -ENOMEM;
+		}
+	}
+	dev->queuelist[dev->queue_count-1] = queue;
+
+	up(&dev->struct_sem);
+	DRM_DEBUG("%d (new)\n", dev->queue_count - 1);
+	return dev->queue_count - 1;
+}
+
+int drm_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_ctx_res_t	res;
+	drm_ctx_t	ctx;
+	int		i;
+
+	DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
+	if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+		return -EFAULT;
+	if (res.count >= DRM_RESERVED_CONTEXTS) {
+		memset(&ctx, 0, sizeof(ctx));
+		for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+			ctx.handle = i;
+			if (copy_to_user(&res.contexts[i],
+					 &i,
+					 sizeof(i)))
+				return -EFAULT;
+		}
+	}
+	res.count = DRM_RESERVED_CONTEXTS;
+	if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+		return -EFAULT;
+	return 0;
+}
+
+
+int drm_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t	*priv	= filp->private_data;
+	drm_device_t	*dev	= priv->dev;
+	drm_ctx_t	ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	if ((ctx.handle = drm_alloc_queue(dev)) == DRM_KERNEL_CONTEXT) {
+				/* Init kernel's context and get a new one. */
+		drm_init_queue(dev, dev->queuelist[ctx.handle], &ctx);
+		ctx.handle = drm_alloc_queue(dev);
+	}
+	drm_init_queue(dev, dev->queuelist[ctx.handle], &ctx);
+	DRM_DEBUG("%d\n", ctx.handle);
+	if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+		return -EFAULT;
+	return 0;
+}
+
+int drm_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t	*priv	= filp->private_data;
+	drm_device_t	*dev	= priv->dev;
+	drm_ctx_t	ctx;
+	drm_queue_t	*q;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+
+	DRM_DEBUG("%d\n", ctx.handle);
+
+	if (ctx.handle < 0 || ctx.handle >= dev->queue_count) return -EINVAL;
+	q = dev->queuelist[ctx.handle];
+
+	atomic_inc(&q->use_count);
+	if (atomic_read(&q->use_count) == 1) {
+				/* No longer in use */
+		atomic_dec(&q->use_count);
+		return -EINVAL;
+	}
+
+	if (DRM_BUFCOUNT(&q->waitlist)) {
+		atomic_dec(&q->use_count);
+		return -EBUSY;
+	}
+
+	q->flags = ctx.flags;
+
+	atomic_dec(&q->use_count);
+	return 0;
+}
+
+int drm_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t	*priv	= filp->private_data;
+	drm_device_t	*dev	= priv->dev;
+	drm_ctx_t	ctx;
+	drm_queue_t	*q;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+
+	DRM_DEBUG("%d\n", ctx.handle);
+
+	if (ctx.handle >= dev->queue_count) return -EINVAL;
+	q = dev->queuelist[ctx.handle];
+
+	atomic_inc(&q->use_count);
+	if (atomic_read(&q->use_count) == 1) {
+				/* No longer in use */
+		atomic_dec(&q->use_count);
+		return -EINVAL;
+	}
+
+	ctx.flags = q->flags;
+	atomic_dec(&q->use_count);
+
+	if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+		return -EFAULT;
+
+	return 0;
+}
+
+int drm_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+		  unsigned long arg)
+{
+	drm_file_t	*priv	= filp->private_data;
+	drm_device_t	*dev	= priv->dev;
+	drm_ctx_t	ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+	return drm_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int drm_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t	*priv	= filp->private_data;
+	drm_device_t	*dev	= priv->dev;
+	drm_ctx_t	ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+	drm_context_switch_complete(dev, ctx.handle);
+
+	return 0;
+}
+
+int drm_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	      unsigned long arg)
+{
+	drm_file_t	*priv	= filp->private_data;
+	drm_device_t	*dev	= priv->dev;
+	drm_ctx_t	ctx;
+	drm_queue_t	*q;
+	drm_buf_t	*buf;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+
+	if (ctx.handle >= dev->queue_count) return -EINVAL;
+	q = dev->queuelist[ctx.handle];
+
+	atomic_inc(&q->use_count);
+	if (atomic_read(&q->use_count) == 1) {
+				/* No longer in use */
+		atomic_dec(&q->use_count);
+		return -EINVAL;
+	}
+
+	atomic_inc(&q->finalization); /* Mark queue in finalization state */
+	atomic_sub(2, &q->use_count); /* Mark queue as unused (pending
+					 finalization) */
+
+	while (test_and_set_bit(0, &dev->interrupt_flag)) {
+		schedule();
+		if (signal_pending(current)) {
+			clear_bit(0, &dev->interrupt_flag);
+			return -EINTR;
+		}
+	}
+				/* Remove queued buffers */
+	while ((buf = drm_waitlist_get(&q->waitlist))) {
+		drm_free_buffer(dev, buf);
+	}
+	clear_bit(0, &dev->interrupt_flag);
+
+				/* Wakeup blocked processes */
+	wake_up_interruptible(&q->read_queue);
+	wake_up_interruptible(&q->write_queue);
+	wake_up_interruptible(&q->flush_queue);
+
+				/* Finalization over.  Queue is made
+				   available when both use_count and
+				   finalization become 0, which won't
+				   happen until all the waiting processes
+				   stop waiting. */
+	atomic_dec(&q->finalization);
+	return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/ctxbitmap.c linux-2.4.13-lia/drivers/char/drm-4.0/ctxbitmap.c
--- linux-2.4.13/drivers/char/drm-4.0/ctxbitmap.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ctxbitmap.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,85 @@
+/* ctxbitmap.c -- Context bitmap management -*- linux-c -*-
+ * Created: Thu Jan  6 03:56:42 2000 by jhartmann@precisioninsight.com
+ *
+ * Copyright 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Jeff Hartmann
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+void drm_ctxbitmap_free(drm_device_t *dev, int ctx_handle)
+{
+	if (ctx_handle < 0) goto failed;
+
+	if (ctx_handle < DRM_MAX_CTXBITMAP) {
+		clear_bit(ctx_handle, dev->ctx_bitmap);
+		return;
+	}
+failed:
+	DRM_ERROR("Attempt to free invalid context handle: %d\n",
+		  ctx_handle);
+	return;
+}
+
+int drm_ctxbitmap_next(drm_device_t *dev)
+{
+	int bit;
+
+	bit = find_first_zero_bit(dev->ctx_bitmap, DRM_MAX_CTXBITMAP);
+	if (bit < DRM_MAX_CTXBITMAP) {
+		set_bit(bit, dev->ctx_bitmap);
+		DRM_DEBUG("drm_ctxbitmap_next bit : %d\n", bit);
+		return bit;
+	}
+	return -1;
+}
+
+int drm_ctxbitmap_init(drm_device_t *dev)
+{
+	int i;
+	int temp;
+
+	dev->ctx_bitmap = (unsigned long *) drm_alloc(PAGE_SIZE,
+						      DRM_MEM_CTXBITMAP);
+	if(dev->ctx_bitmap == NULL) {
+		return -ENOMEM;
+	}
+	memset((void *) dev->ctx_bitmap, 0, PAGE_SIZE);
+	for(i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+		temp = drm_ctxbitmap_next(dev);
+		DRM_DEBUG("drm_ctxbitmap_init : %d\n", temp);
+	}
+
+	return 0;
+}
+
+void drm_ctxbitmap_cleanup(drm_device_t *dev)
+{
+	drm_free((void *)dev->ctx_bitmap, PAGE_SIZE,
+		 DRM_MEM_CTXBITMAP);
+}
+
diff -urN linux-2.4.13/drivers/char/drm-4.0/dma.c linux-2.4.13-lia/drivers/char/drm-4.0/dma.c
--- linux-2.4.13/drivers/char/drm-4.0/dma.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/dma.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,546 @@
+/* dma.c -- DMA IOCTL and function support -*- linux-c -*-
+ * Created: Fri Mar 19 14:30:16 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Rickard E. (Rik) Faith
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+#include <linux/interrupt.h>	/* For task queue support */
+
+void drm_dma_setup(drm_device_t *dev)
+{
+	int i;
+
+	if (!(dev->dma = drm_alloc(sizeof(*dev->dma), DRM_MEM_DRIVER))) {
+		printk(KERN_ERR "drm_dma_setup: can't drm_alloc dev->dma");
+		return;
+	}
+	memset(dev->dma, 0, sizeof(*dev->dma));
+	for (i = 0; i <= DRM_MAX_ORDER; i++)
+		memset(&dev->dma->bufs[i], 0, sizeof(dev->dma->bufs[0]));
+}
+
+void drm_dma_takedown(drm_device_t *dev)
+{
+	drm_device_dma_t *dma = dev->dma;
+	int		 i, j;
+
+	if (!dma) return;
+
+				/* Clear dma buffers */
+	for (i = 0; i <= DRM_MAX_ORDER; i++) {
+		if (dma->bufs[i].seg_count) {
+			DRM_DEBUG("order %d: buf_count = %d,"
+				  " seg_count = %d\n",
+				  i,
+				  dma->bufs[i].buf_count,
+				  dma->bufs[i].seg_count);
+			for (j = 0; j < dma->bufs[i].seg_count; j++) {
+				drm_free_pages(dma->bufs[i].seglist[j],
+					       dma->bufs[i].page_order,
+					       DRM_MEM_DMA);
+			}
+			drm_free(dma->bufs[i].seglist,
+				 dma->bufs[i].seg_count
+				 * sizeof(*dma->bufs[0].seglist),
+				 DRM_MEM_SEGS);
+		}
+		if(dma->bufs[i].buf_count) {
+			for(j = 0; j < dma->bufs[i].buf_count; j++) {
+				if(dma->bufs[i].buflist[j].dev_private) {
+					drm_free(dma->bufs[i].buflist[j].dev_private,
+						 dma->bufs[i].buflist[j].dev_priv_size,
+						 DRM_MEM_BUFS);
+				}
+			}
+			drm_free(dma->bufs[i].buflist,
+				 dma->bufs[i].buf_count *
+				 sizeof(*dma->bufs[0].buflist),
+				 DRM_MEM_BUFS);
+			drm_freelist_destroy(&dma->bufs[i].freelist);
+		}
+	}
+
+	if (dma->buflist) {
+		drm_free(dma->buflist,
+			 dma->buf_count * sizeof(*dma->buflist),
+			 DRM_MEM_BUFS);
+	}
+
+	if (dma->pagelist) {
+		drm_free(dma->pagelist,
+			 dma->page_count * sizeof(*dma->pagelist),
+			 DRM_MEM_PAGES);
+	}
+	drm_free(dev->dma, sizeof(*dev->dma), DRM_MEM_DRIVER);
+	dev->dma = NULL;
+}
+
+#if DRM_DMA_HISTOGRAM
+				/* This is slow, but is useful for debugging. */
+int drm_histogram_slot(unsigned long count)
+{
+	int value = DRM_DMA_HISTOGRAM_INITIAL;
+	int slot;
+
+	for (slot = 0;
+	     slot < DRM_DMA_HISTOGRAM_SLOTS;
+	     ++slot, value = DRM_DMA_HISTOGRAM_NEXT(value)) {
+		if (count < value) return slot;
+	}
+	return DRM_DMA_HISTOGRAM_SLOTS - 1;
+}
+
+void drm_histogram_compute(drm_device_t *dev, drm_buf_t *buf)
+{
+	cycles_t queued_to_dispatched;
+	cycles_t dispatched_to_completed;
+	cycles_t completed_to_freed;
+	int	 q2d, d2c, c2f, q2c, q2f;
+
+	if (buf->time_queued) {
+		queued_to_dispatched	= (buf->time_dispatched
+					   - buf->time_queued);
+		dispatched_to_completed	= (buf->time_completed
+					   - buf->time_dispatched);
+		completed_to_freed	= (buf->time_freed
+					   - buf->time_completed);
+
+		q2d = drm_histogram_slot(queued_to_dispatched);
+		d2c = drm_histogram_slot(dispatched_to_completed);
+		c2f = drm_histogram_slot(completed_to_freed);
+
+		q2c = drm_histogram_slot(queued_to_dispatched
+					 + dispatched_to_completed);
+		q2f = drm_histogram_slot(queued_to_dispatched
+					 + dispatched_to_completed
+					 + completed_to_freed);
+
+		atomic_inc(&dev->histo.total);
+		atomic_inc(&dev->histo.queued_to_dispatched[q2d]);
+		atomic_inc(&dev->histo.dispatched_to_completed[d2c]);
+		atomic_inc(&dev->histo.completed_to_freed[c2f]);
+
+		atomic_inc(&dev->histo.queued_to_completed[q2c]);
+		atomic_inc(&dev->histo.queued_to_freed[q2f]);
+
+	}
+	buf->time_queued     = 0;
+	buf->time_dispatched = 0;
+	buf->time_completed  = 0;
+	buf->time_freed	     = 0;
+}
+#endif
+
+void drm_free_buffer(drm_device_t *dev, drm_buf_t *buf)
+{
+	drm_device_dma_t *dma = dev->dma;
+
+	if (!buf) return;
+
+	buf->waiting  = 0;
+	buf->pending  = 0;
+	buf->pid      = 0;
+	buf->used     = 0;
+#if DRM_DMA_HISTOGRAM
+	buf->time_completed = get_cycles();
+#endif
+	if (waitqueue_active(&buf->dma_wait)) {
+		wake_up_interruptible(&buf->dma_wait);
+	} else {
+				/* If processes are waiting, the last one
+				   to wake will put the buffer on the free
+				   list.  If no processes are waiting, we
+				   put the buffer on the freelist here. */
+		drm_freelist_put(dev, &dma->bufs[buf->order].freelist, buf);
+	}
+}
+
+void drm_reclaim_buffers(drm_device_t *dev, pid_t pid)
+{
+	drm_device_dma_t *dma = dev->dma;
+	int		 i;
+
+	if (!dma) return;
+	for (i = 0; i < dma->buf_count; i++) {
+		if (dma->buflist[i]->pid == pid) {
+			switch (dma->buflist[i]->list) {
+			case DRM_LIST_NONE:
+				drm_free_buffer(dev, dma->buflist[i]);
+				break;
+			case DRM_LIST_WAIT:
+				dma->buflist[i]->list = DRM_LIST_RECLAIM;
+				break;
+			default:
+				/* Buffer already on hardware. */
+				break;
+			}
+		}
+	}
+}
+
+int drm_context_switch(drm_device_t *dev, int old, int new)
+{
+	char	    buf[64];
+	drm_queue_t *q;
+
+	atomic_inc(&dev->total_ctx);
+
+	if (test_and_set_bit(0, &dev->context_flag)) {
+		DRM_ERROR("Reentering -- FIXME\n");
+		return -EBUSY;
+	}
+
+#if DRM_DMA_HISTOGRAM
+	dev->ctx_start = get_cycles();
+#endif
+
+	DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+	if (new >= dev->queue_count) {
+		clear_bit(0, &dev->context_flag);
+		return -EINVAL;
+	}
+
+	if (new == dev->last_context) {
+		clear_bit(0, &dev->context_flag);
+		return 0;
+	}
+
+	q = dev->queuelist[new];
+	atomic_inc(&q->use_count);
+	if (atomic_read(&q->use_count) == 1) {
+		atomic_dec(&q->use_count);
+		clear_bit(0, &dev->context_flag);
+		return -EINVAL;
+	}
+
+	if (drm_flags & DRM_FLAG_NOCTX) {
+		drm_context_switch_complete(dev, new);
+	} else {
+		sprintf(buf, "C %d %d\n", old, new);
+		drm_write_string(dev, buf);
+	}
+
+	atomic_dec(&q->use_count);
+
+	return 0;
+}
+
+int drm_context_switch_complete(drm_device_t *dev, int new)
+{
+	drm_device_dma_t *dma = dev->dma;
+
+	dev->last_context = new;  /* PRE/POST: This is the _only_ writer. */
+	dev->last_switch  = jiffies;
+
+	if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+		DRM_ERROR("Lock isn't held after context switch\n");
+	}
+
+	if (!dma || !(dma->next_buffer && dma->next_buffer->while_locked)) {
+		if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+				  DRM_KERNEL_CONTEXT)) {
+			DRM_ERROR("Cannot free lock\n");
+		}
+	}
+
+#if DRM_DMA_HISTOGRAM
+	atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles()
+						      - dev->ctx_start)]);
+
+#endif
+	clear_bit(0, &dev->context_flag);
+	wake_up_interruptible(&dev->context_wait);
+
+	return 0;
+}
+
+void drm_clear_next_buffer(drm_device_t *dev)
+{
+	drm_device_dma_t *dma = dev->dma;
+
+	dma->next_buffer = NULL;
+	if (dma->next_queue && !DRM_BUFCOUNT(&dma->next_queue->waitlist)) {
+		wake_up_interruptible(&dma->next_queue->flush_queue);
+	}
+	dma->next_queue	 = NULL;
+}
+
+
+int drm_select_queue(drm_device_t *dev, void (*wrapper)(unsigned long))
+{
+	int	i;
+	int	candidate = -1;
+	int	j	  = jiffies;
+
+	if (!dev) {
+		DRM_ERROR("No device\n");
+		return -1;
+	}
+	if (!dev->queuelist || !dev->queuelist[DRM_KERNEL_CONTEXT]) {
+				/* This only happens between the time the
+				   interrupt is initialized and the time
+				   the queues are initialized. */
+		return -1;
+	}
+
+				/* Doing "while locked" DMA? */
+	if (DRM_WAITCOUNT(dev, DRM_KERNEL_CONTEXT)) {
+		return DRM_KERNEL_CONTEXT;
+	}
+
+				/* If there are buffers on the last_context
+				   queue, and we have not been executing
+				   this context very long, continue to
+				   execute this context. */
+	if (dev->last_switch <= j
+	    && dev->last_switch + DRM_TIME_SLICE > j
+	    && DRM_WAITCOUNT(dev, dev->last_context)) {
+		return dev->last_context;
+	}
+
+				/* Otherwise, find a candidate */
+	for (i = dev->last_checked + 1; i < dev->queue_count; i++) {
+		if (DRM_WAITCOUNT(dev, i)) {
+			candidate = dev->last_checked = i;
+			break;
+		}
+	}
+
+	if (candidate < 0) {
+		for (i = 0; i < dev->queue_count; i++) {
+			if (DRM_WAITCOUNT(dev, i)) {
+				candidate = dev->last_checked = i;
+				break;
+			}
+		}
+	}
+
+	if (wrapper
+	    && candidate >= 0
+	    && candidate != dev->last_context
+	    && dev->last_switch <= j
+	    && dev->last_switch + DRM_TIME_SLICE > j) {
+		if (dev->timer.expires != dev->last_switch + DRM_TIME_SLICE) {
+			del_timer(&dev->timer);
+			dev->timer.function = wrapper;
+			dev->timer.data	    = (unsigned long)dev;
+			dev->timer.expires  = dev->last_switch+DRM_TIME_SLICE;
+			add_timer(&dev->timer);
+		}
+		return -1;
+	}
+
+	return candidate;
+}
+
+
+int drm_dma_enqueue(drm_device_t *dev, drm_dma_t *d)
+{
+	int		  i;
+	drm_queue_t	  *q;
+	drm_buf_t	  *buf;
+	int		  idx;
+	int		  while_locked = 0;
+	drm_device_dma_t  *dma = dev->dma;
+	DECLARE_WAITQUEUE(entry, current);
+
+	DRM_DEBUG("%d\n", d->send_count);
+
+	if (d->flags & _DRM_DMA_WHILE_LOCKED) {
+		int context = dev->lock.hw_lock->lock;
+
+		if (!_DRM_LOCK_IS_HELD(context)) {
+			DRM_ERROR("No lock held during \"while locked\""
+				  " request\n");
+			return -EINVAL;
+		}
+		if (d->context != _DRM_LOCKING_CONTEXT(context)
+		    && _DRM_LOCKING_CONTEXT(context) != DRM_KERNEL_CONTEXT) {
+			DRM_ERROR("Lock held by %d while %d makes"
+				  " \"while locked\" request\n",
+				  _DRM_LOCKING_CONTEXT(context),
+				  d->context);
+			return -EINVAL;
+		}
+		q = dev->queuelist[DRM_KERNEL_CONTEXT];
+		while_locked = 1;
+	} else {
+		q = dev->queuelist[d->context];
+	}
+
+
+	atomic_inc(&q->use_count);
+	if (atomic_read(&q->block_write)) {
+		add_wait_queue(&q->write_queue, &entry);
+		atomic_inc(&q->block_count);
+		for (;;) {
+			current->state = TASK_INTERRUPTIBLE;
+			if (!atomic_read(&q->block_write)) break;
+			schedule();
+			if (signal_pending(current)) {
+				atomic_dec(&q->use_count);
+				remove_wait_queue(&q->write_queue, &entry);
+				return -EINTR;
+			}
+		}
+		atomic_dec(&q->block_count);
+		current->state = TASK_RUNNING;
+		remove_wait_queue(&q->write_queue, &entry);
+	}
+
+	for (i = 0; i < d->send_count; i++) {
+		idx = d->send_indices[i];
+		if (idx < 0 || idx >= dma->buf_count) {
+			atomic_dec(&q->use_count);
+			DRM_ERROR("Index %d (of %d max)\n",
+				  d->send_indices[i], dma->buf_count - 1);
+			return -EINVAL;
+		}
+		buf = dma->buflist[ idx ];
+		if (buf->pid != current->pid) {
+			atomic_dec(&q->use_count);
+			DRM_ERROR("Process %d using buffer owned by %d\n",
+				  current->pid, buf->pid);
+			return -EINVAL;
+		}
+		if (buf->list != DRM_LIST_NONE) {
+			atomic_dec(&q->use_count);
+			DRM_ERROR("Process %d using buffer %d on list %d\n",
+				  current->pid, buf->idx, buf->list);
+		}
+		buf->used	  = d->send_sizes[i];
+		buf->while_locked = while_locked;
+		buf->context	  = d->context;
+		if (!buf->used) {
+			DRM_ERROR("Queueing 0 length buffer\n");
+		}
+		if (buf->pending) {
+			atomic_dec(&q->use_count);
+			DRM_ERROR("Queueing pending buffer:"
+				  " buffer %d, offset %d\n",
+				  d->send_indices[i], i);
+			return -EINVAL;
+		}
+		if (buf->waiting) {
+			atomic_dec(&q->use_count);
+			DRM_ERROR("Queueing waiting buffer:"
+				  " buffer %d, offset %d\n",
+				  d->send_indices[i], i);
+			return -EINVAL;
+		}
+		buf->waiting = 1;
+		if (atomic_read(&q->use_count) == 1
+		    || atomic_read(&q->finalization)) {
+			drm_free_buffer(dev, buf);
+		} else {
+			drm_waitlist_put(&q->waitlist, buf);
+			atomic_inc(&q->total_queued);
+		}
+	}
+	atomic_dec(&q->use_count);
+
+	return 0;
+}
+
+static int drm_dma_get_buffers_of_order(drm_device_t *dev, drm_dma_t *d,
+					int order)
+{
+	int		  i;
+	drm_buf_t	  *buf;
+	drm_device_dma_t  *dma = dev->dma;
+
+	for (i = d->granted_count; i < d->request_count; i++) {
+		buf = drm_freelist_get(&dma->bufs[order].freelist,
+				       d->flags & _DRM_DMA_WAIT);
+		if (!buf) break;
+		if (buf->pending || buf->waiting) {
+			DRM_ERROR("Free buffer %d in use by %d (w%d, p%d)\n",
+				  buf->idx,
+				  buf->pid,
+				  buf->waiting,
+				  buf->pending);
+		}
+		buf->pid = current->pid;
+		if (copy_to_user(&d->request_indices[i],
+				 &buf->idx,
+				 sizeof(buf->idx)))
+			return -EFAULT;
+
+		if (copy_to_user(&d->request_sizes[i],
+				 &buf->total,
+				 sizeof(buf->total)))
+			return -EFAULT;
+
+		++d->granted_count;
+	}
+	return 0;
+}
+
+
+int drm_dma_get_buffers(drm_device_t *dev, drm_dma_t *dma)
+{
+	int	  order;
+	int	  retcode = 0;
+	int	  tmp_order;
+
+	order = drm_order(dma->request_size);
+
+	dma->granted_count = 0;
+	retcode		   = drm_dma_get_buffers_of_order(dev, dma, order);
+
+	if (dma->granted_count < dma->request_count
+	    && (dma->flags & _DRM_DMA_SMALLER_OK)) {
+		for (tmp_order = order - 1;
+		     !retcode
+			     && dma->granted_count < dma->request_count
+			     && tmp_order >= DRM_MIN_ORDER;
+		     --tmp_order) {
+
+			retcode = drm_dma_get_buffers_of_order(dev, dma,
+							       tmp_order);
+		}
+	}
+
+	if (dma->granted_count < dma->request_count
+	    && (dma->flags & _DRM_DMA_LARGER_OK)) {
+		for (tmp_order = order + 1;
+		     !retcode
+			     && dma->granted_count < dma->request_count
+			     && tmp_order <= DRM_MAX_ORDER;
+		     ++tmp_order) {
+
+			retcode = drm_dma_get_buffers_of_order(dev, dma,
+							       tmp_order);
+		}
+	}
+	return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/drawable.c linux-2.4.13-lia/drivers/char/drm-4.0/drawable.c
--- linux-2.4.13/drivers/char/drm-4.0/drawable.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/drawable.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,51 @@
+/* drawable.c -- IOCTLs for drawables -*- linux-c -*-
+ * Created: Tue Feb  2 08:37:54 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Rickard E.
(Rik) Faith
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+
+int drm_adddraw(struct inode *inode, struct file *filp, unsigned int cmd,
+		unsigned long arg)
+{
+	drm_draw_t draw;
+
+	draw.handle = 0;	/* NOOP */
+	DRM_DEBUG("%d\n", draw.handle);
+	if (copy_to_user((drm_draw_t *)arg, &draw, sizeof(draw)))
+		return -EFAULT;
+	return 0;
+}
+
+int drm_rmdraw(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	return 0;		/* NOOP */
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/drm.h linux-2.4.13-lia/drivers/char/drm-4.0/drm.h
--- linux-2.4.13/drivers/char/drm-4.0/drm.h	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/drm.h	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,414 @@
+/* drm.h -- Header for Direct Rendering Manager -*- linux-c -*-
+ * Created: Mon Jan  4 10:05:05 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + * + * Authors: + * Rickard E. (Rik) Faith + * + * Acknowledgements: + * Dec 1999, Richard Henderson , move to generic cmpxchg. + * + */ + +#ifndef _DRM_H_ +#define _DRM_H_ + +#include +#if defined(__linux__) +#include /* For _IO* macros */ +#define DRM_IOCTL_NR(n) _IOC_NR(n) +#elif defined(__FreeBSD__) +#include +#define DRM_IOCTL_NR(n) ((n) & 0xff) +#endif + +#define DRM_PROC_DEVICES "/proc/devices" +#define DRM_PROC_MISC "/proc/misc" +#define DRM_PROC_DRM "/proc/drm" +#define DRM_DEV_DRM "/dev/drm" +#define DRM_DEV_MODE (S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP) +#define DRM_DEV_UID 0 +#define DRM_DEV_GID 0 + + +#define DRM_NAME "drm" /* Name in kernel, /dev, and /proc */ +#define DRM_MIN_ORDER 5 /* At least 2^5 bytes =3D 32 bytes */ +#define DRM_MAX_ORDER 22 /* Up to 2^22 bytes =3D 4MB */ +#define DRM_RAM_PERCENT 10 /* How much system ram can we lock? 
*/ + +#define _DRM_LOCK_HELD 0x80000000 /* Hardware lock is held */ +#define _DRM_LOCK_CONT 0x40000000 /* Hardware lock is contended */ +#define _DRM_LOCK_IS_HELD(lock) ((lock) & _DRM_LOCK_HELD) +#define _DRM_LOCK_IS_CONT(lock) ((lock) & _DRM_LOCK_CONT) +#define _DRM_LOCKING_CONTEXT(lock) ((lock) & ~(_DRM_LOCK_HELD|_DRM_LOCK_CO= NT)) + +typedef unsigned long drm_handle_t; +typedef unsigned int drm_context_t; +typedef unsigned int drm_drawable_t; +typedef unsigned int drm_magic_t; + +/* Warning: If you change this structure, make sure you change + * XF86DRIClipRectRec in the server as well */ + +typedef struct drm_clip_rect { + unsigned short x1; + unsigned short y1; + unsigned short x2; + unsigned short y2; +} drm_clip_rect_t; + +/* Seperate include files for the i810/mga/r128 specific structures */ +#include "mga_drm.h" +#include "i810_drm.h" +#include "r128_drm.h" +#include "radeon_drm.h" +#ifdef CONFIG_DRM40_SIS +#include "sis_drm.h" +#endif + +typedef struct drm_version { + int version_major; /* Major version */ + int version_minor; /* Minor version */ + int version_patchlevel;/* Patch level */ + size_t name_len; /* Length of name buffer */ + char *name; /* Name of driver */ + size_t date_len; /* Length of date buffer */ + char *date; /* User-space buffer to hold date */ + size_t desc_len; /* Length of desc buffer */ + char *desc; /* User-space buffer to hold desc */ +} drm_version_t; + +typedef struct drm_unique { + size_t unique_len; /* Length of unique */ + char *unique; /* Unique name for driver instantiation */ +} drm_unique_t; + +typedef struct drm_list { + int count; /* Length of user-space structures */ + drm_version_t *version; +} drm_list_t; + +typedef struct drm_block { + int unused; +} drm_block_t; + +typedef struct drm_control { + enum { + DRM_ADD_COMMAND, + DRM_RM_COMMAND, + DRM_INST_HANDLER, + DRM_UNINST_HANDLER + } func; + int irq; +} drm_control_t; + +typedef enum drm_map_type { + _DRM_FRAME_BUFFER =3D 0, /* WC (no caching), no core dump */ + 
_DRM_REGISTERS =3D 1, /* no caching, no core dump */ + _DRM_SHM =3D 2, /* shared, cached */ + _DRM_AGP =3D 3 /* AGP/GART */ +} drm_map_type_t; + +typedef enum drm_map_flags { + _DRM_RESTRICTED =3D 0x01, /* Cannot be mapped to user-virtual */ + _DRM_READ_ONLY =3D 0x02, + _DRM_LOCKED =3D 0x04, /* shared, cached, locked */ + _DRM_KERNEL =3D 0x08, /* kernel requires access */ + _DRM_WRITE_COMBINING =3D 0x10, /* use write-combining if available */ + _DRM_CONTAINS_LOCK =3D 0x20 /* SHM page that contains lock */ +} drm_map_flags_t; + +typedef struct drm_map { + unsigned long offset; /* Requested physical address (0 for SAREA)*/ + unsigned long size; /* Requested physical size (bytes) */ + drm_map_type_t type; /* Type of memory to map */ + drm_map_flags_t flags; /* Flags */ + void *handle; /* User-space: "Handle" to pass to mmap */ + /* Kernel-space: kernel-virtual address */ + int mtrr; /* MTRR slot used */ + /* Private data */ +} drm_map_t; + +typedef enum drm_lock_flags { + _DRM_LOCK_READY =3D 0x01, /* Wait until hardware is ready for DMA */ + _DRM_LOCK_QUIESCENT =3D 0x02, /* Wait until hardware quiescent */ + _DRM_LOCK_FLUSH =3D 0x04, /* Flush this context's DMA queue first */ + _DRM_LOCK_FLUSH_ALL =3D 0x08, /* Flush all DMA queues first */ + /* These *HALT* flags aren't supported yet + -- they will be used to support the + full-screen DGA-like mode. */ + _DRM_HALT_ALL_QUEUES =3D 0x10, /* Halt all current and future queues */ + _DRM_HALT_CUR_QUEUES =3D 0x20 /* Halt all current queues */ +} drm_lock_flags_t; + +typedef struct drm_lock { + int context; + drm_lock_flags_t flags; +} drm_lock_t; + +typedef enum drm_dma_flags { /* These values *MUST* match xf86drm.h = */ + /* Flags for DMA buffer dispatch */ + _DRM_DMA_BLOCK =3D 0x01, /* Block until buffer dispatched. + Note, the buffer may not yet have + been processed by the hardware -- + getting a hardware lock with the + hardware quiescent will ensure + that the buffer has been + processed. 
*/ + _DRM_DMA_WHILE_LOCKED =3D 0x02, /* Dispatch while lock held */ + _DRM_DMA_PRIORITY =3D 0x04, /* High priority dispatch */ + + /* Flags for DMA buffer request */ + _DRM_DMA_WAIT =3D 0x10, /* Wait for free buffers */ + _DRM_DMA_SMALLER_OK =3D 0x20, /* Smaller-than-requested buffers ok */ + _DRM_DMA_LARGER_OK =3D 0x40 /* Larger-than-requested buffers ok */ +} drm_dma_flags_t; + +typedef struct drm_buf_desc { + int count; /* Number of buffers of this size */ + int size; /* Size in bytes */ + int low_mark; /* Low water mark */ + int high_mark; /* High water mark */ + enum { + _DRM_PAGE_ALIGN =3D 0x01, /* Align on page boundaries for DMA */ + _DRM_AGP_BUFFER =3D 0x02 /* Buffer is in agp space */ + } flags; + unsigned long agp_start; /* Start address of where the agp buffers + * are in the agp aperture */ +} drm_buf_desc_t; + +typedef struct drm_buf_info { + int count; /* Entries in list */ + drm_buf_desc_t *list; +} drm_buf_info_t; + +typedef struct drm_buf_free { + int count; + int *list; +} drm_buf_free_t; + +typedef struct drm_buf_pub { + int idx; /* Index into master buflist */ + int total; /* Buffer size */ + int used; /* Amount of buffer in use (for DMA) */ + void *address; /* Address of buffer */ +} drm_buf_pub_t; + +typedef struct drm_buf_map { + int count; /* Length of buflist */ + void *virtual; /* Mmaped area in user-virtual */ + drm_buf_pub_t *list; /* Buffer information */ +} drm_buf_map_t; + +typedef struct drm_dma { + /* Indices here refer to the offset into + buflist in drm_buf_get_t. 
*/ + int context; /* Context handle */ + int send_count; /* Number of buffers to send */ + int *send_indices; /* List of handles to buffers */ + int *send_sizes; /* Lengths of data to send */ + drm_dma_flags_t flags; /* Flags */ + int request_count; /* Number of buffers requested */ + int request_size; /* Desired size for buffers */ + int *request_indices; /* Buffer information */ + int *request_sizes; + int granted_count; /* Number of buffers granted */ +} drm_dma_t; + +typedef enum { + _DRM_CONTEXT_PRESERVED =3D 0x01, + _DRM_CONTEXT_2DONLY =3D 0x02 +} drm_ctx_flags_t; + +typedef struct drm_ctx { + drm_context_t handle; + drm_ctx_flags_t flags; +} drm_ctx_t; + +typedef struct drm_ctx_res { + int count; + drm_ctx_t *contexts; +} drm_ctx_res_t; + +typedef struct drm_draw { + drm_drawable_t handle; +} drm_draw_t; + +typedef struct drm_auth { + drm_magic_t magic; +} drm_auth_t; + +typedef struct drm_irq_busid { + int irq; + int busnum; + int devnum; + int funcnum; +} drm_irq_busid_t; + +typedef struct drm_agp_mode { + unsigned long mode; +} drm_agp_mode_t; + + /* For drm_agp_alloc -- allocated a buffer */ +typedef struct drm_agp_buffer { + unsigned long size; /* In bytes -- will round to page boundary */ + unsigned long handle; /* Used for BIND/UNBIND ioctls */ + unsigned long type; /* Type of memory to allocate */ + unsigned long physical; /* Physical used by i810 */ +} drm_agp_buffer_t; + + /* For drm_agp_bind */ +typedef struct drm_agp_binding { + unsigned long handle; /* From drm_agp_buffer */ + unsigned long offset; /* In bytes -- will round to page boundary */ +} drm_agp_binding_t; + +typedef struct drm_agp_info { + int agp_version_major; + int agp_version_minor; + unsigned long mode; + unsigned long aperture_base; /* physical address */ + unsigned long aperture_size; /* bytes */ + unsigned long memory_allowed; /* bytes */ + unsigned long memory_used; + + /* PCI information */ + unsigned short id_vendor; + unsigned short id_device; +} drm_agp_info_t; + +#define 
DRM_IOCTL_BASE 'd' +#define DRM_IO(nr) _IO(DRM_IOCTL_BASE,nr) +#define DRM_IOR(nr,size) _IOR(DRM_IOCTL_BASE,nr,size) +#define DRM_IOW(nr,size) _IOW(DRM_IOCTL_BASE,nr,size) +#define DRM_IOWR(nr,size) _IOWR(DRM_IOCTL_BASE,nr,size) + + +#define DRM_IOCTL_VERSION DRM_IOWR(0x00, drm_version_t) +#define DRM_IOCTL_GET_UNIQUE DRM_IOWR(0x01, drm_unique_t) +#define DRM_IOCTL_GET_MAGIC DRM_IOR( 0x02, drm_auth_t) +#define DRM_IOCTL_IRQ_BUSID DRM_IOWR(0x03, drm_irq_busid_t) + +#define DRM_IOCTL_SET_UNIQUE DRM_IOW( 0x10, drm_unique_t) +#define DRM_IOCTL_AUTH_MAGIC DRM_IOW( 0x11, drm_auth_t) +#define DRM_IOCTL_BLOCK DRM_IOWR(0x12, drm_block_t) +#define DRM_IOCTL_UNBLOCK DRM_IOWR(0x13, drm_block_t) +#define DRM_IOCTL_CONTROL DRM_IOW( 0x14, drm_control_t) +#define DRM_IOCTL_ADD_MAP DRM_IOWR(0x15, drm_map_t) +#define DRM_IOCTL_ADD_BUFS DRM_IOWR(0x16, drm_buf_desc_t) +#define DRM_IOCTL_MARK_BUFS DRM_IOW( 0x17, drm_buf_desc_t) +#define DRM_IOCTL_INFO_BUFS DRM_IOWR(0x18, drm_buf_info_t) +#define DRM_IOCTL_MAP_BUFS DRM_IOWR(0x19, drm_buf_map_t) +#define DRM_IOCTL_FREE_BUFS DRM_IOW( 0x1a, drm_buf_free_t) + +#define DRM_IOCTL_ADD_CTX DRM_IOWR(0x20, drm_ctx_t) +#define DRM_IOCTL_RM_CTX DRM_IOWR(0x21, drm_ctx_t) +#define DRM_IOCTL_MOD_CTX DRM_IOW( 0x22, drm_ctx_t) +#define DRM_IOCTL_GET_CTX DRM_IOWR(0x23, drm_ctx_t) +#define DRM_IOCTL_SWITCH_CTX DRM_IOW( 0x24, drm_ctx_t) +#define DRM_IOCTL_NEW_CTX DRM_IOW( 0x25, drm_ctx_t) +#define DRM_IOCTL_RES_CTX DRM_IOWR(0x26, drm_ctx_res_t) +#define DRM_IOCTL_ADD_DRAW DRM_IOWR(0x27, drm_draw_t) +#define DRM_IOCTL_RM_DRAW DRM_IOWR(0x28, drm_draw_t) +#define DRM_IOCTL_DMA DRM_IOWR(0x29, drm_dma_t) +#define DRM_IOCTL_LOCK DRM_IOW( 0x2a, drm_lock_t) +#define DRM_IOCTL_UNLOCK DRM_IOW( 0x2b, drm_lock_t) +#define DRM_IOCTL_FINISH DRM_IOW( 0x2c, drm_lock_t) + +#define DRM_IOCTL_AGP_ACQUIRE DRM_IO( 0x30) +#define DRM_IOCTL_AGP_RELEASE DRM_IO( 0x31) +#define DRM_IOCTL_AGP_ENABLE DRM_IOW( 0x32, drm_agp_mode_t) +#define DRM_IOCTL_AGP_INFO DRM_IOR( 0x33, 
drm_agp_info_t) +#define DRM_IOCTL_AGP_ALLOC DRM_IOWR(0x34, drm_agp_buffer_t) +#define DRM_IOCTL_AGP_FREE DRM_IOW( 0x35, drm_agp_buffer_t) +#define DRM_IOCTL_AGP_BIND DRM_IOW( 0x36, drm_agp_binding_t) +#define DRM_IOCTL_AGP_UNBIND DRM_IOW( 0x37, drm_agp_binding_t) + +/* Mga specific ioctls */ +#define DRM_IOCTL_MGA_INIT DRM_IOW( 0x40, drm_mga_init_t) +#define DRM_IOCTL_MGA_SWAP DRM_IOW( 0x41, drm_mga_swap_t) +#define DRM_IOCTL_MGA_CLEAR DRM_IOW( 0x42, drm_mga_clear_t) +#define DRM_IOCTL_MGA_ILOAD DRM_IOW( 0x43, drm_mga_iload_t) +#define DRM_IOCTL_MGA_VERTEX DRM_IOW( 0x44, drm_mga_vertex_t) +#define DRM_IOCTL_MGA_FLUSH DRM_IOW( 0x45, drm_lock_t ) +#define DRM_IOCTL_MGA_INDICES DRM_IOW( 0x46, drm_mga_indices_t) +#define DRM_IOCTL_MGA_BLIT DRM_IOW( 0x47, drm_mga_blit_t) + +/* I810 specific ioctls */ +#define DRM_IOCTL_I810_INIT DRM_IOW( 0x40, drm_i810_init_t) +#define DRM_IOCTL_I810_VERTEX DRM_IOW( 0x41, drm_i810_vertex_t) +#define DRM_IOCTL_I810_CLEAR DRM_IOW( 0x42, drm_i810_clear_t) +#define DRM_IOCTL_I810_FLUSH DRM_IO( 0x43) +#define DRM_IOCTL_I810_GETAGE DRM_IO( 0x44) +#define DRM_IOCTL_I810_GETBUF DRM_IOWR(0x45, drm_i810_dma_t) +#define DRM_IOCTL_I810_SWAP DRM_IO( 0x46) +#define DRM_IOCTL_I810_COPY DRM_IOW( 0x47, drm_i810_copy_t) +#define DRM_IOCTL_I810_DOCOPY DRM_IO( 0x48) + +/* Rage 128 specific ioctls */ +#define DRM_IOCTL_R128_INIT DRM_IOW( 0x40, drm_r128_init_t) +#define DRM_IOCTL_R128_CCE_START DRM_IO( 0x41) +#define DRM_IOCTL_R128_CCE_STOP DRM_IOW( 0x42, drm_r128_cce_stop_t) +#define DRM_IOCTL_R128_CCE_RESET DRM_IO( 0x43) +#define DRM_IOCTL_R128_CCE_IDLE DRM_IO( 0x44) +#define DRM_IOCTL_R128_RESET DRM_IO( 0x46) +#define DRM_IOCTL_R128_SWAP DRM_IO( 0x47) +#define DRM_IOCTL_R128_CLEAR DRM_IOW( 0x48, drm_r128_clear_t) +#define DRM_IOCTL_R128_VERTEX DRM_IOW( 0x49, drm_r128_vertex_t) +#define DRM_IOCTL_R128_INDICES DRM_IOW( 0x4a, drm_r128_indices_t) +#define DRM_IOCTL_R128_BLIT DRM_IOW( 0x4b, drm_r128_blit_t) +#define DRM_IOCTL_R128_DEPTH DRM_IOW( 0x4c, 
drm_r128_depth_t) +#define DRM_IOCTL_R128_STIPPLE DRM_IOW( 0x4d, drm_r128_stipple_t) +#define DRM_IOCTL_R128_PACKET DRM_IOWR(0x4e, drm_r128_packet_t) + +/* Radeon specific ioctls */ +#define DRM_IOCTL_RADEON_CP_INIT DRM_IOW( 0x40, drm_radeon_init_t) +#define DRM_IOCTL_RADEON_CP_START DRM_IO( 0x41) +#define DRM_IOCTL_RADEON_CP_STOP DRM_IOW( 0x42, drm_radeon_cp_stop_t) +#define DRM_IOCTL_RADEON_CP_RESET DRM_IO( 0x43) +#define DRM_IOCTL_RADEON_CP_IDLE DRM_IO( 0x44) +#define DRM_IOCTL_RADEON_RESET DRM_IO( 0x45) +#define DRM_IOCTL_RADEON_FULLSCREEN DRM_IOW( 0x46, drm_radeon_fullscreen_t) +#define DRM_IOCTL_RADEON_SWAP DRM_IO( 0x47) +#define DRM_IOCTL_RADEON_CLEAR DRM_IOW( 0x48, drm_radeon_clear_t) +#define DRM_IOCTL_RADEON_VERTEX DRM_IOW( 0x49, drm_radeon_vertex_t) +#define DRM_IOCTL_RADEON_INDICES DRM_IOW( 0x4a, drm_radeon_indices_t) +#define DRM_IOCTL_RADEON_BLIT DRM_IOW( 0x4b, drm_radeon_blit_t) +#define DRM_IOCTL_RADEON_STIPPLE DRM_IOW( 0x4c, drm_radeon_stipple_t) +#define DRM_IOCTL_RADEON_INDIRECT DRM_IOWR(0x4d, drm_radeon_indirect_t) + +#ifdef CONFIG_DRM40_SIS +/* SiS specific ioctls */ +#define SIS_IOCTL_FB_ALLOC DRM_IOWR(0x44, drm_sis_mem_t) +#define SIS_IOCTL_FB_FREE DRM_IOW( 0x45, drm_sis_mem_t) +#define SIS_IOCTL_AGP_INIT DRM_IOWR(0x53, drm_sis_agp_t) +#define SIS_IOCTL_AGP_ALLOC DRM_IOWR(0x54, drm_sis_mem_t) +#define SIS_IOCTL_AGP_FREE DRM_IOW( 0x55, drm_sis_mem_t) +#define SIS_IOCTL_FLIP DRM_IOW( 0x48, drm_sis_flip_t) +#define SIS_IOCTL_FLIP_INIT DRM_IO( 0x49) +#define SIS_IOCTL_FLIP_FINAL DRM_IO( 0x50) +#endif + +#endif diff -urN linux-2.4.13/drivers/char/drm-4.0/drmP.h linux-2.4.13-lia/drivers= /char/drm-4.0/drmP.h --- linux-2.4.13/drivers/char/drm-4.0/drmP.h Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/drmP.h Wed Oct 24 18:34:24 2001 @@ -0,0 +1,839 @@ +/* drmP.h -- Private header for Direct Rendering Manager -*- linux-c -*- + * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, 
Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + * + * Authors: + * Rickard E. (Rik) Faith + * + */ + +#ifndef _DRM_P_H_ +#define _DRM_P_H_ + +#ifdef __KERNEL__ +#ifdef __alpha__ +/* add include of current.h so that "current" is defined + * before static inline funcs in wait.h. Doing this so we + * can build the DRM (part of PI DRI). 
4/21/2000 S + B */ +#include +#endif /* __alpha__ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include /* For (un)lock_kernel */ +#include +#ifdef __alpha__ +#include /* For pte_wrprotect */ +#endif +#include +#include +#include +#ifdef CONFIG_MTRR +#include +#endif +#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) +#include +#include +#endif +#if LINUX_VERSION_CODE >=3D 0x020100 /* KERNEL_VERSION(2,1,0) */ +#include +#include +#endif +#if LINUX_VERSION_CODE < 0x020400 +#include "compat-pre24.h" +#endif +#include "drm.h" + +#define DRM_DEBUG_CODE 2 /* Include debugging code (if > 1, then + also include looping detection. */ +#define DRM_DMA_HISTOGRAM 1 /* Make histogram of DMA latency. */ + +#define DRM_HASH_SIZE 16 /* Size of key hash table */ +#define DRM_KERNEL_CONTEXT 0 /* Change drm_resctx if changed */ +#define DRM_RESERVED_CONTEXTS 1 /* Change drm_resctx if changed */ +#define DRM_LOOPING_LIMIT 5000000 +#define DRM_BSZ 1024 /* Buffer size for /dev/drm? 
output */ +#define DRM_TIME_SLICE (HZ/20) /* Time slice for GLXContexts */ +#define DRM_LOCK_SLICE 1 /* Time slice for lock, in jiffies */ + +#define DRM_FLAG_DEBUG 0x01 +#define DRM_FLAG_NOCTX 0x02 + +#define DRM_MEM_DMA 0 +#define DRM_MEM_SAREA 1 +#define DRM_MEM_DRIVER 2 +#define DRM_MEM_MAGIC 3 +#define DRM_MEM_IOCTLS 4 +#define DRM_MEM_MAPS 5 +#define DRM_MEM_VMAS 6 +#define DRM_MEM_BUFS 7 +#define DRM_MEM_SEGS 8 +#define DRM_MEM_PAGES 9 +#define DRM_MEM_FILES 10 +#define DRM_MEM_QUEUES 11 +#define DRM_MEM_CMDS 12 +#define DRM_MEM_MAPPINGS 13 +#define DRM_MEM_BUFLISTS 14 +#define DRM_MEM_AGPLISTS 15 +#define DRM_MEM_TOTALAGP 16 +#define DRM_MEM_BOUNDAGP 17 +#define DRM_MEM_CTXBITMAP 18 + +#define DRM_MAX_CTXBITMAP (PAGE_SIZE * 8) + + /* Backward compatibility section */ + /* _PAGE_WT changed to _PAGE_PWT in 2.2.6 */ +#ifndef _PAGE_PWT +#define _PAGE_PWT _PAGE_WT +#endif + /* Wait queue declarations changed in 2.3.1 */ +#ifndef DECLARE_WAITQUEUE +#define DECLARE_WAITQUEUE(w,c) struct wait_queue w =3D { c, NULL } +typedef struct wait_queue *wait_queue_head_t; +#define init_waitqueue_head(q) *q =3D NULL; +#endif + + /* _PAGE_4M changed to _PAGE_PSE in 2.3.23 */ +#ifndef _PAGE_PSE +#define _PAGE_PSE _PAGE_4M +#endif + + /* vm_offset changed to vm_pgoff in 2.3.25 */ +#if LINUX_VERSION_CODE < 0x020319 +#define VM_OFFSET(vma) ((vma)->vm_offset) +#else +#define VM_OFFSET(vma) ((vma)->vm_pgoff << PAGE_SHIFT) +#endif + + /* *_nopage return values defined in 2.3.26 */ +#ifndef NOPAGE_SIGBUS +#define NOPAGE_SIGBUS 0 +#endif +#ifndef NOPAGE_OOM +#define NOPAGE_OOM 0 +#endif + + /* module_init/module_exit added in 2.3.13 */ +#ifndef module_init +#define module_init(x) int init_module(void) { return x(); } +#endif +#ifndef module_exit +#define module_exit(x) void cleanup_module(void) { x(); } +#endif + + /* Generic cmpxchg added in 2.3.x */ +#ifndef __HAVE_ARCH_CMPXCHG + /* Include this here so that driver can be + used with older kernels. 
*/ +#if defined(__alpha__) +static __inline__ unsigned long +__cmpxchg_u32(volatile int *m, int old, int new) +{ + unsigned long prev, cmp; + + __asm__ __volatile__( + "1: ldl_l %0,%2\n" + " cmpeq %0,%3,%1\n" + " beq %1,2f\n" + " mov %4,%1\n" + " stl_c %1,%2\n" + " beq %1,3f\n" + "2: mb\n" + ".subsection 2\n" + "3: br 1b\n" + ".previous" + : "=3D&r"(prev), "=3D&r"(cmp), "=3Dm"(*m) + : "r"((long) old), "r"(new), "m"(*m)); + + return prev; +} + +static __inline__ unsigned long +__cmpxchg_u64(volatile long *m, unsigned long old, unsigned long new) +{ + unsigned long prev, cmp; + + __asm__ __volatile__( + "1: ldq_l %0,%2\n" + " cmpeq %0,%3,%1\n" + " beq %1,2f\n" + " mov %4,%1\n" + " stq_c %1,%2\n" + " beq %1,3f\n" + "2: mb\n" + ".subsection 2\n" + "3: br 1b\n" + ".previous" + : "=3D&r"(prev), "=3D&r"(cmp), "=3Dm"(*m) + : "r"((long) old), "r"(new), "m"(*m)); + + return prev; +} + +static __inline__ unsigned long +__cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int si= ze) +{ + switch (size) { + case 4: + return __cmpxchg_u32(ptr, old, new); + case 8: + return __cmpxchg_u64(ptr, old, new); + } + return old; +} +#define cmpxchg(ptr,o,n) \ + ({ \ + __typeof__(*(ptr)) _o_ =3D (o); \ + __typeof__(*(ptr)) _n_ =3D (n); \ + (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \ + (unsigned long)_n_, sizeof(*(ptr))); \ + }) + +#elif __i386__ +static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long ol= d, + unsigned long new, int size) +{ + unsigned long prev; + switch (size) { + case 1: + __asm__ __volatile__(LOCK_PREFIX "cmpxchgb %b1,%2" + : "=3Da"(prev) + : "q"(new), "m"(*__xg(ptr)), "0"(old) + : "memory"); + return prev; + case 2: + __asm__ __volatile__(LOCK_PREFIX "cmpxchgw %w1,%2" + : "=3Da"(prev) + : "q"(new), "m"(*__xg(ptr)), "0"(old) + : "memory"); + return prev; + case 4: + __asm__ __volatile__(LOCK_PREFIX "cmpxchgl %1,%2" + : "=3Da"(prev) + : "q"(new), "m"(*__xg(ptr)), "0"(old) + : "memory"); + return prev; + } + return old; +} 
+ +#define cmpxchg(ptr,o,n) \ + ((__typeof__(*(ptr)))__cmpxchg((ptr),(unsigned long)(o), \ + (unsigned long)(n),sizeof(*(ptr)))) +#endif /* i386 & alpha */ +#endif + + /* Macros to make printk easier */ +#define DRM_ERROR(fmt, arg...) \ + printk(KERN_ERR "[" DRM_NAME ":" __FUNCTION__ "] *ERROR* " fmt , ##arg) +#define DRM_MEM_ERROR(area, fmt, arg...) \ + printk(KERN_ERR "[" DRM_NAME ":" __FUNCTION__ ":%s] *ERROR* " fmt , \ + drm_mem_stats[area].name , ##arg) +#define DRM_INFO(fmt, arg...) printk(KERN_INFO "[" DRM_NAME "] " fmt , ##= arg) + +#if DRM_DEBUG_CODE +#define DRM_DEBUG(fmt, arg...) \ + do { \ + if (drm_flags&DRM_FLAG_DEBUG) \ + printk(KERN_DEBUG \ + "[" DRM_NAME ":" __FUNCTION__ "] " fmt , \ + ##arg); \ + } while (0) +#else +#define DRM_DEBUG(fmt, arg...) do { } while (0) +#endif + +#define DRM_PROC_LIMIT (PAGE_SIZE-80) + +#define DRM_PROC_PRINT(fmt, arg...) \ + len +=3D sprintf(&buf[len], fmt , ##arg); \ + if (len > DRM_PROC_LIMIT) return len; + +#define DRM_PROC_PRINT_RET(ret, fmt, arg...) 
\ + len +=3D sprintf(&buf[len], fmt , ##arg); \ + if (len > DRM_PROC_LIMIT) { ret; return len; } + + /* Internal types and structures */ +#define DRM_ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0])) +#define DRM_MIN(a,b) ((a)<(b)?(a):(b)) +#define DRM_MAX(a,b) ((a)>(b)?(a):(b)) + +#define DRM_LEFTCOUNT(x) (((x)->rp + (x)->count - (x)->wp) % ((x)->count += 1)) +#define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x)) +#define DRM_WAITCOUNT(dev,idx) DRM_BUFCOUNT(&dev->queuelist[idx]->waitlist) + +typedef int drm_ioctl_t(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + +typedef struct drm_ioctl_desc { + drm_ioctl_t *func; + int auth_needed; + int root_only; +} drm_ioctl_desc_t; + +typedef struct drm_devstate { + pid_t owner; /* X server pid holding x_lock */ +=09 +} drm_devstate_t; + +typedef struct drm_magic_entry { + drm_magic_t magic; + struct drm_file *priv; + struct drm_magic_entry *next; +} drm_magic_entry_t; + +typedef struct drm_magic_head { + struct drm_magic_entry *head; + struct drm_magic_entry *tail; +} drm_magic_head_t; + +typedef struct drm_vma_entry { + struct vm_area_struct *vma; + struct drm_vma_entry *next; + pid_t pid; +} drm_vma_entry_t; + +typedef struct drm_buf { + int idx; /* Index into master buflist */ + int total; /* Buffer size */ + int order; /* log-base-2(total) */ + int used; /* Amount of buffer in use (for DMA) */ + unsigned long offset; /* Byte offset (used internally) */ + void *address; /* Address of buffer */ + unsigned long bus_address; /* Bus address of buffer */ + struct drm_buf *next; /* Kernel-only: used for free list */ + __volatile__ int waiting; /* On kernel DMA queue */ + __volatile__ int pending; /* On hardware DMA queue */ + wait_queue_head_t dma_wait; /* Processes waiting */ + pid_t pid; /* PID of holding process */ + int context; /* Kernel queue for this buffer */ + int while_locked;/* Dispatch this buffer while locked */ + enum { + DRM_LIST_NONE =3D 0, + DRM_LIST_FREE =3D 1, + DRM_LIST_WAIT =3D 2, 
+ DRM_LIST_PEND =3D 3, + DRM_LIST_PRIO =3D 4, + DRM_LIST_RECLAIM =3D 5 + } list; /* Which list we're on */ + +#if DRM_DMA_HISTOGRAM + cycles_t time_queued; /* Queued to kernel DMA queue */ + cycles_t time_dispatched; /* Dispatched to hardware */ + cycles_t time_completed; /* Completed by hardware */ + cycles_t time_freed; /* Back on freelist */ +#endif + + int dev_priv_size; /* Size of buffer private stoarge */ + void *dev_private; /* Per-buffer private storage */ +} drm_buf_t; + +#if DRM_DMA_HISTOGRAM +#define DRM_DMA_HISTOGRAM_SLOTS 9 +#define DRM_DMA_HISTOGRAM_INITIAL 10 +#define DRM_DMA_HISTOGRAM_NEXT(current) ((current)*10) +typedef struct drm_histogram { + atomic_t total; + + atomic_t queued_to_dispatched[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t dispatched_to_completed[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t completed_to_freed[DRM_DMA_HISTOGRAM_SLOTS]; + + atomic_t queued_to_completed[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t queued_to_freed[DRM_DMA_HISTOGRAM_SLOTS]; + + atomic_t dma[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t schedule[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t ctx[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t lacq[DRM_DMA_HISTOGRAM_SLOTS]; + atomic_t lhld[DRM_DMA_HISTOGRAM_SLOTS]; +} drm_histogram_t; +#endif + + /* bufs is one longer than it has to be */ +typedef struct drm_waitlist { + int count; /* Number of possible buffers */ + drm_buf_t **bufs; /* List of pointers to buffers */ + drm_buf_t **rp; /* Read pointer */ + drm_buf_t **wp; /* Write pointer */ + drm_buf_t **end; /* End pointer */ + spinlock_t read_lock; + spinlock_t write_lock; +} drm_waitlist_t; + +typedef struct drm_freelist { + int initialized; /* Freelist in use */ + atomic_t count; /* Number of free buffers */ + drm_buf_t *next; /* End pointer */ + + wait_queue_head_t waiting; /* Processes waiting on free bufs */ + int low_mark; /* Low water mark */ + int high_mark; /* High water mark */ + atomic_t wfh; /* If waiting for high mark */ + spinlock_t lock; +} drm_freelist_t; + +typedef struct drm_buf_entry { + 
int buf_size; + int buf_count; + drm_buf_t *buflist; + int seg_count; + int page_order; + unsigned long *seglist; + + drm_freelist_t freelist; +} drm_buf_entry_t; + +typedef struct drm_hw_lock { + __volatile__ unsigned int lock; + char padding[60]; /* Pad to cache line */ +} drm_hw_lock_t; + +typedef struct drm_file { + int authenticated; + int minor; + pid_t pid; + uid_t uid; + drm_magic_t magic; + unsigned long ioctl_count; + struct drm_file *next; + struct drm_file *prev; + struct drm_device *dev; + int remove_auth_on_close; +} drm_file_t; + + +typedef struct drm_queue { + atomic_t use_count; /* Outstanding uses (+1) */ + atomic_t finalization; /* Finalization in progress */ + atomic_t block_count; /* Count of processes waiting */ + atomic_t block_read; /* Queue blocked for reads */ + wait_queue_head_t read_queue; /* Processes waiting on block_read */ + atomic_t block_write; /* Queue blocked for writes */ + wait_queue_head_t write_queue; /* Processes waiting on block_write */ + atomic_t total_queued; /* Total queued statistic */ + atomic_t total_flushed;/* Total flushes statistic */ + atomic_t total_locks; /* Total locks statistics */ + drm_ctx_flags_t flags; /* Context preserving and 2D-only */ + drm_waitlist_t waitlist; /* Pending buffers */ + wait_queue_head_t flush_queue; /* Processes waiting until flush */ +} drm_queue_t; + +typedef struct drm_lock_data { + drm_hw_lock_t *hw_lock; /* Hardware lock */ + pid_t pid; /* PID of lock holder (0=3Dkernel) */ + wait_queue_head_t lock_queue; /* Queue of blocked processes */ + unsigned long lock_time; /* Time of last lock in jiffies */ +} drm_lock_data_t; + +typedef struct drm_device_dma { + /* Performance Counters */ + atomic_t total_prio; /* Total DRM_DMA_PRIORITY */ + atomic_t total_bytes; /* Total bytes DMA'd */ + atomic_t total_dmas; /* Total DMA buffers dispatched */ + + atomic_t total_missed_dma; /* Missed drm_do_dma */ + atomic_t total_missed_lock; /* Missed lock in drm_do_dma */ + atomic_t total_missed_free; 
/* Missed drm_free_this_buffer */ + atomic_t total_missed_sched;/* Missed drm_dma_schedule */ + + atomic_t total_tried; /* Tried next_buffer */ + atomic_t total_hit; /* Sent next_buffer */ + atomic_t total_lost; /* Lost interrupt */ + + drm_buf_entry_t bufs[DRM_MAX_ORDER+1]; + int buf_count; + drm_buf_t **buflist; /* Vector of pointers info bufs */ + int seg_count; + int page_count; + unsigned long *pagelist; + unsigned long byte_count; + enum { + _DRM_DMA_USE_AGP =3D 0x01 + } flags; + + /* DMA support */ + drm_buf_t *this_buffer; /* Buffer being sent */ + drm_buf_t *next_buffer; /* Selected buffer to send */ + drm_queue_t *next_queue; /* Queue from which buffer selected*/ + wait_queue_head_t waiting; /* Processes waiting on free bufs */ +} drm_device_dma_t; + +#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) +typedef struct drm_agp_mem { + unsigned long handle; + agp_memory *memory; + unsigned long bound; /* address */ + int pages; + struct drm_agp_mem *prev; + struct drm_agp_mem *next; +} drm_agp_mem_t; + +typedef struct drm_agp_head { + agp_kern_info agp_info; + const char *chipset; + drm_agp_mem_t *memory; + unsigned long mode; + int enabled; + int acquired; + unsigned long base; + int agp_mtrr; + int cant_use_aperture; + unsigned long page_mask; +} drm_agp_head_t; +#endif + +typedef struct drm_sigdata { + int context; + drm_hw_lock_t *lock; +} drm_sigdata_t; + +typedef struct drm_device { + const char *name; /* Simple driver name */ + char *unique; /* Unique identifier: e.g., busid */ + int unique_len; /* Length of unique field */ + dev_t device; /* Device number for mknod */ + char *devname; /* For /proc/interrupts */ + + int blocked; /* Blocked due to VC switch? 
*/ + struct proc_dir_entry *root; /* Root for this device's entries */ + + /* Locks */ + spinlock_t count_lock; /* For inuse, open_count, buf_use */ + struct semaphore struct_sem; /* For others */ + + /* Usage Counters */ + int open_count; /* Outstanding files open */ + atomic_t ioctl_count; /* Outstanding IOCTLs pending */ + atomic_t vma_count; /* Outstanding vma areas open */ + int buf_use; /* Buffers in use -- cannot alloc */ + atomic_t buf_alloc; /* Buffer allocation in progress */ + + /* Performance Counters */ + atomic_t total_open; + atomic_t total_close; + atomic_t total_ioctl; + atomic_t total_irq; /* Total interruptions */ + atomic_t total_ctx; /* Total context switches */ + + atomic_t total_locks; + atomic_t total_unlocks; + atomic_t total_contends; + atomic_t total_sleeps; + + /* Authentication */ + drm_file_t *file_first; + drm_file_t *file_last; + drm_magic_head_t magiclist[DRM_HASH_SIZE]; + + /* Memory management */ + drm_map_t **maplist; /* Vector of pointers to regions */ + int map_count; /* Number of mappable regions */ + + drm_vma_entry_t *vmalist; /* List of vmas (for debugging) */ + drm_lock_data_t lock; /* Information on hardware lock */ + + /* DMA queues (contexts) */ + int queue_count; /* Number of active DMA queues */ + int queue_reserved; /* Number of reserved DMA queues */ + int queue_slots; /* Actual length of queuelist */ + drm_queue_t **queuelist; /* Vector of pointers to DMA queues */ + drm_device_dma_t *dma; /* Optional pointer for DMA support */ + + /* Context support */ + int irq; /* Interrupt used by board */ + __volatile__ long context_flag; /* Context swapping flag */ + __volatile__ long interrupt_flag; /* Interruption handler flag */ + __volatile__ long dma_flag; /* DMA dispatch flag */ + struct timer_list timer; /* Timer for delaying ctx switch */ + wait_queue_head_t context_wait; /* Processes waiting on ctx switch */ + int last_checked; /* Last context checked for DMA */ + int last_context; /* Last current context */ + 
unsigned long last_switch; /* jiffies at last context switch */ + struct tq_struct tq; + cycles_t ctx_start; + cycles_t lck_start; +#if DRM_DMA_HISTOGRAM + drm_histogram_t histo; +#endif + + /* Callback to X server for context switch + and for heavy-handed reset. */ + char buf[DRM_BSZ]; /* Output buffer */ + char *buf_rp; /* Read pointer */ + char *buf_wp; /* Write pointer */ + char *buf_end; /* End pointer */ + struct fasync_struct *buf_async;/* Processes waiting for SIGIO */ + wait_queue_head_t buf_readers; /* Processes waiting to read */ + wait_queue_head_t buf_writers; /* Processes waiting to ctx switch */ + +#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) + drm_agp_head_t *agp; +#endif + unsigned long *ctx_bitmap; + void *dev_private; + drm_sigdata_t sigdata; /* For block_all_signals */ + sigset_t sigmask; +} drm_device_t; + + /* Internal function definitions */ + + /* Misc. support (init.c) */ +extern int drm_flags; +extern void drm_parse_options(char *s); +extern int drm_cpu_valid(void); + + + /* Device support (fops.c) */ +extern int drm_open_helper(struct inode *inode, struct file *filp, + drm_device_t *dev); +extern int drm_flush(struct file *filp); +extern int drm_release(struct inode *inode, struct file *filp); +extern int drm_fasync(int fd, struct file *filp, int on); +extern ssize_t drm_read(struct file *filp, char *buf, size_t count, + loff_t *off); +extern int drm_write_string(drm_device_t *dev, const char *s); +extern unsigned int drm_poll(struct file *filp, struct poll_table_struct = *wait); + + /* Mapping support (vm.c) */ +#if LINUX_VERSION_CODE < 0x020317 +extern unsigned long drm_vm_nopage(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern unsigned long drm_vm_shm_nopage(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern unsigned long drm_vm_shm_nopage_lock(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern unsigned long drm_vm_dma_nopage(struct 
vm_area_struct *vma, + unsigned long address, + int write_access); +#else + /* Return type changed in 2.3.23 */ +extern struct page *drm_vm_nopage(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern struct page *drm_vm_shm_nopage(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern struct page *drm_vm_shm_nopage_lock(struct vm_area_struct *vma, + unsigned long address, + int write_access); +extern struct page *drm_vm_dma_nopage(struct vm_area_struct *vma, + unsigned long address, + int write_access); +#endif +extern void drm_vm_open(struct vm_area_struct *vma); +extern void drm_vm_close(struct vm_area_struct *vma); +extern int drm_mmap_dma(struct file *filp, + struct vm_area_struct *vma); +extern int drm_mmap(struct file *filp, struct vm_area_struct *vma); + + + /* Proc support (proc.c) */ +extern int drm_proc_init(drm_device_t *dev); +extern int drm_proc_cleanup(void); + + /* Memory management support (memory.c) */ +extern void drm_mem_init(void); +extern int drm_mem_info(char *buf, char **start, off_t offset, + int len, int *eof, void *data); +extern void *drm_alloc(size_t size, int area); +extern void *drm_realloc(void *oldpt, size_t oldsize, size_t size, + int area); +extern char *drm_strdup(const char *s, int area); +extern void drm_strfree(const char *s, int area); +extern void drm_free(void *pt, size_t size, int area); +extern unsigned long drm_alloc_pages(int order, int area); +extern void drm_free_pages(unsigned long address, int order, + int area); +extern void *drm_ioremap(unsigned long offset, unsigned long size, + drm_device_t *dev); +extern void drm_ioremapfree(void *pt, unsigned long size, + drm_device_t *dev); + +#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) +extern agp_memory *drm_alloc_agp(int pages, u32 type); +extern int drm_free_agp(agp_memory *handle, int pages); +extern int drm_bind_agp(agp_memory *handle, unsigned int start); +extern int drm_unbind_agp(agp_memory *handle); 
+#endif + + + /* Buffer management support (bufs.c) */ +extern int drm_order(unsigned long size); +extern int drm_addmap(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_addbufs(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_infobufs(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_markbufs(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_freebufs(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_mapbufs(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + + /* Buffer list management support (lists.c) */ +extern int drm_waitlist_create(drm_waitlist_t *bl, int count); +extern int drm_waitlist_destroy(drm_waitlist_t *bl); +extern int drm_waitlist_put(drm_waitlist_t *bl, drm_buf_t *buf); +extern drm_buf_t *drm_waitlist_get(drm_waitlist_t *bl); + +extern int drm_freelist_create(drm_freelist_t *bl, int count); +extern int drm_freelist_destroy(drm_freelist_t *bl); +extern int drm_freelist_put(drm_device_t *dev, drm_freelist_t *bl, + drm_buf_t *buf); +extern drm_buf_t *drm_freelist_get(drm_freelist_t *bl, int block); + + /* DMA support (gen_dma.c) */ +extern void drm_dma_setup(drm_device_t *dev); +extern void drm_dma_takedown(drm_device_t *dev); +extern void drm_free_buffer(drm_device_t *dev, drm_buf_t *buf); +extern void drm_reclaim_buffers(drm_device_t *dev, pid_t pid); +extern int drm_context_switch(drm_device_t *dev, int old, int new); +extern int drm_context_switch_complete(drm_device_t *dev, int new); +extern void drm_clear_next_buffer(drm_device_t *dev); +extern int drm_select_queue(drm_device_t *dev, + void (*wrapper)(unsigned long)); +extern int drm_dma_enqueue(drm_device_t *dev, drm_dma_t *dma); +extern int drm_dma_get_buffers(drm_device_t *dev, drm_dma_t *dma); +#if DRM_DMA_HISTOGRAM +extern int 
drm_histogram_slot(unsigned long count); +extern void drm_histogram_compute(drm_device_t *dev, drm_buf_t *buf); +#endif + + + /* Misc. IOCTL support (ioctl.c) */ +extern int drm_irq_busid(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_getunique(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_setunique(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + + /* Context IOCTL support (context.c) */ +extern int drm_resctx(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_addctx(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_modctx(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_getctx(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_switchctx(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_newctx(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_rmctx(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + + /* Drawable IOCTL support (drawable.c) */ +extern int drm_adddraw(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_rmdraw(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + + /* Authentication IOCTL support (auth.c) */ +extern int drm_add_magic(drm_device_t *dev, drm_file_t *priv, + drm_magic_t magic); +extern int drm_remove_magic(drm_device_t *dev, drm_magic_t magic); +extern int drm_getmagic(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int drm_authmagic(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + + /* Locking IOCTL support (lock.c) */ +extern int drm_block(struct inode 
*inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern int	     drm_unblock(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern int	     drm_lock_take(__volatile__ unsigned int *lock,
+			   unsigned int context);
+extern int	     drm_lock_transfer(drm_device_t *dev,
+			   __volatile__ unsigned int *lock,
+			   unsigned int context);
+extern int	     drm_lock_free(drm_device_t *dev,
+			   __volatile__ unsigned int *lock,
+			   unsigned int context);
+extern int	     drm_finish(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern int	     drm_flush_unblock(drm_device_t *dev, int context,
+			   drm_lock_flags_t flags);
+extern int	     drm_flush_block_and_flush(drm_device_t *dev, int context,
+			   drm_lock_flags_t flags);
+extern int	     drm_notifier(void *priv);
+
+				/* Context Bitmap support (ctxbitmap.c) */
+extern int	     drm_ctxbitmap_init(drm_device_t *dev);
+extern void	     drm_ctxbitmap_cleanup(drm_device_t *dev);
+extern int	     drm_ctxbitmap_next(drm_device_t *dev);
+extern void	     drm_ctxbitmap_free(drm_device_t *dev, int ctx_handle);
+
+#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE)
+				/* AGP/GART support (agpsupport.c) */
+extern drm_agp_head_t *drm_agp_init(void);
+extern void	     drm_agp_uninit(void);
+extern int	     drm_agp_acquire(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern void	     _drm_agp_release(void);
+extern int	     drm_agp_release(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern int	     drm_agp_enable(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern int	     drm_agp_info(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern int	     drm_agp_alloc(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern int	     drm_agp_free(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern int	     drm_agp_unbind(struct inode *inode, struct file *filp,
+
unsigned int cmd, unsigned long arg);
+extern int	     drm_agp_bind(struct inode *inode, struct file *filp,
+			   unsigned int cmd, unsigned long arg);
+extern agp_memory    *drm_agp_allocate_memory(size_t pages, u32 type);
+extern int	     drm_agp_free_memory(agp_memory *handle);
+extern int	     drm_agp_bind_memory(agp_memory *handle, off_t start);
+extern int	     drm_agp_unbind_memory(agp_memory *handle);
+#endif
+#endif
+#endif
diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_context.c linux-2.4.13-lia/drivers/char/drm-4.0/ffb_context.c
--- linux-2.4.13/drivers/char/drm-4.0/ffb_context.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_context.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,540 @@
+/* $Id: ffb_context.c,v 1.4 2000/08/29 07:01:55 davem Exp $
+ * ffb_context.c: Creator/Creator3D DRI/DRM context switching.
+ *
+ * Copyright (C) 2000 David S. Miller (davem@redhat.com)
+ *
+ * Almost entirely stolen from tdfx_context.c, see there
+ * for authors.
+ */
+
+#include
+#include
+
+#include "drmP.h"
+
+#include "ffb_drv.h"
+
+static int ffb_alloc_queue(drm_device_t *dev, int is_2d_only)
+{
+	ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+	int i;
+
+	for (i = 0; i < FFB_MAX_CTXS; i++) {
+		if (fpriv->hw_state[i] == NULL)
+			break;
+	}
+	if (i == FFB_MAX_CTXS)
+		return -1;
+
+	fpriv->hw_state[i] = kmalloc(sizeof(struct ffb_hw_context), GFP_KERNEL);
+	if (fpriv->hw_state[i] == NULL)
+		return -1;
+
+	fpriv->hw_state[i]->is_2d_only = is_2d_only;
+
+	/* Plus one because 0 is the special DRM_KERNEL_CONTEXT. */
+	return i + 1;
+}
+
+static void ffb_save_context(ffb_dev_priv_t *fpriv, int idx)
+{
+	ffb_fbcPtr ffb = fpriv->regs;
+	struct ffb_hw_context *ctx;
+	int i;
+
+	ctx = fpriv->hw_state[idx - 1];
+	if (idx == 0 || ctx == NULL)
+		return;
+
+	if (ctx->is_2d_only) {
+		/* 2D applications only care about certain pieces
+		 * of state.
+		 */
+		ctx->drawop = upa_readl(&ffb->drawop);
+		ctx->ppc = upa_readl(&ffb->ppc);
+		ctx->wid = upa_readl(&ffb->wid);
+		ctx->fg = upa_readl(&ffb->fg);
+		ctx->bg = upa_readl(&ffb->bg);
+		ctx->xclip = upa_readl(&ffb->xclip);
+		ctx->fbc = upa_readl(&ffb->fbc);
+		ctx->rop = upa_readl(&ffb->rop);
+		ctx->cmp = upa_readl(&ffb->cmp);
+		ctx->matchab = upa_readl(&ffb->matchab);
+		ctx->magnab = upa_readl(&ffb->magnab);
+		ctx->pmask = upa_readl(&ffb->pmask);
+		ctx->xpmask = upa_readl(&ffb->xpmask);
+		ctx->lpat = upa_readl(&ffb->lpat);
+		ctx->fontxy = upa_readl(&ffb->fontxy);
+		ctx->fontw = upa_readl(&ffb->fontw);
+		ctx->fontinc = upa_readl(&ffb->fontinc);
+
+		/* stencil/stencilctl only exists on FFB2+ and later
+		 * due to the introduction of 3DRAM-III.
+		 */
+		if (fpriv->ffb_type == ffb2_vertical_plus ||
+		    fpriv->ffb_type == ffb2_horizontal_plus) {
+			ctx->stencil = upa_readl(&ffb->stencil);
+			ctx->stencilctl = upa_readl(&ffb->stencilctl);
+		}
+
+		for (i = 0; i < 32; i++)
+			ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]);
+		ctx->ucsr = upa_readl(&ffb->ucsr);
+		return;
+	}
+
+	/* Fetch drawop. */
+	ctx->drawop = upa_readl(&ffb->drawop);
+
+	/* If we were saving the vertex registers, this is where
+	 * we would do it.  We would save 32 32-bit words starting
+	 * at ffb->suvtx.
+	 */
+
+	/* Capture rendering attributes.
*/
+
+	ctx->ppc = upa_readl(&ffb->ppc);		/* Pixel Processor Control */
+	ctx->wid = upa_readl(&ffb->wid);		/* Current WID */
+	ctx->fg = upa_readl(&ffb->fg);			/* Constant FG color */
+	ctx->bg = upa_readl(&ffb->bg);			/* Constant BG color */
+	ctx->consty = upa_readl(&ffb->consty);		/* Constant Y */
+	ctx->constz = upa_readl(&ffb->constz);		/* Constant Z */
+	ctx->xclip = upa_readl(&ffb->xclip);		/* X plane clip */
+	ctx->dcss = upa_readl(&ffb->dcss);		/* Depth Cue Scale Slope */
+	ctx->vclipmin = upa_readl(&ffb->vclipmin);	/* Primary XY clip, minimum */
+	ctx->vclipmax = upa_readl(&ffb->vclipmax);	/* Primary XY clip, maximum */
+	ctx->vclipzmin = upa_readl(&ffb->vclipzmin);	/* Primary Z clip, minimum */
+	ctx->vclipzmax = upa_readl(&ffb->vclipzmax);	/* Primary Z clip, maximum */
+	ctx->dcsf = upa_readl(&ffb->dcsf);		/* Depth Cue Scale Front Bound */
+	ctx->dcsb = upa_readl(&ffb->dcsb);		/* Depth Cue Scale Back Bound */
+	ctx->dczf = upa_readl(&ffb->dczf);		/* Depth Cue Scale Z Front */
+	ctx->dczb = upa_readl(&ffb->dczb);		/* Depth Cue Scale Z Back */
+	ctx->blendc = upa_readl(&ffb->blendc);		/* Alpha Blend Control */
+	ctx->blendc1 = upa_readl(&ffb->blendc1);	/* Alpha Blend Color 1 */
+	ctx->blendc2 = upa_readl(&ffb->blendc2);	/* Alpha Blend Color 2 */
+	ctx->fbc = upa_readl(&ffb->fbc);		/* Frame Buffer Control */
+	ctx->rop = upa_readl(&ffb->rop);		/* Raster Operation */
+	ctx->cmp = upa_readl(&ffb->cmp);		/* Compare Controls */
+	ctx->matchab = upa_readl(&ffb->matchab);	/* Buffer A/B Match Ops */
+	ctx->matchc = upa_readl(&ffb->matchc);		/* Buffer C Match Ops */
+	ctx->magnab = upa_readl(&ffb->magnab);		/* Buffer A/B Magnitude Ops */
+	ctx->magnc = upa_readl(&ffb->magnc);		/* Buffer C Magnitude Ops */
+	ctx->pmask = upa_readl(&ffb->pmask);		/* RGB Plane Mask */
+	ctx->xpmask = upa_readl(&ffb->xpmask);		/* X Plane Mask */
+	ctx->ypmask = upa_readl(&ffb->ypmask);		/* Y Plane Mask */
+	ctx->zpmask =
upa_readl(&ffb->zpmask);		/* Z Plane Mask */
+
+	/* Auxiliary Clips. */
+	ctx->auxclip0min = upa_readl(&ffb->auxclip[0].min);
+	ctx->auxclip0max = upa_readl(&ffb->auxclip[0].max);
+	ctx->auxclip1min = upa_readl(&ffb->auxclip[1].min);
+	ctx->auxclip1max = upa_readl(&ffb->auxclip[1].max);
+	ctx->auxclip2min = upa_readl(&ffb->auxclip[2].min);
+	ctx->auxclip2max = upa_readl(&ffb->auxclip[2].max);
+	ctx->auxclip3min = upa_readl(&ffb->auxclip[3].min);
+	ctx->auxclip3max = upa_readl(&ffb->auxclip[3].max);
+
+	ctx->lpat = upa_readl(&ffb->lpat);		/* Line Pattern */
+	ctx->fontxy = upa_readl(&ffb->fontxy);		/* XY Font Coordinate */
+	ctx->fontw = upa_readl(&ffb->fontw);		/* Font Width */
+	ctx->fontinc = upa_readl(&ffb->fontinc);	/* Font X/Y Increment */
+
+	/* These registers/features only exist on FFB2 and later chips. */
+	if (fpriv->ffb_type >= ffb2_prototype) {
+		ctx->dcss1 = upa_readl(&ffb->dcss1);	/* Depth Cue Scale Slope 1 */
+		ctx->dcss2 = upa_readl(&ffb->dcss2);	/* Depth Cue Scale Slope 2 */
+		ctx->dcss3 = upa_readl(&ffb->dcss3);	/* Depth Cue Scale Slope 3 */
+		ctx->dcs2 = upa_readl(&ffb->dcs2);	/* Depth Cue Scale 2 */
+		ctx->dcs3 = upa_readl(&ffb->dcs3);	/* Depth Cue Scale 3 */
+		ctx->dcs4 = upa_readl(&ffb->dcs4);	/* Depth Cue Scale 4 */
+		ctx->dcd2 = upa_readl(&ffb->dcd2);	/* Depth Cue Depth 2 */
+		ctx->dcd3 = upa_readl(&ffb->dcd3);	/* Depth Cue Depth 3 */
+		ctx->dcd4 = upa_readl(&ffb->dcd4);	/* Depth Cue Depth 4 */
+
+		/* And stencil/stencilctl only exists on FFB2+ and later
+		 * due to the introduction of 3DRAM-III.
+		 */
+		if (fpriv->ffb_type == ffb2_vertical_plus ||
+		    fpriv->ffb_type == ffb2_horizontal_plus) {
+			ctx->stencil = upa_readl(&ffb->stencil);
+			ctx->stencilctl = upa_readl(&ffb->stencilctl);
+		}
+	}
+
+	/* Save the 32x32 area pattern. */
+	for (i = 0; i < 32; i++)
+		ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]);
+
+	/* Finally, stash away the User Control/Status Register.
*/
+	ctx->ucsr = upa_readl(&ffb->ucsr);
+}
+
+static void ffb_restore_context(ffb_dev_priv_t *fpriv, int old, int idx)
+{
+	ffb_fbcPtr ffb = fpriv->regs;
+	struct ffb_hw_context *ctx;
+	int i;
+
+	ctx = fpriv->hw_state[idx - 1];
+	if (idx == 0 || ctx == NULL)
+		return;
+
+	if (ctx->is_2d_only) {
+		/* 2D applications only care about certain pieces
+		 * of state.
+		 */
+		upa_writel(ctx->drawop, &ffb->drawop);
+
+		/* If we were restoring the vertex registers, this is where
+		 * we would do it.  We would restore 32 32-bit words starting
+		 * at ffb->suvtx.
+		 */
+
+		upa_writel(ctx->ppc, &ffb->ppc);
+		upa_writel(ctx->wid, &ffb->wid);
+		upa_writel(ctx->fg, &ffb->fg);
+		upa_writel(ctx->bg, &ffb->bg);
+		upa_writel(ctx->xclip, &ffb->xclip);
+		upa_writel(ctx->fbc, &ffb->fbc);
+		upa_writel(ctx->rop, &ffb->rop);
+		upa_writel(ctx->cmp, &ffb->cmp);
+		upa_writel(ctx->matchab, &ffb->matchab);
+		upa_writel(ctx->magnab, &ffb->magnab);
+		upa_writel(ctx->pmask, &ffb->pmask);
+		upa_writel(ctx->xpmask, &ffb->xpmask);
+		upa_writel(ctx->lpat, &ffb->lpat);
+		upa_writel(ctx->fontxy, &ffb->fontxy);
+		upa_writel(ctx->fontw, &ffb->fontw);
+		upa_writel(ctx->fontinc, &ffb->fontinc);
+
+		/* stencil/stencilctl only exists on FFB2+ and later
+		 * due to the introduction of 3DRAM-III.
+		 */
+		if (fpriv->ffb_type == ffb2_vertical_plus ||
+		    fpriv->ffb_type == ffb2_horizontal_plus) {
+			upa_writel(ctx->stencil, &ffb->stencil);
+			upa_writel(ctx->stencilctl, &ffb->stencilctl);
+			upa_writel(0x80000000, &ffb->fbc);
+			upa_writel((ctx->stencilctl | 0x80000),
+				   &ffb->rawstencilctl);
+			upa_writel(ctx->fbc, &ffb->fbc);
+		}
+
+		for (i = 0; i < 32; i++)
+			upa_writel(ctx->area_pattern[i], &ffb->pattern[i]);
+		upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr);
+		return;
+	}
+
+	/* Restore drawop. */
+	upa_writel(ctx->drawop, &ffb->drawop);
+
+	/* If we were restoring the vertex registers, this is where
+	 * we would do it.  We would restore 32 32-bit words starting
+	 * at ffb->suvtx.
+	 */
+
+	/* Restore rendering attributes. */
+
+	upa_writel(ctx->ppc, &ffb->ppc);		/* Pixel Processor Control */
+	upa_writel(ctx->wid, &ffb->wid);		/* Current WID */
+	upa_writel(ctx->fg, &ffb->fg);			/* Constant FG color */
+	upa_writel(ctx->bg, &ffb->bg);			/* Constant BG color */
+	upa_writel(ctx->consty, &ffb->consty);		/* Constant Y */
+	upa_writel(ctx->constz, &ffb->constz);		/* Constant Z */
+	upa_writel(ctx->xclip, &ffb->xclip);		/* X plane clip */
+	upa_writel(ctx->dcss, &ffb->dcss);		/* Depth Cue Scale Slope */
+	upa_writel(ctx->vclipmin, &ffb->vclipmin);	/* Primary XY clip, minimum */
+	upa_writel(ctx->vclipmax, &ffb->vclipmax);	/* Primary XY clip, maximum */
+	upa_writel(ctx->vclipzmin, &ffb->vclipzmin);	/* Primary Z clip, minimum */
+	upa_writel(ctx->vclipzmax, &ffb->vclipzmax);	/* Primary Z clip, maximum */
+	upa_writel(ctx->dcsf, &ffb->dcsf);		/* Depth Cue Scale Front Bound */
+	upa_writel(ctx->dcsb, &ffb->dcsb);		/* Depth Cue Scale Back Bound */
+	upa_writel(ctx->dczf, &ffb->dczf);		/* Depth Cue Scale Z Front */
+	upa_writel(ctx->dczb, &ffb->dczb);		/* Depth Cue Scale Z Back */
+	upa_writel(ctx->blendc, &ffb->blendc);		/* Alpha Blend Control */
+	upa_writel(ctx->blendc1, &ffb->blendc1);	/* Alpha Blend Color 1 */
+	upa_writel(ctx->blendc2, &ffb->blendc2);	/* Alpha Blend Color 2 */
+	upa_writel(ctx->fbc, &ffb->fbc);		/* Frame Buffer Control */
+	upa_writel(ctx->rop, &ffb->rop);		/* Raster Operation */
+	upa_writel(ctx->cmp, &ffb->cmp);		/* Compare Controls */
+	upa_writel(ctx->matchab, &ffb->matchab);	/* Buffer A/B Match Ops */
+	upa_writel(ctx->matchc, &ffb->matchc);		/* Buffer C Match Ops */
+	upa_writel(ctx->magnab, &ffb->magnab);		/* Buffer A/B Magnitude Ops */
+	upa_writel(ctx->magnc, &ffb->magnc);		/* Buffer C Magnitude Ops */
+	upa_writel(ctx->pmask, &ffb->pmask);		/* RGB Plane Mask */
+	upa_writel(ctx->xpmask, &ffb->xpmask);		/* X Plane Mask */
+	upa_writel(ctx->ypmask, &ffb->ypmask);		/* Y Plane Mask */
+	upa_writel(ctx->zpmask, &ffb->zpmask);		/* Z Plane Mask */
+
+	/* Auxiliary Clips. */
+	upa_writel(ctx->auxclip0min, &ffb->auxclip[0].min);
+	upa_writel(ctx->auxclip0max, &ffb->auxclip[0].max);
+	upa_writel(ctx->auxclip1min, &ffb->auxclip[1].min);
+	upa_writel(ctx->auxclip1max, &ffb->auxclip[1].max);
+	upa_writel(ctx->auxclip2min, &ffb->auxclip[2].min);
+	upa_writel(ctx->auxclip2max, &ffb->auxclip[2].max);
+	upa_writel(ctx->auxclip3min, &ffb->auxclip[3].min);
+	upa_writel(ctx->auxclip3max, &ffb->auxclip[3].max);
+
+	upa_writel(ctx->lpat, &ffb->lpat);		/* Line Pattern */
+	upa_writel(ctx->fontxy, &ffb->fontxy);		/* XY Font Coordinate */
+	upa_writel(ctx->fontw, &ffb->fontw);		/* Font Width */
+	upa_writel(ctx->fontinc, &ffb->fontinc);	/* Font X/Y Increment */
+
+	/* These registers/features only exist on FFB2 and later chips. */
+	if (fpriv->ffb_type >= ffb2_prototype) {
+		upa_writel(ctx->dcss1, &ffb->dcss1);	/* Depth Cue Scale Slope 1 */
+		upa_writel(ctx->dcss2, &ffb->dcss2);	/* Depth Cue Scale Slope 2 */
+		upa_writel(ctx->dcss3, &ffb->dcss3);	/* Depth Cue Scale Slope 3 */
+		upa_writel(ctx->dcs2, &ffb->dcs2);	/* Depth Cue Scale 2 */
+		upa_writel(ctx->dcs3, &ffb->dcs3);	/* Depth Cue Scale 3 */
+		upa_writel(ctx->dcs4, &ffb->dcs4);	/* Depth Cue Scale 4 */
+		upa_writel(ctx->dcd2, &ffb->dcd2);	/* Depth Cue Depth 2 */
+		upa_writel(ctx->dcd3, &ffb->dcd3);	/* Depth Cue Depth 3 */
+		upa_writel(ctx->dcd4, &ffb->dcd4);	/* Depth Cue Depth 4 */
+
+		/* And stencil/stencilctl only exists on FFB2+ and later
+		 * due to the introduction of 3DRAM-III.
+		 */
+		if (fpriv->ffb_type == ffb2_vertical_plus ||
+		    fpriv->ffb_type == ffb2_horizontal_plus) {
+			/* Unfortunately, there is a hardware bug on
+			 * the FFB2+ chips which prevents a normal write
+			 * to the stencil control register from working
+			 * as it should.
+			 *
+			 * The state controlled by the FFB stencilctl register
+			 * really gets transferred to the per-buffer instances
+			 * of the stencilctl register in the 3DRAM chips.
+			 *
+			 * The bug is that FFB does not update buffer C correctly,
+			 * so we have to do it by hand for them.
+			 */
+
+			/* This will update buffers A and B. */
+			upa_writel(ctx->stencil, &ffb->stencil);
+			upa_writel(ctx->stencilctl, &ffb->stencilctl);
+
+			/* Force FFB to use buffer C 3dram regs. */
+			upa_writel(0x80000000, &ffb->fbc);
+			upa_writel((ctx->stencilctl | 0x80000),
+				   &ffb->rawstencilctl);
+
+			/* Now restore the correct FBC controls. */
+			upa_writel(ctx->fbc, &ffb->fbc);
+		}
+	}
+
+	/* Restore the 32x32 area pattern. */
+	for (i = 0; i < 32; i++)
+		upa_writel(ctx->area_pattern[i], &ffb->pattern[i]);
+
+	/* Finally, stash away the User Control/Status Register.
+	 * The only state we really preserve here is the picking
+	 * control.
+	 */
+	upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr);
+}
+
+#define FFB_UCSR_FB_BUSY       0x01000000
+#define FFB_UCSR_RP_BUSY       0x02000000
+#define FFB_UCSR_ALL_BUSY      (FFB_UCSR_RP_BUSY|FFB_UCSR_FB_BUSY)
+
+static void FFBWait(ffb_fbcPtr ffb)
+{
+	int limit = 100000;
+
+	do {
+		u32 regval = upa_readl(&ffb->ucsr);
+
+		if ((regval & FFB_UCSR_ALL_BUSY) == 0)
+			break;
+	} while (--limit);
+}
+
+int ffb_context_switch(drm_device_t *dev, int old, int new)
+{
+	ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+
+	atomic_inc(&dev->total_ctx);
+
+#if DRM_DMA_HISTOGRAM
+	dev->ctx_start = get_cycles();
+#endif
+
+	DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+	if (new == dev->last_context ||
+	    dev->last_context == 0) {
+		dev->last_context = new;
+		return 0;
+	}
+
+	FFBWait(fpriv->regs);
+	ffb_save_context(fpriv, old);
+	ffb_restore_context(fpriv, old, new);
+	FFBWait(fpriv->regs);
+
+	dev->last_context = new;
+
+	return 0;
+}
+
+int ffb_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_ctx_res_t res;
+	drm_ctx_t ctx;
+	int i;
+
+	DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
+	if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+		return -EFAULT;
+	if
(res.count >= DRM_RESERVED_CONTEXTS) {
+		memset(&ctx, 0, sizeof(ctx));
+		for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+			ctx.handle = i;
+			if (copy_to_user(&res.contexts[i],
+					 &i,
+					 sizeof(i)))
+				return -EFAULT;
+		}
+	}
+	res.count = DRM_RESERVED_CONTEXTS;
+	if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+		return -EFAULT;
+	return 0;
+}
+
+
+int ffb_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	drm_ctx_t ctx;
+	int idx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	idx = ffb_alloc_queue(dev, (ctx.flags & _DRM_CONTEXT_2DONLY));
+	if (idx < 0)
+		return -ENFILE;
+
+	DRM_DEBUG("%d\n", ctx.handle);
+	ctx.handle = idx;
+	if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+		return -EFAULT;
+	return 0;
+}
+
+int ffb_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+	struct ffb_hw_context *hwctx;
+	drm_ctx_t ctx;
+	int idx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+
+	idx = ctx.handle;
+	if (idx <= 0 || idx >= FFB_MAX_CTXS)
+		return -EINVAL;
+
+	hwctx = fpriv->hw_state[idx - 1];
+	if (hwctx == NULL)
+		return -EINVAL;
+
+	if ((ctx.flags & _DRM_CONTEXT_2DONLY) == 0)
+		hwctx->is_2d_only = 0;
+	else
+		hwctx->is_2d_only = 1;
+
+	return 0;
+}
+
+int ffb_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+	struct ffb_hw_context *hwctx;
+	drm_ctx_t ctx;
+	int idx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+
+	idx = ctx.handle;
+	if (idx <= 0 || idx >= FFB_MAX_CTXS)
+		return -EINVAL;
+
+	hwctx = fpriv->hw_state[idx - 1];
+	if (hwctx == NULL)
+		return -EINVAL;
+
+	if (hwctx->is_2d_only != 0)
+		ctx.flags = _DRM_CONTEXT_2DONLY;
+	else
+		ctx.flags = 0;
+
+	if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+		return -EFAULT;
+
+	return 0;
+}
+
+int ffb_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+		  unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	drm_ctx_t ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+	return ffb_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int ffb_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_ctx_t ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+
+	return 0;
+}
+
+int ffb_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	      unsigned long arg)
+{
+	drm_ctx_t ctx;
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) (dev + 1);
+	int idx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+
+	idx = ctx.handle - 1;
+	if (idx < 0 || idx >= FFB_MAX_CTXS)
+		return -EINVAL;
+
+	if (fpriv->hw_state[idx] != NULL) {
+		kfree(fpriv->hw_state[idx]);
+		fpriv->hw_state[idx] = NULL;
+	}
+	return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_drv.c linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.c
--- linux-2.4.13/drivers/char/drm-4.0/ffb_drv.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,951 @@
+/* $Id: ffb_drv.c,v 1.14 2001/05/24 12:01:47 davem Exp $
+ * ffb_drv.c: Creator/Creator3D direct rendering driver.
+ *
+ * Copyright (C) 2000 David S.
Miller (davem@redhat.com)
+ */
+
+#include "drmP.h"
+
+#include
+#include
+#include
+#include
+#include
+
+#include "ffb_drv.h"
+
+#define FFB_NAME	"ffb"
+#define FFB_DESC	"Creator/Creator3D"
+#define FFB_DATE	"20000517"
+#define FFB_MAJOR	0
+#define FFB_MINOR	0
+#define FFB_PATCHLEVEL	1
+
+/* Forward declarations. */
+int  ffb_init(void);
+void ffb_cleanup(void);
+static int ffb_version(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+static int ffb_open(struct inode *inode, struct file *filp);
+static int ffb_release(struct inode *inode, struct file *filp);
+static int ffb_ioctl(struct inode *inode, struct file *filp,
+		     unsigned int cmd, unsigned long arg);
+static int ffb_lock(struct inode *inode, struct file *filp,
+		    unsigned int cmd, unsigned long arg);
+static int ffb_unlock(struct inode *inode, struct file *filp,
+		      unsigned int cmd, unsigned long arg);
+static int ffb_mmap(struct file *filp, struct vm_area_struct *vma);
+static unsigned long ffb_get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+
+/* From ffb_context.c */
+extern int ffb_resctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_addctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_modctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_getctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_switchctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_newctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_rmctx(struct inode *, struct file *, unsigned int, unsigned long);
+extern int ffb_context_switch(drm_device_t *, int, int);
+
+static struct file_operations ffb_fops = {
+	owner:			THIS_MODULE,
+	open:			ffb_open,
+	flush:			drm_flush,
+	release:		ffb_release,
+	ioctl:			ffb_ioctl,
+	mmap:			ffb_mmap,
+	read:			drm_read,
+	fasync:			drm_fasync,
+
poll: drm_poll, + get_unmapped_area: ffb_get_unmapped_area, +}; + +/* This is just a template, we make a new copy for each FFB + * we discover at init time so that each one gets a unique + * misc device minor number. + */ +static struct miscdevice ffb_misc =3D { + minor: MISC_DYNAMIC_MINOR, + name: FFB_NAME, + fops: &ffb_fops, +}; + +static drm_ioctl_desc_t ffb_ioctls[] =3D { + [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] =3D { ffb_version, 0, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] =3D { drm_getunique, 0, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] =3D { drm_getmagic, 0, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] =3D { drm_irq_busid, 0, 1 }, /* XX= X */ + + [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] =3D { drm_setunique, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] =3D { drm_block, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] =3D { drm_unblock, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] =3D { drm_authmagic, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] =3D { drm_addmap, 1, 1 }, +=09 + /* The implementation is currently a nop just like on tdfx. + * Later we can do something more clever. -DaveM + */ + [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] =3D { ffb_addctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] =3D { ffb_rmctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] =3D { ffb_modctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] =3D { ffb_getctx, 1, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] =3D { ffb_switchctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] =3D { ffb_newctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] =3D { ffb_resctx, 1, 0 }, + + [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] =3D { drm_adddraw, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] =3D { drm_rmdraw, 1, 1 }, + + [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] =3D { ffb_lock, 1, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] =3D { ffb_unlock, 1, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] =3D { drm_finish, 1, 0 }, +}; +#define FFB_IOCTL_COUNT DRM_ARRAY_SIZE(ffb_ioctls) + +#ifdef MODULE +static char *ffb =3D NULL; +#endif + +MODULE_AUTHOR("David S. 
Miller (davem@redhat.com)"); +MODULE_DESCRIPTION("Sun Creator/Creator3D DRI"); + +static int ffb_takedown(drm_device_t *dev) +{ + int i; + drm_magic_entry_t *pt, *next; + drm_map_t *map; + drm_vma_entry_t *vma, *vma_next; + + DRM_DEBUG("\n"); + + down(&dev->struct_sem); + del_timer(&dev->timer); +=09 + if (dev->devname) { + drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER); + dev->devname =3D NULL; + } +=09 + if (dev->unique) { + drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER); + dev->unique =3D NULL; + dev->unique_len =3D 0; + } + + /* Clear pid list */ + for (i =3D 0; i < DRM_HASH_SIZE; i++) { + for (pt =3D dev->magiclist[i].head; pt; pt =3D next) { + next =3D pt->next; + drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC); + } + dev->magiclist[i].head =3D dev->magiclist[i].tail =3D NULL; + } +=09 + /* Clear vma list (only built for debugging) */ + if (dev->vmalist) { + for (vma =3D dev->vmalist; vma; vma =3D vma_next) { + vma_next =3D vma->next; + drm_free(vma, sizeof(*vma), DRM_MEM_VMAS); + } + dev->vmalist =3D NULL; + } +=09 + /* Clear map area information */ + if (dev->maplist) { + for (i =3D 0; i < dev->map_count; i++) { + map =3D dev->maplist[i]; + switch (map->type) { + case _DRM_REGISTERS: + case _DRM_FRAME_BUFFER: + drm_ioremapfree(map->handle, map->size, dev); + break; + + case _DRM_SHM: + drm_free_pages((unsigned long)map->handle, + drm_order(map->size) + - PAGE_SHIFT, + DRM_MEM_SAREA); + break; + + default: + break; + }; + + drm_free(map, sizeof(*map), DRM_MEM_MAPS); + } + + drm_free(dev->maplist, + dev->map_count * sizeof(*dev->maplist), + DRM_MEM_MAPS); + dev->maplist =3D NULL; + dev->map_count =3D 0; + } +=09 + if (dev->lock.hw_lock) { + dev->lock.hw_lock =3D NULL; /* SHM removed */ + dev->lock.pid =3D 0; + wake_up_interruptible(&dev->lock.lock_queue); + } + up(&dev->struct_sem); +=09 + return 0; +} + +drm_device_t **ffb_dev_table; +static int ffb_dev_table_size; + +static void get_ffb_type(ffb_dev_priv_t *ffb_priv, int instance) +{ 
+ volatile unsigned char *strap_bits; + unsigned char val; + + strap_bits =3D (volatile unsigned char *) + (ffb_priv->card_phys_base + 0x00200000UL); + + /* Don't ask, you have to read the value twice for whatever + * reason to get correct contents. + */ + val =3D upa_readb(strap_bits); + val =3D upa_readb(strap_bits); + switch (val & 0x78) { + case (0x0 << 5) | (0x0 << 3): + ffb_priv->ffb_type =3D ffb1_prototype; + printk("ffb%d: Detected FFB1 pre-FCS prototype\n", instance); + break; + case (0x0 << 5) | (0x1 << 3): + ffb_priv->ffb_type =3D ffb1_standard; + printk("ffb%d: Detected FFB1\n", instance); + break; + case (0x0 << 5) | (0x3 << 3): + ffb_priv->ffb_type =3D ffb1_speedsort; + printk("ffb%d: Detected FFB1-SpeedSort\n", instance); + break; + case (0x1 << 5) | (0x0 << 3): + ffb_priv->ffb_type =3D ffb2_prototype; + printk("ffb%d: Detected FFB2/vertical pre-FCS prototype\n", instance); + break; + case (0x1 << 5) | (0x1 << 3): + ffb_priv->ffb_type =3D ffb2_vertical; + printk("ffb%d: Detected FFB2/vertical\n", instance); + break; + case (0x1 << 5) | (0x2 << 3): + ffb_priv->ffb_type =3D ffb2_vertical_plus; + printk("ffb%d: Detected FFB2+/vertical\n", instance); + break; + case (0x2 << 5) | (0x0 << 3): + ffb_priv->ffb_type =3D ffb2_horizontal; + printk("ffb%d: Detected FFB2/horizontal\n", instance); + break; + case (0x2 << 5) | (0x2 << 3): + ffb_priv->ffb_type =3D ffb2_horizontal; + printk("ffb%d: Detected FFB2+/horizontal\n", instance); + break; + default: + ffb_priv->ffb_type =3D ffb2_vertical; + printk("ffb%d: Unknown boardID[%08x], assuming FFB2\n", instance, val); + break; + }; +} + +static void __init ffb_apply_upa_parent_ranges(int parent, struct linux_pr= om64_registers *regs) +{ + struct linux_prom64_ranges ranges[PROMREG_MAX]; + char name[128]; + int len, i; + + prom_getproperty(parent, "name", name, sizeof(name)); + if (strcmp(name, "upa") !=3D 0) + return; + + len =3D prom_getproperty(parent, "ranges", (void *) ranges, sizeof(ranges= )); + if (len <=3D 
0) + return; + + len /=3D sizeof(struct linux_prom64_ranges); + for (i =3D 0; i < len; i++) { + struct linux_prom64_ranges *rng =3D &ranges[i]; + u64 phys_addr =3D regs->phys_addr; + + if (phys_addr >=3D rng->ot_child_base && + phys_addr < (rng->ot_child_base + rng->or_size)) { + regs->phys_addr -=3D rng->ot_child_base; + regs->phys_addr +=3D rng->ot_parent_base; + return; + } + } + + return; +} + +static int __init ffb_init_one(int prom_node, int parent_node, int instanc= e) +{ + struct linux_prom64_registers regs[2*PROMREG_MAX]; + drm_device_t *dev; + ffb_dev_priv_t *ffb_priv; + int ret, i; + + dev =3D kmalloc(sizeof(drm_device_t) + sizeof(ffb_dev_priv_t), GFP_KERNEL= ); + if (!dev) + return -ENOMEM; + + memset(dev, 0, sizeof(*dev)); + spin_lock_init(&dev->count_lock); + sema_init(&dev->struct_sem, 1); + + ffb_priv =3D (ffb_dev_priv_t *) (dev + 1); + ffb_priv->prom_node =3D prom_node; + if (prom_getproperty(ffb_priv->prom_node, "reg", + (void *)regs, sizeof(regs)) <=3D 0) { + kfree(dev); + return -EINVAL; + } + ffb_apply_upa_parent_ranges(parent_node, ®s[0]); + ffb_priv->card_phys_base =3D regs[0].phys_addr; + ffb_priv->regs =3D (ffb_fbcPtr) + (regs[0].phys_addr + 0x00600000UL); + get_ffb_type(ffb_priv, instance); + for (i =3D 0; i < FFB_MAX_CTXS; i++) + ffb_priv->hw_state[i] =3D NULL; + + ffb_dev_table[instance] =3D dev; + +#ifdef MODULE + drm_parse_options(ffb); +#endif + + memcpy(&ffb_priv->miscdev, &ffb_misc, sizeof(ffb_misc)); + ret =3D misc_register(&ffb_priv->miscdev); + if (ret) { + ffb_dev_table[instance] =3D NULL; + kfree(dev); + return ret; + } + + dev->device =3D MKDEV(MISC_MAJOR, ffb_priv->miscdev.minor); + dev->name =3D FFB_NAME; + + drm_mem_init(); + drm_proc_init(dev); + + DRM_INFO("Initialized %s %d.%d.%d %s on minor %d at %016lx\n", + FFB_NAME, + FFB_MAJOR, + FFB_MINOR, + FFB_PATCHLEVEL, + FFB_DATE, + ffb_priv->miscdev.minor, + ffb_priv->card_phys_base); +=09 + return 0; +} + +static int __init ffb_count_siblings(int root) +{ + int node, child, 
count =3D 0; + + child =3D prom_getchild(root); + for (node =3D prom_searchsiblings(child, "SUNW,ffb"); node; + node =3D prom_searchsiblings(prom_getsibling(node), "SUNW,ffb")) + count++; + + return count; +} + +static int __init ffb_init_dev_table(void) +{ + int root, total; + + total =3D ffb_count_siblings(prom_root_node); + root =3D prom_getchild(prom_root_node); + for (root =3D prom_searchsiblings(root, "upa"); root; + root =3D prom_searchsiblings(prom_getsibling(root), "upa")) + total +=3D ffb_count_siblings(root); + + if (!total) + return -ENODEV; + + ffb_dev_table =3D kmalloc(sizeof(drm_device_t *) * total, GFP_KERNEL); + if (!ffb_dev_table) + return -ENOMEM; + + ffb_dev_table_size =3D total; + + return 0; +} + +static int __init ffb_scan_siblings(int root, int instance) +{ + int node, child; + + child =3D prom_getchild(root); + for (node =3D prom_searchsiblings(child, "SUNW,ffb"); node; + node =3D prom_searchsiblings(prom_getsibling(node), "SUNW,ffb")) { + ffb_init_one(node, root, instance); + instance++; + } + + return instance; +} + +int __init ffb_init(void) +{ + int root, instance, ret; + + ret =3D ffb_init_dev_table(); + if (ret) + return ret; + + instance =3D ffb_scan_siblings(prom_root_node, 0); + + root =3D prom_getchild(prom_root_node); + for (root =3D prom_searchsiblings(root, "upa"); root; + root =3D prom_searchsiblings(prom_getsibling(root), "upa")) + instance =3D ffb_scan_siblings(root, instance); + + return 0; +} + +void __exit ffb_cleanup(void) +{ + int instance; + + DRM_DEBUG("\n"); +=09 + drm_proc_cleanup(); + for (instance =3D 0; instance < ffb_dev_table_size; instance++) { + drm_device_t *dev =3D ffb_dev_table[instance]; + ffb_dev_priv_t *ffb_priv; + + if (!dev) + continue; + + ffb_priv =3D (ffb_dev_priv_t *) (dev + 1); + if (misc_deregister(&ffb_priv->miscdev)) { + DRM_ERROR("Cannot unload module\n"); + } else { + DRM_INFO("Module unloaded\n"); + } + ffb_takedown(dev); + kfree(dev); + ffb_dev_table[instance] =3D NULL; + } + 
kfree(ffb_dev_table); + ffb_dev_table =3D NULL; + ffb_dev_table_size =3D 0; +} + +static int ffb_version(struct inode *inode, struct file *filp, unsigned in= t cmd, unsigned long arg) +{ + drm_version_t version; + int len, ret; + + ret =3D copy_from_user(&version, (drm_version_t *)arg, sizeof(version)); + if (ret) + return -EFAULT; + + version.version_major =3D FFB_MAJOR; + version.version_minor =3D FFB_MINOR; + version.version_patchlevel =3D FFB_PATCHLEVEL; + + len =3D strlen(FFB_NAME); + if (len > version.name_len) + len =3D version.name_len; + version.name_len =3D len; + if (len && version.name) { + ret =3D copy_to_user(version.name, FFB_NAME, len); + if (ret) + return -EFAULT; + } + + len =3D strlen(FFB_DATE); + if (len > version.date_len) + len =3D version.date_len; + version.date_len =3D len; + if (len && version.date) { + ret =3D copy_to_user(version.date, FFB_DATE, len); + if (ret) + return -EFAULT; + } + + len =3D strlen(FFB_DESC); + if (len > version.desc_len) + len =3D version.desc_len; + version.desc_len =3D len; + if (len && version.desc) { + ret =3D copy_to_user(version.desc, FFB_DESC, len); + if (ret) + return -EFAULT; + } + + ret =3D copy_to_user((drm_version_t *) arg, &version, sizeof(version)); + if (ret) + ret =3D -EFAULT; + + return ret; +} + +static int ffb_setup(drm_device_t *dev) +{ + int i; + + atomic_set(&dev->ioctl_count, 0); + atomic_set(&dev->vma_count, 0); + dev->buf_use =3D 0; + atomic_set(&dev->buf_alloc, 0); + + atomic_set(&dev->total_open, 0); + atomic_set(&dev->total_close, 0); + atomic_set(&dev->total_ioctl, 0); + atomic_set(&dev->total_irq, 0); + atomic_set(&dev->total_ctx, 0); + atomic_set(&dev->total_locks, 0); + atomic_set(&dev->total_unlocks, 0); + atomic_set(&dev->total_contends, 0); + atomic_set(&dev->total_sleeps, 0); + + for (i =3D 0; i < DRM_HASH_SIZE; i++) { + dev->magiclist[i].head =3D NULL; + dev->magiclist[i].tail =3D NULL; + } + + dev->maplist =3D NULL; + dev->map_count =3D 0; + dev->vmalist =3D NULL; + 
dev->lock.hw_lock =3D NULL; + init_waitqueue_head(&dev->lock.lock_queue); + dev->queue_count =3D 0; + dev->queue_reserved =3D 0; + dev->queue_slots =3D 0; + dev->queuelist =3D NULL; + dev->irq =3D 0; + dev->context_flag =3D 0; + dev->interrupt_flag =3D 0; + dev->dma =3D 0; + dev->dma_flag =3D 0; + dev->last_context =3D 0; + dev->last_switch =3D 0; + dev->last_checked =3D 0; + init_timer(&dev->timer); + init_waitqueue_head(&dev->context_wait); + + dev->ctx_start =3D 0; + dev->lck_start =3D 0; +=09 + dev->buf_rp =3D dev->buf; + dev->buf_wp =3D dev->buf; + dev->buf_end =3D dev->buf + DRM_BSZ; + dev->buf_async =3D NULL; + init_waitqueue_head(&dev->buf_readers); + init_waitqueue_head(&dev->buf_writers); + + return 0; +} + +static int ffb_open(struct inode *inode, struct file *filp) +{ + drm_device_t *dev; + int minor, i; + int ret =3D 0; + + minor =3D MINOR(inode->i_rdev); + for (i =3D 0; i < ffb_dev_table_size; i++) { + ffb_dev_priv_t *ffb_priv; + + ffb_priv =3D (ffb_dev_priv_t *) (ffb_dev_table[i] + 1); + + if (ffb_priv->miscdev.minor =3D=3D minor) + break; + } + + if (i >=3D ffb_dev_table_size) + return -EINVAL; + + dev =3D ffb_dev_table[i]; + if (!dev) + return -EINVAL; + + DRM_DEBUG("open_count =3D %d\n", dev->open_count); + ret =3D drm_open_helper(inode, filp, dev); + if (!ret) { + atomic_inc(&dev->total_open); + spin_lock(&dev->count_lock); + if (!dev->open_count++) { + spin_unlock(&dev->count_lock); + return ffb_setup(dev); + } + spin_unlock(&dev->count_lock); + } + + return ret; +} + +static int ffb_release(struct inode *inode, struct file *filp) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev; + int ret =3D 0; + + lock_kernel(); + dev =3D priv->dev; + DRM_DEBUG("open_count =3D %d\n", dev->open_count); + if (dev->lock.hw_lock !=3D NULL + && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock) + && dev->lock.pid =3D=3D current->pid) { + ffb_dev_priv_t *fpriv =3D (ffb_dev_priv_t *) (dev + 1); + int context =3D _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock); + 
int idx; + + /* We have to free up the rogue hw context state + * holding error or else we will leak it. + */ + idx =3D context - 1; + if (fpriv->hw_state[idx] !=3D NULL) { + kfree(fpriv->hw_state[idx]); + fpriv->hw_state[idx] =3D NULL; + } + } + + ret =3D drm_release(inode, filp); + + if (!ret) { + atomic_inc(&dev->total_close); + spin_lock(&dev->count_lock); + if (!--dev->open_count) { + if (atomic_read(&dev->ioctl_count) || dev->blocked) { + DRM_ERROR("Device busy: %d %d\n", + atomic_read(&dev->ioctl_count), + dev->blocked); + spin_unlock(&dev->count_lock); + unlock_kernel(); + return -EBUSY; + } + spin_unlock(&dev->count_lock); + ret =3D ffb_takedown(dev); + unlock_kernel(); + return ret; + } + spin_unlock(&dev->count_lock); + } + + unlock_kernel(); + return ret; +} + +static int ffb_ioctl(struct inode *inode, struct file *filp, unsigned int = cmd, unsigned long arg) +{ + int nr =3D DRM_IOCTL_NR(cmd); + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_ioctl_desc_t *ioctl; + drm_ioctl_t *func; + int ret; + + atomic_inc(&dev->ioctl_count); + atomic_inc(&dev->total_ioctl); + ++priv->ioctl_count; +=09 + DRM_DEBUG("pid =3D %d, cmd =3D 0x%02x, nr =3D 0x%02x, dev 0x%x, auth =3D = %d\n", + current->pid, cmd, nr, dev->device, priv->authenticated); + + if (nr >=3D FFB_IOCTL_COUNT) { + ret =3D -EINVAL; + } else { + ioctl =3D &ffb_ioctls[nr]; + func =3D ioctl->func; + + if (!func) { + DRM_DEBUG("no function\n"); + ret =3D -EINVAL; + } else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN)) + || (ioctl->auth_needed && !priv->authenticated)) { + ret =3D -EACCES; + } else { + ret =3D (func)(inode, filp, cmd, arg); + } + } +=09 + atomic_dec(&dev->ioctl_count); + + return ret; +} + +static int ffb_lock(struct inode *inode, struct file *filp, unsigned int c= md, unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + DECLARE_WAITQUEUE(entry, current); + int ret =3D 0; + drm_lock_t lock; + + ret =3D 
copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)); + if (ret) + return -EFAULT; + + if (lock.context =3D=3D DRM_KERNEL_CONTEXT) { + DRM_ERROR("Process %d using kernel context %d\n", + current->pid, lock.context); + return -EINVAL; + } + + DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags =3D 0x%08x\n", + lock.context, current->pid, dev->lock.hw_lock->lock, + lock.flags); + + add_wait_queue(&dev->lock.lock_queue, &entry); + for (;;) { + if (!dev->lock.hw_lock) { + /* Device has been unregistered */ + ret =3D -EINTR; + break; + } + if (drm_lock_take(&dev->lock.hw_lock->lock, + lock.context)) { + dev->lock.pid =3D current->pid; + dev->lock.lock_time =3D jiffies; + atomic_inc(&dev->total_locks); + break; /* Got lock */ + } + =20 + /* Contention */ + atomic_inc(&dev->total_sleeps); + current->state =3D TASK_INTERRUPTIBLE; + current->policy |=3D SCHED_YIELD; + schedule(); + if (signal_pending(current)) { + ret =3D -ERESTARTSYS; + break; + } + } + current->state =3D TASK_RUNNING; + remove_wait_queue(&dev->lock.lock_queue, &entry); + + if (!ret) { + sigemptyset(&dev->sigmask); + sigaddset(&dev->sigmask, SIGSTOP); + sigaddset(&dev->sigmask, SIGTSTP); + sigaddset(&dev->sigmask, SIGTTIN); + sigaddset(&dev->sigmask, SIGTTOU); + dev->sigdata.context =3D lock.context; + dev->sigdata.lock =3D dev->lock.hw_lock; + block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask); + + if (dev->last_context !=3D lock.context) + ffb_context_switch(dev, dev->last_context, lock.context); + } + + DRM_DEBUG("%d %s\n", lock.context, ret ?
"interrupted" : "has lock= "); + + return ret; +} + +int ffb_unlock(struct inode *inode, struct file *filp, unsigned int cmd, u= nsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_lock_t lock; + unsigned int old, new, prev, ctx; + int ret; + + ret =3D copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)); + if (ret) + return -EFAULT; +=09 + if ((ctx =3D lock.context) =3D DRM_KERNEL_CONTEXT) { + DRM_ERROR("Process %d using kernel context %d\n", + current->pid, lock.context); + return -EINVAL; + } + + DRM_DEBUG("%d frees lock (%d holds)\n", + lock.context, + _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock)); + atomic_inc(&dev->total_unlocks); + if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock)) + atomic_inc(&dev->total_contends); + + /* We no longer really hold it, but if we are the next + * agent to request it then we should just be able to + * take it immediately and not eat the ioctl. + */ + dev->lock.pid =3D 0; + { + __volatile__ unsigned int *plock =3D &dev->lock.hw_lock->lock; + + do { + old =3D *plock; + new =3D ctx; + prev =3D cmpxchg(plock, old, new); + } while (prev !=3D old); + } + + wake_up_interruptible(&dev->lock.lock_queue); +=09 + unblock_all_signals(); + return 0; +} + +extern struct vm_operations_struct drm_vm_ops; +extern struct vm_operations_struct drm_vm_shm_ops; +extern struct vm_operations_struct drm_vm_shm_lock_ops; + +static int ffb_mmap(struct file *filp, struct vm_area_struct *vma) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_map_t *map =3D NULL; + ffb_dev_priv_t *ffb_priv; + int i, minor; +=09 + DRM_DEBUG("start =3D 0x%lx, end =3D 0x%lx, offset =3D 0x%lx\n", + vma->vm_start, vma->vm_end, VM_OFFSET(vma)); + + minor =3D MINOR(filp->f_dentry->d_inode->i_rdev); + ffb_priv =3D NULL; + for (i =3D 0; i < ffb_dev_table_size; i++) { + ffb_priv =3D (ffb_dev_priv_t *) (ffb_dev_table[i] + 1); + if (ffb_priv->miscdev.minor =3D minor) + break; + } + if (i >=3D 
ffb_dev_table_size) + return -EINVAL; + + /* We don't support/need dma mappings, so... */ + if (!VM_OFFSET(vma)) + return -EINVAL; + + for (i =3D 0; i < dev->map_count; i++) { + unsigned long off; + + map =3D dev->maplist[i]; + + /* Ok, a little hack to make 32-bit apps work. */ + off =3D (map->offset & 0xffffffff); + if (off =3D=3D VM_OFFSET(vma)) + break; + } + + if (i >=3D dev->map_count) + return -EINVAL; + + if (!map || + ((map->flags & _DRM_RESTRICTED) && !capable(CAP_SYS_ADMIN))) + return -EPERM; + + if (map->size !=3D (vma->vm_end - vma->vm_start)) + return -EINVAL; + + /* Set read-only attribute before mappings are created + * so it works for fb/reg maps too. + */ + if (map->flags & _DRM_READ_ONLY) + vma->vm_page_prot =3D __pgprot(pte_val(pte_wrprotect( + __pte(pgprot_val(vma->vm_page_prot))))); + + switch (map->type) { + case _DRM_FRAME_BUFFER: + /* FALLTHROUGH */ + + case _DRM_REGISTERS: + /* In order to handle 32-bit drm apps/xserver we + * play a trick. The mappings only really specify + * the 32-bit offset from the cards 64-bit base + * address, and we just add in the base here. + */ + vma->vm_flags |=3D VM_IO; + if (io_remap_page_range(vma->vm_start, + ffb_priv->card_phys_base + VM_OFFSET(vma), + vma->vm_end - vma->vm_start, + vma->vm_page_prot, 0)) + return -EAGAIN; + + vma->vm_ops =3D &drm_vm_ops; + break; + case _DRM_SHM: + if (map->flags & _DRM_CONTAINS_LOCK) + vma->vm_ops =3D &drm_vm_shm_lock_ops; + else { + vma->vm_ops =3D &drm_vm_shm_ops; + vma->vm_private_data =3D (void *) map; + } + + /* Don't let this area swap. Change when + * DRM_KERNEL advisory is supported. + */ + vma->vm_flags |=3D VM_LOCKED; + break; + default: + return -EINVAL; /* This should never happen.
*/ + }; + + vma->vm_flags |=3D VM_LOCKED | VM_SHM; /* Don't swap */ + + vma->vm_file =3D filp; /* Needed for drm_vm_open() */ + drm_vm_open(vma); + return 0; +} + +static drm_map_t *ffb_find_map(struct file *filp, unsigned long off) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev; + drm_map_t *map; + int i; + + if (!priv || (dev =3D priv->dev) =3D=3D NULL) + return NULL; + + for (i =3D 0; i < dev->map_count; i++) { + unsigned long uoff; + + map =3D dev->maplist[i]; + + /* Ok, a little hack to make 32-bit apps work. */ + uoff =3D (map->offset & 0xffffffff); + if (uoff =3D=3D off) + return map; + } + return NULL; +} + +static unsigned long ffb_get_unmapped_area(struct file *filp, unsigned lon= g hint, unsigned long len, unsigned long pgoff, unsigned long flags) +{ + drm_map_t *map =3D ffb_find_map(filp, pgoff << PAGE_SHIFT); + unsigned long addr =3D -ENOMEM; + + if (!map) + return get_unmapped_area(NULL, hint, len, pgoff, flags); + + if (map->type =3D=3D _DRM_FRAME_BUFFER || + map->type =3D=3D _DRM_REGISTERS) { +#ifdef HAVE_ARCH_FB_UNMAPPED_AREA + addr =3D get_fb_unmapped_area(filp, hint, len, pgoff, flags); +#else + addr =3D get_unmapped_area(NULL, hint, len, pgoff, flags); +#endif + } else if (map->type =3D=3D _DRM_SHM && SHMLBA > PAGE_SIZE) { + unsigned long slack =3D SHMLBA - PAGE_SIZE; + + addr =3D get_unmapped_area(NULL, hint, len + slack, pgoff, flags); + if (!(addr & ~PAGE_MASK)) { + unsigned long kvirt =3D (unsigned long) map->handle; + + if ((kvirt & (SHMLBA - 1)) !=3D (addr & (SHMLBA - 1))) { + unsigned long koff, aoff; + + koff =3D kvirt & (SHMLBA - 1); + aoff =3D addr & (SHMLBA - 1); + if (koff < aoff) + koff +=3D SHMLBA; + + addr +=3D (koff - aoff); + } + } + } else { + addr =3D get_unmapped_area(NULL, hint, len, pgoff, flags); + } + + return addr; +} + +module_init(ffb_init); +module_exit(ffb_cleanup); diff -urN linux-2.4.13/drivers/char/drm-4.0/ffb_drv.h linux-2.4.13-lia/driv= ers/char/drm-4.0/ffb_drv.h ---
linux-2.4.13/drivers/char/drm-4.0/ffb_drv.h Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/ffb_drv.h Thu Oct 4 00:21:40 2001 @@ -0,0 +1,276 @@ +/* $Id: ffb_drv.h,v 1.1 2000/06/01 04:24:39 davem Exp $ + * ffb_drv.h: Creator/Creator3D direct rendering driver. + * + * Copyright (C) 2000 David S. Miller (davem@redhat.com) + */ + +/* Auxilliary clips. */ +typedef struct { + volatile unsigned int min; + volatile unsigned int max; +} ffb_auxclip, *ffb_auxclipPtr; + +/* FFB register set. */ +typedef struct _ffb_fbc { + /* Next vertex registers, on the right we list which drawops + * use said register and the logical name the register has in + * that context. + */ /* DESCRIPTION DRAWOP(NAME) */ +/*0x00*/unsigned int pad1[3]; /* Reserved */ +/*0x0c*/volatile unsigned int alpha; /* ALPHA Transparency */ +/*0x10*/volatile unsigned int red; /* RED */ +/*0x14*/volatile unsigned int green; /* GREEN */ +/*0x18*/volatile unsigned int blue; /* BLUE */ +/*0x1c*/volatile unsigned int z; /* DEPTH */ +/*0x20*/volatile unsigned int y; /* Y triangle(DOYF) */ + /* aadot(DYF) */ + /* ddline(DYF) */ + /* aaline(DYF) */ +/*0x24*/volatile unsigned int x; /* X triangle(DOXF) */ + /* aadot(DXF) */ + /* ddline(DXF) */ + /* aaline(DXF) */ +/*0x28*/unsigned int pad2[2]; /* Reserved */ +/*0x30*/volatile unsigned int ryf; /* Y (alias to DOYF) ddline(RYF) */ + /* aaline(RYF) */ + /* triangle(RYF) */ +/*0x34*/volatile unsigned int rxf; /* X ddline(RXF) */ + /* aaline(RXF) */ + /* triangle(RXF) */ +/*0x38*/unsigned int pad3[2]; /* Reserved */ +/*0x40*/volatile unsigned int dmyf; /* Y (alias to DOYF) triangle(DMYF) */ +/*0x44*/volatile unsigned int dmxf; /* X triangle(DMXF) */ +/*0x48*/unsigned int pad4[2]; /* Reserved */ +/*0x50*/volatile unsigned int ebyi; /* Y (alias to RYI) polygon(EBYI) */ +/*0x54*/volatile unsigned int ebxi; /* X polygon(EBXI) */ +/*0x58*/unsigned int pad5[2]; /* Reserved */ +/*0x60*/volatile unsigned int by; /* Y brline(RYI) */ + /* fastfill(OP) */ + /* 
polygon(YI) */ + /* rectangle(YI) */ + /* bcopy(SRCY) */ + /* vscroll(SRCY) */ +/*0x64*/volatile unsigned int bx; /* X brline(RXI) */ + /* polygon(XI) */ + /* rectangle(XI) */ + /* bcopy(SRCX) */ + /* vscroll(SRCX) */ + /* fastfill(GO) */ +/*0x68*/volatile unsigned int dy; /* destination Y fastfill(DSTY) */ + /* bcopy(DSRY) */ + /* vscroll(DSRY) */ +/*0x6c*/volatile unsigned int dx; /* destination X fastfill(DSTX) */ + /* bcopy(DSTX) */ + /* vscroll(DSTX) */ +/*0x70*/volatile unsigned int bh; /* Y (alias to RYI) brline(DYI) */ + /* dot(DYI) */ + /* polygon(ETYI) */ + /* Height fastfill(H) */ + /* bcopy(H) */ + /* vscroll(H) */ + /* Y count fastfill(NY) */ +/*0x74*/volatile unsigned int bw; /* X dot(DXI) */ + /* brline(DXI) */ + /* polygon(ETXI) */ + /* fastfill(W) */ + /* bcopy(W) */ + /* vscroll(W) */ + /* fastfill(NX) */ +/*0x78*/unsigned int pad6[2]; /* Reserved */ +/*0x80*/unsigned int pad7[32]; /* Reserved */ +=09 + /* Setup Unit's vertex state register */ +/*100*/ volatile unsigned int suvtx; +/*104*/ unsigned int pad8[63]; /* Reserved */ +=09 + /* Frame Buffer Control Registers */ +/*200*/ volatile unsigned int ppc; /* Pixel Processor Control */ +/*204*/ volatile unsigned int wid; /* Current WID */ +/*208*/ volatile unsigned int fg; /* FG data */ +/*20c*/ volatile unsigned int bg; /* BG data */ +/*210*/ volatile unsigned int consty; /* Constant Y */ +/*214*/ volatile unsigned int constz; /* Constant Z */ +/*218*/ volatile unsigned int xclip; /* X Clip */ +/*21c*/ volatile unsigned int dcss; /* Depth Cue Scale Slope */ +/*220*/ volatile unsigned int vclipmin; /* Viewclip XY Min Bounds */ +/*224*/ volatile unsigned int vclipmax; /* Viewclip XY Max Bounds */ +/*228*/ volatile unsigned int vclipzmin; /* Viewclip Z Min Bounds */ +/*22c*/ volatile unsigned int vclipzmax; /* Viewclip Z Max Bounds */ +/*230*/ volatile unsigned int dcsf; /* Depth Cue Scale Front Bound */ +/*234*/ volatile unsigned int dcsb; /* Depth Cue Scale Back Bound */ +/*238*/ volatile unsigned 
int dczf; /* Depth Cue Z Front */ +/*23c*/ volatile unsigned int dczb; /* Depth Cue Z Back */ +/*240*/ unsigned int pad9; /* Reserved */ +/*244*/ volatile unsigned int blendc; /* Alpha Blend Control */ +/*248*/ volatile unsigned int blendc1; /* Alpha Blend Color 1 */ +/*24c*/ volatile unsigned int blendc2; /* Alpha Blend Color 2 */ +/*250*/ volatile unsigned int fbramitc; /* FB RAM Interleave Test Control = */ +/*254*/ volatile unsigned int fbc; /* Frame Buffer Control */ +/*258*/ volatile unsigned int rop; /* Raster OPeration */ +/*25c*/ volatile unsigned int cmp; /* Frame Buffer Compare */ +/*260*/ volatile unsigned int matchab; /* Buffer AB Match Mask */ +/*264*/ volatile unsigned int matchc; /* Buffer C(YZ) Match Mask */ +/*268*/ volatile unsigned int magnab; /* Buffer AB Magnitude Mask */ +/*26c*/ volatile unsigned int magnc; /* Buffer C(YZ) Magnitude Mask */ +/*270*/ volatile unsigned int fbcfg0; /* Frame Buffer Config 0 */ +/*274*/ volatile unsigned int fbcfg1; /* Frame Buffer Config 1 */ +/*278*/ volatile unsigned int fbcfg2; /* Frame Buffer Config 2 */ +/*27c*/ volatile unsigned int fbcfg3; /* Frame Buffer Config 3 */ +/*280*/ volatile unsigned int ppcfg; /* Pixel Processor Config */ +/*284*/ volatile unsigned int pick; /* Picking Control */ +/*288*/ volatile unsigned int fillmode; /* FillMode */ +/*28c*/ volatile unsigned int fbramwac; /* FB RAM Write Address Control */ +/*290*/ volatile unsigned int pmask; /* RGB PlaneMask */ +/*294*/ volatile unsigned int xpmask; /* X PlaneMask */ +/*298*/ volatile unsigned int ypmask; /* Y PlaneMask */ +/*29c*/ volatile unsigned int zpmask; /* Z PlaneMask */ +/*2a0*/ ffb_auxclip auxclip[4]; /* Auxilliary Viewport Clip */ +=09 + /* New 3dRAM III support regs */ +/*2c0*/ volatile unsigned int rawblend2; +/*2c4*/ volatile unsigned int rawpreblend; +/*2c8*/ volatile unsigned int rawstencil; +/*2cc*/ volatile unsigned int rawstencilctl; +/*2d0*/ volatile unsigned int threedram1; +/*2d4*/ volatile unsigned int threedram2; 
+/*2d8*/ volatile unsigned int passin;
+/*2dc*/ volatile unsigned int rawclrdepth;
+/*2e0*/ volatile unsigned int rawpmask;
+/*2e4*/ volatile unsigned int rawcsrc;
+/*2e8*/ volatile unsigned int rawmatch;
+/*2ec*/ volatile unsigned int rawmagn;
+/*2f0*/ volatile unsigned int rawropblend;
+/*2f4*/ volatile unsigned int rawcmp;
+/*2f8*/ volatile unsigned int rawwac;
+/*2fc*/ volatile unsigned int fbramid;
+
+/*300*/ volatile unsigned int drawop;     /* Draw OPeration */
+/*304*/ unsigned int pad10[2];            /* Reserved */
+/*30c*/ volatile unsigned int lpat;       /* Line Pattern control */
+/*310*/ unsigned int pad11;               /* Reserved */
+/*314*/ volatile unsigned int fontxy;     /* XY Font coordinate */
+/*318*/ volatile unsigned int fontw;      /* Font Width */
+/*31c*/ volatile unsigned int fontinc;    /* Font Increment */
+/*320*/ volatile unsigned int font;       /* Font bits */
+/*324*/ unsigned int pad12[3];            /* Reserved */
+/*330*/ volatile unsigned int blend2;
+/*334*/ volatile unsigned int preblend;
+/*338*/ volatile unsigned int stencil;
+/*33c*/ volatile unsigned int stencilctl;
+
+/*340*/ unsigned int pad13[4];            /* Reserved */
+/*350*/ volatile unsigned int dcss1;      /* Depth Cue Scale Slope 1 */
+/*354*/ volatile unsigned int dcss2;      /* Depth Cue Scale Slope 2 */
+/*358*/ volatile unsigned int dcss3;      /* Depth Cue Scale Slope 3 */
+/*35c*/ volatile unsigned int widpmask;
+/*360*/ volatile unsigned int dcs2;
+/*364*/ volatile unsigned int dcs3;
+/*368*/ volatile unsigned int dcs4;
+/*36c*/ unsigned int pad14;               /* Reserved */
+/*370*/ volatile unsigned int dcd2;
+/*374*/ volatile unsigned int dcd3;
+/*378*/ volatile unsigned int dcd4;
+/*37c*/ unsigned int pad15;               /* Reserved */
+/*380*/ volatile unsigned int pattern[32]; /* area Pattern */
+/*400*/ unsigned int pad16[8];            /* Reserved */
+/*420*/ volatile unsigned int reset;      /* chip RESET */
+/*424*/ unsigned int pad17[247];          /* Reserved */
+/*800*/ volatile unsigned int devid;      /* Device ID */
+/*804*/ unsigned int pad18[63];           /* Reserved */
+/*900*/ volatile unsigned int ucsr;       /* User Control & Status Register */
+/*904*/ unsigned int pad19[31];           /* Reserved */
+/*980*/ volatile unsigned int mer;        /* Mode Enable Register */
+/*984*/ unsigned int pad20[1439];         /* Reserved */
+} ffb_fbc, *ffb_fbcPtr;
+
+struct ffb_hw_context {
+	int is_2d_only;
+
+	unsigned int ppc;
+	unsigned int wid;
+	unsigned int fg;
+	unsigned int bg;
+	unsigned int consty;
+	unsigned int constz;
+	unsigned int xclip;
+	unsigned int dcss;
+	unsigned int vclipmin;
+	unsigned int vclipmax;
+	unsigned int vclipzmin;
+	unsigned int vclipzmax;
+	unsigned int dcsf;
+	unsigned int dcsb;
+	unsigned int dczf;
+	unsigned int dczb;
+	unsigned int blendc;
+	unsigned int blendc1;
+	unsigned int blendc2;
+	unsigned int fbc;
+	unsigned int rop;
+	unsigned int cmp;
+	unsigned int matchab;
+	unsigned int matchc;
+	unsigned int magnab;
+	unsigned int magnc;
+	unsigned int pmask;
+	unsigned int xpmask;
+	unsigned int ypmask;
+	unsigned int zpmask;
+	unsigned int auxclip0min;
+	unsigned int auxclip0max;
+	unsigned int auxclip1min;
+	unsigned int auxclip1max;
+	unsigned int auxclip2min;
+	unsigned int auxclip2max;
+	unsigned int auxclip3min;
+	unsigned int auxclip3max;
+	unsigned int drawop;
+	unsigned int lpat;
+	unsigned int fontxy;
+	unsigned int fontw;
+	unsigned int fontinc;
+	unsigned int area_pattern[32];
+	unsigned int ucsr;
+	unsigned int stencil;
+	unsigned int stencilctl;
+	unsigned int dcss1;
+	unsigned int dcss2;
+	unsigned int dcss3;
+	unsigned int dcs2;
+	unsigned int dcs3;
+	unsigned int dcs4;
+	unsigned int dcd2;
+	unsigned int dcd3;
+	unsigned int dcd4;
+	unsigned int mer;
+};
+
+#define FFB_MAX_CTXS 32
+
+enum ffb_chip_type {
+	ffb1_prototype = 0,	/* Early pre-FCS FFB */
+	ffb1_standard,		/* First FCS FFB, 100Mhz UPA, 66MHz gclk */
+	ffb1_speedsort,		/* Second FCS FFB, 100Mhz UPA, 75MHz gclk */
+	ffb2_prototype,		/* Early pre-FCS vertical FFB2 */
+	ffb2_vertical,		/* First FCS FFB2/vertical, 100Mhz UPA, 100MHZ gclk,
+				   75(SingleBuffer)/83(DoubleBuffer) MHz fclk */
+	ffb2_vertical_plus,	/* Second FCS FFB2/vertical, same timings */
+	ffb2_horizontal,	/* First FCS FFB2/horizontal, same timings as FFB2/vert */
+	ffb2_horizontal_plus,	/* Second FCS FFB2/horizontal, same timings */
+	afb_m3,			/* FCS Elite3D, 3 float chips */
+	afb_m6			/* FCS Elite3D, 6 float chips */
+};
+
+typedef struct ffb_dev_priv {
+	/* Misc software state. */
+	int prom_node;
+	enum ffb_chip_type ffb_type;
+	u64 card_phys_base;
+	struct miscdevice miscdev;
+
+	/* Controller registers. */
+	ffb_fbcPtr regs;
+
+	/* Context table. */
+	struct ffb_hw_context *hw_state[FFB_MAX_CTXS];
+} ffb_dev_priv_t;
diff -urN linux-2.4.13/drivers/char/drm-4.0/fops.c linux-2.4.13-lia/drivers/char/drm-4.0/fops.c
--- linux-2.4.13/drivers/char/drm-4.0/fops.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/fops.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,253 @@
+/* fops.c -- File operations for DRM -*- linux-c -*-
+ * Created: Mon Jan  4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Rickard E. (Rik) Faith
+ *    Daryll Strauss
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include
+
+/* drm_open is called whenever a process opens /dev/drm. */
+
+int drm_open_helper(struct inode *inode, struct file *filp, drm_device_t *dev)
+{
+	kdev_t minor = MINOR(inode->i_rdev);
+	drm_file_t *priv;
+
+	if (filp->f_flags & O_EXCL) return -EBUSY; /* No exclusive opens */
+	if (!drm_cpu_valid()) return -EINVAL;
+
+	DRM_DEBUG("pid = %d, minor = %d\n", current->pid, minor);
+
+	priv = drm_alloc(sizeof(*priv), DRM_MEM_FILES);
+	if (priv == NULL)
+		return -ENOMEM;
+	memset(priv, 0, sizeof(*priv));
+
+	filp->private_data = priv;
+	priv->uid = current->euid;
+	priv->pid = current->pid;
+	priv->minor = minor;
+	priv->dev = dev;
+	priv->ioctl_count = 0;
+	priv->authenticated = capable(CAP_SYS_ADMIN);
+
+	down(&dev->struct_sem);
+	if (!dev->file_last) {
+		priv->next = NULL;
+		priv->prev = NULL;
+		dev->file_first = priv;
+		dev->file_last = priv;
+	} else {
+		priv->next = NULL;
+		priv->prev = dev->file_last;
+		dev->file_last->next = priv;
+		dev->file_last = priv;
+	}
+	up(&dev->struct_sem);
+
+	return 0;
+}
+
+int drm_flush(struct file *filp)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+
+	DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+		  current->pid, dev->device, dev->open_count);
+	return 0;
+}
+
+/* drm_release is called whenever a process closes /dev/drm*.  Linux calls
+   this only if any mappings have been closed. */
+
+int drm_release(struct inode *inode, struct file *filp)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+
+	DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+		  current->pid, dev->device, dev->open_count);
+
+	if (dev->lock.hw_lock
+	    && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)
+	    && dev->lock.pid == current->pid) {
+		DRM_ERROR("Process %d dead, freeing lock for context %d\n",
+			  current->pid,
+			  _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+		drm_lock_free(dev,
+			      &dev->lock.hw_lock->lock,
+			      _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+
+		/* FIXME: may require heavy-handed reset of
+		   hardware at this point, possibly
+		   processed via a callback to the X
+		   server. */
+	}
+	drm_reclaim_buffers(dev, priv->pid);
+
+	drm_fasync(-1, filp, 0);
+
+	down(&dev->struct_sem);
+	if (priv->prev) priv->prev->next = priv->next;
+	else dev->file_first = priv->next;
+	if (priv->next) priv->next->prev = priv->prev;
+	else dev->file_last = priv->prev;
+	up(&dev->struct_sem);
+
+	drm_free(priv, sizeof(*priv), DRM_MEM_FILES);
+
+	return 0;
+}
+
+int drm_fasync(int fd, struct file *filp, int on)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	int retcode;
+
+	DRM_DEBUG("fd = %d, device = 0x%x\n", fd, dev->device);
+	retcode = fasync_helper(fd, filp, on, &dev->buf_async);
+	if (retcode < 0) return retcode;
+	return 0;
+}
+
+
+/* The drm_read and drm_write_string code (especially that which manages
+   the circular buffer), is based on Alessandro Rubini's LINUX DEVICE
+   DRIVERS (Cambridge: O'Reilly, 1998), pages 111-113. */
+
+ssize_t drm_read(struct file *filp, char *buf, size_t count, loff_t *off)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	int left;
+	int avail;
+	int send;
+	int cur;
+
+	DRM_DEBUG("%p, %p\n", dev->buf_rp, dev->buf_wp);
+
+	while (dev->buf_rp == dev->buf_wp) {
+		DRM_DEBUG("  sleeping\n");
+		if (filp->f_flags & O_NONBLOCK) {
+			return -EAGAIN;
+		}
+		interruptible_sleep_on(&dev->buf_readers);
+		if (signal_pending(current)) {
+			DRM_DEBUG("  interrupted\n");
+			return -ERESTARTSYS;
+		}
+		DRM_DEBUG("  awake\n");
+	}
+
+	left  = (dev->buf_rp + DRM_BSZ - dev->buf_wp) % DRM_BSZ;
+	avail = DRM_BSZ - left;
+	send  = DRM_MIN(avail, count);
+
+	while (send) {
+		if (dev->buf_wp > dev->buf_rp) {
+			cur = DRM_MIN(send, dev->buf_wp - dev->buf_rp);
+		} else {
+			cur = DRM_MIN(send, dev->buf_end - dev->buf_rp);
+		}
+		if (copy_to_user(buf, dev->buf_rp, cur))
+			return -EFAULT;
+		dev->buf_rp += cur;
+		if (dev->buf_rp == dev->buf_end) dev->buf_rp = dev->buf;
+		send -= cur;
+	}
+
+	wake_up_interruptible(&dev->buf_writers);
+	return DRM_MIN(avail, count);
+}
+
+int drm_write_string(drm_device_t *dev, const char *s)
+{
+	int left   = (dev->buf_rp + DRM_BSZ - dev->buf_wp) % DRM_BSZ;
+	int send   = strlen(s);
+	int count;
+
+	DRM_DEBUG("%d left, %d to send (%p, %p)\n",
+		  left, send, dev->buf_rp, dev->buf_wp);
+
+	if (left == 1 || dev->buf_wp != dev->buf_rp) {
+		DRM_ERROR("Buffer not empty (%d left, wp = %p, rp = %p)\n",
+			  left,
+			  dev->buf_wp,
+			  dev->buf_rp);
+	}
+
+	while (send) {
+		if (dev->buf_wp >= dev->buf_rp) {
+			count = DRM_MIN(send, dev->buf_end - dev->buf_wp);
+			if (count == left) --count; /* Leave a hole */
+		} else {
+			count = DRM_MIN(send, dev->buf_rp - dev->buf_wp - 1);
+		}
+		strncpy(dev->buf_wp, s, count);
+		dev->buf_wp += count;
+		if (dev->buf_wp == dev->buf_end) dev->buf_wp = dev->buf;
+		send -= count;
+	}
+
+#if LINUX_VERSION_CODE < 0x020315 && !defined(KILLFASYNCHASTHREEPARAMETERS)
+	/* The extra parameter to kill_fasync was added in 2.3.21, and is
+	   _not_ present in _stock_ 2.2.14 and 2.2.15.  However, some
+	   distributions patch 2.2.x kernels to add this parameter.  The
+	   Makefile.linux attempts to detect this addition and defines
+	   KILLFASYNCHASTHREEPARAMETERS if three parameters are found. */
+	if (dev->buf_async) kill_fasync(dev->buf_async, SIGIO);
+#else
+
+	/* Parameter added in 2.3.21. */
+#if LINUX_VERSION_CODE < 0x020400
+	if (dev->buf_async) kill_fasync(dev->buf_async, SIGIO, POLL_IN);
+#else
+	/* Type of first parameter changed in
+	   Linux 2.4.0-test2... */
+	if (dev->buf_async) kill_fasync(&dev->buf_async, SIGIO, POLL_IN);
+#endif
+#endif
+	DRM_DEBUG("waking\n");
+	wake_up_interruptible(&dev->buf_readers);
+	return 0;
+}
+
+unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+
+	poll_wait(filp, &dev->buf_readers, wait);
+	if (dev->buf_wp != dev->buf_rp) return POLLIN | POLLRDNORM;
+	return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_dma.c linux-2.4.13-lia/drivers/char/drm-4.0/gamma_dma.c
--- linux-2.4.13/drivers/char/drm-4.0/gamma_dma.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_dma.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,836 @@
+/* gamma_dma.c -- DMA support for GMX 2000 -*- linux-c -*-
+ * Created: Fri Mar 19 14:30:16 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Rickard E. (Rik) Faith
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "gamma_drv.h"
+
+#include	/* For task queue support */
+
+
+/* WARNING!!! MAGIC NUMBER!!!  The number of regions already added to the
+   kernel must be specified here.  Currently, the number is 2.  This must
+   match the order the X server uses for instantiating register regions,
+   or must be passed in a new ioctl. */
+#define GAMMA_REG(reg)						   \
+	(2							   \
+	 + ((reg < 0x1000)					   \
+	    ? 0							   \
+	    : ((reg < 0x10000) ? 1 : ((reg < 0x11000) ? 2 : 3))))
+
+#define GAMMA_OFF(reg)						   \
+	((reg < 0x1000)						   \
+	 ? reg							   \
+	 : ((reg < 0x10000)					   \
+	    ? (reg - 0x1000)					   \
+	    : ((reg < 0x11000)					   \
+	       ? (reg - 0x10000)				   \
+	       : (reg - 0x11000))))
+
+#define GAMMA_BASE(reg)	 ((unsigned long)dev->maplist[GAMMA_REG(reg)]->handle)
+#define GAMMA_ADDR(reg)	 (GAMMA_BASE(reg) + GAMMA_OFF(reg))
+#define GAMMA_DEREF(reg) *(__volatile__ int *)GAMMA_ADDR(reg)
+#define GAMMA_READ(reg)	 GAMMA_DEREF(reg)
+#define GAMMA_WRITE(reg,val) do { GAMMA_DEREF(reg) = val; } while (0)
+
+#define GAMMA_BROADCASTMASK    0x9378
+#define GAMMA_COMMANDINTENABLE 0x0c48
+#define GAMMA_DMAADDRESS       0x0028
+#define GAMMA_DMACOUNT	       0x0030
+#define GAMMA_FILTERMODE       0x8c00
+#define GAMMA_GCOMMANDINTFLAGS 0x0c50
+#define GAMMA_GCOMMANDMODE     0x0c40
+#define GAMMA_GCOMMANDSTATUS   0x0c60
+#define GAMMA_GDELAYTIMER      0x0c38
+#define GAMMA_GDMACONTROL      0x0060
+#define GAMMA_GINTENABLE       0x0808
+#define GAMMA_GINTFLAGS	       0x0810
+#define GAMMA_INFIFOSPACE      0x0018
+#define GAMMA_OUTFIFOWORDS     0x0020
+#define GAMMA_OUTPUTFIFO       0x2000
+#define GAMMA_SYNC	       0x8c40
+#define GAMMA_SYNC_TAG	       0x0188
+
+static inline void gamma_dma_dispatch(drm_device_t *dev, unsigned long address,
+				      unsigned long length)
+{
+	GAMMA_WRITE(GAMMA_DMAADDRESS, virt_to_phys((void *)address));
+	while (GAMMA_READ(GAMMA_GCOMMANDSTATUS) != 4)
+		;
+	GAMMA_WRITE(GAMMA_DMACOUNT, length / 4);
+}
+
+static inline void gamma_dma_quiescent_single(drm_device_t *dev)
+{
+	while (GAMMA_READ(GAMMA_DMACOUNT))
+		;
+	while (GAMMA_READ(GAMMA_INFIFOSPACE) < 3)
+		;
+
+	GAMMA_WRITE(GAMMA_FILTERMODE, 1 << 10);
+	GAMMA_WRITE(GAMMA_SYNC, 0);
+
+	do {
+		while (!GAMMA_READ(GAMMA_OUTFIFOWORDS))
+			;
+	} while (GAMMA_READ(GAMMA_OUTPUTFIFO) != GAMMA_SYNC_TAG);
+}
+
+static inline void gamma_dma_quiescent_dual(drm_device_t *dev)
+{
+	while (GAMMA_READ(GAMMA_DMACOUNT))
+		;
+	while (GAMMA_READ(GAMMA_INFIFOSPACE) < 3)
+		;
+
+	GAMMA_WRITE(GAMMA_BROADCASTMASK, 3);
+
+	GAMMA_WRITE(GAMMA_FILTERMODE, 1 << 10);
+	GAMMA_WRITE(GAMMA_SYNC, 0);
+
+	/* Read from first MX */
+	do {
+		while (!GAMMA_READ(GAMMA_OUTFIFOWORDS))
+			;
+	} while (GAMMA_READ(GAMMA_OUTPUTFIFO) != GAMMA_SYNC_TAG);
+
+	/* Read from second MX */
+	do {
+		while (!GAMMA_READ(GAMMA_OUTFIFOWORDS + 0x10000))
+			;
+	} while (GAMMA_READ(GAMMA_OUTPUTFIFO + 0x10000) != GAMMA_SYNC_TAG);
+}
+
+static inline void gamma_dma_ready(drm_device_t *dev)
+{
+	while (GAMMA_READ(GAMMA_DMACOUNT))
+		;
+}
+
+static inline int gamma_dma_is_ready(drm_device_t *dev)
+{
+	return !GAMMA_READ(GAMMA_DMACOUNT);
+}
+
+static void gamma_dma_service(int irq, void *device, struct pt_regs *regs)
+{
+	drm_device_t *dev = (drm_device_t *)device;
+	drm_device_dma_t *dma = dev->dma;
+
+	atomic_inc(&dev->total_irq);
+	GAMMA_WRITE(GAMMA_GDELAYTIMER, 0xc350/2); /* 0x05S */
+	GAMMA_WRITE(GAMMA_GCOMMANDINTFLAGS, 8);
+	GAMMA_WRITE(GAMMA_GINTFLAGS, 0x2001);
+	if (gamma_dma_is_ready(dev)) {
+		/* Free previous buffer */
+		if (test_and_set_bit(0, &dev->dma_flag)) {
+			atomic_inc(&dma->total_missed_free);
+			return;
+		}
+		if (dma->this_buffer) {
+			drm_free_buffer(dev, dma->this_buffer);
+			dma->this_buffer = NULL;
+		}
+		clear_bit(0, &dev->dma_flag);
+
+		/* Dispatch new buffer */
+		queue_task(&dev->tq, &tq_immediate);
+		mark_bh(IMMEDIATE_BH);
+	}
+}
+
+/* Only called by gamma_dma_schedule. */
+static int gamma_do_dma(drm_device_t *dev, int locked)
+{
+	unsigned long address;
+	unsigned long length;
+	drm_buf_t *buf;
+	int retcode = 0;
+	drm_device_dma_t *dma = dev->dma;
+#if DRM_DMA_HISTOGRAM
+	cycles_t dma_start, dma_stop;
+#endif
+
+	if (test_and_set_bit(0, &dev->dma_flag)) {
+		atomic_inc(&dma->total_missed_dma);
+		return -EBUSY;
+	}
+
+#if DRM_DMA_HISTOGRAM
+	dma_start = get_cycles();
+#endif
+
+	if (!dma->next_buffer) {
+		DRM_ERROR("No next_buffer\n");
+		clear_bit(0, &dev->dma_flag);
+		return -EINVAL;
+	}
+
+	buf	= dma->next_buffer;
+	address = (unsigned long)buf->address;
+	length	= buf->used;
+
+	DRM_DEBUG("context %d, buffer %d (%ld bytes)\n",
+		  buf->context, buf->idx, length);
+
+	if (buf->list == DRM_LIST_RECLAIM) {
+		drm_clear_next_buffer(dev);
+		drm_free_buffer(dev, buf);
+		clear_bit(0, &dev->dma_flag);
+		return -EINVAL;
+	}
+
+	if (!length) {
+		DRM_ERROR("0 length buffer\n");
+		drm_clear_next_buffer(dev);
+		drm_free_buffer(dev, buf);
+		clear_bit(0, &dev->dma_flag);
+		return 0;
+	}
+
+	if (!gamma_dma_is_ready(dev)) {
+		clear_bit(0, &dev->dma_flag);
+		return -EBUSY;
+	}
+
+	if (buf->while_locked) {
+		if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+			DRM_ERROR("Dispatching buffer %d from pid %d"
+				  " \"while locked\", but no lock held\n",
+				  buf->idx, buf->pid);
+		}
+	} else {
+		if (!locked && !drm_lock_take(&dev->lock.hw_lock->lock,
+					      DRM_KERNEL_CONTEXT)) {
+			atomic_inc(&dma->total_missed_lock);
+			clear_bit(0, &dev->dma_flag);
+			return -EBUSY;
+		}
+	}
+
+	if (dev->last_context != buf->context
+	    && !(dev->queuelist[buf->context]->flags
+		 & _DRM_CONTEXT_PRESERVED)) {
+		/* PRE: dev->last_context != buf->context */
+		if (drm_context_switch(dev, dev->last_context, buf->context)) {
+			drm_clear_next_buffer(dev);
+			drm_free_buffer(dev, buf);
+		}
+		retcode = -EBUSY;
+		goto cleanup;
+
+		/* POST: we will wait for the context
+		   switch and will dispatch on a later call
+		   when dev->last_context == buf->context.
+		   NOTE WE HOLD THE LOCK THROUGHOUT THIS
+		   TIME! */
+	}
+
+	drm_clear_next_buffer(dev);
+	buf->pending	 = 1;
+	buf->waiting	 = 0;
+	buf->list	 = DRM_LIST_PEND;
+#if DRM_DMA_HISTOGRAM
+	buf->time_dispatched = get_cycles();
+#endif
+
+	gamma_dma_dispatch(dev, address, length);
+	drm_free_buffer(dev, dma->this_buffer);
+	dma->this_buffer = buf;
+
+	atomic_add(length, &dma->total_bytes);
+	atomic_inc(&dma->total_dmas);
+
+	if (!buf->while_locked && !dev->context_flag && !locked) {
+		if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+				  DRM_KERNEL_CONTEXT)) {
+			DRM_ERROR("\n");
+		}
+	}
+cleanup:
+
+	clear_bit(0, &dev->dma_flag);
+
+#if DRM_DMA_HISTOGRAM
+	dma_stop = get_cycles();
+	atomic_inc(&dev->histo.dma[drm_histogram_slot(dma_stop - dma_start)]);
+#endif
+
+	return retcode;
+}
+
+static void gamma_dma_schedule_timer_wrapper(unsigned long dev)
+{
+	gamma_dma_schedule((drm_device_t *)dev, 0);
+}
+
+static void gamma_dma_schedule_tq_wrapper(void *dev)
+{
+	gamma_dma_schedule(dev, 0);
+}
+
+int gamma_dma_schedule(drm_device_t *dev, int locked)
+{
+	int next;
+	drm_queue_t *q;
+	drm_buf_t *buf;
+	int retcode   = 0;
+	int processed = 0;
+	int missed;
+	int expire    = 20;
+	drm_device_dma_t *dma = dev->dma;
+#if DRM_DMA_HISTOGRAM
+	cycles_t schedule_start;
+#endif
+
+	if (test_and_set_bit(0, &dev->interrupt_flag)) {
+		/* Not reentrant */
+		atomic_inc(&dma->total_missed_sched);
+		return -EBUSY;
+	}
+	missed = atomic_read(&dma->total_missed_sched);
+
+#if DRM_DMA_HISTOGRAM
+	schedule_start = get_cycles();
+#endif
+
+again:
+	if (dev->context_flag) {
+		clear_bit(0, &dev->interrupt_flag);
+		return -EBUSY;
+	}
+	if (dma->next_buffer) {
+		/* Unsent buffer that was previously
+		   selected, but that couldn't be sent
+		   because the lock could not be obtained
+		   or the DMA engine wasn't ready.  Try
+		   again. */
+		atomic_inc(&dma->total_tried);
+		if (!(retcode = gamma_do_dma(dev, locked))) {
+			atomic_inc(&dma->total_hit);
+			++processed;
+		}
+	} else {
+		do {
+			next = drm_select_queue(dev,
+						gamma_dma_schedule_timer_wrapper);
+			if (next >= 0) {
+				q   = dev->queuelist[next];
+				buf = drm_waitlist_get(&q->waitlist);
+				dma->next_buffer = buf;
+				dma->next_queue	 = q;
+				if (buf && buf->list == DRM_LIST_RECLAIM) {
+					drm_clear_next_buffer(dev);
+					drm_free_buffer(dev, buf);
+				}
+			}
+		} while (next >= 0 && !dma->next_buffer);
+		if (dma->next_buffer) {
+			if (!(retcode = gamma_do_dma(dev, locked))) {
+				++processed;
+			}
+		}
+	}
+
+	if (--expire) {
+		if (missed != atomic_read(&dma->total_missed_sched)) {
+			atomic_inc(&dma->total_lost);
+			if (gamma_dma_is_ready(dev)) goto again;
+		}
+		if (processed && gamma_dma_is_ready(dev)) {
+			atomic_inc(&dma->total_lost);
+			processed = 0;
+			goto again;
+		}
+	}
+
+	clear_bit(0, &dev->interrupt_flag);
+
+#if DRM_DMA_HISTOGRAM
+	atomic_inc(&dev->histo.schedule[drm_histogram_slot(get_cycles()
+							   - schedule_start)]);
+#endif
+	return retcode;
+}
+
+static int gamma_dma_priority(drm_device_t *dev, drm_dma_t *d)
+{
+	unsigned long	  address;
+	unsigned long	  length;
+	int		  must_free = 0;
+	int		  retcode   = 0;
+	int		  i;
+	int		  idx;
+	drm_buf_t	  *buf;
+	drm_buf_t	  *last_buf = NULL;
+	drm_device_dma_t  *dma	    = dev->dma;
+	DECLARE_WAITQUEUE(entry, current);
+
+	/* Turn off interrupt handling */
+	while (test_and_set_bit(0, &dev->interrupt_flag)) {
+		schedule();
+		if (signal_pending(current)) return -EINTR;
+	}
+	if (!(d->flags & _DRM_DMA_WHILE_LOCKED)) {
+		while (!drm_lock_take(&dev->lock.hw_lock->lock,
+				      DRM_KERNEL_CONTEXT)) {
+			schedule();
+			if (signal_pending(current)) {
+				clear_bit(0, &dev->interrupt_flag);
+				return -EINTR;
+			}
+		}
+		++must_free;
+	}
+	atomic_inc(&dma->total_prio);
+
+	for (i = 0; i < d->send_count; i++) {
+		idx = d->send_indices[i];
+		if (idx < 0 || idx >= dma->buf_count) {
+			DRM_ERROR("Index %d (of %d max)\n",
+				  d->send_indices[i], dma->buf_count - 1);
+			continue;
+		}
+		buf = dma->buflist[ idx ];
+		if (buf->pid != current->pid) {
+			DRM_ERROR("Process %d using buffer owned by %d\n",
+				  current->pid, buf->pid);
+			retcode = -EINVAL;
+			goto cleanup;
+		}
+		if (buf->list != DRM_LIST_NONE) {
+			DRM_ERROR("Process %d using %d's buffer on list %d\n",
+				  current->pid, buf->pid, buf->list);
+			retcode = -EINVAL;
+			goto cleanup;
+		}
+		/* This isn't a race condition on
+		   buf->list, since our concern is the
+		   buffer reclaim during the time the
+		   process closes the /dev/drm? handle, so
+		   it can't also be doing DMA. */
+		buf->list	  = DRM_LIST_PRIO;
+		buf->used	  = d->send_sizes[i];
+		buf->context	  = d->context;
+		buf->while_locked = d->flags & _DRM_DMA_WHILE_LOCKED;
+		address		  = (unsigned long)buf->address;
+		length		  = buf->used;
+		if (!length) {
+			DRM_ERROR("0 length buffer\n");
+		}
+		if (buf->pending) {
+			DRM_ERROR("Sending pending buffer:"
+				  " buffer %d, offset %d\n",
+				  d->send_indices[i], i);
+			retcode = -EINVAL;
+			goto cleanup;
+		}
+		if (buf->waiting) {
+			DRM_ERROR("Sending waiting buffer:"
+				  " buffer %d, offset %d\n",
+				  d->send_indices[i], i);
+			retcode = -EINVAL;
+			goto cleanup;
+		}
+		buf->pending = 1;
+
+		if (dev->last_context != buf->context
+		    && !(dev->queuelist[buf->context]->flags
+			 & _DRM_CONTEXT_PRESERVED)) {
+			add_wait_queue(&dev->context_wait, &entry);
+			current->state = TASK_INTERRUPTIBLE;
+			/* PRE: dev->last_context != buf->context */
+			drm_context_switch(dev, dev->last_context,
+					   buf->context);
+			/* POST: we will wait for the context
+			   switch and will dispatch on a later call
+			   when dev->last_context == buf->context.
+			   NOTE WE HOLD THE LOCK THROUGHOUT THIS
+			   TIME! */
+			schedule();
+			current->state = TASK_RUNNING;
+			remove_wait_queue(&dev->context_wait, &entry);
+			if (signal_pending(current)) {
+				retcode = -EINTR;
+				goto cleanup;
+			}
+			if (dev->last_context != buf->context) {
+				DRM_ERROR("Context mismatch: %d %d\n",
+					  dev->last_context,
+					  buf->context);
+			}
+		}
+
+#if DRM_DMA_HISTOGRAM
+		buf->time_queued     = get_cycles();
+		buf->time_dispatched = buf->time_queued;
+#endif
+		gamma_dma_dispatch(dev, address, length);
+		atomic_add(length, &dma->total_bytes);
+		atomic_inc(&dma->total_dmas);
+
+		if (last_buf) {
+			drm_free_buffer(dev, last_buf);
+		}
+		last_buf = buf;
+	}
+
+
+cleanup:
+	if (last_buf) {
+		gamma_dma_ready(dev);
+		drm_free_buffer(dev, last_buf);
+	}
+
+	if (must_free && !dev->context_flag) {
+		if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+				  DRM_KERNEL_CONTEXT)) {
+			DRM_ERROR("\n");
+		}
+	}
+	clear_bit(0, &dev->interrupt_flag);
+	return retcode;
+}
+
+static int gamma_dma_send_buffers(drm_device_t *dev, drm_dma_t *d)
+{
+	DECLARE_WAITQUEUE(entry, current);
+	drm_buf_t	  *last_buf = NULL;
+	int		  retcode   = 0;
+	drm_device_dma_t  *dma	    = dev->dma;
+
+	if (d->flags & _DRM_DMA_BLOCK) {
+		last_buf = dma->buflist[d->send_indices[d->send_count-1]];
+		add_wait_queue(&last_buf->dma_wait, &entry);
+	}
+
+	if ((retcode = drm_dma_enqueue(dev, d))) {
+		if (d->flags & _DRM_DMA_BLOCK)
+			remove_wait_queue(&last_buf->dma_wait, &entry);
+		return retcode;
+	}
+
+	gamma_dma_schedule(dev, 0);
+
+	if (d->flags & _DRM_DMA_BLOCK) {
+		DRM_DEBUG("%d waiting\n", current->pid);
+		for (;;) {
+			current->state = TASK_INTERRUPTIBLE;
+			if (!last_buf->waiting && !last_buf->pending)
+				break; /* finished */
+			schedule();
+			if (signal_pending(current)) {
+				retcode = -EINTR; /* Can't restart */
+				break;
+			}
+		}
+		current->state = TASK_RUNNING;
+		DRM_DEBUG("%d running\n", current->pid);
+		remove_wait_queue(&last_buf->dma_wait, &entry);
+		if (!retcode
+		    || (last_buf->list == DRM_LIST_PEND && !last_buf->pending)) {
+			if (!waitqueue_active(&last_buf->dma_wait)) {
+				drm_free_buffer(dev, last_buf);
+			}
+		}
+		if (retcode) {
+			DRM_ERROR("ctx%d w%d p%d c%d i%d l%d %d/%d\n",
+				  d->context,
+				  last_buf->waiting,
+				  last_buf->pending,
+				  DRM_WAITCOUNT(dev, d->context),
+				  last_buf->idx,
+				  last_buf->list,
+				  last_buf->pid,
+				  current->pid);
+		}
+	}
+	return retcode;
+}
+
+int gamma_dma(struct inode *inode, struct file *filp, unsigned int cmd,
+	      unsigned long arg)
+{
+	drm_file_t	  *priv	    = filp->private_data;
+	drm_device_t	  *dev	    = priv->dev;
+	drm_device_dma_t  *dma	    = dev->dma;
+	int		  retcode   = 0;
+	drm_dma_t	  d;
+
+	if (copy_from_user(&d, (drm_dma_t *)arg, sizeof(d)))
+		return -EFAULT;
+	DRM_DEBUG("%d %d: %d send, %d req\n",
+		  current->pid, d.context, d.send_count, d.request_count);
+
+	if (d.context == DRM_KERNEL_CONTEXT || d.context >= dev->queue_slots) {
+		DRM_ERROR("Process %d using context %d\n",
+			  current->pid, d.context);
+		return -EINVAL;
+	}
+	if (d.send_count < 0 || d.send_count > dma->buf_count) {
+		DRM_ERROR("Process %d trying to send %d buffers (of %d max)\n",
+			  current->pid, d.send_count, dma->buf_count);
+		return -EINVAL;
+	}
+	if (d.request_count < 0 || d.request_count > dma->buf_count) {
+		DRM_ERROR("Process %d trying to get %d buffers (of %d max)\n",
+			  current->pid, d.request_count, dma->buf_count);
+		return -EINVAL;
+	}
+
+	if (d.send_count) {
+		if (d.flags & _DRM_DMA_PRIORITY)
+			retcode = gamma_dma_priority(dev, &d);
+		else
+			retcode = gamma_dma_send_buffers(dev, &d);
+	}
+
+	d.granted_count = 0;
+
+	if (!retcode && d.request_count) {
+		retcode = drm_dma_get_buffers(dev, &d);
+	}
+
+	DRM_DEBUG("%d returning, granted = %d\n",
+		  current->pid, d.granted_count);
+	if (copy_to_user((drm_dma_t *)arg, &d, sizeof(d)))
+		return -EFAULT;
+
+	return retcode;
+}
+
+int gamma_irq_install(drm_device_t *dev, int irq)
+{
+	int retcode;
+
+	if (!irq) return -EINVAL;
+
+	down(&dev->struct_sem);
+	if (dev->irq) {
+		up(&dev->struct_sem);
+		return -EBUSY;
+	}
+	dev->irq = irq;
+	up(&dev->struct_sem);
+
+	DRM_DEBUG("%d\n", irq);
+
+	dev->context_flag   = 0;
+	dev->interrupt_flag = 0;
+	dev->dma_flag	    = 0;
+
+	dev->dma->next_buffer = NULL;
+	dev->dma->next_queue  = NULL;
+	dev->dma->this_buffer = NULL;
+
+	INIT_LIST_HEAD(&dev->tq.list);
+	dev->tq.sync	= 0;
+	dev->tq.routine = gamma_dma_schedule_tq_wrapper;
+	dev->tq.data	= dev;
+
+
+	/* Before installing handler */
+	GAMMA_WRITE(GAMMA_GCOMMANDMODE, 0);
+	GAMMA_WRITE(GAMMA_GDMACONTROL, 0);
+
+	/* Install handler */
+	if ((retcode = request_irq(dev->irq,
+				   gamma_dma_service,
+				   0,
+				   dev->devname,
+				   dev))) {
+		down(&dev->struct_sem);
+		dev->irq = 0;
+		up(&dev->struct_sem);
+		return retcode;
+	}
+
+	/* After installing handler */
+	GAMMA_WRITE(GAMMA_GINTENABLE,	    0x2001);
+	GAMMA_WRITE(GAMMA_COMMANDINTENABLE, 0x0008);
+	GAMMA_WRITE(GAMMA_GDELAYTIMER,	    0x39090);
+
+	return 0;
+}
+
+int gamma_irq_uninstall(drm_device_t *dev)
+{
+	int irq;
+
+	down(&dev->struct_sem);
+	irq	 = dev->irq;
+	dev->irq = 0;
+	up(&dev->struct_sem);
+
+	if (!irq) return -EINVAL;
+
+	DRM_DEBUG("%d\n", irq);
+
+	GAMMA_WRITE(GAMMA_GDELAYTIMER,	    0);
+	GAMMA_WRITE(GAMMA_COMMANDINTENABLE, 0);
+	GAMMA_WRITE(GAMMA_GINTENABLE,	    0);
+	free_irq(irq, dev);
+
+	return 0;
+}
+
+
+int gamma_control(struct inode *inode, struct file *filp, unsigned int cmd,
+		  unsigned long arg)
+{
+	drm_file_t	*priv	= filp->private_data;
+	drm_device_t	*dev	= priv->dev;
+	drm_control_t	ctl;
+	int		retcode;
+
+	if (copy_from_user(&ctl, (drm_control_t *)arg, sizeof(ctl)))
+		return -EFAULT;
+
+	switch (ctl.func) {
+	case DRM_INST_HANDLER:
+		if ((retcode = gamma_irq_install(dev, ctl.irq)))
+			return retcode;
+		break;
+	case DRM_UNINST_HANDLER:
+		if ((retcode = gamma_irq_uninstall(dev)))
+			return retcode;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+int gamma_lock(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t	  *priv	= filp->private_data;
+	drm_device_t	  *dev	= priv->dev;
+	DECLARE_WAITQUEUE(entry, current);
+	int		  ret	= 0;
+	drm_lock_t	  lock;
+	drm_queue_t	  *q;
+#if DRM_DMA_HISTOGRAM
+	cycles_t	  start;
+
+	dev->lck_start = start = get_cycles();
+#endif
+
+	if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+		return -EFAULT;
+
+	if (lock.context == DRM_KERNEL_CONTEXT) {
+		DRM_ERROR("Process %d using kernel context %d\n",
+			  current->pid, lock.context);
+		return -EINVAL;
+	}
+
+	DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+		  lock.context, current->pid, dev->lock.hw_lock->lock,
+		  lock.flags);
+
+	if (lock.context < 0 || lock.context >= dev->queue_count)
+		return -EINVAL;
+	q = dev->queuelist[lock.context];
+
+	ret = drm_flush_block_and_flush(dev, lock.context, lock.flags);
+
+	if (!ret) {
+		if (_DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock)
+		    != lock.context) {
+			long j = jiffies - dev->lock.lock_time;
+
+			if (j > 0 && j <= DRM_LOCK_SLICE) {
+				/* Can't take lock if we just had it and
+				   there is contention. */
+				current->state = TASK_INTERRUPTIBLE;
+				schedule_timeout(j);
+			}
+		}
+		add_wait_queue(&dev->lock.lock_queue, &entry);
+		for (;;) {
+			current->state = TASK_INTERRUPTIBLE;
+			if (!dev->lock.hw_lock) {
+				/* Device has been unregistered */
+				ret = -EINTR;
+				break;
+			}
+			if (drm_lock_take(&dev->lock.hw_lock->lock,
+					  lock.context)) {
+				dev->lock.pid	    = current->pid;
+				dev->lock.lock_time = jiffies;
+				atomic_inc(&dev->total_locks);
+				atomic_inc(&q->total_locks);
+				break;	/* Got lock */
+			}
+
+			/* Contention */
+			atomic_inc(&dev->total_sleeps);
+			schedule();
+			if (signal_pending(current)) {
+				ret = -ERESTARTSYS;
+				break;
+			}
+		}
+		current->state = TASK_RUNNING;
+		remove_wait_queue(&dev->lock.lock_queue, &entry);
+	}
+
+	drm_flush_unblock(dev, lock.context, lock.flags); /* cleanup phase */
+
+	if (!ret) {
+		sigemptyset(&dev->sigmask);
+		sigaddset(&dev->sigmask, SIGSTOP);
+		sigaddset(&dev->sigmask, SIGTSTP);
+		sigaddset(&dev->sigmask, SIGTTIN);
+		sigaddset(&dev->sigmask, SIGTTOU);
+		dev->sigdata.context = lock.context;
+		dev->sigdata.lock    = dev->lock.hw_lock;
+		block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+
+		if (lock.flags & _DRM_LOCK_READY)
+			gamma_dma_ready(dev);
+		if (lock.flags & _DRM_LOCK_QUIESCENT) {
+			if (gamma_found() == 1) {
+				gamma_dma_quiescent_single(dev);
+			} else {
+				gamma_dma_quiescent_dual(dev);
+			}
+		}
+	}
+	DRM_DEBUG("%d %s\n", lock.context, ret ? "interrupted" : "has lock");
+
+#if DRM_DMA_HISTOGRAM
+	atomic_inc(&dev->histo.lacq[drm_histogram_slot(get_cycles() - start)]);
+#endif
+
+	return ret;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_drv.c linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.c
--- linux-2.4.13/drivers/char/drm-4.0/gamma_drv.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,571 @@
+/* gamma.c -- 3dlabs GMX 2000 driver -*- linux-c -*-
+ * Created: Mon Jan  4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Rickard E. (Rik) Faith
+ *
+ */
+
+#include
+#include "drmP.h"
+#include "gamma_drv.h"
+
+#ifndef PCI_DEVICE_ID_3DLABS_GAMMA
+#define PCI_DEVICE_ID_3DLABS_GAMMA 0x0008
+#endif
+#ifndef PCI_DEVICE_ID_3DLABS_MX
+#define PCI_DEVICE_ID_3DLABS_MX 0x0006
+#endif
+
+#define GAMMA_NAME	 "gamma"
+#define GAMMA_DESC	 "3dlabs GMX 2000"
+#define GAMMA_DATE	 "20000910"
+#define GAMMA_MAJOR	 1
+#define GAMMA_MINOR	 0
+#define GAMMA_PATCHLEVEL 0
+
+static drm_device_t gamma_device;
+
+static struct file_operations gamma_fops = {
+#if LINUX_VERSION_CODE >= 0x020400
+	/* This started being used during 2.4.0-test */
+	owner:	 THIS_MODULE,
+#endif
+	open:	 gamma_open,
+	flush:	 drm_flush,
+	release: gamma_release,
+	ioctl:	 gamma_ioctl,
+	mmap:	 drm_mmap,
+	read:	 drm_read,
+	fasync:	 drm_fasync,
+	poll:	 drm_poll,
+};
+
+static struct miscdevice gamma_misc = {
+	minor: MISC_DYNAMIC_MINOR,
+	name:  GAMMA_NAME,
+	fops:  &gamma_fops,
+};
+
+static drm_ioctl_desc_t gamma_ioctls[] = {
+	[DRM_IOCTL_NR(DRM_IOCTL_VERSION)]    = { gamma_version,	  0, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique,	  0, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)]  = { drm_getmagic,	  0, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)]  = { drm_irq_busid,	  0, 1 },
+
+	[DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique,	  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_BLOCK)]	     = { drm_block,	  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)]    = { drm_unblock,	  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_CONTROL)]    = { gamma_control,	  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic,	  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)]    = { drm_addmap,	  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_ADD_BUFS)]   = { drm_addbufs,	  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_MARK_BUFS)]  = { drm_markbufs,	  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_INFO_BUFS)]  = { drm_infobufs,	  1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_MAP_BUFS)]   = { drm_mapbufs,	  1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_FREE_BUFS)]  = { drm_freebufs,	  1, 0 },
+
+	[DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)]    = { drm_addctx,	  1, 1
}, + [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] =3D { drm_rmctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] =3D { drm_modctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] =3D { drm_getctx, 1, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] =3D { drm_switchctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] =3D { drm_newctx, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] =3D { drm_resctx, 1, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] =3D { drm_adddraw, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] =3D { drm_rmdraw, 1, 1 }, + [DRM_IOCTL_NR(DRM_IOCTL_DMA)] =3D { gamma_dma, 1, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] =3D { gamma_lock, 1, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] =3D { gamma_unlock, 1, 0 }, + [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] =3D { drm_finish, 1, 0 }, +}; +#define GAMMA_IOCTL_COUNT DRM_ARRAY_SIZE(gamma_ioctls) + +#ifdef MODULE +static char *gamma =3D NULL; +#endif +static int devices =3D 0; + +MODULE_AUTHOR("VA Linux Systems, Inc."); +MODULE_DESCRIPTION("3dlabs GMX 2000"); +MODULE_PARM(gamma, "s"); +MODULE_PARM(devices, "i"); +MODULE_PARM_DESC(devices, + "devices=3Dx, where x is the number of MX chips on card\n"); +#ifndef MODULE +/* gamma_options is called by the kernel to parse command-line options + * passed via the boot-loader (e.g., LILO). It calls the insmod option + * routine, drm_parse_options. 
+ */ + + +static int __init gamma_options(char *str) +{ + drm_parse_options(str); + return 1; +} + +__setup("gamma=3D", gamma_options); +#endif + +static int gamma_setup(drm_device_t *dev) +{ + int i; + + atomic_set(&dev->ioctl_count, 0); + atomic_set(&dev->vma_count, 0); + dev->buf_use =3D 0; + atomic_set(&dev->buf_alloc, 0); + + drm_dma_setup(dev); + + atomic_set(&dev->total_open, 0); + atomic_set(&dev->total_close, 0); + atomic_set(&dev->total_ioctl, 0); + atomic_set(&dev->total_irq, 0); + atomic_set(&dev->total_ctx, 0); + atomic_set(&dev->total_locks, 0); + atomic_set(&dev->total_unlocks, 0); + atomic_set(&dev->total_contends, 0); + atomic_set(&dev->total_sleeps, 0); + + for (i =3D 0; i < DRM_HASH_SIZE; i++) { + dev->magiclist[i].head =3D NULL; + dev->magiclist[i].tail =3D NULL; + } + dev->maplist =3D NULL; + dev->map_count =3D 0; + dev->vmalist =3D NULL; + dev->lock.hw_lock =3D NULL; + init_waitqueue_head(&dev->lock.lock_queue); + dev->queue_count =3D 0; + dev->queue_reserved =3D 0; + dev->queue_slots =3D 0; + dev->queuelist =3D NULL; + dev->irq =3D 0; + dev->context_flag =3D 0; + dev->interrupt_flag =3D 0; + dev->dma_flag =3D 0; + dev->last_context =3D 0; + dev->last_switch =3D 0; + dev->last_checked =3D 0; + init_timer(&dev->timer); + init_waitqueue_head(&dev->context_wait); +#if DRM_DMA_HISTO + memset(&dev->histo, 0, sizeof(dev->histo)); +#endif + dev->ctx_start =3D 0; + dev->lck_start =3D 0; + + dev->buf_rp =3D dev->buf; + dev->buf_wp =3D dev->buf; + dev->buf_end =3D dev->buf + DRM_BSZ; + dev->buf_async =3D NULL; + init_waitqueue_head(&dev->buf_readers); + init_waitqueue_head(&dev->buf_writers); + + DRM_DEBUG("\n"); + + /* The kernel's context could be created here, but is now created + in drm_dma_enqueue. This is more resource-efficient for + hardware that does not do DMA, but may mean that + drm_select_queue fails between the time the interrupt is + initialized and the time the queues are initialized. 
*/ + + return 0; +} + + +static int gamma_takedown(drm_device_t *dev) +{ + int i; + drm_magic_entry_t *pt, *next; + drm_map_t *map; + drm_vma_entry_t *vma, *vma_next; + + DRM_DEBUG("\n"); + + if (dev->irq) gamma_irq_uninstall(dev); + + down(&dev->struct_sem); + del_timer(&dev->timer); + + if (dev->devname) { + drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER); + dev->devname =3D NULL; + } + + if (dev->unique) { + drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER); + dev->unique =3D NULL; + dev->unique_len =3D 0; + } + /* Clear pid list */ + for (i =3D 0; i < DRM_HASH_SIZE; i++) { + for (pt =3D dev->magiclist[i].head; pt; pt =3D next) { + next =3D pt->next; + drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC); + } + dev->magiclist[i].head =3D dev->magiclist[i].tail =3D NULL; + } + + /* Clear vma list (only built for debugging) */ + if (dev->vmalist) { + for (vma =3D dev->vmalist; vma; vma =3D vma_next) { + vma_next =3D vma->next; + drm_free(vma, sizeof(*vma), DRM_MEM_VMAS); + } + dev->vmalist =3D NULL; + } + + /* Clear map area and mtrr information */ + if (dev->maplist) { + for (i =3D 0; i < dev->map_count; i++) { + map =3D dev->maplist[i]; + switch (map->type) { + case _DRM_REGISTERS: + case _DRM_FRAME_BUFFER: +#ifdef CONFIG_MTRR + if (map->mtrr >=3D 0) { + int retcode; + retcode =3D mtrr_del(map->mtrr, + map->offset, + map->size); + DRM_DEBUG("mtrr_del =3D %d\n", retcode); + } +#endif + drm_ioremapfree(map->handle, map->size, dev); + break; + case _DRM_SHM: + drm_free_pages((unsigned long)map->handle, + drm_order(map->size) + - PAGE_SHIFT, + DRM_MEM_SAREA); + break; + case _DRM_AGP: + /* Do nothing here, because this is all + handled in the AGP/GART driver. 
*/ + break; + } + drm_free(map, sizeof(*map), DRM_MEM_MAPS); + } + drm_free(dev->maplist, + dev->map_count * sizeof(*dev->maplist), + DRM_MEM_MAPS); + dev->maplist =3D NULL; + dev->map_count =3D 0; + } + + if (dev->queuelist) { + for (i =3D 0; i < dev->queue_count; i++) { + drm_waitlist_destroy(&dev->queuelist[i]->waitlist); + if (dev->queuelist[i]) { + drm_free(dev->queuelist[i], + sizeof(*dev->queuelist[0]), + DRM_MEM_QUEUES); + dev->queuelist[i] =3D NULL; + } + } + drm_free(dev->queuelist, + dev->queue_slots * sizeof(*dev->queuelist), + DRM_MEM_QUEUES); + dev->queuelist =3D NULL; + } + + drm_dma_takedown(dev); + + dev->queue_count =3D 0; + if (dev->lock.hw_lock) { + dev->lock.hw_lock =3D NULL; /* SHM removed */ + dev->lock.pid =3D 0; + wake_up_interruptible(&dev->lock.lock_queue); + } + up(&dev->struct_sem); + + return 0; +} + +int gamma_found(void) +{ + return devices; +} + +int gamma_find_devices(void) +{ + struct pci_dev *d =3D NULL, *one =3D NULL, *two =3D NULL; + + d =3D pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_GAMMA,d); + if (!d) return 0; + + one =3D pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_MX,d); + if (!one) return 0; + + /* Make sure it's on the same card, if not - no MX's found */ + if (PCI_SLOT(d->devfn) !=3D PCI_SLOT(one->devfn)) return 0; + + two =3D pci_find_device(PCI_VENDOR_ID_3DLABS,PCI_DEVICE_ID_3DLABS_MX,one); + if (!two) return 1; + + /* Make sure it's on the same card, if not - only 1 MX found */ + if (PCI_SLOT(d->devfn) !=3D PCI_SLOT(two->devfn)) return 1; + + /* Two MX's found - we don't currently support more than 2 */ + return 2; +} + +/* gamma_init is called via init_module at module load time, or via + * linux/init/main.c (this is not currently supported). 
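gamma_found()/gamma_find_devices() above choose between the single-MX and dual-MX quiescent paths by walking the PCI list: locate the GAMMA bridge, then count MX rasterizer chips in the same PCI slot, capped at two. The same slot-matching logic over a plain array (hypothetical struct standing in for the pci_find_device() walk; assumes non-negative slot numbers):

```c
#include <stddef.h>

#define DEV_GAMMA 0x0008   /* PCI_DEVICE_ID_3DLABS_GAMMA */
#define DEV_MX    0x0006   /* PCI_DEVICE_ID_3DLABS_MX    */

struct pci_id { int slot; int device; };

/* Count MX chips sharing a slot with the GAMMA bridge: 0, 1 or 2. */
static int count_mx(const struct pci_id *devs, size_t n)
{
    int slot = -1, mx = 0;
    size_t i;

    for (i = 0; i < n; i++)
        if (devs[i].device == DEV_GAMMA) { slot = devs[i].slot; break; }
    if (slot < 0)
        return 0;                 /* no GAMMA bridge: no usable card */
    for (i = 0; i < n && mx < 2; i++)
        if (devs[i].device == DEV_MX && devs[i].slot == slot)
            mx++;                 /* an MX on another card doesn't count */
    return mx;
}
```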
*/ + +static int __init gamma_init(void) +{ + int retcode; + drm_device_t *dev =3D &gamma_device; + + DRM_DEBUG("\n"); + + memset((void *)dev, 0, sizeof(*dev)); + dev->count_lock =3D SPIN_LOCK_UNLOCKED; + sema_init(&dev->struct_sem, 1); + +#ifdef MODULE + drm_parse_options(gamma); +#endif + devices =3D gamma_find_devices(); + if (devices =3D=3D 0) return -1; + + if ((retcode =3D misc_register(&gamma_misc))) { + DRM_ERROR("Cannot register \"%s\"\n", GAMMA_NAME); + return retcode; + } + dev->device =3D MKDEV(MISC_MAJOR, gamma_misc.minor); + dev->name =3D GAMMA_NAME; + + drm_mem_init(); + drm_proc_init(dev); + + DRM_INFO("Initialized %s %d.%d.%d %s on minor %d with %d MX devices\n", + GAMMA_NAME, + GAMMA_MAJOR, + GAMMA_MINOR, + GAMMA_PATCHLEVEL, + GAMMA_DATE, + gamma_misc.minor, + devices); + + return 0; +} + +/* gamma_cleanup is called via cleanup_module at module unload time. */ + +static void __exit gamma_cleanup(void) +{ + drm_device_t *dev =3D &gamma_device; + + DRM_DEBUG("\n"); + + drm_proc_cleanup(); + if (misc_deregister(&gamma_misc)) { + DRM_ERROR("Cannot unload module\n"); + } else { + DRM_INFO("Module unloaded\n"); + } + gamma_takedown(dev); +} + +module_init(gamma_init); +module_exit(gamma_cleanup); + + +int gamma_version(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_version_t version; + int len; + + if (copy_from_user(&version, + (drm_version_t *)arg, + sizeof(version))) + return -EFAULT; + +#define DRM_COPY(name,value) \ + len =3D strlen(value); \ + if (len > name##_len) len =3D name##_len; \ + name##_len =3D strlen(value); \ + if (len && name) { \ + if (copy_to_user(name, value, len)) \ + return -EFAULT; \ + } + + version.version_major =3D GAMMA_MAJOR; + version.version_minor =3D GAMMA_MINOR; + version.version_patchlevel =3D GAMMA_PATCHLEVEL; + + DRM_COPY(version.name, GAMMA_NAME); + DRM_COPY(version.date, GAMMA_DATE); + DRM_COPY(version.desc, GAMMA_DESC); + + if (copy_to_user((drm_version_t *)arg, + &version, + 
sizeof(version))) + return -EFAULT; + return 0; +} + +int gamma_open(struct inode *inode, struct file *filp) +{ + drm_device_t *dev =3D &gamma_device; + int retcode =3D 0; + + DRM_DEBUG("open_count =3D %d\n", dev->open_count); + if (!(retcode =3D drm_open_helper(inode, filp, dev))) { +#if LINUX_VERSION_CODE < 0x020333 + MOD_INC_USE_COUNT; /* Needed before Linux 2.3.51 */ +#endif + atomic_inc(&dev->total_open); + spin_lock(&dev->count_lock); + if (!dev->open_count++) { + spin_unlock(&dev->count_lock); + return gamma_setup(dev); + } + spin_unlock(&dev->count_lock); + } + return retcode; +} + +int gamma_release(struct inode *inode, struct file *filp) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev; + int retcode =3D 0; + + lock_kernel(); + dev =3D priv->dev; + + DRM_DEBUG("open_count =3D %d\n", dev->open_count); + if (!(retcode =3D drm_release(inode, filp))) { +#if LINUX_VERSION_CODE < 0x020333 + MOD_DEC_USE_COUNT; /* Needed before Linux 2.3.51 */ +#endif + atomic_inc(&dev->total_close); + spin_lock(&dev->count_lock); + if (!--dev->open_count) { + if (atomic_read(&dev->ioctl_count) || dev->blocked) { + DRM_ERROR("Device busy: %d %d\n", + atomic_read(&dev->ioctl_count), + dev->blocked); + spin_unlock(&dev->count_lock); + unlock_kernel(); + return -EBUSY; + } + spin_unlock(&dev->count_lock); + unlock_kernel(); + return gamma_takedown(dev); + } + spin_unlock(&dev->count_lock); + } + unlock_kernel(); + return retcode; +} + +/* drm_ioctl is called whenever a process performs an ioctl on /dev/drm. 
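gamma_version() above hands version strings back to user space through the DRM_COPY macro: it copies at most the caller-supplied buffer length but reports the string's full length, so a caller can detect that its buffer was too small. The same bounded-copy idea in plain C, with memcpy standing in for copy_to_user (hypothetical helper name):

```c
#include <string.h>

/* Copy at most dst_len bytes of value into dst; report the full
 * string length through reported_len so truncation is detectable. */
static size_t bounded_copy(char *dst, size_t dst_len,
                           const char *value, size_t *reported_len)
{
    size_t len = strlen(value);
    size_t n = len < dst_len ? len : dst_len;

    if (dst && n)
        memcpy(dst, value, n);
    *reported_len = len;          /* true length, as DRM_COPY reports */
    return n;                     /* bytes actually copied */
}
```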
*/ + +int gamma_ioctl(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + int nr =3D DRM_IOCTL_NR(cmd); + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + int retcode =3D 0; + drm_ioctl_desc_t *ioctl; + drm_ioctl_t *func; + + atomic_inc(&dev->ioctl_count); + atomic_inc(&dev->total_ioctl); + ++priv->ioctl_count; + + DRM_DEBUG("pid =3D %d, cmd =3D 0x%02x, nr =3D 0x%02x, dev 0x%x, auth =3D = %d\n", + current->pid, cmd, nr, dev->device, priv->authenticated); + + if (nr >=3D GAMMA_IOCTL_COUNT) { + retcode =3D -EINVAL; + } else { + ioctl =3D &gamma_ioctls[nr]; + func =3D ioctl->func; + + if (!func) { + DRM_DEBUG("no function\n"); + retcode =3D -EINVAL; + } else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN)) + || (ioctl->auth_needed && !priv->authenticated)) { + retcode =3D -EACCES; + } else { + retcode =3D (func)(inode, filp, cmd, arg); + } + } + + atomic_dec(&dev->ioctl_count); + return retcode; +} + + +int gamma_unlock(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_lock_t lock; + + if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock))) + return -EFAULT; + + if (lock.context =3D=3D DRM_KERNEL_CONTEXT) { + DRM_ERROR("Process %d using kernel context %d\n", + current->pid, lock.context); + return -EINVAL; + } + + DRM_DEBUG("%d frees lock (%d holds)\n", + lock.context, + _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock)); + atomic_inc(&dev->total_unlocks); + if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock)) + atomic_inc(&dev->total_contends); + drm_lock_transfer(dev, &dev->lock.hw_lock->lock, DRM_KERNEL_CONTEXT); + gamma_dma_schedule(dev, 1); + if (!dev->context_flag) { + if (drm_lock_free(dev, &dev->lock.hw_lock->lock, + DRM_KERNEL_CONTEXT)) { + DRM_ERROR("\n"); + } + } +#if DRM_DMA_HISTOGRAM + atomic_inc(&dev->histo.lhld[drm_histogram_slot(get_cycles() + - dev->lck_start)]); +#endif + + 
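Each entry in the gamma_ioctls[] table carries two flags, and gamma_ioctl() above rejects a request before ever calling the handler: -EINVAL for an unknown or empty slot, -EACCES when the entry demands root or an authenticated client and the caller has neither. The same gate as a pure function (hypothetical struct, errno values written out as literals):

```c
struct ioctl_desc { int has_func; int auth_needed; int root_only; };

/* Mirror the dispatch checks in gamma_ioctl(): 0 means "call the
 * handler", otherwise a negative errno-style value. */
static int check_ioctl(const struct ioctl_desc *tbl, int count, int nr,
                       int authenticated, int is_root)
{
    if (nr >= count || !tbl[nr].has_func)
        return -22;                           /* -EINVAL: no such ioctl */
    if ((tbl[nr].root_only && !is_root) ||
        (tbl[nr].auth_needed && !authenticated))
        return -13;                           /* -EACCES: not permitted */
    return 0;
}
```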
unblock_all_signals(); + return 0; +} diff -urN linux-2.4.13/drivers/char/drm-4.0/gamma_drv.h linux-2.4.13-lia/dr= ivers/char/drm-4.0/gamma_drv.h --- linux-2.4.13/drivers/char/drm-4.0/gamma_drv.h Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/gamma_drv.h Thu Oct 4 00:21:40 2= 001 @@ -0,0 +1,58 @@ +/* gamma_drv.h -- Private header for 3dlabs GMX 2000 driver -*- linux-c -*- + * Created: Mon Jan 4 10:05:05 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All rights reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: + * Rickard E. 
(Rik) Faith + *=20 + */ + +#ifndef _GAMMA_DRV_H_ +#define _GAMMA_DRV_H_ + + /* gamma_drv.c */ +extern int gamma_version(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int gamma_open(struct inode *inode, struct file *filp); +extern int gamma_release(struct inode *inode, struct file *filp); +extern int gamma_ioctl(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int gamma_lock(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int gamma_unlock(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); + + /* gamma_dma.c */ +extern int gamma_dma_schedule(drm_device_t *dev, int locked); +extern int gamma_dma(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int gamma_irq_install(drm_device_t *dev, int irq); +extern int gamma_irq_uninstall(drm_device_t *dev); +extern int gamma_control(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg); +extern int gamma_find_devices(void); +extern int gamma_found(void); + +#endif diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_bufs.c linux-2.4.13-lia/dr= ivers/char/drm-4.0/i810_bufs.c --- linux-2.4.13/drivers/char/drm-4.0/i810_bufs.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_bufs.c Thu Oct 4 00:21:40 2= 001 @@ -0,0 +1,339 @@ +/* i810_bufs.c -- IOCTLs to manage buffers -*- linux-c -*- + * Created: Thu Jan 6 01:47:26 2000 by jhartmann@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: Rickard E. 
(Rik) Faith + * Jeff Hartmann + *=20 + */ + +#define __NO_VERSION__ +#include "drmP.h" +#include "i810_drv.h" +#include "linux/un.h" + +int i810_addbufs_agp(struct inode *inode, struct file *filp, unsigned int = cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_desc_t request; + drm_buf_entry_t *entry; + drm_buf_t *buf; + unsigned long offset; + unsigned long agp_offset; + int count; + int order; + int size; + int alignment; + int page_order; + int total; + int byte_count; + int i; + + if (!dma) return -EINVAL; + + if (copy_from_user(&request, + (drm_buf_desc_t *)arg, + sizeof(request))) + return -EFAULT; + + count =3D request.count; + order =3D drm_order(request.size); + size =3D 1 << order; + agp_offset =3D request.agp_start; + alignment =3D (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size) :size; + page_order =3D order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0; + total =3D PAGE_SIZE << page_order; + byte_count =3D 0; + =20 + if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL; + if (dev->queue_count) return -EBUSY; /* Not while in use */ + spin_lock(&dev->count_lock); + if (dev->buf_use) { + spin_unlock(&dev->count_lock); + return -EBUSY; + } + atomic_inc(&dev->buf_alloc); + spin_unlock(&dev->count_lock); + =20 + down(&dev->struct_sem); + entry =3D &dma->bufs[order]; + if (entry->buf_count) { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -ENOMEM; /* May only call once for each order */ + } + + if(count < 0 || count > 4096) + { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -EINVAL; + } + =20 + entry->buflist =3D drm_alloc(count * sizeof(*entry->buflist), + DRM_MEM_BUFS); + if (!entry->buflist) { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -ENOMEM; + } + memset(entry->buflist, 0, count * sizeof(*entry->buflist)); + =20 + entry->buf_size =3D size; + entry->page_order =3D page_order; + 
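i810_addbufs_agp() above sizes everything from a power-of-two "order": the buffer size is 1 << order with order = drm_order(request.size) (the ceiling log2, as in the DRM core), and successive buffers sit at offsets that advance by the page-aligned size when _DRM_PAGE_ALIGN is requested. A sketch of that arithmetic (PAGE_SIZE_ assumed to be 4096 for illustration):

```c
#define PAGE_SIZE_ 4096UL

/* Ceiling log2: smallest order with (1 << order) >= size. */
static int order_of(unsigned long size)
{
    int order = 0;
    unsigned long s = 1;
    while (s < size) { s <<= 1; order++; }
    return order;
}

/* Offset of buffer n, advancing by the (optionally page-aligned)
 * power-of-two buffer size, as in the allocation loop above. */
static unsigned long nth_buf_offset(unsigned long size, int page_align, int n)
{
    unsigned long sz = 1UL << order_of(size);
    unsigned long step = page_align
        ? (sz + PAGE_SIZE_ - 1) & ~(PAGE_SIZE_ - 1)   /* PAGE_ALIGN */
        : sz;
    return (unsigned long)n * step;
}
```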
offset =3D 0; + =20 + while(entry->buf_count < count) { + buf =3D &entry->buflist[entry->buf_count]; + buf->idx =3D dma->buf_count + entry->buf_count; + buf->total =3D alignment; + buf->order =3D order; + buf->used =3D 0; + buf->offset =3D offset; + buf->bus_address =3D dev->agp->base + agp_offset + offset; + buf->address =3D (void *)(agp_offset + offset + dev->agp->base); + buf->next =3D NULL; + buf->waiting =3D 0; + buf->pending =3D 0; + init_waitqueue_head(&buf->dma_wait); + buf->pid =3D 0; + + buf->dev_private =3D drm_alloc(sizeof(drm_i810_buf_priv_t),=20 + DRM_MEM_BUFS); + buf->dev_priv_size =3D sizeof(drm_i810_buf_priv_t); + memset(buf->dev_private, 0, sizeof(drm_i810_buf_priv_t)); + +#if DRM_DMA_HISTOGRAM + buf->time_queued =3D 0; + buf->time_dispatched =3D 0; + buf->time_completed =3D 0; + buf->time_freed =3D 0; +#endif + offset =3D offset + alignment; + entry->buf_count++; + byte_count +=3D PAGE_SIZE << page_order; + =20 + DRM_DEBUG("buffer %d @ %p\n", + entry->buf_count, buf->address); + } + =20 + dma->buflist =3D drm_realloc(dma->buflist, + dma->buf_count * sizeof(*dma->buflist), + (dma->buf_count + entry->buf_count) + * sizeof(*dma->buflist), + DRM_MEM_BUFS); + for (i =3D dma->buf_count; i < dma->buf_count + entry->buf_count; i++) + dma->buflist[i] =3D &entry->buflist[i - dma->buf_count]; + =20 + dma->buf_count +=3D entry->buf_count; + dma->byte_count +=3D byte_count; + drm_freelist_create(&entry->freelist, entry->buf_count); + for (i =3D 0; i < entry->buf_count; i++) { + drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]); + } + =20 + up(&dev->struct_sem); + =20 + request.count =3D entry->buf_count; + request.size =3D size; + =20 + if (copy_to_user((drm_buf_desc_t *)arg, + &request, + sizeof(request))) + return -EFAULT; + =20 + atomic_dec(&dev->buf_alloc); + dma->flags =3D _DRM_DMA_USE_AGP; + return 0; +} + +int i810_addbufs(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_buf_desc_t request; + + if 
(copy_from_user(&request, + (drm_buf_desc_t *)arg, + sizeof(request))) + return -EFAULT; + + if(request.flags & _DRM_AGP_BUFFER) + return i810_addbufs_agp(inode, filp, cmd, arg); + else + return -EINVAL; +} + +int i810_infobufs(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_info_t request; + int i; + int count; + + if (!dma) return -EINVAL; + + spin_lock(&dev->count_lock); + if (atomic_read(&dev->buf_alloc)) { + spin_unlock(&dev->count_lock); + return -EBUSY; + } + ++dev->buf_use; /* Can't allocate more after this call */ + spin_unlock(&dev->count_lock); + + if (copy_from_user(&request, + (drm_buf_info_t *)arg, + sizeof(request))) + return -EFAULT; + + for (i =3D 0, count =3D 0; i < DRM_MAX_ORDER+1; i++) { + if (dma->bufs[i].buf_count) ++count; + } +=09 + DRM_DEBUG("count =3D %d\n", count); +=09 + if (request.count >=3D count) { + for (i =3D 0, count =3D 0; i < DRM_MAX_ORDER+1; i++) { + if (dma->bufs[i].buf_count) { + if (copy_to_user(&request.list[count].count, + &dma->bufs[i].buf_count, + sizeof(dma->bufs[0] + .buf_count)) || + copy_to_user(&request.list[count].size, + &dma->bufs[i].buf_size, + sizeof(dma->bufs[0].buf_size)) || + copy_to_user(&request.list[count].low_mark, + &dma->bufs[i] + .freelist.low_mark, + sizeof(dma->bufs[0] + .freelist.low_mark)) || + copy_to_user(&request.list[count] + .high_mark, + &dma->bufs[i] + .freelist.high_mark, + sizeof(dma->bufs[0] + .freelist.high_mark))) + return -EFAULT; + + DRM_DEBUG("%d %d %d %d %d\n", + i, + dma->bufs[i].buf_count, + dma->bufs[i].buf_size, + dma->bufs[i].freelist.low_mark, + dma->bufs[i].freelist.high_mark); + ++count; + } + } + } + request.count =3D count; + + if (copy_to_user((drm_buf_info_t *)arg, + &request, + sizeof(request))) + return -EFAULT; +=09 + return 0; +} + +int i810_markbufs(struct inode *inode, struct file *filp, unsigned int cmd, + 
unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_desc_t request; + int order; + drm_buf_entry_t *entry; + + if (!dma) return -EINVAL; + + if (copy_from_user(&request, + (drm_buf_desc_t *)arg, + sizeof(request))) + return -EFAULT; + + DRM_DEBUG("%d, %d, %d\n", + request.size, request.low_mark, request.high_mark); + order =3D drm_order(request.size); + if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL; + entry =3D &dma->bufs[order]; + + if (request.low_mark < 0 || request.low_mark > entry->buf_count) + return -EINVAL; + if (request.high_mark < 0 || request.high_mark > entry->buf_count) + return -EINVAL; + + entry->freelist.low_mark =3D request.low_mark; + entry->freelist.high_mark =3D request.high_mark; +=09 + return 0; +} + +int i810_freebufs(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_free_t request; + int i; + int idx; + drm_buf_t *buf; + + if (!dma) return -EINVAL; + + if (copy_from_user(&request, + (drm_buf_free_t *)arg, + sizeof(request))) + return -EFAULT; + + DRM_DEBUG("%d\n", request.count); + for (i =3D 0; i < request.count; i++) { + if (copy_from_user(&idx, + &request.list[i], + sizeof(idx))) + return -EFAULT; + if (idx < 0 || idx >=3D dma->buf_count) { + DRM_ERROR("Index %d (of %d max)\n", + idx, dma->buf_count - 1); + return -EINVAL; + } + buf =3D dma->buflist[idx]; + if (buf->pid !=3D current->pid) { + DRM_ERROR("Process %d freeing buffer owned by %d\n", + current->pid, buf->pid); + return -EINVAL; + } + drm_free_buffer(dev, buf); + } +=09 + return 0; +} + diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_context.c linux-2.4.13-lia= /drivers/char/drm-4.0/i810_context.c --- linux-2.4.13/drivers/char/drm-4.0/i810_context.c Wed Dec 31 16:00:00 19= 69 +++ 
linux-2.4.13-lia/drivers/char/drm-4.0/i810_context.c Thu Oct 4 00:21:4= 0 2001 @@ -0,0 +1,212 @@ +/* i810_context.c -- IOCTLs for i810 contexts -*- linux-c -*- + * Created: Mon Dec 13 09:51:35 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: Rickard E. 
(Rik) Faith + * Jeff Hartmann + * + */ + +#define __NO_VERSION__ +#include "drmP.h" +#include "i810_drv.h" + +static int i810_alloc_queue(drm_device_t *dev) +{ + int temp =3D drm_ctxbitmap_next(dev); + DRM_DEBUG("i810_alloc_queue: %d\n", temp); + return temp; +} + +int i810_context_switch(drm_device_t *dev, int old, int new) +{ + char buf[64]; + + atomic_inc(&dev->total_ctx); + + if (test_and_set_bit(0, &dev->context_flag)) { + DRM_ERROR("Reentering -- FIXME\n"); + return -EBUSY; + } + +#if DRM_DMA_HISTOGRAM + dev->ctx_start =3D get_cycles(); +#endif + =20 + DRM_DEBUG("Context switch from %d to %d\n", old, new); + + if (new =3D=3D dev->last_context) { + clear_bit(0, &dev->context_flag); + return 0; + } + =20 + if (drm_flags & DRM_FLAG_NOCTX) { + i810_context_switch_complete(dev, new); + } else { + sprintf(buf, "C %d %d\n", old, new); + drm_write_string(dev, buf); + } + =20 + return 0; +} + +int i810_context_switch_complete(drm_device_t *dev, int new) +{ + dev->last_context =3D new; /* PRE/POST: This is the _only_ writer= . */ + dev->last_switch =3D jiffies; + =20 + if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) { + DRM_ERROR("Lock isn't held after context switch\n"); + } + + /* If a context switch is ever initiated + when the kernel holds the lock, release + that lock here. 
*/ +#if DRM_DMA_HISTOGRAM + atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles() + - dev->ctx_start)]); + =20 +#endif + clear_bit(0, &dev->context_flag); + wake_up(&dev->context_wait); + =20 + return 0; +} + +int i810_resctx(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_ctx_res_t res; + drm_ctx_t ctx; + int i; + + DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS); + if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res))) + return -EFAULT; + if (res.count >=3D DRM_RESERVED_CONTEXTS) { + memset(&ctx, 0, sizeof(ctx)); + for (i =3D 0; i < DRM_RESERVED_CONTEXTS; i++) { + ctx.handle =3D i; + if (copy_to_user(&res.contexts[i], + &i, + sizeof(i))) + return -EFAULT; + } + } + res.count =3D DRM_RESERVED_CONTEXTS; + if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res))) + return -EFAULT; + return 0; +} + +int i810_addctx(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_ctx_t ctx; + + if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx))) + return -EFAULT; + if ((ctx.handle =3D i810_alloc_queue(dev)) =3D=3D DRM_KERNEL_CONTEXT) { + /* Skip kernel's context and get a new one. */ + ctx.handle =3D i810_alloc_queue(dev); + } + if (ctx.handle =3D=3D -1) { + DRM_DEBUG("Not enough free contexts.\n"); + /* Should this return -EBUSY instead? 
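i810_addctx() above asks the context bitmap for the next free handle and deliberately skips DRM_KERNEL_CONTEXT (handle 0), retrying once so user space never receives the kernel's own context; -1 signals exhaustion. The same policy over a toy bitmap allocator (hypothetical fixed-size map, not the kernel's drm_ctxbitmap_next()):

```c
#define KERNEL_CTX 0
#define MAX_CTX    32

/* Take the lowest clear bit as a context handle, or -1 when full. */
static int alloc_ctx(unsigned int *bitmap)
{
    int h;
    for (h = 0; h < MAX_CTX; h++)
        if (!(*bitmap & (1u << h))) { *bitmap |= 1u << h; return h; }
    return -1;
}

/* Allocate a user-visible handle, never handing out KERNEL_CTX. */
static int addctx(unsigned int *bitmap)
{
    int h = alloc_ctx(bitmap);
    if (h == KERNEL_CTX)          /* skip the kernel's context ... */
        h = alloc_ctx(bitmap);    /* ... and take the next free one */
    return h;                     /* -1: "not enough free contexts" */
}
```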
*/ + return -ENOMEM; + } + DRM_DEBUG("%d\n", ctx.handle); + if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx))) + return -EFAULT; + return 0; +} + +int i810_modctx(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + /* This does nothing for the i810 */ + return 0; +} + +int i810_getctx(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_ctx_t ctx; + + if (copy_from_user(&ctx, (drm_ctx_t*)arg, sizeof(ctx))) + return -EFAULT; + /* This is 0, because we don't hanlde any context flags */ + ctx.flags =3D 0; + if (copy_to_user((drm_ctx_t*)arg, &ctx, sizeof(ctx))) + return -EFAULT; + return 0; +} + +int i810_switchctx(struct inode *inode, struct file *filp, unsigned int cm= d, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_ctx_t ctx; + + if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx))) + return -EFAULT; + DRM_DEBUG("%d\n", ctx.handle); + return i810_context_switch(dev, dev->last_context, ctx.handle); +} + +int i810_newctx(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_ctx_t ctx; + + if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx))) + return -EFAULT; + DRM_DEBUG("%d\n", ctx.handle); + i810_context_switch_complete(dev, ctx.handle); + + return 0; +} + +int i810_rmctx(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_ctx_t ctx; + + if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx))) + return -EFAULT; + DRM_DEBUG("%d\n", ctx.handle); + if(ctx.handle !=3D DRM_KERNEL_CONTEXT) { + drm_ctxbitmap_free(dev, ctx.handle); + } +=09 + return 0; +} diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_dma.c linux-2.4.13-lia/dri= vers/char/drm-4.0/i810_dma.c --- linux-2.4.13/drivers/char/drm-4.0/i810_dma.c Wed Dec 
31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_dma.c Thu Oct 4 00:21:40 20= 01 @@ -0,0 +1,1438 @@ +/* i810_dma.c -- DMA support for the i810 -*- linux-c -*- + * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + * + * Authors: Rickard E. 
(Rik) Faith + * Jeff Hartmann + * Keith Whitwell + * + */ + +#define __NO_VERSION__ +#include "drmP.h" +#include "i810_drv.h" +#include /* For task queue support */ + +/* in case we don't have a 2.3.99-pre6 kernel or later: */ +#ifndef VM_DONTCOPY +#define VM_DONTCOPY 0 +#endif + +#define I810_BUF_FREE 2 +#define I810_BUF_CLIENT 1 +#define I810_BUF_HARDWARE 0 + +#define I810_BUF_UNMAPPED 0 +#define I810_BUF_MAPPED 1 + +#define I810_REG(reg) 2 +#define I810_BASE(reg) ((unsigned long) \ + dev->maplist[I810_REG(reg)]->handle) +#define I810_ADDR(reg) (I810_BASE(reg) + reg) +#define I810_DEREF(reg) *(__volatile__ int *)I810_ADDR(reg) +#define I810_READ(reg) I810_DEREF(reg) +#define I810_WRITE(reg,val) do { I810_DEREF(reg) =3D val; } while (0) +#define I810_DEREF16(reg) *(__volatile__ u16 *)I810_ADDR(reg) +#define I810_READ16(reg) I810_DEREF16(reg) +#define I810_WRITE16(reg,val) do { I810_DEREF16(reg) =3D val; } while (0) + +#define RING_LOCALS unsigned int outring, ringmask; volatile char *virt; + +#define BEGIN_LP_RING(n) do { \ + if (I810_VERBOSE) \ + DRM_DEBUG("BEGIN_LP_RING(%d) in %s\n", \ + n, __FUNCTION__); \ + if (dev_priv->ring.space < n*4) \ + i810_wait_ring(dev, n*4); \ + dev_priv->ring.space -=3D n*4; \ + outring =3D dev_priv->ring.tail; \ + ringmask =3D dev_priv->ring.tail_mask; \ + virt =3D dev_priv->ring.virtual_start; \ +} while (0) + +#define ADVANCE_LP_RING() do { \ + if (I810_VERBOSE) DRM_DEBUG("ADVANCE_LP_RING\n"); \ + dev_priv->ring.tail =3D outring; \ + I810_WRITE(LP_RING + RING_TAIL, outring); \ +} while(0) + +#define OUT_RING(n) do { \ + if (I810_VERBOSE) DRM_DEBUG(" OUT_RING %x\n", (int)(n)); \ + *(volatile unsigned int *)(virt + outring) =3D n; \ + outring +=3D 4; \ + outring &=3D ringmask; \ +} while (0); + +static inline void i810_print_status_page(drm_device_t *dev) +{ + drm_device_dma_t *dma =3D dev->dma; + drm_i810_private_t *dev_priv =3D dev->dev_private; + u32 *temp =3D (u32 *)dev_priv->hw_status_page; + int i; + + DRM_DEBUG( "hw_status: 
Interrupt Status : %x\n", temp[0]); + DRM_DEBUG( "hw_status: LpRing Head ptr : %x\n", temp[1]); + DRM_DEBUG( "hw_status: IRing Head ptr : %x\n", temp[2]); + DRM_DEBUG( "hw_status: Reserved : %x\n", temp[3]); + DRM_DEBUG( "hw_status: Driver Counter : %d\n", temp[5]); + for(i =3D 6; i < dma->buf_count + 6; i++) { + DRM_DEBUG( "buffer status idx : %d used: %d\n", i - 6, temp[i]); + } +} + +static drm_buf_t *i810_freelist_get(drm_device_t *dev) +{ + drm_device_dma_t *dma =3D dev->dma; + int i; + int used; + =20 + /* Linear search might not be the best solution */ + + for (i =3D 0; i < dma->buf_count; i++) { + drm_buf_t *buf =3D dma->buflist[ i ]; + drm_i810_buf_priv_t *buf_priv =3D buf->dev_private; + /* In use is already a pointer */ + used =3D cmpxchg(buf_priv->in_use, I810_BUF_FREE,=20 + I810_BUF_CLIENT); + if(used =3D I810_BUF_FREE) { + return buf; + } + } + return NULL; +} + +/* This should only be called if the buffer is not sent to the hardware + * yet, the hardware updates in use for us once its on the ring buffer. 
+ */ + +static int i810_freelist_put(drm_device_t *dev, drm_buf_t *buf) +{ + drm_i810_buf_priv_t *buf_priv =3D buf->dev_private; + int used; + =20 + /* In use is already a pointer */ + used =3D cmpxchg(buf_priv->in_use, I810_BUF_CLIENT, I810_BUF_FREE); + if(used !=3D I810_BUF_CLIENT) { + DRM_ERROR("Freeing buffer thats not in use : %d\n", buf->idx); + return -EINVAL; + } + =20 + return 0; +} + +static struct file_operations i810_buffer_fops =3D { + open: i810_open, + flush: drm_flush, + release: i810_release, + ioctl: i810_ioctl, + mmap: i810_mmap_buffers, + read: drm_read, + fasync: drm_fasync, + poll: drm_poll, +}; + +int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev; + drm_i810_private_t *dev_priv; + drm_buf_t *buf; + drm_i810_buf_priv_t *buf_priv; + + lock_kernel(); + dev =3D priv->dev; + dev_priv =3D dev->dev_private; + buf =3D dev_priv->mmap_buffer; + buf_priv =3D buf->dev_private; + =20 + vma->vm_flags |=3D (VM_IO | VM_DONTCOPY); + vma->vm_file =3D filp; + =20 + buf_priv->currently_mapped =3D I810_BUF_MAPPED; + unlock_kernel(); + + if (remap_page_range(vma->vm_start, + VM_OFFSET(vma), + vma->vm_end - vma->vm_start, + vma->vm_page_prot)) return -EAGAIN; + return 0; +} + +static int i810_map_buffer(drm_buf_t *buf, struct file *filp) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_i810_buf_priv_t *buf_priv =3D buf->dev_private; + drm_i810_private_t *dev_priv =3D dev->dev_private; + struct file_operations *old_fops; + int retcode =3D 0; + + if(buf_priv->currently_mapped =3D I810_BUF_MAPPED) return -EINVAL; + + if(VM_DONTCOPY !=3D 0) { + down_write(¤t->mm->mmap_sem); + old_fops =3D filp->f_op; + filp->f_op =3D &i810_buffer_fops; + dev_priv->mmap_buffer =3D buf; + buf_priv->virtual =3D (void *)do_mmap(filp, 0, buf->total,=20 + PROT_READ|PROT_WRITE, + MAP_SHARED,=20 + buf->bus_address); + dev_priv->mmap_buffer =3D NULL; + filp->f_op =3D 
old_fops; + if ((unsigned long)buf_priv->virtual > -1024UL) { + /* Real error */ + DRM_DEBUG("mmap error\n"); + retcode =3D (signed int)buf_priv->virtual; + buf_priv->virtual =3D 0; + } + up_write(¤t->mm->mmap_sem); + } else { + buf_priv->virtual =3D buf_priv->kernel_virtual; + buf_priv->currently_mapped =3D I810_BUF_MAPPED; + } + return retcode; +} + +static int i810_unmap_buffer(drm_buf_t *buf) +{ + drm_i810_buf_priv_t *buf_priv =3D buf->dev_private; + int retcode =3D 0; + + if(VM_DONTCOPY !=3D 0) { + if(buf_priv->currently_mapped !=3D I810_BUF_MAPPED)=20 + return -EINVAL; + down_write(¤t->mm->mmap_sem); +#if LINUX_VERSION_CODE < 0x020399 + retcode =3D do_munmap((unsigned long)buf_priv->virtual,=20 + (size_t) buf->total); +#else + retcode =3D do_munmap(current->mm,=20 + (unsigned long)buf_priv->virtual,=20 + (size_t) buf->total); +#endif + up_write(¤t->mm->mmap_sem); + } + buf_priv->currently_mapped =3D I810_BUF_UNMAPPED; + buf_priv->virtual =3D 0; + + return retcode; +} + +static int i810_dma_get_buffer(drm_device_t *dev, drm_i810_dma_t *d,=20 + struct file *filp) +{ + drm_file_t *priv =3D filp->private_data; + drm_buf_t *buf; + drm_i810_buf_priv_t *buf_priv; + int retcode =3D 0; + + buf =3D i810_freelist_get(dev); + if (!buf) { + retcode =3D -ENOMEM; + DRM_DEBUG("retcode=3D%d\n", retcode); + return retcode; + } + =20 + retcode =3D i810_map_buffer(buf, filp); + if(retcode) { + i810_freelist_put(dev, buf); + DRM_DEBUG("mapbuf failed, retcode %d\n", retcode); + return retcode; + } + buf->pid =3D priv->pid; + buf_priv =3D buf->dev_private;=09 + d->granted =3D 1; + d->request_idx =3D buf->idx; + d->request_size =3D buf->total; + d->virtual =3D buf_priv->virtual; + + return retcode; +} + +static unsigned long i810_alloc_page(drm_device_t *dev) +{ + unsigned long address; + =20 + address =3D __get_free_page(GFP_KERNEL); + if(address =3D 0UL)=20 + return 0; +=09 + atomic_inc(&virt_to_page(address)->count); + set_bit(PG_locked, &virt_to_page(address)->flags); + =20 + 
return address; +} + +static void i810_free_page(drm_device_t *dev, unsigned long page) +{ + if(page =3D 0UL)=20 + return; +=09 + atomic_dec(&virt_to_page(page)->count); + clear_bit(PG_locked, &virt_to_page(page)->flags); + wake_up(&virt_to_page(page)->wait); + free_page(page); + return; +} + +static int i810_dma_cleanup(drm_device_t *dev) +{ + drm_device_dma_t *dma =3D dev->dma; + + if(dev->dev_private) { + int i; + drm_i810_private_t *dev_priv =3D=20 + (drm_i810_private_t *) dev->dev_private; + =20 + if(dev_priv->ring.virtual_start) { + drm_ioremapfree((void *) dev_priv->ring.virtual_start, + dev_priv->ring.Size, dev); + } + if(dev_priv->hw_status_page !=3D 0UL) { + i810_free_page(dev, dev_priv->hw_status_page); + /* Need to rewrite hardware status page */ + I810_WRITE(0x02080, 0x1ffff000); + } + drm_free(dev->dev_private, sizeof(drm_i810_private_t),=20 + DRM_MEM_DRIVER); + dev->dev_private =3D NULL; + + for (i =3D 0; i < dma->buf_count; i++) { + drm_buf_t *buf =3D dma->buflist[ i ]; + drm_i810_buf_priv_t *buf_priv =3D buf->dev_private; + drm_ioremapfree(buf_priv->kernel_virtual, + buf->total, dev); + } + } + return 0; +} + +static int i810_wait_ring(drm_device_t *dev, int n) +{ + drm_i810_private_t *dev_priv =3D dev->dev_private; + drm_i810_ring_buffer_t *ring =3D &(dev_priv->ring); + int iters =3D 0; + unsigned long end; + unsigned int last_head =3D I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR; + + end =3D jiffies + (HZ*3); + while (ring->space < n) { + int i; +=09 + ring->head =3D I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR; + ring->space =3D ring->head - (ring->tail+8); + if (ring->space < 0) ring->space +=3D ring->Size; + =20 + if (ring->head !=3D last_head) + end =3D jiffies + (HZ*3); + =20 + iters++; + if((signed)(end - jiffies) <=3D 0) { + DRM_ERROR("space: %d wanted %d\n", ring->space, n); + DRM_ERROR("lockup\n"); + goto out_wait_ring; + } + + for (i =3D 0 ; i < 2000 ; i++) ; + } + +out_wait_ring: =20 + return iters; +} + +static void 
i810_kernel_lost_context(drm_device_t *dev) +{ + drm_i810_private_t *dev_priv =3D dev->dev_private; + drm_i810_ring_buffer_t *ring =3D &(dev_priv->ring); + =20 + ring->head =3D I810_READ(LP_RING + RING_HEAD) & HEAD_ADDR; + ring->tail =3D I810_READ(LP_RING + RING_TAIL); + ring->space =3D ring->head - (ring->tail+8); + if (ring->space < 0) ring->space +=3D ring->Size; +} + +static int i810_freelist_init(drm_device_t *dev) +{ + drm_device_dma_t *dma =3D dev->dma; + drm_i810_private_t *dev_priv =3D (drm_i810_private_t *)dev->dev_privat= e; + int my_idx =3D 24; + u32 *hw_status =3D (u32 *)(dev_priv->hw_status_page + my_idx); + int i; + =20 + if(dma->buf_count > 1019) { + /* Not enough space in the status page for the freelist */ + return -EINVAL; + } + + for (i =3D 0; i < dma->buf_count; i++) { + drm_buf_t *buf =3D dma->buflist[ i ]; + drm_i810_buf_priv_t *buf_priv =3D buf->dev_private; + =20 + buf_priv->in_use =3D hw_status++; + buf_priv->my_use_idx =3D my_idx; + my_idx +=3D 4; + + *buf_priv->in_use =3D I810_BUF_FREE; + + buf_priv->kernel_virtual =3D drm_ioremap(buf->bus_address,=20 + buf->total, dev); + } + return 0; +} + +static int i810_dma_initialize(drm_device_t *dev,=20 + drm_i810_private_t *dev_priv, + drm_i810_init_t *init) +{ + drm_map_t *sarea_map; + + dev->dev_private =3D (void *) dev_priv; + memset(dev_priv, 0, sizeof(drm_i810_private_t)); + + if (init->ring_map_idx >=3D dev->map_count || + init->buffer_map_idx >=3D dev->map_count) { + i810_dma_cleanup(dev); + DRM_ERROR("ring_map or buffer_map are invalid\n"); + return -EINVAL; + } + =20 + dev_priv->ring_map_idx =3D init->ring_map_idx; + dev_priv->buffer_map_idx =3D init->buffer_map_idx; + sarea_map =3D dev->maplist[0]; + dev_priv->sarea_priv =3D (drm_i810_sarea_t *)=20 + ((u8 *)sarea_map->handle +=20 + init->sarea_priv_offset); + + atomic_set(&dev_priv->flush_done, 0); + init_waitqueue_head(&dev_priv->flush_queue); + =09 + dev_priv->ring.Start =3D init->ring_start; + dev_priv->ring.End =3D init->ring_end; 
+ dev_priv->ring.Size =3D init->ring_size; + + dev_priv->ring.virtual_start =3D drm_ioremap(dev->agp->base +=20 + init->ring_start,=20 + init->ring_size, dev); + + dev_priv->ring.tail_mask =3D dev_priv->ring.Size - 1; + =20 + if (dev_priv->ring.virtual_start =3D NULL) { + i810_dma_cleanup(dev); + DRM_ERROR("can not ioremap virtual address for" + " ring buffer\n"); + return -ENOMEM; + } + + dev_priv->w =3D init->w; + dev_priv->h =3D init->h; + dev_priv->pitch =3D init->pitch; + dev_priv->back_offset =3D init->back_offset; + dev_priv->depth_offset =3D init->depth_offset; + + dev_priv->front_di1 =3D init->front_offset | init->pitch_bits; + dev_priv->back_di1 =3D init->back_offset | init->pitch_bits; + dev_priv->zi1 =3D init->depth_offset | init->pitch_bits; +=09 + =20 + /* Program Hardware Status Page */ + dev_priv->hw_status_page =3D i810_alloc_page(dev); + memset((void *) dev_priv->hw_status_page, 0, PAGE_SIZE); + if(dev_priv->hw_status_page =3D 0UL) { + i810_dma_cleanup(dev); + DRM_ERROR("Can not allocate hardware status page\n"); + return -ENOMEM; + } + DRM_DEBUG("hw status page @ %lx\n", dev_priv->hw_status_page); + =20 + I810_WRITE(0x02080, virt_to_bus((void *)dev_priv->hw_status_page)); + DRM_DEBUG("Enabled hardware status page\n"); + =20 + /* Now we need to init our freelist */ + if(i810_freelist_init(dev) !=3D 0) { + i810_dma_cleanup(dev); + DRM_ERROR("Not enough space in the status page for" + " the freelist\n"); + return -ENOMEM; + } + return 0; +} + +int i810_dma_init(struct inode *inode, struct file *filp, + unsigned int cmd, unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_i810_private_t *dev_priv; + drm_i810_init_t init; + int retcode =3D 0; +=09 + if (copy_from_user(&init, (drm_i810_init_t *)arg, sizeof(init))) + return -EFAULT; +=09 + switch(init.func) { + case I810_INIT_DMA: + dev_priv =3D drm_alloc(sizeof(drm_i810_private_t),=20 + DRM_MEM_DRIVER); + if(dev_priv =3D NULL) return -ENOMEM; + 
retcode =3D i810_dma_initialize(dev, dev_priv, &init); + break; + case I810_CLEANUP_DMA: + retcode =3D i810_dma_cleanup(dev); + break; + default: + retcode =3D -EINVAL; + break; + } + =20 + return retcode; +} + + + +/* Most efficient way to verify state for the i810 is as it is + * emitted. Non-conformant state is silently dropped. + * + * Use 'volatile' & local var tmp to force the emitted values to be + * identical to the verified ones. + */ +static void i810EmitContextVerified( drm_device_t *dev,=20 + volatile unsigned int *code )=20 +{=09 + drm_i810_private_t *dev_priv =3D dev->dev_private; + int i, j =3D 0; + unsigned int tmp; + RING_LOCALS; + + BEGIN_LP_RING( I810_CTX_SETUP_SIZE ); + + OUT_RING( GFX_OP_COLOR_FACTOR ); + OUT_RING( code[I810_CTXREG_CF1] ); + + OUT_RING( GFX_OP_STIPPLE ); + OUT_RING( code[I810_CTXREG_ST1] ); + + for ( i =3D 4 ; i < I810_CTX_SETUP_SIZE ; i++ ) { + tmp =3D code[i]; + + if ((tmp & (7<<29)) =3D (3<<29) && + (tmp & (0x1f<<24)) < (0x1d<<24))=20 + { + OUT_RING( tmp );=20 + j++; + }=20 + } + + if (j & 1)=20 + OUT_RING( 0 );=20 + + ADVANCE_LP_RING(); +} + +static void i810EmitTexVerified( drm_device_t *dev,=20 + volatile unsigned int *code )=20 +{=09 + drm_i810_private_t *dev_priv =3D dev->dev_private; + int i, j =3D 0; + unsigned int tmp; + RING_LOCALS; + + BEGIN_LP_RING( I810_TEX_SETUP_SIZE ); + + OUT_RING( GFX_OP_MAP_INFO ); + OUT_RING( code[I810_TEXREG_MI1] ); + OUT_RING( code[I810_TEXREG_MI2] ); + OUT_RING( code[I810_TEXREG_MI3] ); + + for ( i =3D 4 ; i < I810_TEX_SETUP_SIZE ; i++ ) { + tmp =3D code[i]; + + if ((tmp & (7<<29)) =3D (3<<29) && + (tmp & (0x1f<<24)) < (0x1d<<24))=20 + { + OUT_RING( tmp );=20 + j++; + } + }=20 + =09 + if (j & 1)=20 + OUT_RING( 0 );=20 + + ADVANCE_LP_RING(); +} + + +/* Need to do some additional checking when setting the dest buffer. 
+ */ +static void i810EmitDestVerified( drm_device_t *dev,=20 + volatile unsigned int *code )=20 +{=09 + drm_i810_private_t *dev_priv =3D dev->dev_private; + unsigned int tmp; + RING_LOCALS; + + BEGIN_LP_RING( I810_DEST_SETUP_SIZE + 2 ); + + tmp =3D code[I810_DESTREG_DI1]; + if (tmp =3D dev_priv->front_di1 || tmp =3D dev_priv->back_di1) { + OUT_RING( CMD_OP_DESTBUFFER_INFO ); + OUT_RING( tmp ); + } else + DRM_DEBUG("bad di1 %x (allow %x or %x)\n", + tmp, dev_priv->front_di1, dev_priv->back_di1); + + /* invarient: + */ + OUT_RING( CMD_OP_Z_BUFFER_INFO ); + OUT_RING( dev_priv->zi1 ); + + OUT_RING( GFX_OP_DESTBUFFER_VARS ); + OUT_RING( code[I810_DESTREG_DV1] ); + + OUT_RING( GFX_OP_DRAWRECT_INFO ); + OUT_RING( code[I810_DESTREG_DR1] ); + OUT_RING( code[I810_DESTREG_DR2] ); + OUT_RING( code[I810_DESTREG_DR3] ); + OUT_RING( code[I810_DESTREG_DR4] ); + OUT_RING( 0 ); + + ADVANCE_LP_RING(); +} + + + +static void i810EmitState( drm_device_t *dev ) +{ + drm_i810_private_t *dev_priv =3D dev->dev_private; + drm_i810_sarea_t *sarea_priv =3D dev_priv->sarea_priv; + unsigned int dirty =3D sarea_priv->dirty; + + if (dirty & I810_UPLOAD_BUFFERS) { + i810EmitDestVerified( dev, sarea_priv->BufferState ); + sarea_priv->dirty &=3D ~I810_UPLOAD_BUFFERS; + } + + if (dirty & I810_UPLOAD_CTX) { + i810EmitContextVerified( dev, sarea_priv->ContextState ); + sarea_priv->dirty &=3D ~I810_UPLOAD_CTX; + } + + if (dirty & I810_UPLOAD_TEX0) { + i810EmitTexVerified( dev, sarea_priv->TexState[0] ); + sarea_priv->dirty &=3D ~I810_UPLOAD_TEX0; + } + + if (dirty & I810_UPLOAD_TEX1) { + i810EmitTexVerified( dev, sarea_priv->TexState[1] ); + sarea_priv->dirty &=3D ~I810_UPLOAD_TEX1; + } +} + + + +/* need to verify=20 + */ +static void i810_dma_dispatch_clear( drm_device_t *dev, int flags,=20 + unsigned int clear_color, + unsigned int clear_zval ) +{ + drm_i810_private_t *dev_priv =3D dev->dev_private; + drm_i810_sarea_t *sarea_priv =3D dev_priv->sarea_priv; + int nbox =3D sarea_priv->nbox; + 
drm_clip_rect_t *pbox =3D sarea_priv->boxes; + int pitch =3D dev_priv->pitch; + int cpp =3D 2; + int i; + RING_LOCALS; + + i810_kernel_lost_context(dev); + + if (nbox > I810_NR_SAREA_CLIPRECTS) + nbox =3D I810_NR_SAREA_CLIPRECTS; + + for (i =3D 0 ; i < nbox ; i++, pbox++) { + unsigned int x =3D pbox->x1; + unsigned int y =3D pbox->y1; + unsigned int width =3D (pbox->x2 - x) * cpp; + unsigned int height =3D pbox->y2 - y; + unsigned int start =3D y * pitch + x * cpp; + + if (pbox->x1 > pbox->x2 || + pbox->y1 > pbox->y2 || + pbox->x2 > dev_priv->w || + pbox->y2 > dev_priv->h) + continue; + + if ( flags & I810_FRONT ) { =20 + DRM_DEBUG("clear front\n"); + BEGIN_LP_RING( 6 ); =20 + OUT_RING( BR00_BITBLT_CLIENT |=20 + BR00_OP_COLOR_BLT | 0x3 ); + OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch ); + OUT_RING( (height << 16) | width ); + OUT_RING( start ); + OUT_RING( clear_color ); + OUT_RING( 0 ); + ADVANCE_LP_RING(); + } + + if ( flags & I810_BACK ) { + DRM_DEBUG("clear back\n"); + BEGIN_LP_RING( 6 ); =20 + OUT_RING( BR00_BITBLT_CLIENT |=20 + BR00_OP_COLOR_BLT | 0x3 ); + OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch ); + OUT_RING( (height << 16) | width ); + OUT_RING( dev_priv->back_offset + start ); + OUT_RING( clear_color ); + OUT_RING( 0 ); + ADVANCE_LP_RING(); + } + + if ( flags & I810_DEPTH ) { + DRM_DEBUG("clear depth\n"); + BEGIN_LP_RING( 6 ); =20 + OUT_RING( BR00_BITBLT_CLIENT |=20 + BR00_OP_COLOR_BLT | 0x3 ); + OUT_RING( BR13_SOLID_PATTERN | (0xF0 << 16) | pitch ); + OUT_RING( (height << 16) | width ); + OUT_RING( dev_priv->depth_offset + start ); + OUT_RING( clear_zval ); + OUT_RING( 0 ); + ADVANCE_LP_RING(); + } + } +} + +static void i810_dma_dispatch_swap( drm_device_t *dev ) +{ + drm_i810_private_t *dev_priv =3D dev->dev_private; + drm_i810_sarea_t *sarea_priv =3D dev_priv->sarea_priv; + int nbox =3D sarea_priv->nbox; + drm_clip_rect_t *pbox =3D sarea_priv->boxes; + int pitch =3D dev_priv->pitch; + int cpp =3D 2; + int ofs =3D 
dev_priv->back_offset; + int i; + RING_LOCALS; + + DRM_DEBUG("swapbuffers\n"); + + i810_kernel_lost_context(dev); + + if (nbox > I810_NR_SAREA_CLIPRECTS) + nbox =3D I810_NR_SAREA_CLIPRECTS; + + for (i =3D 0 ; i < nbox; i++, pbox++)=20 + { + unsigned int w =3D pbox->x2 - pbox->x1; + unsigned int h =3D pbox->y2 - pbox->y1; + unsigned int dst =3D pbox->x1*cpp + pbox->y1*pitch; + unsigned int start =3D ofs + dst; + + if (pbox->x1 > pbox->x2 || + pbox->y1 > pbox->y2 || + pbox->x2 > dev_priv->w || + pbox->y2 > dev_priv->h) + continue; +=20 + DRM_DEBUG("dispatch swap %d,%d-%d,%d!\n", + pbox[i].x1, pbox[i].y1, + pbox[i].x2, pbox[i].y2); + + BEGIN_LP_RING( 6 ); + OUT_RING( BR00_BITBLT_CLIENT | BR00_OP_SRC_COPY_BLT | 0x4 ); + OUT_RING( pitch | (0xCC << 16)); + OUT_RING( (h << 16) | (w * cpp)); + OUT_RING( dst ); + OUT_RING( pitch );=09 + OUT_RING( start ); + ADVANCE_LP_RING(); + } +} + + +static void i810_dma_dispatch_vertex(drm_device_t *dev,=20 + drm_buf_t *buf, + int discard, + int used) +{ + drm_i810_private_t *dev_priv =3D dev->dev_private; + drm_i810_buf_priv_t *buf_priv =3D buf->dev_private; + drm_i810_sarea_t *sarea_priv =3D dev_priv->sarea_priv; + drm_clip_rect_t *box =3D sarea_priv->boxes; + int nbox =3D sarea_priv->nbox; + unsigned long address =3D (unsigned long)buf->bus_address; + unsigned long start =3D address - dev->agp->base; =20 + int i =3D 0, u; + RING_LOCALS; + + i810_kernel_lost_context(dev); + + if (nbox > I810_NR_SAREA_CLIPRECTS)=20 + nbox =3D I810_NR_SAREA_CLIPRECTS; + + if (discard) { + u =3D cmpxchg(buf_priv->in_use, I810_BUF_CLIENT,=20 + I810_BUF_HARDWARE); + if(u !=3D I810_BUF_CLIENT) { + DRM_DEBUG("xxxx 2\n"); + } + } + + if (used > 4*1024)=20 + used =3D 0; + + if (sarea_priv->dirty) + i810EmitState( dev ); + + DRM_DEBUG("dispatch vertex addr 0x%lx, used 0x%x nbox %d\n",=20 + address, used, nbox); + + dev_priv->counter++; + DRM_DEBUG( "dispatch counter : %ld\n", dev_priv->counter); + DRM_DEBUG( "i810_dma_dispatch\n"); + DRM_DEBUG( "start : 
%lx\n", start); + DRM_DEBUG( "used : %d\n", used); + DRM_DEBUG( "start + used - 4 : %ld\n", start + used - 4); + + if (buf_priv->currently_mapped =3D I810_BUF_MAPPED) { + *(u32 *)buf_priv->virtual =3D (GFX_OP_PRIMITIVE | + sarea_priv->vertex_prim | + ((used/4)-2)); + =09 + if (used & 4) { + *(u32 *)((u32)buf_priv->virtual + used) =3D 0; + used +=3D 4; + } + + i810_unmap_buffer(buf); + } + =20 + if (used) { + do { + if (i < nbox) { + BEGIN_LP_RING(4); + OUT_RING( GFX_OP_SCISSOR | SC_UPDATE_SCISSOR |=20 + SC_ENABLE ); + OUT_RING( GFX_OP_SCISSOR_INFO ); + OUT_RING( box[i].x1 | (box[i].y1<<16) ); + OUT_RING( (box[i].x2-1) | ((box[i].y2-1)<<16) ); + ADVANCE_LP_RING(); + } + =09 + BEGIN_LP_RING(4); + OUT_RING( CMD_OP_BATCH_BUFFER ); + OUT_RING( start | BB1_PROTECTED ); + OUT_RING( start + used - 4 ); + OUT_RING( 0 ); + ADVANCE_LP_RING(); + =09 + } while (++i < nbox); + } + + BEGIN_LP_RING(10); + OUT_RING( CMD_STORE_DWORD_IDX ); + OUT_RING( 20 ); + OUT_RING( dev_priv->counter ); + OUT_RING( 0 ); + + if (discard) { + OUT_RING( CMD_STORE_DWORD_IDX ); + OUT_RING( buf_priv->my_use_idx ); + OUT_RING( I810_BUF_FREE ); + OUT_RING( 0 ); + } + + OUT_RING( CMD_REPORT_HEAD ); + OUT_RING( 0 ); + ADVANCE_LP_RING(); +} + + +/* Interrupts are only for flushing */ +static void i810_dma_service(int irq, void *device, struct pt_regs *regs) +{ + drm_device_t *dev =3D (drm_device_t *)device; + u16 temp; + =20 + atomic_inc(&dev->total_irq); + temp =3D I810_READ16(I810REG_INT_IDENTITY_R); + temp =3D temp & ~(0x6000); + if(temp !=3D 0) I810_WRITE16(I810REG_INT_IDENTITY_R,=20 + temp); /* Clear all interrupts */ + else + return; +=20 + queue_task(&dev->tq, &tq_immediate); + mark_bh(IMMEDIATE_BH); +} + +static void i810_dma_task_queue(void *device) +{ + drm_device_t *dev =3D (drm_device_t *) device; + drm_i810_private_t *dev_priv =3D (drm_i810_private_t *)dev->dev_pri= vate; + + atomic_set(&dev_priv->flush_done, 1); + wake_up_interruptible(&dev_priv->flush_queue); +} + +int 
i810_irq_install(drm_device_t *dev, int irq) +{ + int retcode; + u16 temp; + =20 + if (!irq) return -EINVAL; +=09 + down(&dev->struct_sem); + if (dev->irq) { + up(&dev->struct_sem); + return -EBUSY; + } + dev->irq =3D irq; + up(&dev->struct_sem); +=09 + DRM_DEBUG( "Interrupt Install : %d\n", irq); + DRM_DEBUG("%d\n", irq); + + dev->context_flag =3D 0; + dev->interrupt_flag =3D 0; + dev->dma_flag =3D 0; +=09 + dev->dma->next_buffer =3D NULL; + dev->dma->next_queue =3D NULL; + dev->dma->this_buffer =3D NULL; + + INIT_LIST_HEAD(&dev->tq.list); + dev->tq.sync =3D 0; + dev->tq.routine =3D i810_dma_task_queue; + dev->tq.data =3D dev; + + /* Before installing handler */ + temp =3D I810_READ16(I810REG_HWSTAM); + temp =3D temp & 0x6000; + I810_WRITE16(I810REG_HWSTAM, temp); + =09 + temp =3D I810_READ16(I810REG_INT_MASK_R); + temp =3D temp & 0x6000; + I810_WRITE16(I810REG_INT_MASK_R, temp); /* Unmask interrupts */ + temp =3D I810_READ16(I810REG_INT_ENABLE_R); + temp =3D temp & 0x6000; + I810_WRITE16(I810REG_INT_ENABLE_R, temp); /* Disable all interrupts= */ + + /* Install handler */ + if ((retcode =3D request_irq(dev->irq, + i810_dma_service, + SA_SHIRQ, + dev->devname, + dev))) { + down(&dev->struct_sem); + dev->irq =3D 0; + up(&dev->struct_sem); + return retcode; + } + temp =3D I810_READ16(I810REG_INT_ENABLE_R); + temp =3D temp & 0x6000; + temp =3D temp | 0x0003; + I810_WRITE16(I810REG_INT_ENABLE_R,=20 + temp); /* Enable bp & user interrupts */ + return 0; +} + +int i810_irq_uninstall(drm_device_t *dev) +{ + int irq; + u16 temp; + + +/* return 0; */ + + down(&dev->struct_sem); + irq =3D dev->irq; + dev->irq =3D 0; + up(&dev->struct_sem); +=09 + if (!irq) return -EINVAL; + + DRM_DEBUG( "Interrupt UnInstall: %d\n", irq);=09 + DRM_DEBUG("%d\n", irq); + =20 + temp =3D I810_READ16(I810REG_INT_IDENTITY_R); + temp =3D temp & ~(0x6000); + if(temp !=3D 0) I810_WRITE16(I810REG_INT_IDENTITY_R,=20 + temp); /* Clear all interrupts */ + =20 + temp =3D I810_READ16(I810REG_INT_ENABLE_R); 
+ temp =3D temp & 0x6000; + I810_WRITE16(I810REG_INT_ENABLE_R,=20 + temp); /* Disable all interrupts */ + + free_irq(irq, dev); + + return 0; +} + +int i810_control(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_control_t ctl; + int retcode; + =20 + DRM_DEBUG( "i810_control\n"); + + if (copy_from_user(&ctl, (drm_control_t *)arg, sizeof(ctl))) + return -EFAULT; +=09 + switch (ctl.func) { + case DRM_INST_HANDLER: + if ((retcode =3D i810_irq_install(dev, ctl.irq))) + return retcode; + break; + case DRM_UNINST_HANDLER: + if ((retcode =3D i810_irq_uninstall(dev))) + return retcode; + break; + default: + return -EINVAL; + } + return 0; +} + +static inline void i810_dma_emit_flush(drm_device_t *dev) +{ + drm_i810_private_t *dev_priv =3D dev->dev_private; + RING_LOCALS; + + i810_kernel_lost_context(dev); + + BEGIN_LP_RING(2); + OUT_RING( CMD_REPORT_HEAD ); + OUT_RING( GFX_OP_USER_INTERRUPT ); + ADVANCE_LP_RING(); + +/* i810_wait_ring( dev, dev_priv->ring.Size - 8 ); */ +/* atomic_set(&dev_priv->flush_done, 1); */ +/* wake_up_interruptible(&dev_priv->flush_queue); */ +} + +static inline void i810_dma_quiescent_emit(drm_device_t *dev) +{ + drm_i810_private_t *dev_priv =3D dev->dev_private; + RING_LOCALS; + + i810_kernel_lost_context(dev); + + BEGIN_LP_RING(4); + OUT_RING( INST_PARSER_CLIENT | INST_OP_FLUSH | INST_FLUSH_MAP_CACHE ); + OUT_RING( CMD_REPORT_HEAD ); + OUT_RING( 0 ); + OUT_RING( GFX_OP_USER_INTERRUPT ); + ADVANCE_LP_RING(); + +/* i810_wait_ring( dev, dev_priv->ring.Size - 8 ); */ +/* atomic_set(&dev_priv->flush_done, 1); */ +/* wake_up_interruptible(&dev_priv->flush_queue); */ +} + +static void i810_dma_quiescent(drm_device_t *dev) +{ + DECLARE_WAITQUEUE(entry, current); + drm_i810_private_t *dev_priv =3D (drm_i810_private_t *)dev->dev_private; + unsigned long end; =20 + + if(dev_priv =3D NULL) { + return; + } + atomic_set(&dev_priv->flush_done, 
0); + add_wait_queue(&dev_priv->flush_queue, &entry); + end =3D jiffies + (HZ*3); + =20 + for (;;) { + current->state =3D TASK_INTERRUPTIBLE; + i810_dma_quiescent_emit(dev); + if (atomic_read(&dev_priv->flush_done) =3D 1) break; + if((signed)(end - jiffies) <=3D 0) { + DRM_ERROR("lockup\n"); + break; + } =20 + schedule_timeout(HZ*3); + if (signal_pending(current)) { + break; + } + } + =20 + current->state =3D TASK_RUNNING; + remove_wait_queue(&dev_priv->flush_queue, &entry); + =20 + return; +} + +static int i810_flush_queue(drm_device_t *dev) +{ + DECLARE_WAITQUEUE(entry, current); + drm_i810_private_t *dev_priv =3D (drm_i810_private_t *)dev->dev_private; + drm_device_dma_t *dma =3D dev->dma; + unsigned long end; + int i, ret =3D 0; =20 + + if(dev_priv =3D NULL) { + return 0; + } + atomic_set(&dev_priv->flush_done, 0); + add_wait_queue(&dev_priv->flush_queue, &entry); + end =3D jiffies + (HZ*3); + for (;;) { + current->state =3D TASK_INTERRUPTIBLE; + i810_dma_emit_flush(dev); + if (atomic_read(&dev_priv->flush_done) =3D 1) break; + if((signed)(end - jiffies) <=3D 0) { + DRM_ERROR("lockup\n"); + break; + } =20 + schedule_timeout(HZ*3); + if (signal_pending(current)) { + ret =3D -EINTR; /* Can't restart */ + break; + } + } + =20 + current->state =3D TASK_RUNNING; + remove_wait_queue(&dev_priv->flush_queue, &entry); + + + for (i =3D 0; i < dma->buf_count; i++) { + drm_buf_t *buf =3D dma->buflist[ i ]; + drm_i810_buf_priv_t *buf_priv =3D buf->dev_private; + =20 + int used =3D cmpxchg(buf_priv->in_use, I810_BUF_HARDWARE,=20 + I810_BUF_FREE); + + if (used =3D I810_BUF_HARDWARE) + DRM_DEBUG("reclaimed from HARDWARE\n"); + if (used =3D I810_BUF_CLIENT) + DRM_DEBUG("still on client HARDWARE\n"); + } + + return ret; +} + +/* Must be called with the lock held */ +void i810_reclaim_buffers(drm_device_t *dev, pid_t pid) +{ + drm_device_dma_t *dma =3D dev->dma; + int i; + + if (!dma) return; + if (!dev->dev_private) return; + if (!dma->buflist) return; + + i810_flush_queue(dev); 
+
+	for (i = 0; i < dma->buf_count; i++) {
+		drm_buf_t *buf = dma->buflist[ i ];
+		drm_i810_buf_priv_t *buf_priv = buf->dev_private;
+
+		if (buf->pid == pid && buf_priv) {
+			int used = cmpxchg(buf_priv->in_use, I810_BUF_CLIENT,
+					   I810_BUF_FREE);
+
+			if (used == I810_BUF_CLIENT)
+				DRM_DEBUG("reclaimed from client\n");
+			if(buf_priv->currently_mapped == I810_BUF_MAPPED)
+				buf_priv->currently_mapped = I810_BUF_UNMAPPED;
+		}
+	}
+}
+
+int i810_lock(struct inode *inode, struct file *filp, unsigned int cmd,
+	      unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+
+	DECLARE_WAITQUEUE(entry, current);
+	int ret = 0;
+	drm_lock_t lock;
+
+	if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+		return -EFAULT;
+
+	if (lock.context == DRM_KERNEL_CONTEXT) {
+		DRM_ERROR("Process %d using kernel context %d\n",
+			  current->pid, lock.context);
+		return -EINVAL;
+	}
+
+	DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+		  lock.context, current->pid, dev->lock.hw_lock->lock,
+		  lock.flags);
+
+	if (lock.context < 0) {
+		return -EINVAL;
+	}
+	/* Only one queue:
+	 */
+
+	if (!ret) {
+		add_wait_queue(&dev->lock.lock_queue, &entry);
+		for (;;) {
+			current->state = TASK_INTERRUPTIBLE;
+			if (!dev->lock.hw_lock) {
+				/* Device has been unregistered */
+				ret = -EINTR;
+				break;
+			}
+			if (drm_lock_take(&dev->lock.hw_lock->lock,
+					  lock.context)) {
+				dev->lock.pid       = current->pid;
+				dev->lock.lock_time = jiffies;
+				atomic_inc(&dev->total_locks);
+				break;	/* Got lock */
+			}
+
+			/* Contention */
+			atomic_inc(&dev->total_sleeps);
+			DRM_DEBUG("Calling lock schedule\n");
+			schedule();
+			if (signal_pending(current)) {
+				ret = -ERESTARTSYS;
+				break;
+			}
+		}
+		current->state = TASK_RUNNING;
+		remove_wait_queue(&dev->lock.lock_queue, &entry);
+	}
+
+	if (!ret) {
+		sigemptyset(&dev->sigmask);
+		sigaddset(&dev->sigmask, SIGSTOP);
+		sigaddset(&dev->sigmask, SIGTSTP);
+		sigaddset(&dev->sigmask, SIGTTIN);
+		sigaddset(&dev->sigmask, SIGTTOU);
+		dev->sigdata.context = lock.context;
+		dev->sigdata.lock    = dev->lock.hw_lock;
+		block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
+
+		if (lock.flags & _DRM_LOCK_QUIESCENT) {
+			DRM_DEBUG("_DRM_LOCK_QUIESCENT\n");
+			DRM_DEBUG("fred\n");
+			i810_dma_quiescent(dev);
+		}
+	}
+	DRM_DEBUG("%d %s\n", lock.context, ret ? "interrupted" : "has lock");
+	return ret;
+}
+
+int i810_flush_ioctl(struct inode *inode, struct file *filp,
+		     unsigned int cmd, unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+
+	DRM_DEBUG("i810_flush_ioctl\n");
+	if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+		DRM_ERROR("i810_flush_ioctl called without lock held\n");
+		return -EINVAL;
+	}
+
+	i810_flush_queue(dev);
+	return 0;
+}
+
+
+int i810_dma_vertex(struct inode *inode, struct file *filp,
+		    unsigned int cmd, unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	drm_device_dma_t *dma = dev->dma;
+	drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+	u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+	drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+		dev_priv->sarea_priv;
+	drm_i810_vertex_t vertex;
+
+	if (copy_from_user(&vertex, (drm_i810_vertex_t *)arg, sizeof(vertex)))
+		return -EFAULT;
+
+	if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+		DRM_ERROR("i810_dma_vertex called without lock held\n");
+		return -EINVAL;
+	}
+
+	DRM_DEBUG("i810 dma vertex, idx %d used %d discard %d\n",
+		  vertex.idx, vertex.used, vertex.discard);
+
+	if(vertex.idx < 0 || vertex.idx > dma->buf_count) return -EINVAL;
+
+	i810_dma_dispatch_vertex( dev,
+				  dma->buflist[ vertex.idx ],
+				  vertex.discard, vertex.used );
+
+	atomic_add(vertex.used, &dma->total_bytes);
+	atomic_inc(&dma->total_dmas);
+	sarea_priv->last_enqueue = dev_priv->counter-1;
+	sarea_priv->last_dispatch = (int) hw_status[5];
+
+	return 0;
+}
+
+
+
+int i810_clear_bufs(struct inode *inode, struct file *filp,
+		    unsigned int cmd, unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	drm_i810_clear_t clear;
+
+	if (copy_from_user(&clear, (drm_i810_clear_t *)arg, sizeof(clear)))
+		return -EFAULT;
+
+	if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+		DRM_ERROR("i810_clear_bufs called without lock held\n");
+		return -EINVAL;
+	}
+
+	i810_dma_dispatch_clear( dev, clear.flags,
+				 clear.clear_color,
+				 clear.clear_depth );
+	return 0;
+}
+
+int i810_swap_bufs(struct inode *inode, struct file *filp,
+		   unsigned int cmd, unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+
+	DRM_DEBUG("i810_swap_bufs\n");
+
+	if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+		DRM_ERROR("i810_swap_buf called without lock held\n");
+		return -EINVAL;
+	}
+
+	i810_dma_dispatch_swap( dev );
+	return 0;
+}
+
+int i810_getage(struct inode *inode, struct file *filp, unsigned int cmd,
+		unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+	u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+	drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+		dev_priv->sarea_priv;
+
+	sarea_priv->last_dispatch = (int) hw_status[5];
+	return 0;
+}
+
+int i810_getbuf(struct inode *inode, struct file *filp, unsigned int cmd,
+		unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	int retcode = 0;
+	drm_i810_dma_t d;
+	drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+	u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+	drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+		dev_priv->sarea_priv;
+
+	DRM_DEBUG("getbuf\n");
+	if (copy_from_user(&d, (drm_i810_dma_t *)arg, sizeof(d)))
+		return -EFAULT;
+
+	if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+		DRM_ERROR("i810_dma called without lock held\n");
+		return -EINVAL;
+	}
+
+	d.granted = 0;
+
+	retcode = i810_dma_get_buffer(dev, &d, filp);
+
+	DRM_DEBUG("i810_dma: %d returning %d, granted = %d\n",
+		  current->pid, retcode, d.granted);
+
+	if (copy_to_user((drm_dma_t *)arg, &d, sizeof(d)))
+		return -EFAULT;
+	sarea_priv->last_dispatch = (int) hw_status[5];
+
+	return retcode;
+}
+
+int i810_copybuf(struct inode *inode, struct file *filp, unsigned int cmd,
+		 unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	drm_i810_copy_t d;
+	drm_i810_private_t *dev_priv = (drm_i810_private_t *)dev->dev_private;
+	u32 *hw_status = (u32 *)dev_priv->hw_status_page;
+	drm_i810_sarea_t *sarea_priv = (drm_i810_sarea_t *)
+		dev_priv->sarea_priv;
+	drm_buf_t *buf;
+	drm_i810_buf_priv_t *buf_priv;
+	drm_device_dma_t *dma = dev->dma;
+
+	if(!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+		DRM_ERROR("i810_dma called without lock held\n");
+		return -EINVAL;
+	}
+
+	if (copy_from_user(&d, (drm_i810_copy_t *)arg, sizeof(d)))
+		return -EFAULT;
+
+	if(d.idx < 0 || d.idx > dma->buf_count) return -EINVAL;
+	buf = dma->buflist[ d.idx ];
+	buf_priv = buf->dev_private;
+	if (buf_priv->currently_mapped != I810_BUF_MAPPED) return -EPERM;
+
+	/* Stopping end users copying their data to the entire kernel
+	   is good.. */
+	if (d.used < 0 || d.used > buf->total)
+		return -EINVAL;
+
+	if (copy_from_user(buf_priv->virtual, d.address, d.used))
+		return -EFAULT;
+
+	sarea_priv->last_dispatch = (int) hw_status[5];
+
+	return 0;
+}
+
+int i810_docopy(struct inode *inode, struct file *filp, unsigned int cmd,
+		unsigned long arg)
+{
+	if(VM_DONTCOPY == 0) return 1;
+	return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_drm.h linux-2.4.13-lia/drivers/char/drm-4.0/i810_drm.h
--- linux-2.4.13/drivers/char/drm-4.0/i810_drm.h	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_drm.h	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,194 @@
+#ifndef _I810_DRM_H_
+#define _I810_DRM_H_
+
+/* WARNING: These defines must be the same as what the Xserver uses.
+ * if you change them, you must change the defines in the Xserver.
+ */
+
+#ifndef _I810_DEFINES_
+#define _I810_DEFINES_
+
+#define I810_DMA_BUF_ORDER 12
+#define I810_DMA_BUF_SZ (1<
+ * Jeff Hartmann
+ *
+ */
+
+#include
+#include "drmP.h"
+#include "i810_drv.h"
+
+#define I810_NAME	 "i810"
+#define I810_DESC	 "Intel I810"
+#define I810_DATE	 "20000928"
+#define I810_MAJOR	 1
+#define I810_MINOR	 1
+#define I810_PATCHLEVEL	 0
+
+static drm_device_t i810_device;
+drm_ctx_t i810_res_ctx;
+
+static struct file_operations i810_fops = {
+#if LINUX_VERSION_CODE >= 0x020400
+	/* This started being used during 2.4.0-test */
+	owner:	 THIS_MODULE,
+#endif
+	open:	 i810_open,
+	flush:	 drm_flush,
+	release: i810_release,
+	ioctl:	 i810_ioctl,
+	mmap:	 drm_mmap,
+	read:	 drm_read,
+	fasync:	 drm_fasync,
+	poll:	 drm_poll,
+};
+
+static struct miscdevice i810_misc = {
+	minor: MISC_DYNAMIC_MINOR,
+	name:  I810_NAME,
+	fops:  &i810_fops,
+};
+
+static drm_ioctl_desc_t i810_ioctls[] = {
+	[DRM_IOCTL_NR(DRM_IOCTL_VERSION)]     = { i810_version,	   0, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)]  = { drm_getunique,   0, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)]   = { drm_getmagic,	   0, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)]   = { drm_irq_busid,   0, 1 },
+
+	[DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)]  = { drm_setunique,   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_BLOCK)]	      = { drm_block,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)]     = { drm_unblock,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_CONTROL)]     = { i810_control,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)]  = { drm_authmagic,   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)]     = { drm_addmap,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_ADD_BUFS)]    = { i810_addbufs,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_MARK_BUFS)]   = { i810_markbufs,   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_INFO_BUFS)]   = { i810_infobufs,   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_FREE_BUFS)]   = { i810_freebufs,   1, 0 },
+
+	[DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)]     = { i810_addctx,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)]      = { i810_rmctx,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)]     = { i810_modctx,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)]     = { i810_getctx,	   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)]  = { i810_switchctx,  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)]     = { i810_newctx,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)]     = { i810_resctx,	   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)]    = { drm_adddraw,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)]     = { drm_rmdraw,	   1, 1 },
+
+	[DRM_IOCTL_NR(DRM_IOCTL_LOCK)]	      = { i810_lock,	   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)]      = { i810_unlock,	   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_FINISH)]      = { drm_finish,	   1, 0 },
+
+	[DRM_IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE)] = { drm_agp_acquire, 1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AGP_RELEASE)] = { drm_agp_release, 1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AGP_ENABLE)]  = { drm_agp_enable,  1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AGP_INFO)]    = { drm_agp_info,	   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AGP_ALLOC)]   = { drm_agp_alloc,   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AGP_FREE)]    = { drm_agp_free,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AGP_BIND)]    = { drm_agp_bind,	   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_AGP_UNBIND)]  = { drm_agp_unbind,  1, 1 },
+
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_INIT)]   = { i810_dma_init,   1, 1 },
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_VERTEX)] = { i810_dma_vertex, 1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_CLEAR)]  = { i810_clear_bufs, 1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_FLUSH)]  = { i810_flush_ioctl,1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_GETAGE)] = { i810_getage,	   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_GETBUF)] = { i810_getbuf,	   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_SWAP)]   = { i810_swap_bufs,  1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_COPY)]   = { i810_copybuf,	   1, 0 },
+	[DRM_IOCTL_NR(DRM_IOCTL_I810_DOCOPY)] = { i810_docopy,	   1, 0 },
+};
+
+#define I810_IOCTL_COUNT DRM_ARRAY_SIZE(i810_ioctls)
+
+#ifdef MODULE
+static char *i810 = NULL;
+#endif
+
+MODULE_AUTHOR("VA Linux Systems, Inc.");
+MODULE_DESCRIPTION("Intel I810");
+MODULE_PARM(i810, "s");
+
+#ifndef MODULE
+/* i810_options is called by the kernel to parse command-line options
+ * passed via the boot-loader (e.g., LILO).  It calls the insmod option
+ * routine, drm_parse_drm.
+ */
+
+static int __init i810_options(char *str)
+{
+	drm_parse_options(str);
+	return 1;
+}
+
+__setup("i810=", i810_options);
+#endif
+
+static int i810_setup(drm_device_t *dev)
+{
+	int i;
+
+	atomic_set(&dev->ioctl_count, 0);
+	atomic_set(&dev->vma_count, 0);
+	dev->buf_use = 0;
+	atomic_set(&dev->buf_alloc, 0);
+
+	drm_dma_setup(dev);
+
+	atomic_set(&dev->total_open, 0);
+	atomic_set(&dev->total_close, 0);
+	atomic_set(&dev->total_ioctl, 0);
+	atomic_set(&dev->total_irq, 0);
+	atomic_set(&dev->total_ctx, 0);
+	atomic_set(&dev->total_locks, 0);
+	atomic_set(&dev->total_unlocks, 0);
+	atomic_set(&dev->total_contends, 0);
+	atomic_set(&dev->total_sleeps, 0);
+
+	for (i = 0; i < DRM_HASH_SIZE; i++) {
+		dev->magiclist[i].head = NULL;
+		dev->magiclist[i].tail = NULL;
+	}
+	dev->maplist	    = NULL;
+	dev->map_count	    = 0;
+	dev->vmalist	    = NULL;
+	dev->lock.hw_lock   = NULL;
+	init_waitqueue_head(&dev->lock.lock_queue);
+	dev->queue_count    = 0;
+	dev->queue_reserved = 0;
+	dev->queue_slots    = 0;
+	dev->queuelist	    = NULL;
+	dev->irq	    = 0;
+	dev->context_flag   = 0;
+	dev->interrupt_flag = 0;
+	dev->dma_flag	    = 0;
+	dev->last_context   = 0;
+	dev->last_switch    = 0;
+	dev->last_checked   = 0;
+	init_timer(&dev->timer);
+	init_waitqueue_head(&dev->context_wait);
+#if DRM_DMA_HISTOGRAM
+	memset(&dev->histo, 0, sizeof(dev->histo));
+#endif
+	dev->ctx_start	    = 0;
+	dev->lck_start	    = 0;
+
+	dev->buf_rp	    = dev->buf;
+	dev->buf_wp	    = dev->buf;
+	dev->buf_end	    = dev->buf + DRM_BSZ;
+	dev->buf_async	    = NULL;
+	init_waitqueue_head(&dev->buf_readers);
+	init_waitqueue_head(&dev->buf_writers);
+
+	DRM_DEBUG("\n");
+
+	/* The kernel's context could be created here, but is now created
+	   in drm_dma_enqueue.	This is more resource-efficient for
+	   hardware that does not do DMA, but may mean that
+	   drm_select_queue fails between the time the interrupt is
+	   initialized and the time the queues are initialized. */
+
+	return 0;
+}
+
+
+static int i810_takedown(drm_device_t *dev)
+{
+	int i;
+	drm_magic_entry_t *pt, *next;
+	drm_map_t *map;
+	drm_vma_entry_t *vma, *vma_next;
+
+	DRM_DEBUG("\n");
+
+	if (dev->irq) i810_irq_uninstall(dev);
+
+	down(&dev->struct_sem);
+	del_timer(&dev->timer);
+
+	if (dev->devname) {
+		drm_free(dev->devname, strlen(dev->devname)+1, DRM_MEM_DRIVER);
+		dev->devname = NULL;
+	}
+
+	if (dev->unique) {
+		drm_free(dev->unique, strlen(dev->unique)+1, DRM_MEM_DRIVER);
+		dev->unique = NULL;
+		dev->unique_len = 0;
+	}
+	/* Clear pid list */
+	for (i = 0; i < DRM_HASH_SIZE; i++) {
+		for (pt = dev->magiclist[i].head; pt; pt = next) {
+			next = pt->next;
+			drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+		}
+		dev->magiclist[i].head = dev->magiclist[i].tail = NULL;
+	}
+	/* Clear AGP information */
+	if (dev->agp) {
+		drm_agp_mem_t *entry;
+		drm_agp_mem_t *nexte;
+
+		/* Remove AGP resources, but leave dev->agp
+		   intact until r128_cleanup is called. */
+		for (entry = dev->agp->memory; entry; entry = nexte) {
+			nexte = entry->next;
+			if (entry->bound) drm_unbind_agp(entry->memory);
+			drm_free_agp(entry->memory, entry->pages);
+			drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+		}
+		dev->agp->memory = NULL;
+
+		if (dev->agp->acquired) _drm_agp_release();
+
+		dev->agp->acquired = 0;
+		dev->agp->enabled  = 0;
+	}
+	/* Clear vma list (only built for debugging) */
+	if (dev->vmalist) {
+		for (vma = dev->vmalist; vma; vma = vma_next) {
+			vma_next = vma->next;
+			drm_free(vma, sizeof(*vma), DRM_MEM_VMAS);
+		}
+		dev->vmalist = NULL;
+	}
+
+	/* Clear map area and mtrr information */
+	if (dev->maplist) {
+		for (i = 0; i < dev->map_count; i++) {
+			map = dev->maplist[i];
+			switch (map->type) {
+			case _DRM_REGISTERS:
+			case _DRM_FRAME_BUFFER:
+#ifdef CONFIG_MTRR
+				if (map->mtrr >= 0) {
+					int retcode;
+					retcode = mtrr_del(map->mtrr,
+							   map->offset,
+							   map->size);
+					DRM_DEBUG("mtrr_del = %d\n", retcode);
+				}
+#endif
+				drm_ioremapfree(map->handle, map->size, dev);
+				break;
+			case _DRM_SHM:
+				drm_free_pages((unsigned long)map->handle,
+					       drm_order(map->size)
+					       - PAGE_SHIFT,
+					       DRM_MEM_SAREA);
+				break;
+			case _DRM_AGP:
+				break;
+			}
+			drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+		}
+		drm_free(dev->maplist,
+			 dev->map_count * sizeof(*dev->maplist),
+			 DRM_MEM_MAPS);
+		dev->maplist   = NULL;
+		dev->map_count = 0;
+	}
+
+	if (dev->queuelist) {
+		for (i = 0; i < dev->queue_count; i++) {
+			drm_waitlist_destroy(&dev->queuelist[i]->waitlist);
+			if (dev->queuelist[i]) {
+				drm_free(dev->queuelist[i],
+					 sizeof(*dev->queuelist[0]),
+					 DRM_MEM_QUEUES);
+				dev->queuelist[i] = NULL;
+			}
+		}
+		drm_free(dev->queuelist,
+			 dev->queue_slots * sizeof(*dev->queuelist),
+			 DRM_MEM_QUEUES);
+		dev->queuelist = NULL;
+	}
+
+	drm_dma_takedown(dev);
+
+	dev->queue_count = 0;
+	if (dev->lock.hw_lock) {
+		dev->lock.hw_lock = NULL; /* SHM removed */
+		dev->lock.pid	  = 0;
+		wake_up_interruptible(&dev->lock.lock_queue);
+	}
+	up(&dev->struct_sem);
+
+	return 0;
+}
+
+/* i810_init is called via init_module at module load time, or via
+ * linux/init/main.c (this is not currently supported). */
+
+static int __init i810_init(void)
+{
+	int retcode;
+	drm_device_t *dev = &i810_device;
+
+	DRM_DEBUG("\n");
+
+	memset((void *)dev, 0, sizeof(*dev));
+	dev->count_lock = SPIN_LOCK_UNLOCKED;
+	sema_init(&dev->struct_sem, 1);
+
+#ifdef MODULE
+	drm_parse_options(i810);
+#endif
+	DRM_DEBUG("doing misc_register\n");
+	if ((retcode = misc_register(&i810_misc))) {
+		DRM_ERROR("Cannot register \"%s\"\n", I810_NAME);
+		return retcode;
+	}
+	dev->device = MKDEV(MISC_MAJOR, i810_misc.minor);
+	dev->name   = I810_NAME;
+
+	DRM_DEBUG("doing mem init\n");
+	drm_mem_init();
+	DRM_DEBUG("doing proc init\n");
+	drm_proc_init(dev);
+	DRM_DEBUG("doing agp init\n");
+	dev->agp = drm_agp_init();
+	if(dev->agp == NULL) {
+		DRM_INFO("The i810 drm module requires the agpgart module"
+			 " to function correctly\nPlease load the agpgart"
+			 " module before you load the i810 module\n");
+		drm_proc_cleanup();
+		misc_deregister(&i810_misc);
+		i810_takedown(dev);
+		return -ENOMEM;
+	}
+	DRM_DEBUG("doing ctxbitmap init\n");
+	if((retcode = drm_ctxbitmap_init(dev))) {
+		DRM_ERROR("Cannot allocate memory for context bitmap.\n");
+		drm_proc_cleanup();
+		misc_deregister(&i810_misc);
+		i810_takedown(dev);
+		return retcode;
+	}
+
+	DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
+		 I810_NAME,
+		 I810_MAJOR,
+		 I810_MINOR,
+		 I810_PATCHLEVEL,
+		 I810_DATE,
+		 i810_misc.minor);
+
+	return 0;
+}
+
+/* i810_cleanup is called via cleanup_module at module unload time. */
+
+static void __exit i810_cleanup(void)
+{
+	drm_device_t *dev = &i810_device;
+
+	DRM_DEBUG("\n");
+
+	drm_proc_cleanup();
+	if (misc_deregister(&i810_misc)) {
+		DRM_ERROR("Cannot unload module\n");
+	} else {
+		DRM_INFO("Module unloaded\n");
+	}
+	drm_ctxbitmap_cleanup(dev);
+	i810_takedown(dev);
+	if (dev->agp) {
+		drm_agp_uninit();
+		drm_free(dev->agp, sizeof(*dev->agp), DRM_MEM_AGPLISTS);
+		dev->agp = NULL;
+	}
+}
+
+module_init(i810_init);
+module_exit(i810_cleanup);
+
+
+int i810_version(struct inode *inode, struct file *filp, unsigned int cmd,
+		 unsigned long arg)
+{
+	drm_version_t version;
+	int len;
+
+	if (copy_from_user(&version,
+			   (drm_version_t *)arg,
+			   sizeof(version)))
+		return -EFAULT;
+
+#define DRM_COPY(name,value)				     \
+	len = strlen(value);				     \
+	if (len > name##_len) len = name##_len;		     \
+	name##_len = strlen(value);			     \
+	if (len && name) {				     \
+		if (copy_to_user(name, value, len))	     \
+			return -EFAULT;			     \
+	}
+
+	version.version_major	   = I810_MAJOR;
+	version.version_minor	   = I810_MINOR;
+	version.version_patchlevel = I810_PATCHLEVEL;
+
+	DRM_COPY(version.name, I810_NAME);
+	DRM_COPY(version.date, I810_DATE);
+	DRM_COPY(version.desc, I810_DESC);
+
+	if (copy_to_user((drm_version_t *)arg,
+			 &version,
+			 sizeof(version)))
+		return -EFAULT;
+	return 0;
+}
+
+int i810_open(struct inode *inode, struct file *filp)
+{
+	drm_device_t *dev = &i810_device;
+	int retcode = 0;
+
+	DRM_DEBUG("open_count = %d\n", dev->open_count);
+	if (!(retcode = drm_open_helper(inode, filp, dev))) {
+#if LINUX_VERSION_CODE < 0x020333
+		MOD_INC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+		atomic_inc(&dev->total_open);
+		spin_lock(&dev->count_lock);
+		if (!dev->open_count++) {
+			spin_unlock(&dev->count_lock);
+			return i810_setup(dev);
+		}
+		spin_unlock(&dev->count_lock);
+	}
+	return retcode;
+}
+
+int i810_release(struct inode *inode, struct file *filp)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev;
+	int retcode = 0;
+
+	lock_kernel();
+	dev = priv->dev;
+	DRM_DEBUG("pid = %d, device = 0x%x, open_count = %d\n",
+		  current->pid, dev->device, dev->open_count);
+
+	if (dev->lock.hw_lock && _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)
+	    && dev->lock.pid == current->pid) {
+		i810_reclaim_buffers(dev, priv->pid);
+		DRM_ERROR("Process %d dead, freeing lock for context %d\n",
+			  current->pid,
+			  _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+		drm_lock_free(dev,
+			      &dev->lock.hw_lock->lock,
+			      _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+
+		/* FIXME: may require heavy-handed reset of
+		   hardware at this point, possibly
+		   processed via a callback to the X
+		   server. */
+	} else if (dev->lock.hw_lock) {
+		/* The lock is required to reclaim buffers */
+		DECLARE_WAITQUEUE(entry, current);
+		add_wait_queue(&dev->lock.lock_queue, &entry);
+		for (;;) {
+			current->state = TASK_INTERRUPTIBLE;
+			if (!dev->lock.hw_lock) {
+				/* Device has been unregistered */
+				retcode = -EINTR;
+				break;
+			}
+			if (drm_lock_take(&dev->lock.hw_lock->lock,
+					  DRM_KERNEL_CONTEXT)) {
+				dev->lock.pid	    = priv->pid;
+				dev->lock.lock_time = jiffies;
+				atomic_inc(&dev->total_locks);
+				break;	/* Got lock */
+			}
+			/* Contention */
+			atomic_inc(&dev->total_sleeps);
+			schedule();
+			if (signal_pending(current)) {
+				retcode = -ERESTARTSYS;
+				break;
+			}
+		}
+		current->state = TASK_RUNNING;
+		remove_wait_queue(&dev->lock.lock_queue, &entry);
+		if(!retcode) {
+			i810_reclaim_buffers(dev, priv->pid);
+			drm_lock_free(dev, &dev->lock.hw_lock->lock,
+				      DRM_KERNEL_CONTEXT);
+		}
+	}
+	drm_fasync(-1, filp, 0);
+
+	down(&dev->struct_sem);
+	if (priv->prev) priv->prev->next = priv->next;
+	else		dev->file_first	 = priv->next;
+	if (priv->next) priv->next->prev = priv->prev;
+	else		dev->file_last	 = priv->prev;
+	up(&dev->struct_sem);
+
+	drm_free(priv, sizeof(*priv), DRM_MEM_FILES);
+#if LINUX_VERSION_CODE < 0x020333
+	MOD_DEC_USE_COUNT; /* Needed before Linux 2.3.51 */
+#endif
+	atomic_inc(&dev->total_close);
+	spin_lock(&dev->count_lock);
+	if (!--dev->open_count) {
+		if (atomic_read(&dev->ioctl_count) || dev->blocked) {
+			DRM_ERROR("Device busy: %d %d\n",
+				  atomic_read(&dev->ioctl_count),
+				  dev->blocked);
+			spin_unlock(&dev->count_lock);
+			unlock_kernel();
+			return -EBUSY;
+		}
+		spin_unlock(&dev->count_lock);
+		unlock_kernel();
+		return i810_takedown(dev);
+	}
+	spin_unlock(&dev->count_lock);
+	unlock_kernel();
+	return retcode;
+}
+
+/* drm_ioctl is called whenever a process performs an ioctl on /dev/drm. */
+
+int i810_ioctl(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	int nr = DRM_IOCTL_NR(cmd);
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	int retcode = 0;
+	drm_ioctl_desc_t *ioctl;
+	drm_ioctl_t *func;
+
+	atomic_inc(&dev->ioctl_count);
+	atomic_inc(&dev->total_ioctl);
+	++priv->ioctl_count;
+
+	DRM_DEBUG("pid = %d, cmd = 0x%02x, nr = 0x%02x, dev 0x%x, auth = %d\n",
+		  current->pid, cmd, nr, dev->device, priv->authenticated);
+
+	if (nr >= I810_IOCTL_COUNT) {
+		retcode = -EINVAL;
+	} else {
+		ioctl = &i810_ioctls[nr];
+		func  = ioctl->func;
+
+		if (!func) {
+			DRM_DEBUG("no function\n");
+			retcode = -EINVAL;
+		} else if ((ioctl->root_only && !capable(CAP_SYS_ADMIN))
+			   || (ioctl->auth_needed && !priv->authenticated)) {
+			retcode = -EACCES;
+		} else {
+			retcode = (func)(inode, filp, cmd, arg);
+		}
+	}
+
+	atomic_dec(&dev->ioctl_count);
+	return retcode;
+}
+
+int i810_unlock(struct inode *inode, struct file *filp, unsigned int cmd,
+		unsigned long arg)
+{
+	drm_file_t *priv = filp->private_data;
+	drm_device_t *dev = priv->dev;
+	drm_lock_t lock;
+
+	if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock)))
+		return -EFAULT;
+
+	if (lock.context == DRM_KERNEL_CONTEXT) {
+		DRM_ERROR("Process %d using kernel context %d\n",
+			  current->pid, lock.context);
+		return -EINVAL;
+	}
+
+	DRM_DEBUG("%d frees lock (%d holds)\n",
+		  lock.context,
+		  _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock));
+	atomic_inc(&dev->total_unlocks);
+	if (_DRM_LOCK_IS_CONT(dev->lock.hw_lock->lock))
+		atomic_inc(&dev->total_contends);
+	drm_lock_transfer(dev, &dev->lock.hw_lock->lock, DRM_KERNEL_CONTEXT);
+	if (!dev->context_flag) {
+		if (drm_lock_free(dev, &dev->lock.hw_lock->lock,
+				  DRM_KERNEL_CONTEXT)) {
+			DRM_ERROR("\n");
+		}
+	}
+#if DRM_DMA_HISTOGRAM
+	atomic_inc(&dev->histo.lhld[drm_histogram_slot(get_cycles()
+						       - dev->lck_start)]);
+#endif
+
+	unblock_all_signals();
+	return 0;
+}
diff -urN linux-2.4.13/drivers/char/drm-4.0/i810_drv.h linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.h
--- linux-2.4.13/drivers/char/drm-4.0/i810_drv.h	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/i810_drv.h	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,225 @@
+/* i810_drv.h -- Private header for the Matrox g200/g400 driver -*- linux-c -*-
+ * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Rickard E. (Rik) Faith
+ *	    Jeff Hartmann
+ *
+ */
+
+#ifndef _I810_DRV_H_
+#define _I810_DRV_H_
+
+typedef struct drm_i810_buf_priv {
+	u32 *in_use;
+	int my_use_idx;
+	int currently_mapped;
+	void *virtual;
+	void *kernel_virtual;
+	int map_count;
+	struct vm_area_struct *vma;
+} drm_i810_buf_priv_t;
+
+typedef struct _drm_i810_ring_buffer{
+	int tail_mask;
+	unsigned long Start;
+	unsigned long End;
+	unsigned long Size;
+	u8 *virtual_start;
+	int head;
+	int tail;
+	int space;
+} drm_i810_ring_buffer_t;
+
+typedef struct drm_i810_private {
+	int ring_map_idx;
+	int buffer_map_idx;
+
+	drm_i810_ring_buffer_t ring;
+	drm_i810_sarea_t *sarea_priv;
+
+	unsigned long hw_status_page;
+	unsigned long counter;
+
+	atomic_t flush_done;
+	wait_queue_head_t flush_queue;	/* Processes waiting until flush */
+	drm_buf_t *mmap_buffer;
+
+
+	u32 front_di1, back_di1, zi1;
+
+	int back_offset;
+	int depth_offset;
+	int w, h;
+	int pitch;
+} drm_i810_private_t;
+
+				/* i810_drv.c */
+extern int i810_version(struct inode *inode, struct file *filp,
+			unsigned int cmd, unsigned long arg);
+extern int i810_open(struct inode *inode, struct file *filp);
+extern int i810_release(struct inode *inode, struct file *filp);
+extern int i810_ioctl(struct inode *inode, struct file *filp,
+		      unsigned int cmd, unsigned long arg);
+extern int i810_unlock(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+
+				/* i810_dma.c */
+extern int i810_dma_schedule(drm_device_t *dev, int locked);
+extern int i810_getbuf(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+extern int i810_irq_install(drm_device_t *dev, int irq);
+extern int i810_irq_uninstall(drm_device_t *dev);
+extern int i810_control(struct inode *inode, struct file *filp,
+			unsigned int cmd, unsigned long arg);
+extern int i810_lock(struct inode *inode, struct file *filp,
+		     unsigned int cmd, unsigned long arg);
+extern int i810_dma_init(struct inode *inode, struct file *filp,
+			 unsigned int cmd, unsigned long arg);
+extern int i810_flush_ioctl(struct inode *inode, struct file *filp,
+			    unsigned int cmd, unsigned long arg);
+extern void i810_reclaim_buffers(drm_device_t *dev, pid_t pid);
+extern int i810_getage(struct inode *inode, struct file *filp, unsigned int cmd,
+		       unsigned long arg);
+extern int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma);
+extern int i810_copybuf(struct inode *inode, struct file *filp,
+			unsigned int cmd, unsigned long arg);
+extern int i810_docopy(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+
+				/* i810_bufs.c */
+extern int i810_addbufs(struct inode *inode, struct file *filp,
+			unsigned int cmd, unsigned long arg);
+extern int i810_infobufs(struct inode *inode, struct file *filp,
+			 unsigned int cmd, unsigned long arg);
+extern int i810_markbufs(struct inode *inode, struct file *filp,
+			 unsigned int cmd, unsigned long arg);
+extern int i810_freebufs(struct inode *inode, struct file *filp,
+			 unsigned int cmd, unsigned long arg);
+extern int i810_addmap(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+
+				/* i810_context.c */
+extern int i810_resctx(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+extern int i810_addctx(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+extern int i810_modctx(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+extern int i810_getctx(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+extern int i810_switchctx(struct inode *inode, struct file *filp,
+			  unsigned int cmd, unsigned long arg);
+extern int i810_newctx(struct inode *inode, struct file *filp,
+		       unsigned int cmd, unsigned long arg);
+extern int i810_rmctx(struct inode *inode, struct file *filp,
+		      unsigned int cmd, unsigned long arg);
+
+extern int i810_context_switch(drm_device_t *dev, int old, int new);
+extern int i810_context_switch_complete(drm_device_t *dev, int new);
+
+#define I810_VERBOSE 0
+
+
+int i810_dma_vertex(struct inode *inode, struct file *filp,
+		    unsigned int cmd, unsigned long arg);
+
+int i810_swap_bufs(struct inode *inode, struct file *filp,
+		   unsigned int cmd, unsigned long arg);
+
+int i810_clear_bufs(struct inode *inode, struct file *filp,
+		    unsigned int cmd, unsigned long arg);
+
+#define GFX_OP_USER_INTERRUPT		((0<<29)|(2<<23))
+#define GFX_OP_BREAKPOINT_INTERRUPT	((0<<29)|(1<<23))
+#define CMD_REPORT_HEAD			(7<<23)
+#define CMD_STORE_DWORD_IDX		((0x21<<23) | 0x1)
+#define CMD_OP_BATCH_BUFFER		((0x0<<29)|(0x30<<23)|0x1)
+
+#define INST_PARSER_CLIENT		0x00000000
+#define INST_OP_FLUSH			0x02000000
+#define INST_FLUSH_MAP_CACHE		0x00000001
+
+
+#define BB1_START_ADDR_MASK		(~0x7)
+#define BB1_PROTECTED			(1<<0)
+#define BB1_UNPROTECTED			(0<<0)
+#define BB2_END_ADDR_MASK		(~0x7)
+
+#define I810REG_HWSTAM			0x02098
+#define I810REG_INT_IDENTITY_R		0x020a4
+#define I810REG_INT_MASK_R		0x020a8
+#define I810REG_INT_ENABLE_R		0x020a0
+
+#define LP_RING			0x2030
+#define HP_RING			0x2040
+#define RING_TAIL		0x00
+#define TAIL_ADDR		0x000FFFF8
+#define RING_HEAD		0x04
+#define HEAD_WRAP_COUNT		0xFFE00000
+#define HEAD_WRAP_ONE		0x00200000
+#define HEAD_ADDR		0x001FFFFC
+#define RING_START		0x08
+#define START_ADDR		0x00FFFFF8
+#define RING_LEN		0x0C
+#define RING_NR_PAGES		0x000FF000
+#define RING_REPORT_MASK	0x00000006
+#define RING_REPORT_64K		0x00000002
+#define RING_REPORT_128K	0x00000004
+#define RING_NO_REPORT		0x00000000
+#define RING_VALID_MASK		0x00000001
+#define RING_VALID		0x00000001
+#define RING_INVALID		0x00000000
+
+#define GFX_OP_SCISSOR		((0x3<<29)|(0x1c<<24)|(0x10<<19))
+#define SC_UPDATE_SCISSOR	(0x1<<1)
+#define SC_ENABLE_MASK		(0x1<<0)
+#define SC_ENABLE		(0x1<<0)
+
+#define GFX_OP_SCISSOR_INFO	((0x3<<29)|(0x1d<<24)|(0x81<<16)|(0x1))
+#define SCI_YMIN_MASK		(0xffff<<16)
+#define SCI_XMIN_MASK		(0xffff<<0)
+#define SCI_YMAX_MASK		(0xffff<<16)
+#define SCI_XMAX_MASK		(0xffff<<0)
+
+#define GFX_OP_COLOR_FACTOR	((0x3<<29)|(0x1d<<24)|(0x1<<16)|0x0)
+#define GFX_OP_STIPPLE		((0x3<<29)|(0x1d<<24)|(0x83<<16))
+#define GFX_OP_MAP_INFO		((0x3<<29)|(0x1d<<24)|0x2)
+#define GFX_OP_DESTBUFFER_VARS	((0x3<<29)|(0x1d<<24)|(0x85<<16)|0x0)
+#define GFX_OP_DRAWRECT_INFO	((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3))
+#define GFX_OP_PRIMITIVE	((0x3<<29)|(0x1f<<24))
+
+#define CMD_OP_Z_BUFFER_INFO	((0x0<<29)|(0x16<<23))
+#define CMD_OP_DESTBUFFER_INFO	((0x0<<29)|(0x15<<23))
+
+#define BR00_BITBLT_CLIENT	0x40000000
+#define BR00_OP_COLOR_BLT	0x10000000
+#define BR00_OP_SRC_COPY_BLT	0x10C00000
+#define BR13_SOLID_PATTERN	0x80000000
+
+
+
+#endif
+
diff -urN linux-2.4.13/drivers/char/drm-4.0/init.c linux-2.4.13-lia/drivers/char/drm-4.0/init.c
--- linux-2.4.13/drivers/char/drm-4.0/init.c	Wed Dec 31 16:00:00 1969
+++ linux-2.4.13-lia/drivers/char/drm-4.0/init.c	Thu Oct  4 00:21:40 2001
@@ -0,0 +1,113 @@
+/* init.c -- Setup/Cleanup for DRM -*- linux-c -*-
+ * Created: Mon Jan  4 08:58:31 1999 by faith@precisioninsight.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: + * Rickard E. (Rik) Faith + * + */ + +#define __NO_VERSION__ +#include "drmP.h" + +int drm_flags =3D 0; + +/* drm_parse_option parses a single option. See description for + drm_parse_options for details. 
*/ + +static void drm_parse_option(char *s) +{ + char *c, *r; +=09 + DRM_DEBUG("\"%s\"\n", s); + if (!s || !*s) return; + for (c =3D s; *c && *c !=3D ':'; c++); /* find : or \0 */ + if (*c) r =3D c + 1; else r =3D NULL; /* remember remainder */ + *c =3D '\0'; /* terminate */ + if (!strcmp(s, "noctx")) { + drm_flags |=3D DRM_FLAG_NOCTX; + DRM_INFO("Server-mediated context switching OFF\n"); + return; + } + if (!strcmp(s, "debug")) { + drm_flags |=3D DRM_FLAG_DEBUG; + DRM_INFO("Debug messages ON\n"); + return; + } + DRM_ERROR("\"%s\" is not a valid option\n", s); + return; +} + +/* drm_parse_options parse the insmod "drm=3D" options, or the command-line + * options passed to the kernel via LILO. The grammar of the format is as + * follows: + * + * drm ::=3D 'drm=3D' option_list + * option_list ::=3D option [ ';' option_list ] + * option ::=3D 'device:' major + * | 'debug'=20 + * | 'noctx' + * major ::=3D INTEGER + * + * Note that 's' contains option_list without the 'drm=3D' part. + * + * device=3Dmajor,minor specifies the device number used for /dev/drm + * if major =3D 0 then the misc device is used + * if major =3D 0 and minor =3D 0 then dynamic misc allocation is used + * debug=3Don specifies that debugging messages will be printk'd + * debug=3Dtrace specifies that each function call will be logged via prin= tk + * debug=3Doff turns off all debugging options + * + */ + +void drm_parse_options(char *s) +{ + char *h, *t, *n; +=09 + DRM_DEBUG("\"%s\"\n", s ?: ""); + if (!s || !*s) return; + + for (h =3D t =3D n =3D s; h && *h; h =3D n) { + for (; *t && *t !=3D ';'; t++); /* find ; or \0 */ + if (*t) n =3D t + 1; else n =3D NULL; /* remember next */ + *t =3D '\0'; /* terminate */ + drm_parse_option(h); /* parse */ + } +} + +/* drm_cpu_valid returns non-zero if the DRI will run on this CPU, and 0 + * otherwise. 
*/ + +int drm_cpu_valid(void) +{ +#if defined(__i386__) + if (boot_cpu_data.x86 =3D=3D 3) return 0; /* No cmpxchg on a 386 */ +#endif +#if defined(__sparc__) && !defined(__sparc_v9__) + if (1) + return 0; /* No cmpxchg before v9 sparc. */ +#endif + return 1; +} diff -urN linux-2.4.13/drivers/char/drm-4.0/ioctl.c linux-2.4.13-lia/driver= s/char/drm-4.0/ioctl.c --- linux-2.4.13/drivers/char/drm-4.0/ioctl.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/ioctl.c Thu Oct 4 00:21:40 2001 @@ -0,0 +1,99 @@ +/* ioctl.c -- IOCTL processing for DRM -*- linux-c -*- + * Created: Fri Jan 8 09:01:26 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: + * Rickard E. 
(Rik) Faith + * + */ + +#define __NO_VERSION__ +#include "drmP.h" + +int drm_irq_busid(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_irq_busid_t p; + struct pci_dev *dev; + + if (copy_from_user(&p, (drm_irq_busid_t *)arg, sizeof(p))) + return -EFAULT; + dev =3D pci_find_slot(p.busnum, PCI_DEVFN(p.devnum, p.funcnum)); + if (dev) p.irq =3D dev->irq; + else p.irq =3D 0; + DRM_DEBUG("%d:%d:%d =3D> IRQ %d\n", + p.busnum, p.devnum, p.funcnum, p.irq); + if (copy_to_user((drm_irq_busid_t *)arg, &p, sizeof(p))) + return -EFAULT; + return 0; +} + +int drm_getunique(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_unique_t u; + + if (copy_from_user(&u, (drm_unique_t *)arg, sizeof(u))) + return -EFAULT; + if (u.unique_len >=3D dev->unique_len) { + if (copy_to_user(u.unique, dev->unique, dev->unique_len)) + return -EFAULT; + } + u.unique_len =3D dev->unique_len; + if (copy_to_user((drm_unique_t *)arg, &u, sizeof(u))) + return -EFAULT; + return 0; +} + +int drm_setunique(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_unique_t u; + + if (dev->unique_len || dev->unique) + return -EBUSY; + + if (copy_from_user(&u, (drm_unique_t *)arg, sizeof(u))) + return -EFAULT; + + if (!u.unique_len || u.unique_len > 1024) + return -EINVAL; +=09 + dev->unique_len =3D u.unique_len; + dev->unique =3D drm_alloc(u.unique_len + 1, DRM_MEM_DRIVER); + if (copy_from_user(dev->unique, u.unique, dev->unique_len)) + return -EFAULT; + dev->unique[dev->unique_len] =3D '\0'; + + dev->devname =3D drm_alloc(strlen(dev->name) + strlen(dev->unique) + 2, + DRM_MEM_DRIVER); + sprintf(dev->devname, "%s@%s", dev->name, dev->unique); + + return 0; +} diff -urN linux-2.4.13/drivers/char/drm-4.0/lists.c linux-2.4.13-lia/driver= 
s/char/drm-4.0/lists.c --- linux-2.4.13/drivers/char/drm-4.0/lists.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/lists.c Thu Oct 4 00:21:40 2001 @@ -0,0 +1,218 @@ +/* lists.c -- Buffer list handling routines -*- linux-c -*- + * Created: Mon Apr 19 20:54:22 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: + * Rickard E. 
(Rik) Faith + * + */ + +#define __NO_VERSION__ +#include "drmP.h" + +int drm_waitlist_create(drm_waitlist_t *bl, int count) +{ + if (bl->count) return -EINVAL; +=09 + bl->count =3D count; + bl->bufs =3D drm_alloc((bl->count + 2) * sizeof(*bl->bufs), + DRM_MEM_BUFLISTS); + bl->rp =3D bl->bufs; + bl->wp =3D bl->bufs; + bl->end =3D &bl->bufs[bl->count+1]; + bl->write_lock =3D SPIN_LOCK_UNLOCKED; + bl->read_lock =3D SPIN_LOCK_UNLOCKED; + return 0; +} + +int drm_waitlist_destroy(drm_waitlist_t *bl) +{ + if (bl->rp !=3D bl->wp) return -EINVAL; + if (bl->bufs) drm_free(bl->bufs, + (bl->count + 2) * sizeof(*bl->bufs), + DRM_MEM_BUFLISTS); + bl->count =3D 0; + bl->bufs =3D NULL; + bl->rp =3D NULL; + bl->wp =3D NULL; + bl->end =3D NULL; + return 0; +} + +int drm_waitlist_put(drm_waitlist_t *bl, drm_buf_t *buf) +{ =09 + int left; + unsigned long flags; + + left =3D DRM_LEFTCOUNT(bl); + if (!left) { + DRM_ERROR("Overflow while adding buffer %d from pid %d\n", + buf->idx, buf->pid); + return -EINVAL; + } +#if DRM_DMA_HISTOGRAM + buf->time_queued =3D get_cycles(); +#endif + buf->list =3D DRM_LIST_WAIT; +=09 + spin_lock_irqsave(&bl->write_lock, flags); + *bl->wp =3D buf; + if (++bl->wp >=3D bl->end) bl->wp =3D bl->bufs; + spin_unlock_irqrestore(&bl->write_lock, flags); +=09 + return 0; +} + +drm_buf_t *drm_waitlist_get(drm_waitlist_t *bl) +{ + drm_buf_t *buf; + unsigned long flags; + + spin_lock_irqsave(&bl->read_lock, flags); + buf =3D *bl->rp; + if (bl->rp =3D=3D bl->wp) { + spin_unlock_irqrestore(&bl->read_lock, flags); + return NULL; + } =20 + if (++bl->rp >=3D bl->end) bl->rp =3D bl->bufs; + spin_unlock_irqrestore(&bl->read_lock, flags); +=09 + return buf; +} + +int drm_freelist_create(drm_freelist_t *bl, int count) +{ + atomic_set(&bl->count, 0); + bl->next =3D NULL; + init_waitqueue_head(&bl->waiting); + bl->low_mark =3D 0; + bl->high_mark =3D 0; + atomic_set(&bl->wfh, 0); + bl->lock =3D SPIN_LOCK_UNLOCKED; + ++bl->initialized; + return 0; +} + +int 
drm_freelist_destroy(drm_freelist_t *bl) +{ + atomic_set(&bl->count, 0); + bl->next =3D NULL; + return 0; +} + +int drm_freelist_put(drm_device_t *dev, drm_freelist_t *bl, drm_buf_t *buf) +{ + drm_device_dma_t *dma =3D dev->dma; + + if (!dma) { + DRM_ERROR("No DMA support\n"); + return 1; + } + + if (buf->waiting || buf->pending || buf->list =3D=3D DRM_LIST_FREE) { + DRM_ERROR("Freed buffer %d: w%d, p%d, l%d\n", + buf->idx, buf->waiting, buf->pending, buf->list); + } + if (!bl) return 1; +#if DRM_DMA_HISTOGRAM + buf->time_freed =3D get_cycles(); + drm_histogram_compute(dev, buf); +#endif + buf->list =3D DRM_LIST_FREE; +=09 + spin_lock(&bl->lock); + buf->next =3D bl->next; + bl->next =3D buf; + spin_unlock(&bl->lock); +=09 + atomic_inc(&bl->count); + if (atomic_read(&bl->count) > dma->buf_count) { + DRM_ERROR("%d of %d buffers free after addition of %d\n", + atomic_read(&bl->count), dma->buf_count, buf->idx); + return 1; + } + /* Check for high water mark */ + if (atomic_read(&bl->wfh) && atomic_read(&bl->count)>=3Dbl->high_mark) { + atomic_set(&bl->wfh, 0); + wake_up_interruptible(&bl->waiting); + } + return 0; +} + +static drm_buf_t *drm_freelist_try(drm_freelist_t *bl) +{ + drm_buf_t *buf; + + if (!bl) return NULL; +=09 + /* Get buffer */ + spin_lock(&bl->lock); + if (!bl->next) { + spin_unlock(&bl->lock); + return NULL; + } + buf =3D bl->next; + bl->next =3D bl->next->next; + spin_unlock(&bl->lock); +=09 + atomic_dec(&bl->count); + buf->next =3D NULL; + buf->list =3D DRM_LIST_NONE; + if (buf->waiting || buf->pending) { + DRM_ERROR("Free buffer %d: w%d, p%d, l%d\n", + buf->idx, buf->waiting, buf->pending, buf->list); + } +=09 + return buf; +} + +drm_buf_t *drm_freelist_get(drm_freelist_t *bl, int block) +{ + drm_buf_t *buf =3D NULL; + DECLARE_WAITQUEUE(entry, current); + + if (!bl || !bl->initialized) return NULL; +=09 + /* Check for low water mark */ + if (atomic_read(&bl->count) <=3D bl->low_mark) /* Became low */ + atomic_set(&bl->wfh, 1); + if 
(atomic_read(&bl->wfh)) { + if (block) { + add_wait_queue(&bl->waiting, &entry); + for (;;) { + current->state =3D TASK_INTERRUPTIBLE; + if (!atomic_read(&bl->wfh) + && (buf =3D drm_freelist_try(bl))) break; + schedule(); + if (signal_pending(current)) break; + } + current->state =3D TASK_RUNNING; + remove_wait_queue(&bl->waiting, &entry); + } + return buf; + } + =09 + return drm_freelist_try(bl); +} diff -urN linux-2.4.13/drivers/char/drm-4.0/lock.c linux-2.4.13-lia/drivers= /char/drm-4.0/lock.c --- linux-2.4.13/drivers/char/drm-4.0/lock.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/lock.c Thu Oct 4 00:21:40 2001 @@ -0,0 +1,252 @@ +/* lock.c -- IOCTLs for locking -*- linux-c -*- + * Created: Tue Feb 2 08:37:54 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: + * Rickard E. (Rik) Faith + * + */ + +#define __NO_VERSION__ +#include "drmP.h" + +int drm_block(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + DRM_DEBUG("\n"); + return 0; +} + +int drm_unblock(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + DRM_DEBUG("\n"); + return 0; +} + +int drm_lock_take(__volatile__ unsigned int *lock, unsigned int context) +{ + unsigned int old, new, prev; + + do { + old =3D *lock; + if (old & _DRM_LOCK_HELD) new =3D old | _DRM_LOCK_CONT; + else new =3D context | _DRM_LOCK_HELD; + prev =3D cmpxchg(lock, old, new); + } while (prev !=3D old); + if (_DRM_LOCKING_CONTEXT(old) =3D=3D context) { + if (old & _DRM_LOCK_HELD) { + if (context !=3D DRM_KERNEL_CONTEXT) { + DRM_ERROR("%d holds heavyweight lock\n", + context); + } + return 0; + } + } + if (new =3D=3D (context | _DRM_LOCK_HELD)) { + /* Have lock */ + return 1; + } + return 0; +} + +/* This takes a lock forcibly and hands it to context. Should ONLY be used + inside *_unlock to give lock to kernel before calling *_dma_schedule. 
*/ +int drm_lock_transfer(drm_device_t *dev, + __volatile__ unsigned int *lock, unsigned int context) +{ + unsigned int old, new, prev; + + dev->lock.pid =3D 0; + do { + old =3D *lock; + new =3D context | _DRM_LOCK_HELD; + prev =3D cmpxchg(lock, old, new); + } while (prev !=3D old); + return 1; +} + +int drm_lock_free(drm_device_t *dev, + __volatile__ unsigned int *lock, unsigned int context) +{ + unsigned int old, new, prev; + pid_t pid =3D dev->lock.pid; + + dev->lock.pid =3D 0; + do { + old =3D *lock; + new =3D 0; + prev =3D cmpxchg(lock, old, new); + } while (prev !=3D old); + if (_DRM_LOCK_IS_HELD(old) && _DRM_LOCKING_CONTEXT(old) !=3D context) { + DRM_ERROR("%d freed heavyweight lock held by %d (pid %d)\n", + context, + _DRM_LOCKING_CONTEXT(old), + pid); + return 1; + } + wake_up_interruptible(&dev->lock.lock_queue); + return 0; +} + +static int drm_flush_queue(drm_device_t *dev, int context) +{ + DECLARE_WAITQUEUE(entry, current); + int ret =3D 0; + drm_queue_t *q =3D dev->queuelist[context]; +=09 + DRM_DEBUG("\n"); +=09 + atomic_inc(&q->use_count); + if (atomic_read(&q->use_count) > 1) { + atomic_inc(&q->block_write); + add_wait_queue(&q->flush_queue, &entry); + atomic_inc(&q->block_count); + for (;;) { + current->state =3D TASK_INTERRUPTIBLE; + if (!DRM_BUFCOUNT(&q->waitlist)) break; + schedule(); + if (signal_pending(current)) { + ret =3D -EINTR; /* Can't restart */ + break; + } + } + atomic_dec(&q->block_count); + current->state =3D TASK_RUNNING; + remove_wait_queue(&q->flush_queue, &entry); + } + atomic_dec(&q->use_count); + atomic_inc(&q->total_flushed); + =09 + /* NOTE: block_write is still incremented! + Use drm_flush_unlock_queue to decrement. 
*/ + return ret; +} + +static int drm_flush_unblock_queue(drm_device_t *dev, int context) +{ + drm_queue_t *q =3D dev->queuelist[context]; +=09 + DRM_DEBUG("\n"); +=09 + atomic_inc(&q->use_count); + if (atomic_read(&q->use_count) > 1) { + if (atomic_read(&q->block_write)) { + atomic_dec(&q->block_write); + wake_up_interruptible(&q->write_queue); + } + } + atomic_dec(&q->use_count); + return 0; +} + +int drm_flush_block_and_flush(drm_device_t *dev, int context, + drm_lock_flags_t flags) +{ + int ret =3D 0; + int i; +=09 + DRM_DEBUG("\n"); +=09 + if (flags & _DRM_LOCK_FLUSH) { + ret =3D drm_flush_queue(dev, DRM_KERNEL_CONTEXT); + if (!ret) ret =3D drm_flush_queue(dev, context); + } + if (flags & _DRM_LOCK_FLUSH_ALL) { + for (i =3D 0; !ret && i < dev->queue_count; i++) { + ret =3D drm_flush_queue(dev, i); + } + } + return ret; +} + +int drm_flush_unblock(drm_device_t *dev, int context, drm_lock_flags_t fla= gs) +{ + int ret =3D 0; + int i; +=09 + DRM_DEBUG("\n"); +=09 + if (flags & _DRM_LOCK_FLUSH) { + ret =3D drm_flush_unblock_queue(dev, DRM_KERNEL_CONTEXT); + if (!ret) ret =3D drm_flush_unblock_queue(dev, context); + } + if (flags & _DRM_LOCK_FLUSH_ALL) { + for (i =3D 0; !ret && i < dev->queue_count; i++) { + ret =3D drm_flush_unblock_queue(dev, i); + } + } + =09 + return ret; +} + +int drm_finish(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + int ret =3D 0; + drm_lock_t lock; + + DRM_DEBUG("\n"); + + if (copy_from_user(&lock, (drm_lock_t *)arg, sizeof(lock))) + return -EFAULT; + ret =3D drm_flush_block_and_flush(dev, lock.context, lock.flags); + drm_flush_unblock(dev, lock.context, lock.flags); + return ret; +} + +/* If we get here, it means that the process has called DRM_IOCTL_LOCK + without calling DRM_IOCTL_UNLOCK. + =20 + If the lock is not held, then let the signal proceed as usual. 
+ =20 + If the lock is held, then set the contended flag and keep the signal + blocked. + =20 + + Return 1 if the signal should be delivered normally. + Return 0 if the signal should be blocked. */ + +int drm_notifier(void *priv) +{ + drm_sigdata_t *s =3D (drm_sigdata_t *)priv; + unsigned int old, new, prev; + + + /* Allow signal delivery if lock isn't held */ + if (!_DRM_LOCK_IS_HELD(s->lock->lock) + || _DRM_LOCKING_CONTEXT(s->lock->lock) !=3D s->context) return 1; +=09 + /* Otherwise, set flag to force call to + drmUnlock */ + do { + old =3D s->lock->lock; + new =3D old | _DRM_LOCK_CONT; + prev =3D cmpxchg(&s->lock->lock, old, new); + } while (prev !=3D old); + return 0; +} diff -urN linux-2.4.13/drivers/char/drm-4.0/memory.c linux-2.4.13-lia/drive= rs/char/drm-4.0/memory.c --- linux-2.4.13/drivers/char/drm-4.0/memory.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/memory.c Thu Oct 4 00:21:40 2001 @@ -0,0 +1,486 @@ +/* memory.c -- Memory management wrappers for DRM -*- linux-c -*- + * Created: Thu Feb 4 14:00:34 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. 
+ *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: + * Rickard E. (Rik) Faith + * + */ + +#define __NO_VERSION__ +#include <linux/config.h> +#include "drmP.h" +#include <linux/wrapper.h> + +typedef struct drm_mem_stats { + const char *name; + int succeed_count; + int free_count; + int fail_count; + unsigned long bytes_allocated; + unsigned long bytes_freed; +} drm_mem_stats_t; + +static spinlock_t drm_mem_lock =3D SPIN_LOCK_UNLOCKED; +static unsigned long drm_ram_available =3D 0; /* In pages */ +static unsigned long drm_ram_used =3D 0; +static drm_mem_stats_t drm_mem_stats[] =3D { + [DRM_MEM_DMA] =3D { "dmabufs" }, + [DRM_MEM_SAREA] =3D { "sareas" }, + [DRM_MEM_DRIVER] =3D { "driver" }, + [DRM_MEM_MAGIC] =3D { "magic" }, + [DRM_MEM_IOCTLS] =3D { "ioctltab" }, + [DRM_MEM_MAPS] =3D { "maplist" }, + [DRM_MEM_VMAS] =3D { "vmalist" }, + [DRM_MEM_BUFS] =3D { "buflist" }, + [DRM_MEM_SEGS] =3D { "seglist" }, + [DRM_MEM_PAGES] =3D { "pagelist" }, + [DRM_MEM_FILES] =3D { "files" }, + [DRM_MEM_QUEUES] =3D { "queues" }, + [DRM_MEM_CMDS] =3D { "commands" }, + [DRM_MEM_MAPPINGS] =3D { "mappings" }, + [DRM_MEM_BUFLISTS] =3D { "buflists" }, + [DRM_MEM_AGPLISTS] =3D { "agplist" }, + [DRM_MEM_TOTALAGP] =3D { "totalagp" }, + [DRM_MEM_BOUNDAGP] =3D { "boundagp" }, + [DRM_MEM_CTXBITMAP] =3D { "ctxbitmap"}, + { NULL, 0, } /* Last entry must be null */ +}; + +void drm_mem_init(void) +{ + drm_mem_stats_t *mem; + struct sysinfo si; +=09 + for (mem =3D drm_mem_stats; mem->name; ++mem) { + mem->succeed_count =3D 0; + mem->free_count =3D 0; + mem->fail_count 
=3D 0; + mem->bytes_allocated =3D 0; + mem->bytes_freed =3D 0; + } +=09 + si_meminfo(&si); +#if LINUX_VERSION_CODE < 0x020317 + /* Changed to page count in 2.3.23 */ + drm_ram_available =3D si.totalram >> PAGE_SHIFT; +#else + drm_ram_available =3D si.totalram; +#endif + drm_ram_used =3D 0; +} + +/* drm_mem_info is called whenever a process reads /dev/drm/mem. */ + +static int _drm_mem_info(char *buf, char **start, off_t offset, int len, + int *eof, void *data) +{ + drm_mem_stats_t *pt; + + if (offset > 0) return 0; /* no partial requests */ + len =3D 0; + *eof =3D 1; + DRM_PROC_PRINT(" total counts " + " | outstanding \n"); + DRM_PROC_PRINT("type alloc freed fail bytes freed" + " | allocs bytes\n\n"); + DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n", + "system", 0, 0, 0, + drm_ram_available << (PAGE_SHIFT - 10)); + DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n", + "locked", 0, 0, 0, drm_ram_used >> 10); + DRM_PROC_PRINT("\n"); + for (pt =3D drm_mem_stats; pt->name; pt++) { + DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu %10lu | %6d %10ld\n", + pt->name, + pt->succeed_count, + pt->free_count, + pt->fail_count, + pt->bytes_allocated, + pt->bytes_freed, + pt->succeed_count - pt->free_count, + (long)pt->bytes_allocated + - (long)pt->bytes_freed); + } +=09 + return len; +} + +int drm_mem_info(char *buf, char **start, off_t offset, int len, + int *eof, void *data) +{ + int ret; +=09 + spin_lock(&drm_mem_lock); + ret =3D _drm_mem_info(buf, start, offset, len, eof, data); + spin_unlock(&drm_mem_lock); + return ret; +} + +void *drm_alloc(size_t size, int area) +{ + void *pt; +=09 + if (!size) { + DRM_MEM_ERROR(area, "Allocating 0 bytes\n"); + return NULL; + } +=09 + if (!(pt =3D kmalloc(size, GFP_KERNEL))) { + spin_lock(&drm_mem_lock); + ++drm_mem_stats[area].fail_count; + spin_unlock(&drm_mem_lock); + return NULL; + } + spin_lock(&drm_mem_lock); + ++drm_mem_stats[area].succeed_count; + drm_mem_stats[area].bytes_allocated +=3D size; + spin_unlock(&drm_mem_lock); + return pt; 
+} + +void *drm_realloc(void *oldpt, size_t oldsize, size_t size, int area) +{ + void *pt; +=09 + if (!(pt =3D drm_alloc(size, area))) return NULL; + if (oldpt && oldsize) { + memcpy(pt, oldpt, oldsize); + drm_free(oldpt, oldsize, area); + } + return pt; +} + +char *drm_strdup(const char *s, int area) +{ + char *pt; + int length =3D s ? strlen(s) : 0; +=09 + if (!(pt =3D drm_alloc(length+1, area))) return NULL; + strcpy(pt, s); + return pt; +} + +void drm_strfree(const char *s, int area) +{ + unsigned int size; +=09 + if (!s) return; +=09 + size =3D 1 + (s ? strlen(s) : 0); + drm_free((void *)s, size, area); +} + +void drm_free(void *pt, size_t size, int area) +{ + int alloc_count; + int free_count; +=09 + if (!pt) DRM_MEM_ERROR(area, "Attempt to free NULL pointer\n"); + else kfree(pt); + spin_lock(&drm_mem_lock); + drm_mem_stats[area].bytes_freed +=3D size; + free_count =3D ++drm_mem_stats[area].free_count; + alloc_count =3D drm_mem_stats[area].succeed_count; + spin_unlock(&drm_mem_lock); + if (free_count > alloc_count) { + DRM_MEM_ERROR(area, "Excess frees: %d frees, %d allocs\n", + free_count, alloc_count); + } +} + +unsigned long drm_alloc_pages(int order, int area) +{ + unsigned long address; + unsigned long bytes =3D PAGE_SIZE << order; + unsigned long addr; + unsigned int sz; +=09 + spin_lock(&drm_mem_lock); + if ((drm_ram_used >> PAGE_SHIFT) + > (DRM_RAM_PERCENT * drm_ram_available) / 100) { + spin_unlock(&drm_mem_lock); + return 0; + } + spin_unlock(&drm_mem_lock); +=09 + address =3D __get_free_pages(GFP_KERNEL, order); + if (!address) { + spin_lock(&drm_mem_lock); + ++drm_mem_stats[area].fail_count; + spin_unlock(&drm_mem_lock); + return 0; + } + spin_lock(&drm_mem_lock); + ++drm_mem_stats[area].succeed_count; + drm_mem_stats[area].bytes_allocated +=3D bytes; + drm_ram_used +=3D bytes; + spin_unlock(&drm_mem_lock); +=09 +=09 + /* Zero outside the lock */ + memset((void *)address, 0, bytes); +=09 + /* Reserve */ + for (addr =3D address, sz =3D bytes; + sz 
> 0; + addr +=3D PAGE_SIZE, sz -=3D PAGE_SIZE) { +#if LINUX_VERSION_CODE >=3D 0x020400 + /* Argument type changed in 2.4.0-test6/pre8 */ + mem_map_reserve(virt_to_page(addr)); +#else + mem_map_reserve(MAP_NR(addr)); +#endif + } +=09 + return address; +} + +void drm_free_pages(unsigned long address, int order, int area) +{ + unsigned long bytes =3D PAGE_SIZE << order; + int alloc_count; + int free_count; + unsigned long addr; + unsigned int sz; +=09 + if (!address) { + DRM_MEM_ERROR(area, "Attempt to free address 0\n"); + } else { + /* Unreserve */ + for (addr =3D address, sz =3D bytes; + sz > 0; + addr +=3D PAGE_SIZE, sz -=3D PAGE_SIZE) { +#if LINUX_VERSION_CODE >=3D 0x020400 + /* Argument type changed in 2.4.0-test6/pre8 */ + mem_map_unreserve(virt_to_page(addr)); +#else + mem_map_unreserve(MAP_NR(addr)); +#endif + } + free_pages(address, order); + } +=09 + spin_lock(&drm_mem_lock); + free_count =3D ++drm_mem_stats[area].free_count; + alloc_count =3D drm_mem_stats[area].succeed_count; + drm_mem_stats[area].bytes_freed +=3D bytes; + drm_ram_used -=3D bytes; + spin_unlock(&drm_mem_lock); + if (free_count > alloc_count) { + DRM_MEM_ERROR(area, + "Excess frees: %d frees, %d allocs\n", + free_count, alloc_count); + } +} + +void *drm_ioremap(unsigned long offset, unsigned long size, drm_device_t *= dev) +{ + void *pt; +=09 + if (!size) { + DRM_MEM_ERROR(DRM_MEM_MAPPINGS, + "Mapping 0 bytes at 0x%08lx\n", offset); + return NULL; + } +=09 + if(dev->agp->cant_use_aperture =3D=3D 0) { + goto standard_ioremap; + } else { + drm_map_t *map =3D NULL; + int i; + + for(i =3D 0; i < dev->map_count; i++) { + map =3D dev->maplist[i]; + if (!map) continue; + if (map->offset <=3D offset && + (map->offset + map->size) >=3D (offset + size)) + break; + } + =09 + if(map && map->type =3D=3D _DRM_AGP) { + struct drm_agp_mem *agpmem; + + for(agpmem =3D dev->agp->memory; agpmem; + agpmem =3D agpmem->next) { + if(agpmem->bound <=3D offset && + (agpmem->bound + (agpmem->pages + << PAGE_SHIFT)) >=3D 
(offset + size)) + break; + } + + if(agpmem =3D=3D NULL) + goto standard_ioremap; + + pt =3D agpmem->memory->vmptr + (offset - agpmem->bound); + goto ioremap_success; + } else { + goto standard_ioremap; + } + } + +standard_ioremap: + if (!(pt =3D ioremap(offset, size))) { + spin_lock(&drm_mem_lock); + ++drm_mem_stats[DRM_MEM_MAPPINGS].fail_count; + spin_unlock(&drm_mem_lock); + return NULL; + } + +ioremap_success: + spin_lock(&drm_mem_lock); + ++drm_mem_stats[DRM_MEM_MAPPINGS].succeed_count; + drm_mem_stats[DRM_MEM_MAPPINGS].bytes_allocated +=3D size; + spin_unlock(&drm_mem_lock); + return pt; +} + +void drm_ioremapfree(void *pt, unsigned long size, drm_device_t *dev) +{ + int alloc_count; + int free_count; +=09 + if (!pt) + DRM_MEM_ERROR(DRM_MEM_MAPPINGS, + "Attempt to free NULL pointer\n"); + else if(dev->agp->cant_use_aperture =3D=3D 0) + iounmap(pt); +=09 + spin_lock(&drm_mem_lock); + drm_mem_stats[DRM_MEM_MAPPINGS].bytes_freed +=3D size; + free_count =3D ++drm_mem_stats[DRM_MEM_MAPPINGS].free_count; + alloc_count =3D drm_mem_stats[DRM_MEM_MAPPINGS].succeed_count; + spin_unlock(&drm_mem_lock); + if (free_count > alloc_count) { + DRM_MEM_ERROR(DRM_MEM_MAPPINGS, + "Excess frees: %d frees, %d allocs\n", + free_count, alloc_count); + } +} + +#if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) +agp_memory *drm_alloc_agp(int pages, u32 type) +{ + agp_memory *handle; + + if (!pages) { + DRM_MEM_ERROR(DRM_MEM_TOTALAGP, "Allocating 0 pages\n"); + return NULL; + } +=09 + if ((handle =3D drm_agp_allocate_memory(pages, type))) { + spin_lock(&drm_mem_lock); + ++drm_mem_stats[DRM_MEM_TOTALAGP].succeed_count; + drm_mem_stats[DRM_MEM_TOTALAGP].bytes_allocated + +=3D pages << PAGE_SHIFT; + spin_unlock(&drm_mem_lock); + return handle; + } + spin_lock(&drm_mem_lock); + ++drm_mem_stats[DRM_MEM_TOTALAGP].fail_count; + spin_unlock(&drm_mem_lock); + return NULL; +} + +int drm_free_agp(agp_memory *handle, int pages) +{ + int alloc_count; + int free_count; + int retval =3D -EINVAL; + + if 
(!handle) { + DRM_MEM_ERROR(DRM_MEM_TOTALAGP, + "Attempt to free NULL AGP handle\n"); + return retval; + } +=09 + if (drm_agp_free_memory(handle)) { + spin_lock(&drm_mem_lock); + free_count =3D ++drm_mem_stats[DRM_MEM_TOTALAGP].free_count; + alloc_count =3D drm_mem_stats[DRM_MEM_TOTALAGP].succeed_count; + drm_mem_stats[DRM_MEM_TOTALAGP].bytes_freed + +=3D pages << PAGE_SHIFT; + spin_unlock(&drm_mem_lock); + if (free_count > alloc_count) { + DRM_MEM_ERROR(DRM_MEM_TOTALAGP, + "Excess frees: %d frees, %d allocs\n", + free_count, alloc_count); + } + return 0; + } + return retval; +} + +int drm_bind_agp(agp_memory *handle, unsigned int start) +{ + int retcode =3D -EINVAL; + + if (!handle) { + DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, + "Attempt to bind NULL AGP handle\n"); + return retcode; + } + + if (!(retcode =3D drm_agp_bind_memory(handle, start))) { + spin_lock(&drm_mem_lock); + ++drm_mem_stats[DRM_MEM_BOUNDAGP].succeed_count; + drm_mem_stats[DRM_MEM_BOUNDAGP].bytes_allocated + +=3D handle->page_count << PAGE_SHIFT; + spin_unlock(&drm_mem_lock); + return retcode; + } + spin_lock(&drm_mem_lock); + ++drm_mem_stats[DRM_MEM_BOUNDAGP].fail_count; + spin_unlock(&drm_mem_lock); + return retcode; +} + +int drm_unbind_agp(agp_memory *handle) +{ + int alloc_count; + int free_count; + int retcode =3D -EINVAL; +=09 + if (!handle) { + DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, + "Attempt to unbind NULL AGP handle\n"); + return retcode; + } + + if ((retcode =3D drm_agp_unbind_memory(handle))) return retcode; + spin_lock(&drm_mem_lock); + free_count =3D ++drm_mem_stats[DRM_MEM_BOUNDAGP].free_count; + alloc_count =3D drm_mem_stats[DRM_MEM_BOUNDAGP].succeed_count; + drm_mem_stats[DRM_MEM_BOUNDAGP].bytes_freed + +=3D handle->page_count << PAGE_SHIFT; + spin_unlock(&drm_mem_lock); + if (free_count > alloc_count) { + DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, + "Excess frees: %d frees, %d allocs\n", + free_count, alloc_count); + } + return retcode; +} +#endif diff -urN linux-2.4.13/drivers/char/drm-4.0/mga_bufs.c 
linux-2.4.13-lia/dri= vers/char/drm-4.0/mga_bufs.c --- linux-2.4.13/drivers/char/drm-4.0/mga_bufs.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_bufs.c Thu Oct 4 00:21:40 20= 01 @@ -0,0 +1,629 @@ +/* mga_bufs.c -- IOCTLs to manage buffers -*- linux-c -*- + * Created: Thu Jan 6 01:47:26 2000 by jhartmann@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Authors: Rickard E. 
(Rik) Faith + * Jeff Hartmann + * + */ + +#define __NO_VERSION__ +#include "drmP.h" +#include "mga_drv.h" +#include "linux/un.h" + + +int mga_addbufs_agp(struct inode *inode, struct file *filp, unsigned int c= md, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_desc_t request; + drm_buf_entry_t *entry; + drm_buf_t *buf; + unsigned long offset; + unsigned long agp_offset; + int count; + int order; + int size; + int alignment; + int page_order; + int total; + int byte_count; + int i; + + if (!dma) return -EINVAL; + + if (copy_from_user(&request, + (drm_buf_desc_t *)arg, + sizeof(request))) + return -EFAULT; + + count =3D request.count; + order =3D drm_order(request.size); + size =3D 1 << order; + agp_offset =3D request.agp_start; + alignment =3D (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size; + page_order =3D order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0; + total =3D PAGE_SIZE << page_order; + byte_count =3D 0; + + DRM_DEBUG("count: %d\n", count); + DRM_DEBUG("order: %d\n", order); + DRM_DEBUG("size: %d\n", size); + DRM_DEBUG("agp_offset: %ld\n", agp_offset); + DRM_DEBUG("alignment: %d\n", alignment); + DRM_DEBUG("page_order: %d\n", page_order); + DRM_DEBUG("total: %d\n", total); + DRM_DEBUG("byte_count: %d\n", byte_count); + + if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL; + if (dev->queue_count) return -EBUSY; /* Not while in use */ + spin_lock(&dev->count_lock); + if (dev->buf_use) { + spin_unlock(&dev->count_lock); + return -EBUSY; + } + atomic_inc(&dev->buf_alloc); + spin_unlock(&dev->count_lock); + =20 + down(&dev->struct_sem); + entry =3D &dma->bufs[order]; + if (entry->buf_count) { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -ENOMEM; /* May only call once for each order */ + } + + /* This isnt neccessarily a good limit, but we have to stop a dumb + 32 bit overflow problem below */ + =20 + if ( count < 0 || 
count > 4096) + { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -EINVAL; + } + =20 + entry->buflist =3D drm_alloc(count * sizeof(*entry->buflist), + DRM_MEM_BUFS); + if (!entry->buflist) { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -ENOMEM; + } + memset(entry->buflist, 0, count * sizeof(*entry->buflist)); + =20 + entry->buf_size =3D size; + entry->page_order =3D page_order; + offset =3D 0; + + =20 + while(entry->buf_count < count) { + buf =3D &entry->buflist[entry->buf_count]; + buf->idx =3D dma->buf_count + entry->buf_count; + buf->total =3D alignment; + buf->order =3D order; + buf->used =3D 0; + + buf->offset =3D offset; /* Hrm */ + buf->bus_address =3D dev->agp->base + agp_offset + offset; + buf->address =3D (void *)(agp_offset + offset + dev->agp->base); + buf->next =3D NULL; + buf->waiting =3D 0; + buf->pending =3D 0; + init_waitqueue_head(&buf->dma_wait); + buf->pid =3D 0; + + buf->dev_private =3D drm_alloc(sizeof(drm_mga_buf_priv_t), + DRM_MEM_BUFS); + buf->dev_priv_size =3D sizeof(drm_mga_buf_priv_t); + +#if DRM_DMA_HISTOGRAM + buf->time_queued =3D 0; + buf->time_dispatched =3D 0; + buf->time_completed =3D 0; + buf->time_freed =3D 0; +#endif + offset =3D offset + alignment; + entry->buf_count++; + byte_count +=3D PAGE_SIZE << page_order; + } + =20 + dma->buflist =3D drm_realloc(dma->buflist, + dma->buf_count * sizeof(*dma->buflist), + (dma->buf_count + entry->buf_count) + * sizeof(*dma->buflist), + DRM_MEM_BUFS); + for (i =3D dma->buf_count; i < dma->buf_count + entry->buf_count; i++) + dma->buflist[i] =3D &entry->buflist[i - dma->buf_count]; + =20 + dma->buf_count +=3D entry->buf_count; + + DRM_DEBUG("dma->buf_count : %d\n", dma->buf_count); + + dma->byte_count +=3D byte_count; + + DRM_DEBUG("entry->buf_count : %d\n", entry->buf_count); + + drm_freelist_create(&entry->freelist, entry->buf_count); + for (i =3D 0; i < entry->buf_count; i++) { + drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]); + } + =20 + 
up(&dev->struct_sem); + =20 + request.count =3D entry->buf_count; + request.size =3D size; + =20 + if (copy_to_user((drm_buf_desc_t *)arg, + &request, + sizeof(request))) + return -EFAULT; + =20 + atomic_dec(&dev->buf_alloc); + + DRM_DEBUG("count: %d\n", count); + DRM_DEBUG("order: %d\n", order); + DRM_DEBUG("size: %d\n", size); + DRM_DEBUG("agp_offset: %ld\n", agp_offset); + DRM_DEBUG("alignment: %d\n", alignment); + DRM_DEBUG("page_order: %d\n", page_order); + DRM_DEBUG("total: %d\n", total); + DRM_DEBUG("byte_count: %d\n", byte_count); + + dma->flags =3D _DRM_DMA_USE_AGP; + + DRM_DEBUG("dma->flags : %x\n", dma->flags); + + return 0; +} + +int mga_addbufs_pci(struct inode *inode, struct file *filp, unsigned int c= md, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_desc_t request; + int count; + int order; + int size; + int total; + int page_order; + drm_buf_entry_t *entry; + unsigned long page; + drm_buf_t *buf; + int alignment; + unsigned long offset; + int i; + int byte_count; + int page_count; + + if (!dma) return -EINVAL; + + if (copy_from_user(&request, + (drm_buf_desc_t *)arg, + sizeof(request))) + return -EFAULT; + + count =3D request.count; + order =3D drm_order(request.size); + size =3D 1 << order; +=09 + DRM_DEBUG("count =3D %d, size =3D %d (%d), order =3D %d, queue_count =3D = %d\n", + request.count, request.size, size, order, dev->queue_count); + + if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL; + if (dev->queue_count) return -EBUSY; /* Not while in use */ + + alignment =3D (request.flags & _DRM_PAGE_ALIGN) ? PAGE_ALIGN(size):size; + page_order =3D order - PAGE_SHIFT > 0 ? 
order - PAGE_SHIFT : 0; + total =3D PAGE_SIZE << page_order; + + spin_lock(&dev->count_lock); + if (dev->buf_use) { + spin_unlock(&dev->count_lock); + return -EBUSY; + } + atomic_inc(&dev->buf_alloc); + spin_unlock(&dev->count_lock); +=09 + down(&dev->struct_sem); + entry =3D &dma->bufs[order]; + if (entry->buf_count) { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -ENOMEM; /* May only call once for each order */ + } +=09 + if(count < 0 || count > 4096) + { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -EINVAL; + } +=09 + entry->buflist =3D drm_alloc(count * sizeof(*entry->buflist), + DRM_MEM_BUFS); + if (!entry->buflist) { + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -ENOMEM; + } + memset(entry->buflist, 0, count * sizeof(*entry->buflist)); + + entry->seglist =3D drm_alloc(count * sizeof(*entry->seglist), + DRM_MEM_SEGS); + if (!entry->seglist) { + drm_free(entry->buflist, + count * sizeof(*entry->buflist), + DRM_MEM_BUFS); + up(&dev->struct_sem); + atomic_dec(&dev->buf_alloc); + return -ENOMEM; + } + memset(entry->seglist, 0, count * sizeof(*entry->seglist)); + + dma->pagelist =3D drm_realloc(dma->pagelist, + dma->page_count * sizeof(*dma->pagelist), + (dma->page_count + (count << page_order)) + * sizeof(*dma->pagelist), + DRM_MEM_PAGES); + DRM_DEBUG("pagelist: %d entries\n", + dma->page_count + (count << page_order)); + + + entry->buf_size =3D size; + entry->page_order =3D page_order; + byte_count =3D 0; + page_count =3D 0; + while (entry->buf_count < count) { + if (!(page =3D drm_alloc_pages(page_order, DRM_MEM_DMA))) break; + entry->seglist[entry->seg_count++] =3D page; + for (i =3D 0; i < (1 << page_order); i++) { + DRM_DEBUG("page %d @ 0x%08lx\n", + dma->page_count + page_count, + page + PAGE_SIZE * i); + dma->pagelist[dma->page_count + page_count++] + =3D page + PAGE_SIZE * i; + } + for (offset =3D 0; + offset + size <=3D total && entry->buf_count < count; + offset +=3D alignment, ++entry->buf_count) { 
+ buf =3D &entry->buflist[entry->buf_count]; + buf->idx =3D dma->buf_count + entry->buf_count; + buf->total =3D alignment; + buf->order =3D order; + buf->used =3D 0; + buf->offset =3D (dma->byte_count + byte_count + offset); + buf->address =3D (void *)(page + offset); + buf->next =3D NULL; + buf->waiting =3D 0; + buf->pending =3D 0; + init_waitqueue_head(&buf->dma_wait); + buf->pid =3D 0; +#if DRM_DMA_HISTOGRAM + buf->time_queued =3D 0; + buf->time_dispatched =3D 0; + buf->time_completed =3D 0; + buf->time_freed =3D 0; +#endif + DRM_DEBUG("buffer %d @ %p\n", + entry->buf_count, buf->address); + } + byte_count +=3D PAGE_SIZE << page_order; + } + + dma->buflist =3D drm_realloc(dma->buflist, + dma->buf_count * sizeof(*dma->buflist), + (dma->buf_count + entry->buf_count) + * sizeof(*dma->buflist), + DRM_MEM_BUFS); + for (i =3D dma->buf_count; i < dma->buf_count + entry->buf_count; i++) + dma->buflist[i] =3D &entry->buflist[i - dma->buf_count]; + + dma->buf_count +=3D entry->buf_count; + dma->seg_count +=3D entry->seg_count; + dma->page_count +=3D entry->seg_count << page_order; + dma->byte_count +=3D PAGE_SIZE * (entry->seg_count << page_order); +=09 + drm_freelist_create(&entry->freelist, entry->buf_count); + for (i =3D 0; i < entry->buf_count; i++) { + drm_freelist_put(dev, &entry->freelist, &entry->buflist[i]); + } +=09 + up(&dev->struct_sem); + + request.count =3D entry->buf_count; + request.size =3D size; + + if (copy_to_user((drm_buf_desc_t *)arg, + &request, + sizeof(request))) + return -EFAULT; +=09 + atomic_dec(&dev->buf_alloc); + return 0; +} + +int mga_addbufs(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_buf_desc_t request; + + if (copy_from_user(&request, + (drm_buf_desc_t *)arg, + sizeof(request))) + return -EFAULT; + + if(request.flags & _DRM_AGP_BUFFER) + return mga_addbufs_agp(inode, filp, cmd, arg); + else + return mga_addbufs_pci(inode, filp, cmd, arg); +} + +int mga_infobufs(struct inode *inode, struct file 
*filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_info_t request; + int i; + int count; + + if (!dma) return -EINVAL; + + spin_lock(&dev->count_lock); + if (atomic_read(&dev->buf_alloc)) { + spin_unlock(&dev->count_lock); + return -EBUSY; + } + ++dev->buf_use; /* Can't allocate more after this call */ + spin_unlock(&dev->count_lock); + + if (copy_from_user(&request, + (drm_buf_info_t *)arg, + sizeof(request))) + return -EFAULT; + + for (i =3D 0, count =3D 0; i < DRM_MAX_ORDER+1; i++) { + if (dma->bufs[i].buf_count) ++count; + } +=09 + if (request.count >=3D count) { + for (i =3D 0, count =3D 0; i < DRM_MAX_ORDER+1; i++) { + if (dma->bufs[i].buf_count) { + if (copy_to_user(&request.list[count].count, + &dma->bufs[i].buf_count, + sizeof(dma->bufs[0] + .buf_count)) || + copy_to_user(&request.list[count].size, + &dma->bufs[i].buf_size, + sizeof(dma->bufs[0].buf_size)) || + copy_to_user(&request.list[count].low_mark, + &dma->bufs[i] + .freelist.low_mark, + sizeof(dma->bufs[0] + .freelist.low_mark)) || + copy_to_user(&request.list[count] + .high_mark, + &dma->bufs[i] + .freelist.high_mark, + sizeof(dma->bufs[0] + .freelist.high_mark))) + return -EFAULT; + ++count; + } + } + } + request.count =3D count; + + if (copy_to_user((drm_buf_info_t *)arg, + &request, + sizeof(request))) + return -EFAULT; +=09 + return 0; +} + +int mga_markbufs(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_desc_t request; + int order; + drm_buf_entry_t *entry; + + if (!dma) return -EINVAL; + + if (copy_from_user(&request, (drm_buf_desc_t *)arg, sizeof(request))) + return -EFAULT; + + order =3D drm_order(request.size); + if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER) return -EINVAL; + entry =3D &dma->bufs[order]; 
+ + if (request.low_mark < 0 || request.low_mark > entry->buf_count) + return -EINVAL; + if (request.high_mark < 0 || request.high_mark > entry->buf_count) + return -EINVAL; + + entry->freelist.low_mark =3D request.low_mark; + entry->freelist.high_mark =3D request.high_mark; +=09 + return 0; +} + +int mga_freebufs(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + drm_buf_free_t request; + int i; + int idx; + drm_buf_t *buf; + + if (!dma) return -EINVAL; + + if (copy_from_user(&request, + (drm_buf_free_t *)arg, + sizeof(request))) + return -EFAULT; + + for (i =3D 0; i < request.count; i++) { + if (copy_from_user(&idx, + &request.list[i], + sizeof(idx))) + return -EFAULT; + if (idx < 0 || idx >=3D dma->buf_count) { + DRM_ERROR("Index %d (of %d max)\n", + idx, dma->buf_count - 1); + return -EINVAL; + } + buf =3D dma->buflist[idx]; + if (buf->pid !=3D current->pid) { + DRM_ERROR("Process %d freeing buffer owned by %d\n", + current->pid, buf->pid); + return -EINVAL; + } + drm_free_buffer(dev, buf); + } +=09 + return 0; +} + +int mga_mapbufs(struct inode *inode, struct file *filp, unsigned int cmd, + unsigned long arg) +{ + drm_file_t *priv =3D filp->private_data; + drm_device_t *dev =3D priv->dev; + drm_device_dma_t *dma =3D dev->dma; + int retcode =3D 0; + const int zero =3D 0; + unsigned long virtual; + unsigned long address; + drm_buf_map_t request; + int i; + + if (!dma) return -EINVAL; +=09 + spin_lock(&dev->count_lock); + if (atomic_read(&dev->buf_alloc)) { + spin_unlock(&dev->count_lock); + return -EBUSY; + } + ++dev->buf_use; /* Can't allocate more after this call */ + spin_unlock(&dev->count_lock); + + if (copy_from_user(&request, + (drm_buf_map_t *)arg, + sizeof(request))) + return -EFAULT; + + if (request.count >=3D dma->buf_count) { + if(dma->flags & _DRM_DMA_USE_AGP) { + drm_mga_private_t *dev_priv =3D 
dev->dev_private; + drm_map_t *map =3D NULL; + =20 + map =3D dev->maplist[dev_priv->buffer_map_idx]; + if (!map) { + retcode =3D -EINVAL; + goto done; + } + + DRM_DEBUG("map->offset : %lx\n", map->offset); + DRM_DEBUG("map->size : %lx\n", map->size); + DRM_DEBUG("map->type : %d\n", map->type); + DRM_DEBUG("map->flags : %x\n", map->flags); + DRM_DEBUG("map->handle : %p\n", map->handle); + DRM_DEBUG("map->mtrr : %d\n", map->mtrr); + down_write(¤t->mm->mmap_sem); + virtual =3D do_mmap(filp, 0, map->size,=20 + PROT_READ|PROT_WRITE, + MAP_SHARED,=20 + (unsigned long)map->offset); + up_write(¤t->mm->mmap_sem); + } else { + down_write(¤t->mm->mmap_sem); + virtual =3D do_mmap(filp, 0, dma->byte_count, + PROT_READ|PROT_WRITE, MAP_SHARED, 0); + up_write(¤t->mm->mmap_sem); + } + if (virtual > -1024UL) { + /* Real error */ + DRM_DEBUG("mmap error\n"); + retcode =3D (signed long)virtual; + goto done; + } + request.virtual =3D (void *)virtual; + =20 + for (i =3D 0; i < dma->buf_count; i++) { + if (copy_to_user(&request.list[i].idx, + &dma->buflist[i]->idx, + sizeof(request.list[0].idx))) { + retcode =3D -EFAULT; + goto done; + } + if (copy_to_user(&request.list[i].total, + &dma->buflist[i]->total, + sizeof(request.list[0].total))) { + retcode =3D -EFAULT; + goto done; + } + if (copy_to_user(&request.list[i].used, + &zero, + sizeof(zero))) { + retcode =3D -EFAULT; + goto done; + } + address =3D virtual + dma->buflist[i]->offset; + if (copy_to_user(&request.list[i].address, + &address, + sizeof(address))) { + retcode =3D -EFAULT; + goto done; + } + } + } + done: + request.count =3D dma->buf_count; + DRM_DEBUG("%d buffers, retcode =3D %d\n", request.count, retcode); + =20 + if (copy_to_user((drm_buf_map_t *)arg, + &request, + sizeof(request))) + return -EFAULT; + + DRM_DEBUG("retcode : %d\n", retcode); + + return retcode; +} diff -urN linux-2.4.13/drivers/char/drm-4.0/mga_context.c linux-2.4.13-lia/= drivers/char/drm-4.0/mga_context.c --- 
linux-2.4.13/drivers/char/drm-4.0/mga_context.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_context.c Thu Oct 4 00:21:40= 2001 @@ -0,0 +1,209 @@ +/* mga_context.c -- IOCTLs for mga contexts -*- linux-c -*- + * Created: Mon Dec 13 09:51:35 1999 by faith@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + *=20 + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + *=20 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + *=20 + * Author: Rickard E. 
(Rik) Faith
+ *	    Jeff Hartmann
+ *
+ */
+
+#define __NO_VERSION__
+#include "drmP.h"
+#include "mga_drv.h"
+
+static int mga_alloc_queue(drm_device_t *dev)
+{
+	return drm_ctxbitmap_next(dev);
+}
+
+int mga_context_switch(drm_device_t *dev, int old, int new)
+{
+	char buf[64];
+
+	atomic_inc(&dev->total_ctx);
+
+	if (test_and_set_bit(0, &dev->context_flag)) {
+		DRM_ERROR("Reentering -- FIXME\n");
+		return -EBUSY;
+	}
+
+#if DRM_DMA_HISTOGRAM
+	dev->ctx_start = get_cycles();
+#endif
+
+	DRM_DEBUG("Context switch from %d to %d\n", old, new);
+
+	if (new == dev->last_context) {
+		clear_bit(0, &dev->context_flag);
+		return 0;
+	}
+
+	if (drm_flags & DRM_FLAG_NOCTX) {
+		mga_context_switch_complete(dev, new);
+	} else {
+		sprintf(buf, "C %d %d\n", old, new);
+		drm_write_string(dev, buf);
+	}
+
+	return 0;
+}
+
+int mga_context_switch_complete(drm_device_t *dev, int new)
+{
+	dev->last_context = new;  /* PRE/POST: This is the _only_ writer. */
+	dev->last_switch  = jiffies;
+
+	if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
+		DRM_ERROR("Lock isn't held after context switch\n");
+	}
+
+	/* If a context switch is ever initiated
+	   when the kernel holds the lock, release
+	   that lock here. */
+#if DRM_DMA_HISTOGRAM
+	atomic_inc(&dev->histo.ctx[drm_histogram_slot(get_cycles()
+						      - dev->ctx_start)]);
+
+#endif
+	clear_bit(0, &dev->context_flag);
+	wake_up(&dev->context_wait);
+
+	return 0;
+}
+
+int mga_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_ctx_res_t res;
+	drm_ctx_t     ctx;
+	int           i;
+
+	if (copy_from_user(&res, (drm_ctx_res_t *)arg, sizeof(res)))
+		return -EFAULT;
+	if (res.count >= DRM_RESERVED_CONTEXTS) {
+		memset(&ctx, 0, sizeof(ctx));
+		for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
+			ctx.handle = i;
+			if (copy_to_user(&res.contexts[i],
+					 &i,
+					 sizeof(i)))
+				return -EFAULT;
+		}
+	}
+	res.count = DRM_RESERVED_CONTEXTS;
+	if (copy_to_user((drm_ctx_res_t *)arg, &res, sizeof(res)))
+		return -EFAULT;
+	return 0;
+}
+
+int mga_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t   *priv = filp->private_data;
+	drm_device_t *dev  = priv->dev;
+	drm_ctx_t    ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	if ((ctx.handle = mga_alloc_queue(dev)) == DRM_KERNEL_CONTEXT) {
+				/* Skip kernel's context and get a new one. */
+		ctx.handle = mga_alloc_queue(dev);
+	}
+	if (ctx.handle == -1) {
+		return -ENOMEM;
+	}
+	DRM_DEBUG("%d\n", ctx.handle);
+	if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+		return -EFAULT;
+	return 0;
+}
+
+int mga_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	/* This does nothing for the mga */
+	return 0;
+}
+
+int mga_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_ctx_t ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	/* This is 0, because we don't handle any context flags */
+	ctx.flags = 0;
+	if (copy_to_user((drm_ctx_t *)arg, &ctx, sizeof(ctx)))
+		return -EFAULT;
+	return 0;
+}
+
+int mga_switchctx(struct inode *inode, struct file *filp, unsigned int cmd,
+		  unsigned long arg)
+{
+	drm_file_t   *priv = filp->private_data;
+	drm_device_t *dev  = priv->dev;
+	drm_ctx_t    ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+	return mga_context_switch(dev, dev->last_context, ctx.handle);
+}
+
+int mga_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	       unsigned long arg)
+{
+	drm_file_t   *priv = filp->private_data;
+	drm_device_t *dev  = priv->dev;
+	drm_ctx_t    ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+	mga_context_switch_complete(dev, ctx.handle);
+
+	return 0;
+}
+
+int mga_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
+	      unsigned long arg)
+{
+	drm_file_t   *priv = filp->private_data;
+	drm_device_t *dev  = priv->dev;
+	drm_ctx_t    ctx;
+
+	if (copy_from_user(&ctx, (drm_ctx_t *)arg, sizeof(ctx)))
+		return -EFAULT;
+	DRM_DEBUG("%d\n", ctx.handle);
+	if (ctx.handle == DRM_KERNEL_CONTEXT+1) priv->remove_auth_on_close = 1;
+
+	if (ctx.handle != DRM_KERNEL_CONTEXT) {
+		drm_ctxbitmap_free(dev, ctx.handle);
+	}
+
+	return 0;
+}
diff -urN
linux-2.4.13/drivers/char/drm-4.0/mga_dma.c linux-2.4.13-lia/driv= ers/char/drm-4.0/mga_dma.c --- linux-2.4.13/drivers/char/drm-4.0/mga_dma.c Wed Dec 31 16:00:00 1969 +++ linux-2.4.13-lia/drivers/char/drm-4.0/mga_dma.c Thu Oct 4 00:21:40 2001 @@ -0,0 +1,1059 @@ +/* mga_dma.c -- DMA support for mga g200/g400 -*- linux-c -*- + * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com + * + * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. + * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software= "), + * to deal in the Software without restriction, including without limitati= on + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the ne= xt + * paragraph) shall be included in all copies or substantial portions of t= he + * Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS= OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES= OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR O= THER + * DEALINGS IN THE SOFTWARE. + * + * Authors: Rickard E. 
(Rik) Faith + * Jeff Hartmann + * Keith Whitwell + * + */ + +#define __NO_VERSION__ +#include "drmP.h" +#include "mga_drv.h" + +#include /* For task queue support */ + +#define MGA_REG(reg) 2 +#define MGA_BASE(reg) ((unsigned long) \ + ((drm_device_t *)dev)->maplist[MGA_REG(reg)]->handle) +#define MGA_ADDR(reg) (MGA_BASE(reg) + reg) +#define MGA_DEREF(reg) *(__volatile__ int *)MGA_ADDR(reg) +#define MGA_READ(reg) MGA_DEREF(reg) +#define MGA_WRITE(reg,val) do { MGA_DEREF(reg) =3D val; } while (0) + +#define PDEA_pagpxfer_enable 0x2 + +static int mga_flush_queue(drm_device_t *dev); + +static unsigned long mga_alloc_page(drm_device_t *dev) +{ + unsigned long address; + + address =3D __get_free_page(GFP_KERNEL); + if(address =3D 0UL) { + return 0; + } + atomic_inc(&virt_to_page(address)->count); + set_bit(PG_reserved, &virt_to_page(address)->flags); + + return address; +} + +static void mga_free_page(drm_device_t *dev, unsigned long page) +{ + if(!page) return; + atomic_dec(&virt_to_page(page)->count); + clear_bit(PG_reserved, &virt_to_page(page)->flags); + free_page(page); + return; +} + +static void mga_delay(void) +{ + return; +} + +/* These are two age tags that will never be sent to + * the hardware */ +#define MGA_BUF_USED 0xffffffff +#define MGA_BUF_FREE 0 + +static int mga_freelist_init(drm_device_t *dev) +{ + drm_device_dma_t *dma =3D dev->dma; + drm_buf_t *buf; + drm_mga_buf_priv_t *buf_priv; + drm_mga_private_t *dev_priv =3D (drm_mga_private_t *)dev->dev_priva= te; + drm_mga_freelist_t *item; + int i; + + dev_priv->head =3D drm_alloc(sizeof(drm_mga_freelist_t), DRM_MEM_DRIVE= R); + if(dev_priv->head =3D NULL) return -ENOMEM; + memset(dev_priv->head, 0, sizeof(drm_mga_freelist_t)); + dev_priv->head->age =3D MGA_BUF_USED; + + for (i =3D 0; i < dma->buf_count; i++) { + buf =3D dma->buflist[ i ]; + buf_priv =3D buf->dev_private; + item =3D drm_alloc(sizeof(drm_mga_freelist_t), + DRM_MEM_DRIVER); + if(item =3D NULL) return -ENOMEM; + memset(item, 0, 
sizeof(drm_mga_freelist_t)); + item->age =3D MGA_BUF_FREE; + item->prev =3D dev_priv->head; + item->next =3D dev_priv->head->next; + if(dev_priv->head->next !=3D NULL) + dev_priv->head->next->prev =3D item; + if(item->next =3D NULL) dev_priv->tail =3D item; + item->buf =3D buf; + buf_priv->my_freelist =3D item; + buf_priv->discard =3D 0; + buf_priv->dispatched =3D 0; + dev_priv->head->next =3D item; + } + + return 0; +} + +static void mga_freelist_cleanup(drm_device_t *dev) +{ + drm_mga_private_t *dev_priv =3D (drm_mga_private_t *)dev->dev_priva= te; + drm_mga_freelist_t *item; + drm_mga_freelist_t *prev; + + item =3D dev_priv->head; + while(item) { + prev =3D item; + item =3D item->next; + drm_free(prev, sizeof(drm_mga_freelist_t), DRM_MEM_DRIVER); + } + + dev_priv->head =3D dev_priv->tail =3D NULL; +} + +/* Frees dispatch lock */ +static inline void mga_dma_quiescent(drm_device_t *dev) +{ + drm_device_dma_t *dma =3D dev->dma; + drm_mga_private_t *dev_priv =3D (drm_mga_private_t *)dev->dev_private; + drm_mga_sarea_t *sarea_priv =3D dev_priv->sarea_priv; + unsigned long end; + int i; + + DRM_DEBUG("dispatch_status =3D 0x%02lx\n", dev_priv->dispatch_status); + end =3D jiffies + (HZ*3); + while(1) { + if(!test_and_set_bit(MGA_IN_DISPATCH, + &dev_priv->dispatch_status)) { + break; + } + if((signed)(end - jiffies) <=3D 0) { + DRM_ERROR("irqs: %d wanted %d\n", + atomic_read(&dev->total_irq), + atomic_read(&dma->total_lost)); + DRM_ERROR("lockup: dispatch_status =3D 0x%02lx," + " jiffies =3D %lu, end =3D %lu\n", + dev_priv->dispatch_status, jiffies, end); + return; + } + for (i =3D 0 ; i < 2000 ; i++) mga_delay(); + } + end =3D jiffies + (HZ*3); + DRM_DEBUG("quiescent status : %x\n", MGA_READ(MGAREG_STATUS)); + while((MGA_READ(MGAREG_STATUS) & 0x00030001) !=3D 0x00020000) { + if((signed)(end - jiffies) <=3D 0) { + DRM_ERROR("irqs: %d wanted