path: root/arch/ia64/kernel
Age  Commit message  Author
2006-03-21  Pull bsp-removal into release branch  (Tony Luck)
2006-03-08  [IA64] Fix race in the accessed/dirty bit handlers  (Christoph Lameter)
A pte may be zapped by the swapper, an exiting process, unmapping, or page migration while the accessed or dirty bit handlers are about to run. In that case the accessed or dirty bit is set on a zeroed pte, which leads the VM to conclude that this is a swap pte. This may lead to:
- messages from the VM like "swap_free: Bad swap file entry 4000000000000000"
- processes being aborted: "swap_dup: Bad swap file entry 4000000000000000 VM: killing process ...."
Page migration is particularly prone to creating this race since it needs to remove and restore page table entries.
The fix here is to check for the present bit and simply not update the pte if the page is no longer present. If the page is not present, then the fault handler should run next, which will take care of the problem by bringing the page back and then marking the page dirty or moving it onto the active list.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
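The handlers themselves are hand-written assembly in ivt.S, so the following is only a conceptual C sketch of the check being added (the helper name mark_accessed_dirty is illustrative, not from the patch):

    static void mark_accessed_dirty(pte_t *ptep)
    {
            pte_t pte = *ptep;

            /* The fix described above: a pte zapped under us must not be
             * touched, otherwise the zeroed entry starts to look like a
             * swap pte to the VM. */
            if (!pte_present(pte))
                    return;

            set_pte(ptep, pte_mkdirty(pte_mkyoung(pte)));
    }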
2006-03-07  [IA64] mca recovery return value when no bus check  (Russ Anderson)
When there is no bus check, the return code should be failure, not success. Signed-off-by: Russ Anderson (rja@sgi.com) Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-03-07  [IA64] don't report !sn2 or !summit hardware as an error  (Bjorn Helgaas)
This stuff is all in the generic ia64 kernel, and the new initcall error reporting complains about them. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-03-07  [IA64] Increase severity of MCA recovery messages  (Russ Anderson)
The MCA recovery messages are currently KERN_DEBUG, so they don't show up in /var/log/messages (by default). Increase the severity to KERN_ERR, for the initial message (and also add the physical address to this message). Leave the successful isolation message as KERN_DEBUG, but increase the severity when isolation fails to KERN_CRIT. [Russ' patch made these all KERN_CRIT] Signed-off-by: Russ Anderson (rja@sgi.com) Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-28  [IA64] sysctl option to silence unaligned trap warnings  (Jes Sorensen)
Allow the sysadmin to disable all warnings about userland apps making unaligned accesses by using:
# echo 1 > /proc/sys/kernel/ignore-unaligned-usertrap
rather than having to use prctl on a process-by-process basis. The default behaviour leaves the warnings enabled.
Signed-off-by: Jes Sorensen <jes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
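The per-process alternative mentioned above is the PR_SET_UNALIGN prctl; a minimal userspace sketch (illustrative only, not part of the patch):

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            /* Silence unaligned-access warnings for this process only. */
            if (prctl(PR_SET_UNALIGN, PR_UNALIGN_NOPRINT, 0, 0, 0))
                    perror("prctl(PR_SET_UNALIGN)");
            return 0;
    }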
2006-02-28  [IA64] cleanup in fsys.S  (Ken Chen)
Beautify the coding style for zeroing the end of the fsyscall_table entries. Remove the misleading __NR_syscall_last and add more comments. Drop the (now unneeded) "guard against failure to increase NR_syscalls". Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-27  [IA64] die_if_kernel() can return  (Tony Luck)
arch/ia64/kernel/unaligned.c erroneously marked die_if_kernel() with a "noreturn" attribute ... which is silly (it returns whenever the argument regs say that the fault happened in user mode, as one might expect given the "if_kernel" part of its name!). Thanks to Alan and Gareth for pointing this out. Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-27  [IA64] Delete a redundant instruction in unaligned_access  (Zhang, Yanmin)
unaligned_access fetches cr.ipsr and then calls dispatch_unaligned_handler, but dispatch_unaligned_handler fetches cr.ipsr again, so the first fetch is redundant and is deleted. Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-16  [IA64] Count disabled cpus as potential hot-pluggable CPUs  (Ashok Raj)
Minor updates to the earlier patch:
- added documentation covering ia64 as well;
- minor clarification on how to use disabled cpus;
- used plain max instead of max_t, per Andrew Morton.
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-16  [IA64] Missing check for TIF_WORK if trace/audit enabled  (Jack Steiner)
It appears that if auditing is enabled, the kernel fails to check for pending signals before returning to user mode. Signed-off-by: Jack Steiner <steiner@sgi.com> Acked-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-15  Pull fix-cpu-possible-map into release branch  (Tony Luck)
2006-02-15  [IA64] support panic_on_oops sysctl  (Horms)
Trivial port of this feature from i386. As it stands, panic_on_oops does nothing on ia64. Signed-Off-By: Horms <horms@verge.net.au> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-15  [IA64] ia64: simplify and fix udelay()  (hawkes@sgi.com)
The original ia64 udelay() was simple, but flawed for platforms without synchronized ITCs: a preemption and migration to another CPU during the while-loop likely resulted in too-early termination or very, very lengthy looping. The first fix (now in 2.6.15) broke the delay loop into smaller, non-preemptible chunks, re-enabling preemption between the chunks. This fix is flawed in that the total udelay is computed to be the sum of just the non-preemptible while-loop pieces, i.e., not counting the time spent in the interim preemptible periods. If an interrupt or a migration occurs during one of these interim periods, then that time is invisible and only serves to lengthen the effective udelay(). This new fix backs out the current flawed fix and returns to a simple udelay(), fully preemptible and interruptible. It implements two simple alternative udelay() routines: one a default generic version that uses ia64_get_itc(), and the other an sn-specific version that uses that platform's RTC. Signed-off-by: John Hawkes <hawkes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
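A minimal sketch of the generic ITC-based variant described above (illustrative; the cyc_per_usec field name follows ia64 cpuinfo conventions and is an assumption here):

    /* Spin until 'usecs' microseconds worth of ITC cycles have elapsed. */
    static void itc_udelay(unsigned long usecs)
    {
            unsigned long start = ia64_get_itc();
            unsigned long end = start + usecs * local_cpu_data->cyc_per_usec;

            while (time_before(ia64_get_itc(), end))
                    cpu_relax();
    }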
2006-02-15  [IA64] Remove duplicate EXPORT_SYMBOLs  (Andreas Schwab)
Remove symbol exports from ia64_ksyms.c that are already exported in lib/string.c. Signed-off-by: Andreas Schwab <schwab@suse.de> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-14  [IA64] Count disabled cpus as potential hot-pluggable CPUs  (Ashok Raj)
Have a facility to account for potentially hot-pluggable CPUs. ACPI doesn't give a deterministic method to find hot-pluggable CPUs, hence we use 2 methods to assist:
- the BIOS can mark potentially hot-pluggable CPUs as disabled in the MADT tables;
- the user can specify the number of hot-pluggable CPUs via the parameter additional_cpus=X.
The option is enabled only if ACPI_CONFIG_HOTPLUG_CPU=y, which enables the physical hotplug option. Without it the user can still use logical onlining and offlining of CPUs by enabling CONFIG_HOTPLUG_CPU=y. Adds more bits to cpu_possible_map for potentially hot-pluggable cpus.
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-14  [IA64] Dont set NR_CPUS for cpu_possible_map when CPU hotplug is enabled.  (Ashok Raj)
Do not set all NR_CPUS bits in cpu_possible_map when ACPI_CONFIG_HOTPLUG_CPU is set. Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-09  Pull new-syscalls into release branch  (Tony Luck)
2006-02-09  [IA64] mca_drv: Add minstate validation  (Hidetoshi Seto)
The MCA driver can cause a panic if the kernel gets state info with no minstate. This patch adds minstate validation before handling it. Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-08  [IA64] unshare system call registration for ia64  (Janak Desai)
Registers the unshare system call for the ia64 architecture. Reserves space for ppoll and pselect, and adds unshare at system call number 1296. Signed-off-by: Janak Desai <janak@us.ibm.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-07  [IA64] Fix CONFIG_PRINTK_TIME  (Tony Luck)
There were two problems with enabling the PRINTK_TIME config option:
1) The first calls to printk() occur before the per-cpu data virtual address is pinned into the TLB, so sched_clock() can fault.
2) sched_clock() is based on ar.itc, which may not be synchronized across cpus.
Ken Chen started this patch, Tony Luck tinkered with it, and Jes Sorensen perfected it. Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-07  [IA64] Fix wrong use of memparse in efi.c  (Zou Nan hai)
The check of (end != cp) after memparse in efi.c looks wrong to me. The result is that we can't use the mem= and max_addr= kernel parameters at the same time. The following patch removes the check, just like other arches do. Signed-off-by: Zou Nan hai <nanhai.zou@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-07  [IA64] Fix a possible buffer overflow in efi.c  (Zou Nan hai)
Make sure to save space for the trailing '\0'. Signed-off-by: Zou Nan hai <nanhai.zou@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-06  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6  (Linus Torvalds)
2006-02-06  [IA64] add syscall entry for *at()  (Chen, Kenneth W)
Wire up the ia64 syscalls for *at() functions. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-03  [PATCH] Export cpu topology in sysfs  (Zhang, Yanmin)
The patch implements cpu topology exportation via sysfs. Items (attributes) are similar to /proc/cpuinfo:
1) /sys/devices/system/cpu/cpuX/topology/physical_package_id: the physical package id of cpu X;
2) /sys/devices/system/cpu/cpuX/topology/core_id: the cpu core id of cpu X;
3) /sys/devices/system/cpu/cpuX/topology/thread_siblings: the thread siblings of cpu X in the same core;
4) /sys/devices/system/cpu/cpuX/topology/core_siblings: the thread siblings of cpu X in the same physical package.
To implement it in an architecture-neutral way, a new source file, drivers/base/topology.c, exports the 4 attributes. If an architecture wants to support this feature, it just needs to implement 4 defines, typically in include/asm-XXX/topology.h. The 4 defines are:
#define topology_physical_package_id(cpu)
#define topology_core_id(cpu)
#define topology_thread_siblings(cpu)
#define topology_core_siblings(cpu)
The type of the **_id values is int. The type of the siblings is cpumask_t. To be consistent on all architectures, the 4 attributes should have default values if their values are unavailable:
1) physical_package_id: if the cpu has no physical package id, -1 is the default value;
2) core_id: if the cpu doesn't support multi-core, its core id is 0;
3) thread_siblings: just include itself, if the cpu doesn't support HT/multi-thread;
4) core_siblings: just include itself, if the cpu doesn't support multi-core and HT/multi-thread.
So be careful when declaring the 4 defines in include/asm-XXX/topology.h. If an attribute isn't defined on an architecture, it won't be exported.
Thanks to Nathan, Greg, Andi, Paul and Venki. The patch provides defines for i386/x86_64/ia64.
Signed-off-by: Zhang, Yanmin <yanmin.zhang@intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
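For illustration only (hypothetical example, not part of the patch), a small userspace reader of the new attributes could look like this; an attribute is simply absent on architectures that don't define the corresponding macro:

    #include <stdio.h>

    /* Print one topology attribute of a given cpu, e.g. "core_id". */
    static void show_attr(int cpu, const char *attr)
    {
            char path[128], buf[64];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/topology/%s", cpu, attr);
            f = fopen(path, "r");
            if (!f)
                    return;         /* not exported on this architecture */
            if (fgets(buf, sizeof(buf), f))
                    printf("cpu%d %s: %s", cpu, attr, buf);
            fclose(f);
    }

    int main(void)
    {
            show_attr(0, "physical_package_id");
            show_attr(0, "core_id");
            show_attr(0, "thread_siblings");
            show_attr(0, "core_siblings");
            return 0;
    }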
2006-02-02  [IA64] avoid broken SAL_CACHE_FLUSH implementations  (Bjorn Helgaas)
If SAL_CACHE_FLUSH drops interrupts, complain about it and fall back to using PAL_CACHE_FLUSH instead. This is to work around a defect in HP rx5670 firmware: when an interrupt occurs during SAL_CACHE_FLUSH, SAL drops the interrupt but leaves it marked "in-service", which leaves the interrupt (and others of equal or lower priority) masked. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-02-01  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6  (Linus Torvalds)
2006-01-24  [ACPI] merge 3549 4320 4485 4588 4980 5483 5651 acpica asus fops pnpacpi branches into release  (Len Brown)
Signed-off-by: Len Brown <len.brown@intel.com>
2006-01-24  [IA64] Scaling fix for simultaneous unaligned accesses  (Jack Steiner)
Eliminate a hot shared cacheline that occurs if multiple cpus are taking unaligned exceptions. Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-24  [IA64] Set the correct default OS status in the MCA handler  (Keith Owens)
sos->os_status is set to a default value of IA64_MCA_COLD_BOOT for an MCA, but then is incorrectly overwritten with IA64_MCA_SAME_CONTEXT (0). This makes SAL think that all MCAs have been recovered. Signed-off-by: Keith Owens <kaos@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-19  [IA64] Fix UP build with BSP removal support.  (Ashok Raj)
The UP build fails with an undefined reference to force_cpei_retarget, which is defined in arch/ia64/kernel/smpboot.c. Push the unneeded code inside #ifdef CONFIG_HOTPLUG_CPU. Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-19  [IA64] eliminate softlockup warning  (John Hawkes)
Fix an unnecessary softlockup watchdog warning in the ia64 uncached_build_memmap() that occurs occasionally at 256p and always at 512p. The problem occurs at boot time. Signed-off-by: John Hawkes <hawkes@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-19  [IA64] sem2mutex: arch/ia64/kernel/perfmon.c  (Jes Sorensen)
Migrate perfmon from using an old semaphore to a completion handler. Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-16  [IA64] Perfmon for Montecito  (Stephane Eranian)
Add Montecito PMU description table for perfmon2 Signed-off-by: Stephane Eranian <eranian@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64] prevent accidental modification of args in jprobe handler  (Zhang Yanmin)
When a jprobe is hit, the function parameters of the original function should be saved before the jprobe handler is executed, and restored after the jprobe handler is executed, because the jprobe handler might change the register values due to tail call optimization by gcc. Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64] Add hotplug cpu to salinfo.c, replace semaphore with mutex  (Keith Owens)
Add hotplug cpu support to salinfo.c. The cpu_event field is a cpumask so use the cpu_* macros consistently, replacing the existing mixture of cpu_* and *_bit macros. Instead of counting the number of outstanding events in a semaphore and trying to track that count over user space context, interrupt context, non-maskable interrupt context and cpu hotplug, replace the semaphore with a test for "any bits set" combined with a mutex. Modify the locking to make the test for "work to do" an atomic operation. Signed-off-by: Keith Owens <kaos@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64] Handle debug traps in fsys mode  (Jason Uhlenkott)
We need to handle debug traps in fsys mode non-fatally. They can happen now that we have fsyscalls which contain probe instructions. Signed-off-by: Jason Uhlenkott <jasonuhl@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64] Fix conversion of pal_min_state physical address  (Francois Wellenrieter)
On return from INIT handler we must convert the address of the minstate area from a kernel virtual uncached address (0xC...) to physical uncached (0x8...). A typo (or thinko?) in the code converted to physical cached. Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-13  [IA64] Add stub entry to fsys.S for sys_migrate_pages  (Tony Luck)
When this new syscall was added to ia64 in commit 39743889aaf76725152f16aa90ca3c45f6d52da3 fsys.S was forgotten. Add a ".data8 0" there to keep it in step. [Reported by Stephane Eranian] Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-01-12  [PATCH] ia64: task_pt_regs()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-12  [PATCH] ia64: task_thread_info()  (Al Viro)
On ia64 thread_info is at a constant offset from task_struct and the stack is embedded in the same beast. Set __HAVE_THREAD_FUNCTIONS and make task_thread_info() just add a constant. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
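A sketch of what this boils down to on ia64 (treat the exact constant name IA64_TASK_SIZE as an assumption here):

    /* With the stack embedded in task_struct, thread_info lives at a fixed
     * offset from the task pointer, so no per-task pointer is needed. */
    #define task_thread_info(tsk) \
            ((struct thread_info *)((char *)(tsk) + IA64_TASK_SIZE))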
2006-01-12  [PATCH] scheduler cache-hot-autodetect  (akpm@osdl.org)
From: Ingo Molnar <mingo@elte.hu>
This is the latest version of the scheduler cache-hot-auto-tune patch.
The first problem was that detection time scaled with O(N^2), which is unacceptable on larger SMP and NUMA systems. To solve this:
- I've added a 'domain distance' function, which is used to cache measurement results. Each distance is only measured once. This means that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT distances 0 and 1, and on SMP distance 0 is measured. The code walks the domain tree to determine the distance, so it automatically follows whatever hierarchy an architecture sets up. This cuts down on the boot time significantly and removes the O(N^2) limit. The only assumption is that migration costs can be expressed as a function of domain distance - this covers the overwhelming majority of existing systems, and is a good guess even for more asymmetric systems.
  [ People hacking systems that have asymmetries that break this assumption (e.g. different CPU speeds) should experiment a bit with the cpu_distance() function. Adding a ->migration_distance factor to the domain structure would be one possible solution - but let's first see the problem systems, if they exist at all. Let's not overdesign. ]
Another problem was that only a single cache-size was used for measuring the cost of migration, and most architectures didn't set that variable up. Furthermore, a single cache-size does not fit NUMA hierarchies with L3 caches and does not fit HT setups, where different CPUs will often have different 'effective cache sizes'. To solve this problem:
- Instead of relying on a single cache-size provided by the platform and sticking to it, the code now auto-detects the 'effective migration cost' between two measured CPUs by iterating through a wide range of cachesizes. The code searches for the maximum migration cost, which occurs when the working set of the test-workload falls just below the 'effective cache size'. I.e. a real-life optimized search is done for the maximum migration cost, between two real CPUs. This, amongst other things, has the positive effect that if e.g. two CPUs share an L2/L3 cache, a different (and accurate) migration cost will be found than between two CPUs on the same system that don't share any caches. (The reliable measurement of migration costs is tricky - see the source for details.)
Furthermore I've added various boot-time options to override/tune migration behavior.
Firstly, there's a blanket override for autodetection:
	migration_cost=1000,2000,3000
will override the depth 0/1/2 values with 1msec/2msec/3msec values.
Secondly, there's a global factor that can be used to increase (or decrease) the autodetected values:
	migration_factor=120
will increase the autodetected values by 20%. This option is useful to tune things in a workload-dependent way - e.g. if a workload is cache-insensitive then CPU utilization can be maximized by specifying migration_factor=0.
I've tested the autodetection code quite extensively on x86, on 3 P3/Xeon/2MB, and the autodetected values look pretty good:
Dual Celeron (128K L2 cache):
	---------------------
	migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
	---------------------
	          [00]    [01]
	[00]:     -       1.7(1)
	[01]:     1.7(1)  -
	---------------------
	cacheflush times [2]: 0.0 (0) 1.7 (1784008)
	---------------------
Here the slow memory subsystem dominates system performance, and even though caches are small, the migration cost is 1.7 msecs.
Dual HT P4 (512K L2 cache):
	---------------------
	migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
	---------------------
	          [00]    [01]    [02]    [03]
	[00]:     -       0.4(1)  0.0(0)  0.4(1)
	[01]:     0.4(1)  -       0.4(1)  0.0(0)
	[02]:     0.0(0)  0.4(1)  -       0.4(1)
	[03]:     0.4(1)  0.0(0)  0.4(1)  -
	---------------------
	cacheflush times [2]: 0.0 (33900) 0.4 (448514)
	---------------------
Here it can be seen that there is no migration cost between two HT siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.
8-way P3/Xeon [2MB L2 cache]:
	---------------------
	migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
	---------------------
	          [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
	[00]:     -       19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
	[01]:     19.2(1) -       19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
	[02]:     19.2(1) 19.2(1) -       19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
	[03]:     19.2(1) 19.2(1) 19.2(1) -       19.2(1) 19.2(1) 19.2(1) 19.2(1)
	[04]:     19.2(1) 19.2(1) 19.2(1) 19.2(1) -       19.2(1) 19.2(1) 19.2(1)
	[05]:     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) -       19.2(1) 19.2(1)
	[06]:     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) -       19.2(1)
	[07]:     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) -
	---------------------
	cacheflush times [2]: 0.0 (0) 19.2 (19281756)
	---------------------
This one has huge caches and a relatively slow memory subsystem - so the migration cost is 19 msecs.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: <wilder@us.ibm.com>
Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-12  [PATCH] sched: add cacheflush() asm  (Ingo Molnar)
Add per-arch sched_cacheflush() which is a write-back cacheflush used by the migration-cost calibration code at bootup time. Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
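On ia64 this plausibly reduces to a single SAL call; a hedged sketch only (the cache_type argument value meaning "flush both caches" is an assumption here, not taken from the patch):

    static inline void sched_cacheflush(void)
    {
            /* Ask SAL to write back and invalidate the caches. */
            ia64_sal_cache_flush(3);
    }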
2006-01-11  [PATCH] capable/capability.h (arch/)  (Randy Dunlap)
arch: Use <linux/capability.h> where capable() is used. Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-11  [PATCH] kprobes: fix race in recovery of reentrant probe  (Keshavamurthy Anil S)
There is a window where a probe gets removed right after the probe is hit on some different cpu. In this case the probe handlers can't find a matching probe instance related to the break address. We then need to read the original instruction at the break address, to check that it is not a break/int3 instruction, and recover safely. The previous code had a bug where we were not checking for the above race in the case of reentrant probes; the patch below fixes this race. Tested on IA64, Powerpc, x86_64. Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-10  [PATCH] kprobes: arch_remove_kprobe  (Anil S Keshavamurthy)
Currently arch_remove_kprobe() is only implemented/required for x86_64 and powerpc. All other architectures, like IA64, i386 and sparc64, implement a dummy function which is called from the arch-independent kprobes.c file. This patch removes the dummy functions and replaces them with: #define arch_remove_kprobe(p, s) do { } while(0) Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-08  [PATCH] /dev/mem: validate mmap requests  (Bjorn Helgaas)
Add a hook so architectures can validate /dev/mem mmap requests. This is analogous to validation we already perform in the read/write paths.
The identity mapping scheme used on ia64 requires that each 16MB or 64MB granule be accessed with exactly one attribute (write-back or uncacheable). This avoids "attribute aliasing", which can cause a machine check.
Sample problem scenario:
- Machine supports VGA, so it has uncacheable (UC) MMIO at 640K-768K
- efi_memmap_init() discards any write-back (WB) memory in the first granule
- Application (e.g., "hwinfo") mmaps /dev/mem, offset 0
- hwinfo receives UC mapping (the default, since memmap says "no WB here")
- Machine check abort (on chipsets that don't support UC access to WB memory, e.g., sx1000)
In the scenario above, the only choices are:
- Use WB for the hwinfo mmap. Can't do this because it causes attribute aliasing with the UC mapping for the VGA MMIO space.
- Use UC for the hwinfo mmap. Can't do this because the chipset may not support UC for that region.
- Disallow the hwinfo mmap with -EINVAL. That's what this patch does.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
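A sketch of how such a hook can be wired into the /dev/mem mmap path (the hook name valid_mmap_phys_addr_range() and the surrounding code are an approximation, not the verbatim patch):

    static int mmap_mem(struct file *file, struct vm_area_struct *vma)
    {
            size_t size = vma->vm_end - vma->vm_start;

            /* Architecture hook: e.g. ia64 refuses ranges that would mix
             * write-back and uncacheable attributes within one granule. */
            if (!valid_mmap_phys_addr_range(vma->vm_pgoff, size))
                    return -EINVAL;

            if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                                size, vma->vm_page_prot))
                    return -EAGAIN;
            return 0;
    }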
2006-01-08  [PATCH] remove gcc-2 checks  (Andrew Morton)
Remove various things which were checking for gcc-1.x and gcc-2.x compilers. From: Adrian Bunk <bunk@stusta.de>: some documentation updates and removal of some code paths for gcc < 3.2. Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-08  [PATCH] use ptrace_get_task_struct in various places  (Christoph Hellwig)
The ptrace_get_task_struct() helper that I added as part of the ptrace consolidation is useful in a variety of places that currently open-code it. Switch them to the common helpers. Add a ptrace_traceme() helper that needs to be explicitly called, and simplify the ptrace_get_task_struct() interface. We don't need the request argument now, and we return the task_struct directly, using ERR_PTR() for error returns. It's a bit more code in the callers, but we have two sane routines that do one thing well now. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
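The resulting caller pattern described above looks roughly like this (sketch; the wrapper function name is illustrative):

    static long do_ptrace_op(pid_t pid)
    {
            struct task_struct *child;

            child = ptrace_get_task_struct(pid);
            if (IS_ERR(child))
                    return PTR_ERR(child);  /* error came back via ERR_PTR() */

            /* ... operate on child ... */
            put_task_struct(child);
            return 0;
    }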