path: root/arch/ia64/kernel

2006-01-16  [IA64] Perfmon for Montecito (Stephane Eranian)
Add Montecito PMU description table for perfmon2.
Signed-off-by: Stephane Eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2006-01-13  [IA64] prevent accidental modification of args in jprobe handler (Zhang Yanmin)
When a jprobe is hit, the function parameters of the original function must be saved before the jprobe handler executes and restored after it returns, because the jprobe handler might change register values due to tail-call optimization by gcc.
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2006-01-13  [IA64] Add hotplug cpu to salinfo.c, replace semaphore with mutex (Keith Owens)
Add hotplug cpu support to salinfo.c. The cpu_event field is a cpumask, so use the cpu_* macros consistently, replacing the existing mixture of cpu_* and *_bit macros. Instead of counting the number of outstanding events in a semaphore and trying to track that count over user space context, interrupt context, non-maskable interrupt context and cpu hotplug, replace the semaphore with a test for "any bits set" combined with a mutex. Modify the locking to make the test for "work to do" an atomic operation.
Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
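
A minimal sketch of the locking pattern described above - a cpumask tested for "any bits set" plus a mutex - with illustrative names, not the actual salinfo.c code:

        static DEFINE_MUTEX(salinfo_mutex);     /* serializes consumers of the event data */
        static cpumask_t salinfo_cpu_event;     /* one bit per CPU with a pending event */

        /* "work to do" becomes a simple atomic test instead of a semaphore count */
        static int salinfo_work_pending(void)
        {
                return !cpus_empty(salinfo_cpu_event);
        }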

2006-01-13  [IA64] Handle debug traps in fsys mode (Jason Uhlenkott)
We need to handle debug traps in fsys mode non-fatally. They can happen now that we have fsyscalls which contain probe instructions.
Signed-off-by: Jason Uhlenkott <jasonuhl@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2006-01-13  [IA64] Fix conversion of pal_min_state physical address (Francois Wellenrieter)
On return from the INIT handler we must convert the address of the minstate area from a kernel virtual uncached address (0xC...) to physical uncached (0x8...). A typo (or thinko?) in the code converted to physical cached instead.
Signed-off-by: Tony Luck <tony.luck@intel.com>

2006-01-13  [IA64] Add stub entry to fsys.S for sys_migrate_pages (Tony Luck)
When this new syscall was added to ia64 in commit 39743889aaf76725152f16aa90ca3c45f6d52da3, fsys.S was forgotten. Add a ".data8 0" there to keep it in step. [Reported by Stephane Eranian]
Signed-off-by: Tony Luck <tony.luck@intel.com>

2006-01-12  [PATCH] ia64: task_pt_regs() (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

2006-01-12  [PATCH] ia64: task_thread_info() (Al Viro)
On ia64, thread_info is at a constant offset from task_struct and the stack is embedded in the same beast. Set __HAVE_THREAD_FUNCTIONS and make task_thread_info() just add a constant.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
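
A hedged sketch of what the constant-offset lookup amounts to (IA64_TASK_SIZE stands in for the assumed offset of the embedded thread_info; the real macro may differ):

        #define __HAVE_THREAD_FUNCTIONS
        /* thread_info lives at a fixed offset inside task_struct on ia64 */
        #define task_thread_info(tsk) \
                ((struct thread_info *)((char *)(tsk) + IA64_TASK_SIZE))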

2006-01-12  [PATCH] scheduler cache-hot-autodetect (akpm@osdl.org)
From: Ingo Molnar <mingo@elte.hu>

This is the latest version of the scheduler cache-hot-auto-tune patch.

The first problem was that detection time scaled with O(N^2), which is unacceptable on larger SMP and NUMA systems. To solve this:

- I've added a 'domain distance' function, which is used to cache measurement results. Each distance is only measured once. This means that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT distances 0 and 1, and on SMP distance 0 is measured. The code walks the domain tree to determine the distance, so it automatically follows whatever hierarchy an architecture sets up. This cuts down on boot time significantly and removes the O(N^2) limit. The only assumption is that migration costs can be expressed as a function of domain distance - this covers the overwhelming majority of existing systems, and is a good guess even for more asymmetric systems.

[ People hacking systems that have asymmetries that break this assumption (e.g. different CPU speeds) should experiment a bit with the cpu_distance() function. Adding a ->migration_distance factor to the domain structure would be one possible solution - but let's first see the problem systems, if they exist at all. Let's not overdesign. ]

Another problem was that only a single cache size was used for measuring the cost of migration, and most architectures didn't set that variable up. Furthermore, a single cache size does not fit NUMA hierarchies with L3 caches and does not fit HT setups, where different CPUs will often have different 'effective cache sizes'. To solve this problem:

- Instead of relying on a single cache size provided by the platform and sticking to it, the code now auto-detects the 'effective migration cost' between two measured CPUs by iterating through a wide range of cache sizes. The code searches for the maximum migration cost, which occurs when the working set of the test workload falls just below the 'effective cache size'. I.e. a real-life optimized search is done for the maximum migration cost, between two real CPUs.

This, amongst other things, has the positive effect that if e.g. two CPUs share an L2/L3 cache, a different (and accurate) migration cost will be found than between two CPUs on the same system that don't share any caches. (The reliable measurement of migration costs is tricky - see the source for details.)

Furthermore, I've added various boot-time options to override/tune migration behavior.

Firstly, there's a blanket override for autodetection:

	migration_cost=1000,2000,3000

will override the depth 0/1/2 values with 1msec/2msec/3msec values.

Secondly, there's a global factor that can be used to increase (or decrease) the autodetected values:

	migration_factor=120

will increase the autodetected values by 20%. This option is useful to tune things in a workload-dependent way - e.g. if a workload is cache-insensitive then CPU utilization can be maximized by specifying migration_factor=0.

I've tested the autodetection code quite extensively on x86, on 3 P3/Xeon/2MB, and the autodetected values look pretty good:

Dual Celeron (128K L2 cache):

	---------------------
	migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
	---------------------
	          [00]    [01]
	[00]:       -    1.7(1)
	[01]:    1.7(1)    -
	---------------------
	cacheflush times [2]: 0.0 (0) 1.7 (1784008)
	---------------------

Here the slow memory subsystem dominates system performance, and even though caches are small, the migration cost is 1.7 msecs.

Dual HT P4 (512K L2 cache):

	---------------------
	migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
	---------------------
	          [00]    [01]    [02]    [03]
	[00]:       -    0.4(1)  0.0(0)  0.4(1)
	[01]:    0.4(1)    -     0.4(1)  0.0(0)
	[02]:    0.0(0)  0.4(1)    -     0.4(1)
	[03]:    0.4(1)  0.0(0)  0.4(1)    -
	---------------------
	cacheflush times [2]: 0.0 (33900) 0.4 (448514)
	---------------------

Here it can be seen that there is no migration cost between two HT siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.

8-way P3/Xeon [2MB L2 cache]:

	---------------------
	migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
	---------------------
	          [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
	[00]:       -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
	[01]:   19.2(1)    -     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
	[02]:   19.2(1) 19.2(1)    -     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
	[03]:   19.2(1) 19.2(1) 19.2(1)    -     19.2(1) 19.2(1) 19.2(1) 19.2(1)
	[04]:   19.2(1) 19.2(1) 19.2(1) 19.2(1)    -     19.2(1) 19.2(1) 19.2(1)
	[05]:   19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -     19.2(1) 19.2(1)
	[06]:   19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -     19.2(1)
	[07]:   19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -
	---------------------
	cacheflush times [2]: 0.0 (0) 19.2 (19281756)
	---------------------

This one has huge caches and a relatively slow memory subsystem - so the migration cost is 19 msecs.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: <wilder@us.ibm.com>
Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

2006-01-12  [PATCH] sched: add cacheflush() asm (Ingo Molnar)
Add per-arch sched_cacheflush(), which is a write-back cacheflush used by the migration-cost calibration code at bootup time.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
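
On ia64 the hook can be a thin wrapper around a full cache write-back; a sketch assuming SAL's cache-flush service is used (the argument value is an assumption):

        static inline void sched_cacheflush(void)
        {
                ia64_sal_cache_flush(3);        /* 3: flush + write back caches (assumed) */
        }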

2006-01-11  [PATCH] capable/capability.h (arch/) (Randy Dunlap)
arch: use <linux/capability.h> where capable() is used.
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

2006-01-11  [PATCH] kprobes: fix race in recovery of reentrant probe (Keshavamurthy Anil S)
There is a window where a probe gets removed right after the probe is hit on some different cpu. In that case the probe handlers can't find a matching probe instance for the break address, and we need to read the original instruction at the break address to see whether it is still a break/int3 instruction, so we can recover safely. The previous code was not checking for this race in the case of reentrant probes; the patch below fixes that. Tested on IA64, Powerpc, x86_64.
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
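
The recovery path described above, sketched for the int3 case with hypothetical register-field names:

        /* No matching probe instance: the probe may have just been removed on
         * another CPU. If the byte at the break address is no longer an int3,
         * back up the instruction pointer and resume instead of treating the
         * trap as fatal. */
        if (*(u8 *)addr != BREAKPOINT_INSTRUCTION) {
                regs->eip = (unsigned long)addr;        /* field name is illustrative */
                return 1;                               /* handled: not our breakpoint */
        }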

2006-01-10  [PATCH] kprobes: arch_remove_kprobe (Anil S Keshavamurthy)
Currently arch_remove_kprobe() is only implemented/required for x86_64 and powerpc. All other architectures, like IA64, i386 and sparc64, implement a dummy function that is called from the arch-independent kprobes.c file. This patch removes the dummy functions and replaces them with
	#define arch_remove_kprobe(p, s) do { } while(0)
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

2006-01-08  [PATCH] /dev/mem: validate mmap requests (Bjorn Helgaas)
Add a hook so architectures can validate /dev/mem mmap requests. This is analogous to validation we already perform in the read/write paths.

The identity mapping scheme used on ia64 requires that each 16MB or 64MB granule be accessed with exactly one attribute (write-back or uncacheable). This avoids "attribute aliasing", which can cause a machine check.

Sample problem scenario:
- Machine supports VGA, so it has uncacheable (UC) MMIO at 640K-768K
- efi_memmap_init() discards any write-back (WB) memory in the first granule
- Application (e.g., "hwinfo") mmaps /dev/mem, offset 0
- hwinfo receives UC mapping (the default, since memmap says "no WB here")
- Machine check abort (on chipsets that don't support UC access to WB memory, e.g., sx1000)

In the scenario above, the only choices are:
- Use WB for the hwinfo mmap. Can't do this because it causes attribute aliasing with the UC mapping for the VGA MMIO space.
- Use UC for the hwinfo mmap. Can't do this because the chipset may not support UC for that region.
- Disallow the hwinfo mmap with -EINVAL. That's what this patch does.

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
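
A hedged sketch of how such a hook slots into the mmap path of drivers/char/mem.c (the hook's exact name and signature are assumptions):

        static int mmap_mem(struct file *file, struct vm_area_struct *vma)
        {
                size_t size = vma->vm_end - vma->vm_start;

                /* let the architecture veto maps that would alias attributes */
                if (!valid_mmap_phys_addr_range(vma->vm_pgoff, size))
                        return -EINVAL;

                if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                                    size, vma->vm_page_prot))
                        return -EAGAIN;
                return 0;
        }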

2006-01-08  [PATCH] remove gcc-2 checks (Andrew Morton)
Remove various things which were checking for gcc-1.x and gcc-2.x compilers.

From: Adrian Bunk <bunk@stusta.de>
Some documentation updates and removal of some code paths for gcc < 3.2.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

2006-01-08  [PATCH] use ptrace_get_task_struct in various places (Christoph Hellwig)
The ptrace_get_task_struct() helper that I added as part of the ptrace consolidation is useful in a variety of places that currently open-code it. Switch them to the common helper. Add a ptrace_traceme() helper that needs to be explicitly called, and simplify the ptrace_get_task_struct() interface: we don't need the request argument now, and we return the task_struct directly, using ERR_PTR() for error returns. It's a bit more code in the callers, but we have two sane routines that do one thing well now.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
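
An illustrative caller pattern after the change, using the ERR_PTR() convention described above:

        struct task_struct *child;

        child = ptrace_get_task_struct(pid);
        if (IS_ERR(child))
                return PTR_ERR(child);          /* error encoded in the pointer */
        /* ... operate on child, then put_task_struct(child) when done ... */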

2006-01-08  [PATCH] Swap Migration V5: sys_migrate_pages interface (Christoph Lameter)
sys_migrate_pages implementation using swap-based page migration. This is the original API proposed by Ray Bryant in his posts during the first half of 2005 on linux-mm@kvack.org and linux-kernel@vger.kernel.org.

The intent of sys_migrate_pages is to migrate the memory of a process. A process may have migrated to another node; memory was allocated optimally for the prior context. sys_migrate_pages allows shifting the memory to the new node.

sys_migrate_pages is also useful if a process's available memory nodes have changed through cpuset operations and its memory must be moved manually. Paul Jackson is working on an automated mechanism that will allow an automatic migration if the cpuset of a process is changed. However, a user may decide to manually control the migration.

This implementation is put into the policy layer since it uses concepts and functions that are also needed for mbind and friends. The patch also provides a do_migrate_pages function that may be useful for cpusets to automatically move memory.

sys_migrate_pages does not modify policies, in contrast to Ray's implementation. The current code here is based on the swap-based page migration capability and thus is not able to preserve the physical layout relative to its containing nodeset (which may be a cpuset). When direct page migration becomes available, the implementation needs to be changed to do an isomorphic move of pages between different nodesets. The current implementation simply evicts all pages in the source nodeset that are not in the target nodeset.

The patch supports ia64, i386 and x86_64.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
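
A hedged userspace sketch of the new interface via the libnuma wrapper (node numbers are illustrative):

        #include <numaif.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                unsigned long old_nodes = 1UL << 0;     /* source nodeset: node 0 */
                unsigned long new_nodes = 1UL << 1;     /* target nodeset: node 1 */

                /* evict this process's pages from node 0 toward node 1 */
                if (migrate_pages(getpid(), 2, &old_nodes, &new_nodes) < 0)
                        perror("migrate_pages");
                return 0;
        }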

2006-01-05  [IA64] Fix compile warnings in setup.c (Tony Luck)
arch/ia64/kernel/setup.c: In function `show_cpuinfo':
arch/ia64/kernel/setup.c:576: warning: long unsigned int format, different type arg (arg 12)
arch/ia64/kernel/setup.c:576: warning: long unsigned int format, different type arg (arg 13)

Introduced by 95235ca2c20ac0b31a8eb39e2d599bcc3e9c9a10.
Signed-off-by: Tony Luck <tony.luck@intel.com>

2006-01-04  Merge master.kernel.org:/pub/scm/linux/kernel/git/davej/cpufreq (Linus Torvalds)

2005-12-16  [IA64] Add __read_mostly support for IA64 (Christoph Lameter)
sparc64, i386 and x86_64 have support for a special data section dedicated to rarely updated data that is frequently read. The section was created to avoid false sharing of that rarely written data with frequently written kernel data. This patch creates such a data section for ia64 and will group rarely written data into this section.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
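
A sketch of the section annotation and its use (the ia64 section name is an assumption):

        /* in the arch header, roughly: */
        #define __read_mostly __attribute__((__section__(".data.read_mostly")))

        /* rarely written, frequently read data now stays off hot cache lines */
        static int some_tunable __read_mostly;          /* hypothetical variable */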

2005-12-16  [IA64] uncached ref count leak (Jes Sorensen)
Use raw_smp_processor_id() instead of get_cpu(), as we don't need the extra features of get_cpu().
Signed-off-by: Jes Sorensen <jes@trained-monkey.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
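
The change in miniature: get_cpu() disables preemption and must be balanced by put_cpu(), so when the CPU number is only informational, raw_smp_processor_id() avoids that bookkeeping (and the leak when put_cpu() is missed):

        int cpu;

        cpu = get_cpu();                /* before: requires a matching put_cpu() */
        put_cpu();

        cpu = raw_smp_processor_id();   /* after: no preemption bookkeeping */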

2005-12-16  [IA64] disable preemption in udelay() (John Hawkes)
The udelay() inline for ia64 uses the ITC. If CONFIG_PREEMPT is enabled, the platform has unsynchronized ITCs, and the calling task migrates to another CPU while doing the udelay loop, then the effective delay may be too short or very, very long. This patch disables preemption around 100 usec chunks of the overall desired udelay time, which minimizes preemption-holdoffs. udelay() is now too big to be inline, so move it out of line and export it.
Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
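
A hedged sketch of the chunking scheme (the ITC spin helper is hypothetical):

        void udelay(unsigned long usecs)
        {
                while (usecs > 0) {
                        unsigned long chunk = usecs > 100 ? 100 : usecs;

                        /* stay on one CPU while reading and comparing the ITC */
                        preempt_disable();
                        ia64_itc_spin(chunk);   /* hypothetical: spin for 'chunk' usecs */
                        preempt_enable();
                        usecs -= chunk;
                }
        }
        EXPORT_SYMBOL(udelay);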

2005-12-14  [IA64] fix for SET_PERSONALITY when CONFIG_IA32_SUPPORT is not set (Robin Holt)
Missed this when fixing the SET_PERSONALITY change.
Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2005-12-12  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds)

2005-12-12  [PATCH] kprobes: increment kprobe missed count for multiprobes (Keshavamurthy Anil S)
When multiple probes are registered at the same address and a recursion occurs (a probe getting triggered within a probe handler), we skip calling the pre_handlers and just increment the nmissed field. The patch below makes sure the list is walked in the multiple-probes case; without it, the nmissed counts are incorrect when multiple probes are registered.
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
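
The fix amounts to walking the list hanging off the aggregate probe; a sketch along the lines of the kprobes helper (names assumed):

        static inline void kprobes_inc_nmissed_count(struct kprobe *p)
        {
                struct kprobe *kp;

                if (p->pre_handler != aggr_pre_handler) {
                        p->nmissed++;                   /* single probe at this address */
                } else {
                        list_for_each_entry(kp, &p->list, list)
                                kp->nmissed++;          /* every probe registered here missed */
                }
        }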

2005-12-06  [CPUFREQ] CPU frequency display in /proc/cpuinfo (Venkatesh Pallipadi)
What is the value shown in "cpu MHz" of /proc/cpuinfo when CPUs are capable of changing frequency? Today the answer is: it depends.

On i386: SMP kernel - it is always the boot frequency. UP kernel - it scales with the frequency change and shows what was last set.

On x86_64: there is one single variable cpu_khz that gets written by all the CPUs. So, the frequency set by the last CPU will be seen on /proc/cpuinfo of all the CPUs in the system. What you see also depends on whether you have a constant_tsc capable CPU or not.

On ia64: it is always the boot-time frequency of a particular CPU that gets displayed.

The patch below changes this to show the last known frequency of the particular CPU when cpufreq is present. If the cpu does not support changing of frequency through cpufreq, then the boot frequency will be shown.

The patch affects the i386, x86_64 and ia64 architectures.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>
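
A hedged sketch of the resulting show_cpuinfo() logic on ia64 (field names are assumptions):

        unsigned long freq = cpufreq_quick_get(cpunum); /* kHz; 0 if cpufreq can't say */

        if (!freq)
                freq = c->proc_freq / 1000;             /* fall back to boot frequency */
        seq_printf(m, "cpu MHz    : %lu.%03lu\n", freq / 1000, freq % 1000);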

2005-12-06  [IA64] Change SET_PERSONALITY to comply with comment in binfmt_elf.c (Robin Holt)
We have a customer application which trips a bug. The problem arises when a driver attempts to call do_munmap on an area which is mapped, but because current->thread.task_size has been set to 0xC0000000, the call to do_munmap fails, thinking it is an unmap beyond the user's address space. The comment in fs/binfmt_elf.c in load_elf_library(), before the call to SET_PERSONALITY(), indicates that task_size must not be changed for the running application until flush_thread, but it is changed when ia64 executes ia32 binaries. This patch moves the setting of task_size from SET_PERSONALITY() to flush_thread(), as indicated. The customer application can no longer trip the bug.
Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2005-12-05  [IA64] Allow salinfo_decode to detect signals on read (Keith Owens)
Return -EINTR instead of -ERESTARTSYS when signals are delivered during a blocked read of /proc/sal/*/event. This allows salinfo_decode to detect signals when it is blocked on a read of those files.
Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
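
The return-code change in miniature, assuming the semaphore wait that the read path used at the time:

        if (down_interruptible(&data->sem))
                return -EINTR;  /* was -ERESTARTSYS: now the reader actually sees the signal */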

2005-11-29  [IA64] Remove getting break_num by decoding instruction (Keshavamurthy Anil S)
break.b always sets cr.iim to 0, and the current code tries to get the break_num by decoding the instruction. However, there seems to be a race condition while reading regs->cr_iip: on another cpu the break.b at regs->cr_iip might have been replaced with the original instruction as a result of unregister_kprobe(), so decoding the instruction to obtain the break_num will yield a wrong value in this case. Also includes changes to kprobes.c, which now has to handle break number zero.
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2005-11-29  [IA64] Make pfn_valid more precise for SGI Altix systems (Dean Roe)
A single SGI Altix system can be divided into multiple partitions, each running its own instance of the Linux kernel. pfn_valid() is currently not optimal for any but the first partition, since it does not compare the pfn with min_low_pfn before calling the more costly ia64_pfn_valid().
Signed-off-by: Dean Roe <roe@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
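
A sketch of the cheaper test, with the bounds checks ahead of the costly lookup (the exact macro form is an assumption):

        #define pfn_valid(pfn)                                          \
                (((pfn) >= min_low_pfn) && ((pfn) < max_low_pfn) &&     \
                 ia64_pfn_valid(pfn))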

2005-11-23  [PATCH] kprobes: Fix return probes on sys_execve (Jim Keniston)
Fix a bug in kprobes that can cause an Oops or even a crash when a return probe is installed on one of the following functions: sys_execve, do_execve, load_*_binary, flush_old_exec, or flush_thread. The fix is to remove the call to kprobe_flush_task() in flush_thread(). This fix has been tested on all architectures for which the return-probes feature has been implemented (i386, x86_64, ppc64, ia64). Please apply.

BACKGROUND

Up to now, we have called kprobe_flush_task() under two situations: when a task exits, and when it execs. Flushing kretprobe_instances on exit is correct because (a) do_exit() doesn't return, and (b) one or more return-probed functions may be active when a task calls do_exit(). Neither is the case for sys_execve() and its callees.

Initially, the mistaken call to kprobe_flush_task() on exec was harmless because we put the "real" return address of each active probed function back in the stack, just to be safe, when we recycled its kretprobe_instance. When support for ppc64 and ia64 was added, this safety measure couldn't be employed, and was eventually dropped even for i386 and x86_64. sys_execve() and its callees were informally blacklisted for return probes until this fix was developed.

Acked-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

2005-11-17  [IA64] polish comments for tlb fault handler in ivt.S (Chen, Kenneth W)
Polish the comments, specifically in the vhpt_miss and nested_dtlb_miss handlers. I think it's better to explicitly name each page table level instead of numbering them: i.e., use pgd, pud, pmd, and pte instead of referring to them as L1, L2, L3, etc. Along the line, remove some magic numbers in the comments like: "PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)". No code change at all, pure comment update. Feel free to shoot anything you have, darts or tomahawk cruise missile. I will duck behind a bunker ;-)
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2005-11-17  [IA64] 4 level page table bug fix in vhpt_miss (Chen, Kenneth W)
From source code inspection, I think there is a bug in the vhpt_miss handler with 4-level page tables. In the code path that rechecks the page table entry against the previously read value after tlb insertion, the *pte value in register r18 was overwritten with the value newly read from the pud pointer, rendering the check of the new *pte against the previous *pte completely wrong. Though the bug is non-fatal and the penalty is to purge the entry and retry, it should be fixed for functional correctness. The fix is to use a different register so the new *pud doesn't trash *pte. (Btw, the comment in the cmp statement is wrong as well, which I will address in the next patch.)
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2005-11-15  [PATCH] ia64: cpu_idle performance bug fix (Chen, Kenneth W)
Our performance validation on 2.6.15-rc1 caught a disastrous performance regression on ia64 with netperf (-98%) and volanomark (-58%) compared to the previous kernel version, 2.6.14-git7. See the following chart (result groups 1 & 2).

http://kernel-perf.sourceforge.net/results.machine_id=26.html

We have root-caused it to commit 64c7c8f88559624abdbe12b5da6502e8879f8d28. This changeset broke the ia64 task resched notification. In sched.c:resched_task(), a reschedule IPI is conditioned upon TIF_POLLING_NRFLAG. However, the above changeset unconditionally set the polling thread flag for idle tasks regardless of whether pal_halt_light is in use or not. As a result, the resched IPI is not sent from resched_task(). And since the default behavior on ia64 is to use pal_halt_light, we end up delaying the rescheduled task until the next timer tick, thus causing the performance regression.

This fixes the performance bug. I'm glad our performance suite is turning up bad performance bugs like this in time.
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

2005-11-11  [IA64] 4-level page tables (Robin Holt)
This patch introduces 4-level page tables to ia64. I have run some benchmarks and found nothing interesting: performance has consistently fallen within the noise range. It also introduces a config option (setting the default to 3 levels). The config option prevents having 4-level page tables with a 64k base page size.
Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2005-11-10  [IA64] Replace kcalloc(1, ...) with kzalloc() (Panagiotis Issaris)
Conversion from kcalloc(1, ...) to kzalloc().
Signed-off-by: Panagiotis Issaris <takis@issaris.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
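
The conversion in miniature:

        p = kcalloc(1, sizeof(*p), GFP_KERNEL); /* before: one-element kcalloc */
        p = kzalloc(sizeof(*p), GFP_KERNEL);    /* after: direct zeroed allocation */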

2005-11-10  Pull context-bitmap into release branch (Tony Luck)

2005-11-10  Pull extend-notify-die into release branch (Tony Luck)

2005-11-10  Pull mca-check-psp into release branch (Tony Luck)

2005-11-10  Pull align-sig-frame into release branch (Tony Luck)

2005-11-09  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds)

2005-11-09  [PATCH] sched: resched and cpu_idle rework (Nick Piggin)
Make some changes to the NEED_RESCHED and POLLING_NRFLAG semantics to reduce confusion and make them rigid. This improves the efficiency of resched_task and some cpu_idle routines.

* In resched_task:
- TIF_NEED_RESCHED is only cleared with the task's runqueue lock held, and as we hold it during resched_task, there is no need for an atomic test-and-set there. The only other time this should be set is when the task's quantum expires, in the timer interrupt - this is protected against because the rq lock is irq-safe.
- If TIF_NEED_RESCHED is set, then we don't need to do anything. It won't get unset until the task gets schedule()d off.
- If we are running on the same CPU as the task we resched, then set TIF_NEED_RESCHED and no further action is required.
- If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set after TIF_NEED_RESCHED has been set, then we need to send an IPI.

Using these rules, we are able to remove the test-and-set operation in resched_task, and make clear the previously vague semantics of POLLING_NRFLAG.

* In idle routines:
- Enter cpu_idle with preempt disabled. When the need_resched() condition becomes true, explicitly call schedule(). This makes things a bit clearer (IMO), but not all architectures have been updated yet.
- Many do a test-and-clear of TIF_NEED_RESCHED for some reason. According to the resched_task rules, this isn't needed (and actually breaks the assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock held). So remove that. Generally this is one less locked memory op when switching to the idle thread.
- Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the innermost polling idle loops. The above resched_task semantics allow it to stay set until just before the last need_resched() check preceding a halt that requires an interrupt wakeup. Many idle routines simply never enter such a halt, so POLLING_NRFLAG can be left set at all times, completely eliminating resched IPIs when rescheduling the idle task. The window in which POLLING_NRFLAG stays set can be widened to reduce the chance of resched IPIs.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
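
A hedged sketch of resched_task() under the rules above (the caller holds the runqueue lock; helper names as assumed for the scheduler of that era):

        static void resched_task(struct task_struct *p)
        {
                int cpu;

                if (test_tsk_thread_flag(p, TIF_NEED_RESCHED))
                        return;                 /* already pending; it stays set */

                set_tsk_thread_flag(p, TIF_NEED_RESCHED);

                cpu = task_cpu(p);
                if (cpu == smp_processor_id())
                        return;                 /* local CPU will notice on its own */

                /* NEED_RESCHED must be visible before we test POLLING_NRFLAG */
                smp_mb();
                if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
                        smp_send_reschedule(cpu);       /* remote CPU is really halted */
        }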

2005-11-09  [PATCH] sched: disable preempt in idle tasks (Nick Piggin)
Run idle threads with preempt disabled. Also corrected a bug in arm26's cpu_idle (make it actually call schedule()). How did it ever work before?

Might fix the CPU hotplugging hang which Nigel Cunningham noted. We think the bug hits if the idle thread is preempted after checking need_resched() and before going to sleep, and the CPU is then offlined. After calling stop_machine_run, the CPU eventually returns from preemption into the idle thread and goes to sleep. The CPU will continue executing the previous idle loop and have no chance to call play_dead. By disabling preemption until we are ready to explicitly schedule, this bug is fixed and the idle threads generally become more robust.

From: alexs <ashepard@u.washington.edu> - PPC build fix
From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp> - MIPS build fix
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

2005-11-08  [IA64] MCA recovery: Bump reference count on bad pages (Russ Anderson)
When a page has a memory uncorrectable ECC error, the recovery code wants to prevent the page from being reused. This change bumps the reference count to prevent the page from getting back on the free list.
Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
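
The reference bump in miniature (variable names illustrative):

        struct page *page = pfn_to_page(paddr >> PAGE_SHIFT);

        get_page(page);         /* extra ref keeps the bad page off the free list */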

2005-11-08  [IA64] MCA recovery: pfn_valid() needs a pfn (Russ Anderson)
paddr needs to be shifted by PAGE_SHIFT to be valid input for pfn_valid().
Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
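
The fix in one line:

        if (!pfn_valid(paddr >> PAGE_SHIFT))    /* was: pfn_valid(paddr) */
                return;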

2005-11-08  [IA64] MCA recovery based on PSP bits (Russ Anderson)
The determination of whether an MCA is recoverable or not must be based on the bits set in the PSP (Processor State Parameter). The specific bits are shown in the Intel IA-64 Architecture Software Developer's Manual, Vol 2, Table 11-6, "Software Recovery Bits in Processor State Parameter". Those bits should be consistent across the entire IA-64 family of processors.
Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2005-11-08  [IA64] align signal-frame even when not using alternate signal-stack (David Mosberger-Tang)
At the moment, attempting to invoke a signal handler on the normal stack is guaranteed to fail if the stack pointer happens not to be 16-byte aligned. This is because the signal trampoline will attempt to store fp-regs with stf.spill instructions, which trap on misaligned addresses. This isn't terribly useful behavior. It's better to just always align the signal frame to the next lower 16-byte boundary.
Signed-off-by: David Mosberger-Tang <David.Mosberger@acm.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
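
The rounding in miniature - drop to the next lower 16-byte boundary before laying out the frame (the struct name is an assumption):

        new_sp = (new_sp - sizeof(struct sigframe)) & ~15UL;
        /* stf.spill targets inside the frame are now 16-byte aligned */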

2005-11-07  [IA64] Extend notify_die() hooks for IA64 (Keith Owens)
notify_die() hooks added for MCA_{MONARCH,SLAVE,RENDEZVOUS}_{ENTER,PROCESS,LEAVE} and INIT_{MONARCH,SLAVE}_{ENTER,PROCESS,LEAVE}. We need multiple notification points for these events because they can take many seconds to run, which has nasty effects on the behaviour of the rest of the system.

DIE_SS replaced by a generic DIE_FAULT which checks the vector number, to allow interception of faults other than SS.

DIE_MACHINE_{HALT,RESTART} added to allow last-minute close-down processing, especially when the halt/restart routines are called from error handlers.

DIE_OOPS added.

The check for kprobe's break numbers has been moved from traps.c to kprobes.c, allowing DIE_BREAK to be used for any additional break numbers, i.e. it is no longer kprobes-specific.

Hooks for kernel debuggers and kernel dumpers added, ENTER and LEAVE. Both of these disable the system for long periods, which impacts watchdogs and heartbeat systems in general. More patches to come that use these events to reset watchdogs and heartbeats.

unregister_die_notifier() added and both routines exported. Requested by Dean Nelson.

Lock removed from {un,}register_die_notifier. notifier_chain_register() already takes a lock. Also, the generic notifier chain locking is being reworked to distinguish between callbacks that can block and those that cannot; the lock in {un,}register_die_notifier would interfere with that change. http://marc.theaimsgroup.com/?l=linux-kernel&m=113018709002036&w=2

Leading white space removed from arch/ia64/kernel/kprobes.c.

Typo in mca.c in the original version of this patch found and fixed by Dean Nelson.

Signed-off-by: Keith Owens <kaos@sgi.com>
Acked-by: Dean Nelson <dcn@sgi.com>
Acked-by: Anil Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

2005-11-07  [PATCH] kfree cleanup: arch (Jesper Juhl)
This is the arch/ part of the big kfree cleanup patch. Remove pointless checks for NULL prior to calling kfree() in arch/.
Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
Acked-by: Grant Grundler <grundler@parisc-linux.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
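
The cleanup pattern: kfree(NULL) is a no-op, so the guard is redundant:

        if (ptr)                /* before */
                kfree(ptr);

        kfree(ptr);             /* after */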

2005-11-07  [PATCH] Kprobes: preempt_disable/enable() simplification (Ananth N Mavinakayanahalli)
Reorganize the preempt_disable/enable calls to eliminate the extra preempt depth. Changes based on Paul McKenney's review suggestions for the kprobes RCU changeset.
Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>