path: root/kernel
Age  Commit message  Author
2008-04-19  init: move setup of nr_cpu_ids to as early as possible  (Mike Travis)
Move the setting of nr_cpu_ids from sched_init() to start_kernel() so that it's available as early as possible. Note that an arch has the option of setting it even earlier if need be, but it should not result in a different value than the setup_nr_cpu_ids() function. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
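A minimal sketch of what such a setup helper can look like (a plausible reconstruction, not the exact patch): derive nr_cpu_ids from the highest possible cpu.

    /* Plausible sketch: nr_cpu_ids becomes "highest possible cpu + 1". */
    static void __init setup_nr_cpu_ids(void)
    {
        int cpu, highest_cpu = 0;

        for_each_possible_cpu(cpu)
            highest_cpu = cpu;

        nr_cpu_ids = highest_cpu + 1;
    }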
2008-04-19  sched: remove another cpumask_t variable from stack  (Mike Travis)
* Remove another cpumask_t variable from stack that was missed in the last kernel_sched_c updates. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  cpumask: use new cpus_scnprintf function  (Mike Travis)
* Cleaned up references to cpumask_scnprintf() and added new cpulist_scnprintf() interfaces where appropriate.
* Fix some small bugs (or code efficiency improvements) for various uses of cpumask_scnprintf.
* Clean up some checkpatch errors.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
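For illustration only (buffer size and mask chosen arbitrarily), the two formatting helpers differ in output style: the mask variant prints a hex bitmap, the list variant prints compact cpu ranges.

    char buf[64];

    /* hex bitmap form, e.g. "000000ff" for cpus 0-7 */
    cpumask_scnprintf(buf, sizeof(buf), cpu_online_map);

    /* range list form, e.g. "0-7" for the same mask */
    cpulist_scnprintf(buf, sizeof(buf), cpu_online_map);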
2008-04-19  cpumask: reduce stack usage in SD_x_INIT initializers  (Mike Travis)
* Remove empty cpumask_t (and all non-zero/non-null) variables in SD_*_INIT macros. Use memset(0) to clear. Also, don't inline the initializer functions to save on stack space in build_sched_domains(). * Merge change to include/linux/topology.h that uses the new node_to_cpumask_ptr function in the nr_cpus_node macro into this patch. Depends on: [mm-patch]: asm-generic-add-node_to_cpumask_ptr-macro.patch [sched-devel]: sched: add new set_cpus_allowed_ptr function Cc: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  nodemask: use new node_to_cpumask_ptr function  (Mike Travis)
* Use new node_to_cpumask_ptr. This creates a pointer to the cpumask for a given node. This definition is in mm patch: asm-generic-add-node_to_cpumask_ptr-macro.patch * Use new set_cpus_allowed_ptr function. Depends on: [mm-patch]: asm-generic-add-node_to_cpumask_ptr-macro.patch [sched-devel]: sched: add new set_cpus_allowed_ptr function [x86/latest]: x86: add cpus_scnprintf function Cc: Greg Kroah-Hartman <gregkh@suse.de> Cc: Greg Banks <gnb@melbourne.sgi.com> Cc: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
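A usage sketch under the assumption stated above (the macro comes from the referenced mm patch; the helper name bind_to_node is made up): obtain a pointer to the node's cpumask and hand it straight to the pointer-based scheduler interface, with no cpumask_t copy on the stack.

    static void bind_to_node(struct task_struct *p, int nid)
    {
        node_to_cpumask_ptr(nodemask, nid);   /* declares: const cpumask_t *nodemask */

        set_cpus_allowed_ptr(p, nodemask);
    }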
2008-04-19  generic: reduce stack pressure in sched_affinity  (Mike Travis)
* Modify sched_affinity functions to pass cpumask_t variables by reference instead of by value. * Use new set_cpus_allowed_ptr function. Depends on: [sched-devel]: sched: add new set_cpus_allowed_ptr function Cc: Paul Jackson <pj@sgi.com> Cc: Cliff Wickman <cpw@sgi.com> Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  cpuset: modify cpuset_set_cpus_allowed to use cpumask pointer  (Mike Travis)
* Modify cpuset_cpus_allowed to return the currently allowed cpuset via a pointer argument instead of as the function return value. * Use new set_cpus_allowed_ptr function. * Cleanup CPU_MASK_ALL and NODE_MASK_ALL uses. Depends on: [sched-devel]: sched: add new set_cpus_allowed_ptr function Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  generic: use new set_cpus_allowed_ptr function  (Mike Travis)
* Use new set_cpus_allowed_ptr() function added by previous patch, which instead of passing the "newly allowed cpus" cpumask_t arg by value, pass it by pointer: -int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask) +int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask) * Modify CPU_MASK_ALL Depends on: [sched-devel]: sched: add new set_cpus_allowed_ptr function Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
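A hedged call-site sketch of the new interface (the helper pin_task_to_cpu is made up for illustration; the local mask is kept only to show the calling convention): the caller owns the mask and passes its address, so nothing cpumask-sized travels by value.

    static int pin_task_to_cpu(struct task_struct *p, int cpu)
    {
        cpumask_t new_mask = CPU_MASK_NONE;

        cpu_set(cpu, new_mask);
        /* old API: return set_cpus_allowed(p, new_mask); */
        return set_cpus_allowed_ptr(p, &new_mask);
    }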
2008-04-19  sched: remove fixed NR_CPUS sized arrays in kernel_sched_c  (Mike Travis)
* Change fixed size arrays to per_cpu variables or dynamically allocated arrays in sched_init() and sched_init_smp():
    (1) static struct sched_entity *init_sched_entity_p[NR_CPUS];
    (1) static struct cfs_rq *init_cfs_rq_p[NR_CPUS];
    (1) static struct sched_rt_entity *init_sched_rt_entity_p[NR_CPUS];
    (1) static struct rt_rq *init_rt_rq_p[NR_CPUS];
        static struct sched_group **sched_group_nodes_bycpu[NR_CPUS];
    (1) - these arrays are allocated via alloc_bootmem_low()
* Change sched_domain_debug_one() to use cpulist_scnprintf instead of cpumask_scnprintf. This reduces the output buffer required and improves readability when large NR_CPU count machines arrive.
* In sched_create_group() we allocate new arrays based on nr_cpu_ids.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
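The nr_cpu_ids based allocation mentioned in the last point follows roughly this pattern (a sketch of the idea, not the exact hunk): size per-cpu pointer arrays at runtime instead of with a compile-time NR_CPUS bound.

    tg->se = kzalloc(sizeof(struct sched_entity *) * nr_cpu_ids, GFP_KERNEL);
    tg->cfs_rq = kzalloc(sizeof(struct cfs_rq *) * nr_cpu_ids, GFP_KERNEL);
    if (!tg->se || !tg->cfs_rq)
        goto err;       /* only as many slots as there can be cpus */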
2008-04-19  cpumask: Cleanup more uses of CPU_MASK and NODE_MASK  (Mike Travis)
* Replace usages of CPU_MASK_NONE, CPU_MASK_ALL, NODE_MASK_NONE, NODE_MASK_ALL to reduce stack requirements for large NR_CPUS and MAXNODES counts. * In some cases, the cpumask variable was initialized but then overwritten with another value. This is the case for changes like this: - cpumask_t oldmask = CPU_MASK_ALL; + cpumask_t oldmask; Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: fix cpus_allowed settings  (Gregory Haskins)
Signed-off-by: Gregory Haskins <ghaskins@novell.com> Acked-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: allow cpuacct stats to be reset  (Dhaval Giani)
Currently the schedstats implementation does not allow the statistics to be reset. This patch aims to allow that. echo 0 > cpuacct.usage resets the usage. Any other value is not allowed and returns -EINVAL. Signed-off-by: Dhaval Giani <dhaval@linux.vnet.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
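A sketch of the write-side semantics described above (handler name and signature are assumed, not taken from the patch): only 0 is accepted, anything else fails with -EINVAL.

    static int cpuusage_write(struct cgroup *cgrp, struct cftype *cft, u64 val)
    {
        if (val != 0)
            return -EINVAL;        /* only "echo 0" may reset the counter */

        /* ... walk each cpu and zero its accumulated usage ... */
        return 0;
    }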
2008-04-19  sched: cleanup cpuacct variable names  (Dhaval Giani)
Change the variable names to the common convention for the cpuacct subsystem. Signed-off-by: Dhaval Giani <dhaval@linux.vnet.ibm.com> Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  tasklets: execute tasklets in the same order they were queued  (Olof Johansson)
I noticed this when looking at an openswan issue. Openswan (ab?)uses the tasklet API to defer processing of packets in some situations, with one packet per tasklet_action(). I started noticing sequences of backwards-ordered sequence numbers coming over the wire, since new tasklets are always queued at the head of the list but processed sequentially. Convert it to instead append new entries to the tail of the list. As an extra bonus, the splicing code in takeover_tasklets() no longer has to iterate over the list. Signed-off-by: Olof Johansson <olof@lixom.net> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
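The tail-append can be pictured with a head pointer plus a pointer to the last ->next slot (an illustrative sketch, not the exact kernel structures):

    struct tasklet_head {
        struct tasklet_struct *head;
        struct tasklet_struct **tail;   /* address of the last ->next field */
    };

    static void tasklet_queue_tail(struct tasklet_head *q, struct tasklet_struct *t)
    {
        t->next = NULL;
        *q->tail = t;          /* link after the current last entry         */
        q->tail = &t->next;    /* new entry's ->next becomes the tail slot  */
    }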
2008-04-19  sched: rt-group: smp balancing  (Peter Zijlstra)
Currently the rt group scheduling does a per cpu runtime limit, however the rt load balancer makes no guarantees about an equal spread of real-time tasks, just that at any one time, the highest priority tasks run. Solve this by making the runtime limit a global property by borrowing excessive runtime from the other cpus once the local limit runs out. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: rt-group: synchronised bandwidth period  (Peter Zijlstra)
Various SMP balancing algorithms require that the bandwidth period run in sync. Possible improvements are moving the rt_bandwidth thing into root_domain and keeping a span per rt_bandwidth which marks throttled cpus. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: fix regression with sched yield  (Peter Zijlstra)
Balbir Singh reported:
> 1:mon> t
> [c0000000e7677da0] c000000000067de0 .sys_sched_yield+0x6c/0xbc
> [c0000000e7677e30] c000000000008748 syscall_exit+0x0/0x40
> --- Exception: c01 (System Call) at 00000400001d09e4
> SP (4000664cb10) is in userspace
> 1:mon> r
> cpu 0x1: Vector: 300 (Data Access) at [c0000000e7677aa0]
>     pc: c000000000068e50: .yield_task_fair+0x94/0xc4
>     lr: c000000000067de0: .sys_sched_yield+0x6c/0xbc
The check that should have avoided that is:
    /*
     * Are we the only task in the tree?
     */
    if (unlikely(rq->load.weight == curr->se.load.weight))
        return;
But I guess that overlooks rt tasks, they also increase the load. So I guess something like this ought to fix it..
Signed-off-by: Ingo Molnar <mingo@elte.hu>
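A hedged reconstruction of that kind of fix (treat the exact condition as illustrative): compare the number of tasks on the CFS runqueue instead of load weights, so rt tasks that also add to rq->load no longer defeat the short-circuit.

    static void yield_task_fair(struct rq *rq)
    {
        struct cfs_rq *cfs_rq = task_cfs_rq(rq->curr);

        /*
         * Are we the only task in the tree?
         */
        if (unlikely(cfs_rq->nr_running == 1))
            return;

        /* ... */
    }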
2008-04-19  latencytop: optimize LT_BACKTRACEDEPTH loops a bit  (Dmitry Adamushko)
There is no need to loop any longer when 'same == 0'. Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: remove sysctl_sched_batch_wakeup_granularity  (Ingo Molnar)
it's unused. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: reenable sync wakeups  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: cache hot buddy  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: feat affine wakeups  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: introduce SCHED_FEAT_SYNC_WAKEUPS, turn it off  (Ingo Molnar)
turn off sync wakeups by default. They are not needed anymore - the buddy logic should be smart enough to keep the system from overscheduling. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: fix wakeup granularity for buddies  (Peter Zijlstra)
The wakeup buddy logic didn't use the same wakeup granularity logic as the wakeup preemption did, this might cause the ->next buddy to be selected past the point where we would have preempted had the task been a single running instance. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: fix rq->clock overflows detection with CONFIG_NO_HZ  (Guillaume Chazarain)
When using CONFIG_NO_HZ, rq->tick_timestamp is not updated every TICK_NSEC. We check that the number of skipped ticks matches the clock jump seen in __update_rq_clock(). Signed-off-by: Guillaume Chazarain <guichaz@yahoo.fr> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: sched.c needs tick.h  (Reynes Philippe)
kernel/sched.c:506: error: implicit declaration of function tick_get_tick_sched
kernel/sched.c:506: error: invalid type argument of ->
kernel/sched.c:506: error: NOHZ_MODE_INACTIVE undeclared (first use in this function)
kernel/sched.c:506: error: (Each undeclared identifier is reported only once
kernel/sched.c:506: error: for each function it appears in.)
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: make cpu_clock() globally synchronous  (Ingo Molnar)
Alexey Zaytsev reported (and bisected) that the introduction of cpu_clock() in printk made the timestamps jump back and forth. Make cpu_clock() more reliable while still keeping it fast when it's called frequently. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  sched: re-do "sched: fix fair sleepers"  (Ingo Molnar)
re-apply:
| commit e22ecef1d2658ba54ed7d3fdb5d60829fb434c23
| Author: Ingo Molnar <mingo@elte.hu>
| Date:   Fri Mar 14 22:16:08 2008 +0100
|
|     sched: fix fair sleepers
|
|     Fair sleepers need to scale their latency target down by runqueue
|     weight. Otherwise busy systems will gain ever larger sleep bonus.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-19  x86: fpu xstate split fix  (Suresh Siddha)
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-04-19  x86, fpu: split FPU state from task struct - v5  (Suresh Siddha)
Split the FPU save area from the task struct. This allows easy migration of FPU context, and it's generally cleaner. It also allows the following two optimizations: 1) only allocate when the application actually uses FPU, so in the first lazy FPU trap. This could save memory for non-fpu using apps. Next patch does this lazy allocation. 2) allocate the right size for the actual cpu rather than 512 bytes always. Patches enabling xsave/xrstor support (coming shortly) will take advantage of this. Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
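The layout change can be sketched like this (field and type names follow the xstate naming in the series, but treat the details as illustrative): the save area is reached through a pointer, which is what makes lazy allocation and per-cpu sizing possible.

    union thread_xstate {
        struct i387_fsave_struct  fsave;
        struct i387_fxsave_struct fxsave;
    };

    struct thread_struct {
        /* ... */
        union thread_xstate *xstate;   /* allocated on first FPU use, sized for the cpu */
    };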
2008-04-19  x86: tsc prevent time going backwards  (Thomas Gleixner)
We already catch most of the TSC problems by sanity checks, but there is a subtle bug which has been in the code forever. This can cause time jumps in the range of hours.
This was reported in: http://lkml.org/lkml/2007/8/23/96 and http://lkml.org/lkml/2008/3/31/23
I was able to reproduce the problem with a gettimeofday loop test on a dual core and a quad core machine which both have synchronized TSCs. The TSCs seem not to be perfectly in sync though, but the kernel is not able to detect the slight delta in the sync check. Still there exists an extremely small window where this delta can be observed as a really big time jump. So far I was only able to reproduce this with the vsyscall gettimeofday implementation, but in theory this might be observable with the syscall based version as well.
CPU 0 updates the clock source variables under the xtime/vsyscall lock and CPU 1, where the TSC is slightly behind CPU 0, reads the time right after the seqlock was unlocked. The clocksource reference data was updated with the TSC from CPU 0 and the value which is read from the TSC on CPU 1 is less than the reference data. This results in a huge delta value due to the unsigned subtraction of the TSC value and the reference value. This algorithm cannot be changed due to the support of wrapping clock sources like the pm timer. The huge delta is converted to nanoseconds and added to xtime, which is then observable by the caller. The next gettimeofday call on CPU 1 will show the correct time again as now the TSC has advanced above the reference value.
To prevent this TSC specific wreckage we need to compare the TSC value against the reference value and return the latter when it is larger than the actual TSC value.
I pondered marking the TSC unstable when the readout is smaller than the reference value, but this would render an otherwise good and fast clocksource unusable without a really good reason.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
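A sketch of the guard described above (symbol names such as clocksource_tsc and cycle_last assumed from the TSC clocksource code): clamp the readout to the last reference cycle count so the unsigned delta can never explode.

    static cycle_t read_tsc(void)
    {
        cycle_t ret = (cycle_t)get_cycles();

        /* never return a value below the last reference cycle count */
        return ret >= clocksource_tsc.cycle_last ?
            ret : clocksource_tsc.cycle_last;
    }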
2008-04-19  generic, x86: add prctl commands PR_GET_TSC and PR_SET_TSC  (Erik Bosman)
This patch adds prctl commands that make it possible to deny reading the timestamp counter in userspace. If this is not implemented on a specific architecture, prctl will return -EINVAL.
Signed-off-by: Erik Bosman <ejbosman@cs.vu.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
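A minimal userspace example of the new commands (assuming the constants land in <linux/prctl.h> as PR_GET_TSC/PR_SET_TSC with PR_TSC_ENABLE/PR_TSC_SIGSEGV modes):

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
        int mode = 0;

        /* deny RDTSC: the next rdtsc in this task raises SIGSEGV */
        if (prctl(PR_SET_TSC, PR_TSC_SIGSEGV, 0, 0, 0) != 0)
            perror("PR_SET_TSC");   /* EINVAL if the arch has no support */

        if (prctl(PR_GET_TSC, (unsigned long)&mode, 0, 0, 0) == 0)
            printf("tsc mode: %d\n", mode);

        return 0;
    }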
2008-04-18  kernel: Remove unnecessary inclusions of asm/semaphore.h  (Matthew Wilcox)
None of these files use any of the functionality promised by asm/semaphore.h. Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
2008-04-19  Audit: Final renamings and cleanup  (Ahmed S. Darwish)
Rename the se_str and se_rule audit fields elements to lsm_str and lsm_rule to avoid confusion. Signed-off-by: Casey Schaufler <casey@schaufler-ca.com> Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com> Acked-by: James Morris <jmorris@namei.org>
2008-04-19  SELinux: use new audit hooks, remove redundant exports  (Ahmed S. Darwish)
Setup the new Audit LSM hooks for SELinux. Remove the now redundant exported SELinux Audit interface. Audit: Export 'audit_krule' and 'audit_field' to the public since their internals are needed by the implementation of the new LSM hook 'audit_rule_known'. Signed-off-by: Casey Schaufler <casey@schaufler-ca.com> Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com> Acked-by: James Morris <jmorris@namei.org>
2008-04-19  Audit: internally use the new LSM audit hooks  (Ahmed S. Darwish)
Convert Audit to use the new LSM Audit hooks instead of the exported SELinux interface. Basically, use:
    security_audit_rule_init
    security_audit_rule_free
    security_audit_rule_known
    security_audit_rule_match
instead of (respectively):
    selinux_audit_rule_init
    selinux_audit_rule_free
    audit_rule_has_selinux
    selinux_audit_rule_match
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com>
Acked-by: James Morris <jmorris@namei.org>
2008-04-19  Audit: use new LSM hooks instead of SELinux exports  (Ahmed S. Darwish)
Stop using the following exported SELinux interfaces:
    selinux_get_inode_sid(inode, sid)
    selinux_get_ipc_sid(ipcp, sid)
    selinux_get_task_sid(tsk, sid)
    selinux_sid_to_string(sid, ctx, len)
    kfree(ctx)
and use the following generic LSM equivalents respectively:
    security_inode_getsecid(inode, secid)
    security_ipc_getsecid(ipcp, secid)
    security_task_getsecid(tsk, secid)
    security_secid_to_secctx(secid, ctx, len)
    security_release_secctx(ctx, len)
Call security_release_secctx only if security_secid_to_secctx succeeded.
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com>
Acked-by: James Morris <jmorris@namei.org>
Reviewed-by: Paul Moore <paul.moore@hp.com>
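The release rule in the last sentence follows this pattern (a usage sketch with an illustrative helper wrapped around the real hooks): the context buffer is only valid, and only released, after a successful conversion.

    static void audit_log_subject(struct audit_buffer *ab, u32 secid)
    {
        char *ctx = NULL;
        u32 len;

        if (security_secid_to_secctx(secid, &ctx, &len) == 0) {
            audit_log_format(ab, " subj=%s", ctx);
            security_release_secctx(ctx, len);   /* only after a successful conversion */
        }
    }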
2008-04-18  Merge git://git.kernel.org/pub/scm/linux/kernel/git/tglx/linux-2.6-hrt  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/tglx/linux-2.6-hrt:
  clocksource: make clocksource watchdog cycle through online CPUs
  Documentation: move timer related documentation to a single place
  clockevents: optimise tick_nohz_stop_sched_tick() a bit
  locking: remove unused double_spin_lock()
  hrtimers: simplify lockdep handling
  timers: simplify lockdep handling
  posix-timers: fix shadowed variables
  timer_list: add annotations to workqueue.c
  hrtimer: use nanosleep specific restart_block fields
  hrtimer: add nanosleep specific restart_block member
2008-04-18  Merge git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-kgdb  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-kgdb:
  kgdb: always use icache flush for sw breakpoints
  kgdb: fix SMP NMI kgdb_handle_exception exit race
  kgdb: documentation fixes
  kgdb: allow static kgdbts boot configuration
  kgdb: add documentation
  kgdb: Kconfig fix
  kgdb: add kgdb internal test suite
  kgdb: fix several kgdb regressions
  kgdb: kgdboc pl011 I/O module
  kgdb: fix optional arch functions and probe_kernel_*
  kgdb: add x86 HW breakpoints
  kgdb: print breakpoint removed on exception
  kgdb: clocksource watchdog
  kgdb: fix NMI hangs
  kgdb: fix kgdboc dynamic module configuration
  kgdb: document parameters
  x86: kgdb support
  consoles: polling support, kgdboc
  kgdb: core
  uaccess: add probe_kernel_write()
2008-04-18  Merge branch 'semaphore' of git://git.kernel.org/pub/scm/linux/kernel/git/willy/misc  (Linus Torvalds)
* 'semaphore' of git://git.kernel.org/pub/scm/linux/kernel/git/willy/misc:
  Remove DEBUG_SEMAPHORE from Kconfig
  Improve semaphore documentation
  Simplify semaphore implementation
  Add down_timeout and change ACPI to use it
  Introduce down_killable()
  Generic semaphore implementation
  Add semaphore.h to kernel_lock.c
  Fix quota.h includes
2008-04-18  Merge branch 'for-linus' of git://git390.osdl.marist.edu/pub/scm/linux-2.6  (Linus Torvalds)
* 'for-linus' of git://git390.osdl.marist.edu/pub/scm/linux-2.6: (36 commits)
  [S390] Remove code duplication from monreader / dcssblk.
  [S390] kernel: show last breaking-event-address on oops
  [S390] lowcore: Change type of lowcores softirq_pending to __u32.
  [S390] zcrypt: Comments and kernel-doc cleanup
  [S390] uaccess: Always access the correct address space.
  [S390] Fix a lot of sparse warnings.
  [S390] Convert s390 to GENERIC_CLOCKEVENTS.
  [S390] genirq/clockevents: move irq affinity prototypes/inlines to interrupt.h
  [S390] Convert monitor calls to function calls.
  [S390] qdio (new feature): enhancing info-retrieval from QDIO-adapters
  [S390] replace remaining __FUNCTION__ occurrences
  [S390] remove redundant display of free swap space in show_mem()
  [S390] qdio: remove outdated developerworks link.
  [S390] Add debug_register_mode() function to debug feature API
  [S390] crypto: use more descriptive function names for init/exit routines.
  [S390] switch sched_clock to store-clock-extended.
  [S390] zcrypt: add support for large random numbers
  [S390] hw_random: allow rng_dev_read() to return hardware errors.
  [S390] Vertical cpu management.
  [S390] cpu topology support for s390.
  ...
2008-04-18  ptrace_signal subroutine  (Roland McGrath)
This breaks out the ptrace handling from get_signal_to_deliver into a new subroutine. The actual code there doesn't change, and it gets inlined into nearly identical compiled code. This makes the function substantially shorter and thus easier to read, and it nicely isolates the ptrace magic. Signed-off-by: Roland McGrath <roland@redhat.com> Acked-by: Kyle McMartin <kyle@mcmartin.ca> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-18  cgroup: fix a race condition in manipulating tsk->cg_list  (Li Zefan)
When I ran a test program to fork mass processes and at the same time 'cat /cgroup/tasks', I got the following oops:
------------[ cut here ]------------
kernel BUG at lib/list_debug.c:72!
invalid opcode: 0000 [#1] SMP
Pid: 4178, comm: a.out Not tainted (2.6.25-rc9 #72)
...
Call Trace:
 [<c044a5f9>] ? cgroup_exit+0x55/0x94
 [<c0427acf>] ? do_exit+0x217/0x5ba
 [<c0427ed7>] ? do_group_exit+0x65/0x7c
 [<c0427efd>] ? sys_exit_group+0xf/0x11
 [<c0404842>] ? syscall_call+0x7/0xb
 [<c05e0000>] ? init_cyrix+0x2fa/0x479
...
EIP: [<c04df671>] list_del+0x35/0x53 SS:ESP 0068:ebc7df4
---[ end trace caffb7332252612b ]---
Fixing recursive fault but reboot is needed!
After digging into the code and debugging, I finally found out a race situation:
    do_exit()
      ->cgroup_exit()
        ->if (!list_empty(&tsk->cg_list))
              list_del(&tsk->cg_list);

    cgroup_iter_start()
      ->cgroup_enable_task_cg_list()
        ->list_add(&tsk->cg_list, ..);
In this case the list won't be deleted though the process has exited. We got two bug reports in the past, which seem to be the same bug as this one: http://lkml.org/lkml/2008/3/5/332 http://lkml.org/lkml/2007/10/17/224 Actually sometimes I got an oops on list_del, sometimes an oops on list_add. And I can change my test program a bit to trigger other oopses. The patch has been tested both on x86_32 and x86_64.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Paul Menage <menage@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
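The shape of the fix implied by that race (a sketch, with css_set_lock assumed to be the lock guarding cg_list): make the emptiness check and the list_del a single critical section, so the iterator side cannot re-add the task in between.

    /* in cgroup_exit() */
    write_lock(&css_set_lock);
    if (!list_empty(&tsk->cg_list))
        list_del(&tsk->cg_list);
    write_unlock(&css_set_lock);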
2008-04-17  kgdb: always use icache flush for sw breakpoints  (Jason Wessel)
On the ppc 4xx architecture the instruction cache must be flushed as well as the data cache. This patch just makes it generic for all architectures where CACHE_FLUSH_IS_SAFE is set to 1. Signed-off-by: Jason Wessel <jason.wessel@windriver.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17  kgdb: fix SMP NMI kgdb_handle_exception exit race  (Jason Wessel)
Fix the problem of protecting the kgdb handle_exception exit which had an NMI race condition, while trying to restore normal system operation. There was a small window after the master processor sets cpu_in_debug to zero but before it has set kgdb_active to zero where a non-master processor in an SMP system could receive an NMI and re-enter the kgdb_wait() loop. As long as the master processor sets the cpu_in_debug before sending the cpu roundup the cpu_in_debug variable can also be used to guard against the race condition. The kgdb_wait() function no longer needs to check kgdb_active because it is done in the arch specific code and handled along with the nmi traps at the low level. This also allows kgdb_wait() to exit correctly if it was entered for some unknown reason due to a spurious NMI that could not be handled by the arch specific code. Signed-off-by: Jason Wessel <jason.wessel@windriver.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17  kgdb: fix several kgdb regressions  (Jason Wessel)
kgdb core fixes:
- Check to see that mm->mmap_cache is not null before calling flush_cache_range(), else on arch=ARM it will cause a fatal fault.
- Breakpoints should only be restored if they are in the BP_ACTIVE state.
- Fix a typo in comments to "kgdb_register_io_module"
x86 kgdb fixes:
- Fix the x86 arch handler such that on a kill or detach the appropriate cleanup on the single stepping flags gets run.
- Add in the DIE_NMIWATCHDOG call for x86_64
- Touch the nmi watchdog before returning the system to normal operation after performing any kind of kgdb operation, else the possibility exists to trigger the watchdog.
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17  kgdb: fix optional arch functions and probe_kernel_*  (Jason Wessel)
Fix two regressions dealing with the kgdb core.
1) kgdb_skipexception and kgdb_post_primary_code are optional functions that are only required on archs that need special exception fixups.
2) The kernel address space scope must be set on any probe_kernel_* function or archs such as ARCH=arm will not allow access to the kernel memory space. As an example, allowing the full kernel address space is required when using the kernel debugger to inspect a system call.
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
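Point 2 boils down to a pattern like the following (close in spirit to the generic probe_kernel_read idea; details hedged): switch to KERNEL_DS around the copy so the access is not rejected as a user-space address on architectures such as ARM.

    long probe_kernel_read(void *dst, void *src, size_t size)
    {
        long ret;
        mm_segment_t old_fs = get_fs();

        set_fs(KERNEL_DS);                 /* allow kernel addresses           */
        pagefault_disable();
        ret = __copy_from_user_inatomic(dst,
                (__force const void __user *)src, size);
        pagefault_enable();
        set_fs(old_fs);                    /* restore the caller's addr limit  */

        return ret ? -EFAULT : 0;
    }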
2008-04-17  kgdb: add x86 HW breakpoints  (Jason Wessel)
Add HW breakpoints into the arch specific portion of x86 kgdb. In the current x86 kernel.org kernels HW breakpoints are changed out in lazy fashion because there is no infrastructure around changing them when changing to a kernel task or entering the kernel mode via a system call. This lazy approach means that if a user process uses HW breakpoints the kgdb will lose out. This is an acceptable trade-off because the developer debugging the kernel is assumed to know what is going on system wide and would be aware of this trade-off. There is a minor bug fix to the kgdb core so as to correctly call the hw breakpoint functions with a valid value from the enum. There is also a minor change to the x86_64 startup code when using early HW breakpoints. When the debugger is connected, the cpu startup code must not zero out the HW breakpoint registers or you cannot hit the breakpoints you are interested in, in the first place. Signed-off-by: Jason Wessel <jason.wessel@windriver.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17  kgdb: print breakpoint removed on exception  (Jason Wessel)
If kgdb does remove a breakpoint that had a problem on the recursion check, it should also print the address of the breakpoint. Signed-off-by: Jason Wessel <jason.wessel@windriver.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17  kgdb: clocksource watchdog  (Jason Wessel)
In order to not trip the clocksource watchdog, kgdb must touch the clocksource watchdog on the return to normal system run state. Signed-off-by: Jason Wessel <jason.wessel@windriver.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>