path: root/arch/x86/kernel/entry_64.S
Age / Commit message / Author
2008-05-23  ftrace: use dynamic patching for updating mcount calls  (Steven Rostedt)
This patch replaces the indirect call to the mcount function pointer with a direct call that will be patched by the dynamic ftrace routines.

On boot up, the mcount function calls the ftrace_stub function. When the dynamic ftrace code is initialized, ftrace_stub is replaced with a call to ftrace_record_ip, which records the instruction pointers of the locations that call it. Later, the ftraced daemon will call kstop_machine and patch all the locations to NOPs.

When ftrace is enabled, the original calls to mcount are instead set to call ftrace_caller, which does a direct call to the registered ftrace function. This direct call is also patched when the function that should be called is updated.

All patching is performed by a kstop_machine routine to prevent the race conditions associated with modifying code on the fly.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
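[ To make the mechanism concrete, here is a minimal C sketch of the two
  patch states described above. Function names and layout are
  illustrative, not the kernel's; the real rewriting happens in the
  dynamic ftrace code, under kstop_machine so no CPU executes an
  instruction while it is being modified. ]

    #include <stdint.h>
    #include <string.h>

    #define CALL_INSN_SIZE 5                  /* e8 + rel32 */

    /* 5-byte x86 NOP used when tracing is disabled at a call site. */
    static const unsigned char nop5[CALL_INSN_SIZE] = {
            0x0f, 0x1f, 0x44, 0x00, 0x00
    };

    /* Disable one site: overwrite "call mcount" with a NOP. */
    static void patch_to_nop(unsigned char *ip)
    {
            memcpy(ip, nop5, CALL_INSN_SIZE);
    }

    /* Enable one site: emit a direct "call ftrace_caller". */
    static void patch_to_call(unsigned char *ip, void *target)
    {
            int32_t rel = (int32_t)((unsigned char *)target -
                                    (ip + CALL_INSN_SIZE));

            ip[0] = 0xe8;                     /* direct near call, rel32 */
            memcpy(ip + 1, &rel, sizeof(rel));
    }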
2008-05-23  ftrace: add basic support for gcc profiler instrumentation  (Arnaldo Carvalho de Melo)
If CONFIG_FTRACE is selected and /proc/sys/kernel/ftrace_enabled is set to a non-zero value, the ftrace routine will be called every time we enter a kernel function that is not marked with the "notrace" attribute. The ftrace routine will then call a registered function, if one is registered.

[ This code has been highly hacked by Steven Rostedt and Ingo Molnar, so don't blame Arnaldo for all of this ;-) ]

Update: It is now possible to register more than one ftrace function. If only one ftrace function is registered, that is the function ftrace calls directly. If more than one is registered, ftrace calls a function that loops through them all.

Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
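[ A simplified C sketch of the single-versus-multiple dispatch just
  described. The names approximate the real ftrace API but are
  illustrative, and the list handling here has no locking. ]

    typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip);

    struct ftrace_ops {
            ftrace_func_t func;
            struct ftrace_ops *next;
    };

    static struct ftrace_ops *ftrace_list;
    static ftrace_func_t ftrace_trace_function;  /* what ftrace calls */

    /* Used only when more than one callback is registered. */
    static void ftrace_list_func(unsigned long ip, unsigned long parent_ip)
    {
            struct ftrace_ops *op;

            for (op = ftrace_list; op; op = op->next)
                    op->func(ip, parent_ip);
    }

    static void register_sketch(struct ftrace_ops *ops)
    {
            ops->next = ftrace_list;
            ftrace_list = ops;

            /* One callback: call it directly; several: go via the loop. */
            ftrace_trace_function = ftrace_list->next ? ftrace_list_func
                                                      : ftrace_list->func;
    }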
2008-04-17  x86: ptrace vs -ENOSYS  (Roland McGrath)
When we're stopped at syscall entry tracing, ptrace can change the %rax value from -ENOSYS to something else. If no system call is actually made because the syscall number (now in orig_rax) is bad, the current code always resets %rax to -ENOSYS again, discarding the tracer's value.

This changes it to leave the return value alone after entry tracing. That way, the %rax value set by ptrace is there to be seen in user mode (or in syscall exit tracing). This is consistent with what the 32-bit kernel does.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
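[ A pseudo-C rendering of the behavior change; the real logic is
  assembly in entry_64.S, and every name below is illustrative. ]

    struct pt_regs { long ax; long orig_ax; /* ... */ };

    #define ENOSYS      38
    #define NR_SYSCALLS 300                       /* illustrative */

    extern void ptrace_entry_stop(struct pt_regs *regs); /* may edit ->ax */
    extern long dispatch_syscall(long nr, struct pt_regs *regs);

    long syscall_entry(struct pt_regs *regs)
    {
            regs->ax = -ENOSYS;            /* default before tracing       */
            ptrace_entry_stop(regs);       /* tracer may overwrite ->ax    */

            if (regs->orig_ax < 0 || regs->orig_ax >= NR_SYSCALLS)
                    return regs->ax;       /* new: keep the tracer's value */
                                           /* old: forced -ENOSYS again    */
            return dispatch_syscall(regs->orig_ax, regs);
    }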
2008-02-26  x86: fix execve with -fstack-protector  (Ingo Molnar)
Pointed out by pageexec@freemail.hu:

> what happens here is that gcc treats the argument area as owned by the
> callee, not the caller and is allowed to do certain tricks. for ssp it
> will make a copy of the struct passed by value into the local variable
> area and pass *its* address down, and it won't copy it back into the
> original instance stored in the argument area.
>
> so once sys_execve returns, the pt_regs passed by value hasn't at all
> changed and its default content will cause a nice double fault (FWIW,
> this part took me the longest to debug, being down with cold didn't
> help it either ;).

To fix this we pass in pt_regs by pointer.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
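[ The shape of the fix, as a hedged C sketch; the argument lists are
  abbreviated from the real sys_execve prototype. ]

    struct pt_regs { long ip; long sp; /* ... */ };

    /* Before: with -fstack-protector, gcc may copy the by-value struct
     * into a local, pass that copy's address around, and never write
     * the result back to the caller's frame, so the entry code's
     * pt_regs never sees the new register state. */
    long sys_execve_old(const char *name, char **argv, char **envp,
                        struct pt_regs regs);

    /* After: a pointer makes the callee modify the caller's pt_regs
     * in place, which is what the entry code relies on. */
    long sys_execve_new(const char *name, char **argv, char **envp,
                        struct pt_regs *regs);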
2008-02-19  x86: don't make irq_return global  (Adrian Bunk)
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Cc: hpa@zytor.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-02-09  x86: fixup more paravirt fallout  (Ingo Molnar)
Use a common irq_return entry point for all the iret places that need the paravirt INTERRUPT_RETURN wrapper.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
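[ A file-scope GNU C asm sketch of the idea: one shared irq_return
  label expanding one paravirt-aware macro, instead of scattered bare
  iretq instructions. Only the native expansion is shown here. ]

    asm(
            ".macro INTERRUPT_RETURN\n\t"
            "iretq\n\t"
            ".endm\n\t"
            ".globl irq_return\n"
            "irq_return:\n\t"
            "INTERRUPT_RETURN\n"
    );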
2008-02-06  x86: fix iret exception recovery on 64-bit  (Roland McGrath)
This change broke recovery of exceptions in iret:

    commit 72fe4858544292ad64600765cb78bc02298c6b1c
    Author: Glauber de Oliveira Costa <gcosta@redhat.com>
    x86: replace privileged instructions with paravirt macros

The ENTRY(native_iret) macro adds alignment padding before the iretq instruction, so "iret_label" no longer points exactly at the instruction.

It was sloppy to leave the old "iret_label" label behind when replacing its nearby use. Removing it would have revealed the other use of the label later in the file, and upon noticing that use, anyone exercising the minimum of attention to detail expected of anyone touching this subtle code would realize it needed to change as well.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
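[ What went stale, as a file-scope asm sketch: ENTRY() aligns the
  symbol, so a label placed before the macro expansion points at the
  padding, not at the iretq. The alignment directive below is a
  stand-in for the real ENTRY() macro. ]

    asm(
            "iret_label:\n\t"       /* stale: lands on the padding */
            ".p2align 4, 0x90\n\t"  /* alignment added by ENTRY()  */
            ".globl native_iret\n"
            "native_iret:\n\t"
            "iretq\n"
    );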
2008-02-06  x86: make traps on entry code be debuggable in user space, 64-bit  (Roland McGrath)
Unify the x86-64 behavior for 32-bit processes that set bogus %cs/%ss values (the only ones that can fault in iret) with the native i386 behavior: do not kill the task via do_exit, but generate a SIGSEGV signal instead.

[ tglx@linutronix.de: build fix ]

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30  x86: replace privileged instructions with paravirt macros  (Glauber de Oliveira Costa)
The assembly code in entry_64.S issues a bunch of privileged instructions, like cli, sti, swapgs, and others. Paravirt guests are forbidden from executing these directly, so we replace them with macros that will do the right thing.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
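[ The substitution pattern, sketched as a C preprocessor header used
  from assembly. DISABLE_INTERRUPTS and SWAPGS match the real macro
  names, but the paravirt expansions below are simplified stand-ins for
  the kernel's patchable call sites. ]

    #ifdef CONFIG_PARAVIRT
    /* Indirect calls into the hypervisor's op table. */
    #define DISABLE_INTERRUPTS(clobbers)  call *pv_irq_ops_irq_disable
    #define SWAPGS                        call *pv_cpu_ops_swapgs
    #else
    /* Native hardware: emit the privileged instruction itself. */
    #define DISABLE_INTERRUPTS(clobbers)  cli
    #define SWAPGS                        swapgs
    #endif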
2008-01-25  sched: high-res preemption tick  (Peter Zijlstra)
Use HR-timers (when available) to deliver an accurate preemption tick. The regular scheduler tick that runs at 1/HZ can be too coarse when nice levels are used. The fairness system will still keep CPU utilisation 'fair' by delaying a task that got an excessive amount of CPU time, but it tries to minimize this by delivering preemption points spot-on.

The average interval of this extra interrupt is sched_latency / nr_latency, which need not be shorter than 1/HZ; it is just that the distribution within the sched_latency period is important. For example, with sched_latency = 20 ms and nr_latency = 5, a preemption point lands roughly every 4 ms.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
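[ A hedged sketch of the mechanism, with illustrative names: when a
  task is scheduled in, arm a high-resolution timer for the moment its
  fair slice runs out, rather than waiting for the next 1/HZ tick. ]

    #include <stdint.h>

    typedef uint64_t ns_t;

    /* Stand-in for hrtimer_start() on a per-CPU scheduler timer. */
    extern void hrtimer_arm_after(ns_t delta_ns);

    static void arm_preemption_point(ns_t slice_ns, ns_t already_ran_ns)
    {
            if (already_ran_ns < slice_ns)
                    hrtimer_arm_after(slice_ns - already_ran_ns);
    }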
2007-10-17  x86: return correct error code from child_rip in x86_64 entry.S  (Andrey Mirkin)
Right now register edi is just cleared before calling do_exit. That is wrong, because the correct return value is ignored. The value in rax should be copied to rdi instead of clearing edi.

AK: changed to a 32-bit move because it's strictly an int

[ tglx: arch/x86 adaptation ]

Signed-off-by: Andrey Mirkin <major@openvz.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
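[ The one-instruction fix, shown as file-scope assembly in a GNU C
  sketch; the label and the stub are illustrative, the real code is
  the child_rip path in entry_64.S. ]

    #include <stdlib.h>

    /* Stub so the sketch links; the kernel's do_exit() never returns. */
    void do_exit(long code) { exit((int)code); }

    asm(
            "child_rip_sketch:\n\t"
            /* old: xorl %edi, %edi -- threw the return value away */
            "movl %eax, %edi\n\t"   /* new: exit code is an int, 32-bit move */
            "call do_exit\n"
    );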
2007-10-11  lockdep: x86_64: connect the sysexit hook  (Peter Zijlstra)
Run the lockdep_sys_exit hook after all other C code on the syscall return path.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-10-11  x86_64: move kernel  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>