Signed-off-by: Paul Mundt <lethal@linux-sh.org>

This implements a number of trace points across events that are
deemed interesting:
- The page fault handler / TLB miss
- IPC calls
- Kernel thread creation
The original LTTng patch had the slow-path instrumented, which
fails to account for the vast majority of events. In general
placing this in the fast-path is not a huge performance hit, as
we don't take page faults for kernel addresses.
The other bits of interest are some of the other trap handlers, as
well as the syscall entry/exit (which is better off being handled
through the tracehook API). Most of the other trap handlers are corner
cases where alternate means of notification exist, so there is little
value in placing extra trace points in these locations.
This is based on the points provided by both the LTTng
instrumentation patch and the patch shipping in the ST-Linux tree,
albeit in a stripped-down form.
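For illustration, this is roughly what such a trace point looks like
when defined with the TRACE_EVENT() macro; the sh_page_fault name,
its fields, and the call site below are assumptions made for the
sketch, not necessarily what the patch itself adds:

    #include <linux/tracepoint.h>

    /*
     * Sketch only: a real definition lives in a trace events header
     * with TRACE_SYSTEM set up; event name and fields are assumed.
     */
    TRACE_EVENT(sh_page_fault,

            TP_PROTO(unsigned long address, int write),

            TP_ARGS(address, write),

            TP_STRUCT__entry(
                    __field(unsigned long, address)
                    __field(int, write)
            ),

            TP_fast_assign(
                    __entry->address = address;
                    __entry->write = write;
            ),

            TP_printk("addr=0x%08lx write=%d",
                      __entry->address, __entry->write)
    );

The fast-path fault handler would then call
trace_sh_page_fault(address, writeaccess) before heading into the
usual VMA lookup, which is what keeps the overhead negligible when
the tracepoint is disabled.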
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

This patch contains the following cleanups:
- make the following needlessly global code static (see the sketch
  after this list):
  - cf-enabler.c: cf_init()
  - cpu/clock.c: __clk_enable()
  - cpu/clock.c: __clk_disable()
  - process_32.c: default_idle()
  - time_32.c: struct clocksource_sh
  - timers/timer-tmu.c: struct tmu_timer_ops
- remove the following unused functions (no CONFIG_BLK_DEV_FD on sh):
  - process_{32,64}.c: disable_hlt()
  - process_{32,64}.c: enable_hlt()
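As a sketch of the static-ification change (cf_init() is used as a
stand-in here; the body is illustrative, not the actual cf-enabler.c
code):

    #include <linux/init.h>

    /*
     * Before: int __init cf_init(void) was visible kernel-wide even
     * though nothing outside this file used it. After: file-local,
     * so the symbol no longer leaks into the global namespace.
     */
    static int __init cf_init(void)
    {
            /* ... probe and set up the CompactFlash interface ... */
            return 0;
    }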
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

Jack Ren and Eric Miao tracked down the following long-standing
problem in the NOHZ code:
    scheduler switch to idle task
    enable interrupts

    Window starts here

        ----> interrupt happens (does not set NEED_RESCHED)
              irq_exit() stops the tick

        ----> interrupt happens (does set NEED_RESCHED)

    return from schedule()

    cpu_idle(): preempt_disable();

    Window ends here
The interrupts can happen at any point inside the race window. The
first interrupt stops the tick, the second one causes the scheduler to
rerun and switch away from idle again and we end up with the tick
disabled.
The fact that it needs two interrupts, where the first one does not
set NEED_RESCHED and the second one does, made the bug obscure and
extremely hard to reproduce and analyse. Kudos to Jack and Eric.
Solution: Limit the NOHZ functionality to the idle loop to make sure
that we cannot run into such a situation ever again.
    cpu_idle()
    {
            preempt_disable();
            while (1) {
                    /* tell the NOHZ code that we are in the idle loop */
                    tick_nohz_stop_sched_tick(1);
                    while (!need_resched())
                            halt();
                    /* leaving idle: switch NOHZ mode off again */
                    tick_nohz_restart_sched_tick();
                    preempt_enable_no_resched();
                    schedule();
                    preempt_disable();
            }
    }
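On the timer side, the stop path can then refuse callers from
anywhere but the idle loop. A rough sketch of such a guard, assuming
an inidle flag in struct tick_sched (illustrative, not necessarily
the exact mainline code):

    void tick_nohz_stop_sched_tick(int inidle)
    {
            struct tick_sched *ts = &per_cpu(tick_cpu_sched,
                                             smp_processor_id());

            /*
             * Only the idle loop may arm NOHZ mode; irq_exit() may
             * only keep the tick stopped once the idle loop has
             * flagged itself via inidle.
             */
            if (!inidle && !ts->inidle)
                    return;
            ts->inidle = 1;

            /* ... existing tick-stop logic ... */
    }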
In hindsight we should have done this forever, but ...
/me grabs a large brown paperbag.
Debugged-by: Jack Ren <jack.ren@marvell.com>
Debugged-by: eric miao <eric.y.miao@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Presently, with preemption enabled, there is the possibility of being
preempted after the TIF_USEDFPU test and the register save, leading
to bogus state post-__switch_to(). Use an explicit preempt_disable()/
preempt_enable() pair around unlazy_fpu()/clear_fpu() to avoid this.
This follows the x86 change.
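A sketch of the resulting pattern (the unlazy_fpu() call follows the
commit text; the surrounding __switch_to() body is elided and
illustrative):

    struct task_struct *__switch_to(struct task_struct *prev,
                                    struct task_struct *next)
    {
            /*
             * A preemption between the TIF_USEDFPU test and the FPU
             * register save would leave half-saved state behind, so
             * keep the whole sequence atomic w.r.t. the scheduler.
             */
            preempt_disable();
            unlazy_fpu(prev, task_pt_regs(prev));
            preempt_enable();

            /* ... remainder of the context switch ... */
            return prev;
    }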
Reported-by: Takuo Koguchi <takuo.koguchi.sw@hitachi.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

This implements kernel-level atomic rollback built on top of gUSA, as
an alternative non-IRQ-based atomicity method. This is generally a
faster method for platforms lacking the LL/SC pairs that SH-4A and
later use, and is only supportable on legacy cores.
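For flavour, a gUSA-style sequence looks roughly like this sketch of
an atomic add (modeled on the general pattern, not copied from the
actual patch; the constraints and region length are illustrative):

    static inline void atomic_add(int i, atomic_t *v)
    {
            int tmp;

            __asm__ __volatile__ (
                    "   .align 2           \n\t"
                    "   mova   1f,  r0     \n\t" /* r0 = end of region   */
                    "   mov    r15, r1     \n\t" /* save the real sp     */
                    "   mov    #-6, r15    \n\t" /* r15 = -(region size) */
                    "   mov.l  @%1, %0     \n\t" /* load old value       */
                    "   add    %2,  %0     \n\t" /* add                  */
                    "   mov.l  %0,  @%1    \n\t" /* store new value      */
                    "1: mov    r1,  r15    \n\t" /* leave region         */
                    : "=&r" (tmp), "+r" (v)
                    : "r" (i)
                    : "memory", "r0", "r1");
    }

While r15 is negative, the kernel treats the sequence as a
restartable critical section: an interrupt rolls the PC back to the
start of the region (r0 + r15) instead of letting a half-done update
be observed, which is what makes the rollback atomicity work without
disabling IRQs.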
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

Signed-off-by: Paul Mundt <lethal@linux-sh.org>