|
Reading from the ROM is not a good idea as it could disturb some
flash operation that is in progress.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
The SH instruction set has several instructions which accept an 8-bit
immediate operand. For logical instructions this operand is zero-extended,
while for arithmetic instructions it is sign-extended. After adding an
option to the assembler to check this, it was found that several pieces
of assembly code were assuming this behaviour, and in one case
getting it wrong.
So this patch explicitly sign-extends any immediate operands, which makes
it obvious what is happening, and fixes the one case which got it wrong.
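As an aside (not part of the patch itself), the zero- versus sign-extension
difference is easy to demonstrate in plain C by treating the same 8-bit
immediate both ways:
    /* Illustrative only: how an 8-bit immediate of 0xfc behaves when
     * zero-extended (logical ops) versus sign-extended (arithmetic ops). */
    #include <stdio.h>

    int main(void)
    {
            unsigned long r0 = 0x100;
            unsigned char imm = 0xfc;

            unsigned long or_result  = r0 | imm;               /* zero-extended: 0x1fc */
            unsigned long add_result = r0 + (signed char)imm;  /* sign-extended: 0xfc, i.e. r0 - 4 */

            printf("or: %#lx add: %#lx\n", or_result, add_result);
            return 0;
    }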
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
The Synopsys PCI cell used in the later STMicro chips requires code to
be run in order to do IO cycles, rather than just memory mapping the IO
space. Rather than extending the existing SH infrastructure to allow
this, use the GENERIC_IOMAP implementation to save re-inventing the
wheel.
This set of changes allows the SH to be built with GENERIC_IOMAP
enabled; it simply ifdefs out the functions provided by the GENERIC_IOMAP
implementation, and provides a few missing functions that are required.
Signed-off-by: David McKay <david.mckay@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
|
|
This patch is V3 of the SuperH Mobile Runtime PM platform bus
implementation matching Rafael's Runtime PM v16.
The code gets invoked from the SuperH specific Runtime PM
platform bus functions that override the weak symbols for:
- platform_pm_runtime_suspend()
- platform_pm_runtime_resume()
- platform_pm_runtime_idle()
This Runtime PM implementation performs two levels of power
management. At the time of platform bus runtime suspend the
clock to the device is stopped instantly. Later on if all
devices within the power domain have their clocks stopped
then the device driver ->runtime_suspend() callbacks are
used to save hardware register state for each device.
Device driver ->runtime_suspend() calls are scheduled from
cpuidle context using platform_pm_runtime_suspend_idle().
When all devices have been fully suspended the processor
is allowed to enter deep sleep from cpuidle.
The runtime resume operation turns on clocks and also
restores registers if needed. It is worth noting that the
devices start in a suspended state and the device driver
is responsible for calling runtime resume before accessing
the actual hardware.
In this particular platform bus implementation runtime
resume is not allowed from interrupt context. Runtime
suspend is however allowed from interrupt context as
long as the synchronous functions are avoided.
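A minimal sketch of the two-level suspend path described above (the hwblk
helpers named here are purely illustrative, not the actual symbols from the
patch):
    /* Hypothetical sketch of the two-level runtime suspend. */
    #include <linux/platform_device.h>

    int platform_pm_runtime_suspend(struct device *dev)
    {
            struct platform_device *pdev = to_platform_device(dev);

            /* Level 1: stop the device clock immediately. */
            hwblk_clk_stop(pdev);

            /* Level 2: once every device in the power domain has its clock
             * stopped, defer the driver ->runtime_suspend() callbacks so
             * register state gets saved and deep sleep can be entered from
             * cpuidle. */
            if (hwblk_all_clocks_stopped(pdev))
                    hwblk_schedule_full_suspend(pdev);

            return 0;
    }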
[ updated for v17 -- PFM. ]
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Conflicts:
arch/sh/kernel/cpu/sh3/entry.S
|
|
sh64 does not yet support GENERIC_BUG, but still wants unwinder support.
Alias UNWINDER_BUG and UNWINDER_BUG_ON to their BUG counterparts until
the conversion to GENERIC_BUG is completed.
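The interim aliasing is conceptually just a pair of macro definitions along
these lines (a sketch; the exact header layout may differ):
    /* sh64 interim: fall back to the plain BUG infrastructure. */
    #define UNWINDER_BUG()          BUG()
    #define UNWINDER_BUG_ON(x)      BUG_ON(x)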
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This simplifies the unwinder trap handling, dropping the use of the
special trapa vector and simply piggybacking on top of the BUG support. A
new BUGFLAG_UNWINDER is added for flagging the unwinder fault, before
continuing on with regular BUG dispatch.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Allow a DWARF register to have an undefined value. When applied to the
DWARF return address register, this lets us label a function as
having no direct caller, e.g. kernel_thread_helper().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
|
|
We can't assume that if we execute the unwinder code and the unwinder
was already running that it has faulted. Clearly two kernel threads can
invoke the unwinder at the same time and may be running simultaneously.
The previous approach used BUG() and BUG_ON() in the unwinder code to
detect whether the unwinder was incapable of unwinding the stack, and
that the next available unwinder should be used instead. A better
approach is to explicitly invoke a trap handler to switch unwinders when
the current unwinder cannot continue.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
|
|
The handling of DW_CFA_val_offset ops was incorrectly using the
DWARF_REG_OFFSET flag but the register's value cannot be calculated
using the DWARF_REG_OFFSET method. Create a new flag to indicate that a
different method must be used to calculate the register's value even
though there is no implementation for DWARF_VAL_OFFSET yet; it's mainly
just a placeholder.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
|
|
Plug a memory leak in dwarf_unwinder_dump() where we didn't free the
memory that we had previously allocated for the DWARF frames and DWARF
registers.
Now is also an opportune time to implement our own mempool and kmem
cache. It's a good idea to have a certain number of frame and register
objects in reserve at all times, so that we are guaranteed to have our
allocation satisfied even when memory is scarce. Since we have pools to
allocate from, we can implement the registers for each frame as a linked
list as opposed to a sparsely populated array. Whilst it's true that the
lookup time for a linked list is larger than for arrays, there's usually
only a maximum of 8 registers per frame, so the overhead isn't much of a
concern.
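A sketch of the reserve-pool setup this describes, using the standard
kmem_cache and mempool interfaces (the cache name and reserve count are
illustrative; struct dwarf_frame comes from the unwinder's own headers):
    #include <linux/slab.h>
    #include <linux/mempool.h>

    static struct kmem_cache *dwarf_frame_cachep;
    static mempool_t *dwarf_frame_pool;

    #define DWARF_FRAME_MIN_REQ     64      /* illustrative reserve count */

    static int __init dwarf_pool_init(void)
    {
            dwarf_frame_cachep = kmem_cache_create("dwarf_frames",
                            sizeof(struct dwarf_frame), 0, SLAB_PANIC, NULL);

            dwarf_frame_pool = mempool_create(DWARF_FRAME_MIN_REQ,
                            mempool_alloc_slab, mempool_free_slab,
                            dwarf_frame_cachep);

            return dwarf_frame_pool ? 0 : -ENOMEM;
    }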
Signed-off-by: Matt Fleming <matt@console-pimps.org>
|
|
Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
As an excellent indicator of how much testing the DSP code gets, a couple
of rather glaring bugs in the DSP save/restore paths were found:
- In the DSP restore case a0 needs to be popped off before a0g,
or the value of a0g is clobbered by the MSB of a0 in the case
of sign extension.
- Beyond that, the save and restore orders were out of sync,
so this fixes that up as well. At the same time, we switch over
to using movs.l for both the save and restore of the general DSP
registers as opposed to using sts.l (which was initially put in
place to work around a bug in ancient binutils versions which
the kernel no longer supports).
Reported-by: Chee Soon Yip <yip.cheesoon@renesas.com>
Cc: Chu Lih Kwek <kwek.chulih@renesas.com>,
Cc: General Lai <general.lai@renesas.com>,
Cc: Robert Cozens <Robert.Cozens@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
While most platforms implement LED banks in sets of 8/16/32, some use
different configurations. This adds a LED mask to the heartbeat platform
data to allow platforms to constrain the bitmap, which is otherwise
derived from the register size.
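As an illustration, a board with only six LEDs wired to a 16-bit register
could then describe itself along these lines (field names are placeholders,
not necessarily the final platform data layout):
    static struct heartbeat_data heartbeat_data = {
            .regsize = 16,          /* register width in bits */
            .mask    = 0x3f,        /* only six LEDs populated */
    };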
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This moves the initialization over to an early_initcall(). This fixes up
some lockdep interaction issues. At the same time, kill off some
superfluous locking in the init path.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Also, remove the "fix" to DW_CFA_def_cfa_register where we reset the
frame's cfa_offset to 0. This action is incorrect when handling
DW_CFA_def_cfa_register as the DWARF spec specifically states that the
previous contents of cfa_offset should be used with the new
register. The reason I thought cfa_offset should be reset to 0 was
that it was being assigned a bogus value prior to executing the
DW_CFA_def_cfa_register op. It turns out that the bogus cfa_offset value
came from interpreting .cfi_escape pseudo-ops (those used by the GNU
extensions) as DW_CFA_def_cfa ops.
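In other words, the handler for DW_CFA_def_cfa_register should only swap
the register and leave the offset alone, roughly like this (a sketch with
invented type and field names):
    struct cfa_state {
            unsigned long reg;
            long offset;
    };

    /* DW_CFA_def_cfa_register: the CFA register changes, the previously
     * computed offset is deliberately kept. */
    static void handle_def_cfa_register(struct cfa_state *cfa, unsigned long new_reg)
    {
            cfa->reg = new_reg;
            /* cfa->offset is left untouched, per the DWARF spec. */
    }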
Signed-off-by: Matt Fleming <matt@console-pimps.org>
|
|
|
|
Trying to figure out the best value for DWARF_ARCH_UNWIND_OFFSET is
tricky at best. Various things can change the size (and offset from the
beginning of the function) of the prologue. Notably, turning on ftrace
adds calls to mcount at the beginning of functions, thereby pushing the
prologue further into the function.
So replace DWARF_ARCH_UNWIND_OFFSET with some code that continues to
execute CFA instructions until the value of return address register is
defined. This is safe to do because we know that the return address must
have been pushed onto the frame before our first function call; we just
can't figure out where at compile-time.
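The replacement logic amounts to a loop of roughly this shape (helper names
here are hypothetical, not the actual implementation):
    /* Instead of skipping a fixed DWARF_ARCH_UNWIND_OFFSET, interpret CFA
     * instructions one at a time until the return address rule is known. */
    while (!dwarf_frame_ra_defined(frame))
            dwarf_cfa_execute_next_insn(frame, fde);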
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
In order to use DWARF unwinder info the frame register has to contain a
valid value. Whilst GCC takes care of this for C code, we have to do it
ourselves for assembly.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This is a first cut at a generic DWARF unwinder for the kernel. It's
still lacking DWARF64 support and the DWARF expression support hasn't
been tested very well but it is generating proper stacktraces on SH for
WARN_ON() and NULL dereferences.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Provide an interface for registering stack unwinders, where each
unwinder is given a rating that describes its accuracy and
complexity. The more accurate an unwinder is, the more complex it is.
If the current stack unwinder faults, then the stack unwinder with the
next highest accuracy will be used in its place (provided one is
available). For example, this allows unwinders, such as the DWARF
unwinder, to liberally sprinkle BUG()s to catch badly formed DWARF debug
info.
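A sketch of what registration looks like from an unwinder's point of view
(the field names and the rating value shown are illustrative):
    static struct unwinder dwarf_unwinder = {
            .name   = "dwarf-unwinder",
            .rating = 150,                  /* higher = more accurate */
            .dump   = dwarf_unwinder_dump,
    };

    static int __init dwarf_unwinder_init(void)
    {
            return unwinder_register(&dwarf_unwinder);
    }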
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Copy the stacktrace ops code from x86 and provide a central function for
use by functions that need to dump a callstack.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
|
|
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
These patches extend struct platform device data for a bunch of
SuperH Mobile processors and embedded boards. The patches simply
add hardware block ids to on-chip platform devices. Platform
devices off chip (such as external ethernet controllers or flash
chips) are left out, which gives them a special-case hardware block
id of zero.
Upcoming Runtime PM code will make use of the hardware block id
to group devices together. The hardware block id can also be used
to extend the SuperH Mobile clock framework implementation.
This series of patches depends on the following:
"Driver Core: Add platform device arch data V3".
This patch adds a hwblk_id member to struct pdev_archdata. This member
should be used to point out the on-chip hardware block id.
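For illustration, an on-chip platform device then simply carries its block
id in archdata (the device name and HWBLK_VEU identifier here are just
examples):
    static struct platform_device veu_device = {
            .name           = "uio_pdrv_genirq",
            .id             = 0,
            .archdata = {
                    .hwblk_id = HWBLK_VEU, /* on-chip block; off-chip devices stay at 0 */
            },
    };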
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6 into sh/hwblk
|
|
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This cleans up the irqflags tracing code quite a bit and ties it
in to various missing callsites that caused an imbalance when
CONFIG_PROVE_LOCKING was enabled.
Previously this was catching on:
987 #ifdef CONFIG_PROVE_LOCKING
988 DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
989 DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
990 #endif
991 retval = -EAGAIN;
with hardirqs being doubly enabled, and subsequently bailing out
with the following call trace:
Call trace:
[<88035224>] __lock_acquire+0x616/0x6a6
[<88015a8c>] do_fork+0xf8/0x2b0
[<880331ec>] trace_hardirqs_on_caller+0xd4/0x114
[<88241074>] _spin_unlock_irq+0x20/0x64
[<88035224>] __lock_acquire+0x616/0x6a6
[<8800386c>] kernel_thread+0x48/0x70
[<88024ecc>] ____call_usermodehelper+0x0/0x110
[<88024ecc>] ____call_usermodehelper+0x0/0x110
[<88003894>] kernel_thread_helper+0x0/0x14
[<88024bac>] __call_usermodehelper+0x38/0x70
[<88025dc0>] worker_thread+0x150/0x274
[<88035b9c>] lock_release+0x0/0x198
[<88024b74>] __call_usermodehelper+0x0/0x70
[<88028cf0>] autoremove_wake_function+0x0/0x30
[<88028bf2>] kthread+0x3e/0x70
[<88025c70>] worker_thread+0x0/0x274
[<8800389c>] kernel_thread_helper+0x8/0x14
[<88028bb4>] kthread+0x0/0x70
[<88003894>] kernel_thread_helper+0x0/0x14
Reported-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Those definitions are already provided by asm-generic
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()
Upcoming patches to support the new 64-bit "BookE" powerpc architecture
will need to have the virtual address corresponding to PTE page when
freeing it, due to the way the HW table walker works.
Basically, the TLB can be loaded with "large" pages that cover the whole
virtual space (well, sort-of, half of it actually) represented by a PTE
page, and which contain an "indirect" bit indicating that this TLB entry
RPN points to an array of PTEs from which the TLB can then create direct
entries. Thus, in order to invalidate those when PTE pages are deleted,
we need the virtual address to pass to tlbilx or tlbivax instructions.
The old trick of sticking it somewhere in the PTE page struct page sucks
too much; the address is almost readily available in all call sites and
almost everybody implements these as macros, so we may as well add the
argument everywhere. I added it to the pmd and pud variants for consistency.
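The resulting change at the call sites looks roughly like this (illustrative;
variable names are arbitrary):
    /* before */
    pte_free_tlb(tlb, ptep);
    pmd_free_tlb(tlb, pmd);
    pud_free_tlb(tlb, pud);

    /* after: the virtual address covered by the page table page is passed */
    pte_free_tlb(tlb, ptep, addr);
    pmd_free_tlb(tlb, pmd, addr);
    pud_free_tlb(tlb, pud, addr);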
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
Acked-by: Nick Piggin <npiggin@suse.de>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Allocate one of the unused PTE bits for _PAGE_SPECIAL directly. This is
prep work for fast gup and the zero page revival.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
|
|
Extend the SuperH hwblk code to support more than one counter.
This contains groundwork for the future Runtime PM implementation.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
|
|
Add both dynamic and static function graph tracer support for sh.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Pull the initial preempt_count value into a single
definition site.
Maintainers for: alpha, ia64 and m68k, please have a look,
your arch code is funny.
The header magic is a bit odd, but similar to the KERNEL_DS
one, CPP waits with expanding these macros until the
INIT_THREAD_INFO macro itself is expanded, which is in
arch/*/kernel/init_task.c where we've already included
sched.h so we're good.
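Sketched out, the shared definition and its use in an arch INIT_THREAD_INFO()
look something like this (the value shown is illustrative, not quoted from
the patch):
    /* include/linux/sched.h: tasks start with preemption disabled until
     * the scheduler is ready. */
    #define INIT_PREEMPT_COUNT      (1 + PREEMPT_ACTIVE)

    /* arch INIT_THREAD_INFO() initializers then simply reference it: */
    .preempt_count  = INIT_PREEMPT_COUNT,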
Cc: tony.luck@intel.com
Cc: rth@twiddle.net
Cc: geert@linux-m68k.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
|
|
Now that I've added TIF_SYSCALL_FTRACE the thread flags do not fit into
a single byte any more. Code testing them now needs to be aware of the
upper and lower bytes.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
It seems that MCOUNT_INSN_OFFSET was calculating the distance between
the wrong functions. The value that should have actually been computed
is the distance between ftrace_call and ftrace_stub. I discovered this
when I added some code to ftrace_caller.
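Expressed directly, the intended value is just the distance between those
two symbols, along these lines (illustrative, not the literal assembler
expression):
    #define MCOUNT_INSN_OFFSET \
            ((unsigned long)ftrace_stub - (unsigned long)ftrace_call)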
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
|
|
This patch adds cpuidle support for SuperH Mobile.
The sleep mode selected by cpuidle is compared with
the mode selected by the hwblk sleep code and the
best allowed mode is entered.
At this point "Sleep mode" and "Sleep mode + SF" are
supported. This code can easily be extended to support
"Software suspend mode", but the assembly code must
first be updated to avoid losing interrupts.
Also, update the code to only copy the assembly snippet
into internal memory once at bootup.
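Put another way, the two supported modes end up as entries in a cpuidle
state table, roughly like this (the names, latencies and enter callbacks
here are placeholders):
    static struct cpuidle_state sh_mobile_states[] = {
            {
                    .name         = "C1",
                    .desc         = "Sleep mode",
                    .exit_latency = 1,
                    .enter        = sh_mobile_enter_sleep,
            },
            {
                    .name         = "C2",
                    .desc         = "Sleep mode + SF",
                    .exit_latency = 100,
                    .enter        = sh_mobile_enter_sleep_sf,
            },
    };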
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This patch is the hwblk base implementation, containing
structures and shared functions dealing with hardware blocks.
Each processor model should provide a list of hwblks and
describe which module stop bit is associated with each
hwblk and how the hwblks are grouped together into areas.
The shared code keeps track of the usage count for each
hwblk and the areas. Fallback implementations for processor
specific code are also kept as weak symbols.
The clock framework, the runtime pm code and cpuidle will
all tie into this hwblk implementation.
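The description above roughly implies a per-block structure of this shape
(field names are invented for illustration):
    struct hwblk {
            void __iomem *mstp_reg;         /* module stop register */
            int bit;                        /* module stop bit for this block */
            int area;                       /* power area the block belongs to */
            int usecount;                   /* maintained by the shared code */
    };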
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Rework the bootmem allocator to use the lmb framework.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Fixes up recent build breakage.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
When arch/sh/include/asm/syscall_32.h is included from a file that
doesn't also include linux/err.h the following error is produced,
In file included from /home/matt/src/kernels/sh-2.6/arch/sh/include/asm/syscall.h:5,
from kernel/trace/trace_syscalls.c:3:
/home/matt/src/kernels/sh-2.6/arch/sh/include/asm/syscall_32.h: In function 'syscall_get_error':
/home/matt/src/kernels/sh-2.6/arch/sh/include/asm/syscall_32.h:28: error: implicit declaration of function 'IS_ERR_VALUE'
make[2]: *** [kernel/trace/trace_syscalls.o] Error 1
make[1]: *** [kernel/trace] Error 2
make: *** [kernel] Error 2
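One straightforward fix, and roughly what this patch does, is to make the
header pull in the definition of IS_ERR_VALUE() itself:
    /* arch/sh/include/asm/syscall_32.h */
    #include <linux/err.h>          /* for IS_ERR_VALUE() */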
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Fixes up a recently introduced build error.
Reported-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
commit dbe6f1869188b6e04e38aa861dd198befb08bcd7
("dma-mapping: mark dma_sync_single and dma_sync_sg as deprecated"
conveniently broke every single SH build.
In the future it would be great if people could at least bother
figuring out how to use grep.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Crib the x86 cpu_idle_wait() implementation and shove it in with the
idle code, subsequently enabling ARCH_HAS_CPU_IDLE_WAIT.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|