path: root/arch/powerpc/kernel/head_64.S
2009-06-09  powerpc: Split exception handling out of head_64.S  (Benjamin Herrenschmidt)
To prepare for future support of Book3E 64-bit PowerPC processors, which use completely different exception handling, we move that code to a new exceptions-64s.S file. This file is #included from head_64.S due to some of the absolute address requirements, which can currently only be fulfilled from within that file. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-06-09  powerpc: Move VMX and VSX asm code to vector.S  (Benjamin Herrenschmidt)
Currently, load_up_altivec and give_up_altivec are duplicated between 32-bit and 64-bit. This creates a common implementation, moved out of head_32.S, head_64.S and misc_64.S and into vector.S, using the same macros we already use for our common implementation of load_up_fpu. I also moved the VSX code over to vector.S, though in that case I didn't make it build on 32-bit (yet). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-03-11  powerpc/kconfig: Kill PPC_MULTIPLATFORM  (Benjamin Herrenschmidt)
CONFIG_PPC_MULTIPLATFORM is a remnant of the pre-powerpc days and isn't really meaningful anymore. It was basically equivalent to PPC64 || 6xx. This removes it along with the following changes:
- 32-bit platforms that relied on PPC32 && PPC_MULTIPLATFORM now rely on 6xx, which is what they want anyway.
- A new symbol, PPC_BOOK3S, is defined to represent compliance with the "Server" variant of the architecture. It is set when either 6xx or PPC64 is set, and opens the door for a future 64-bit BOOK3E.
- 64-bit platforms that relied on PPC64 && PPC_MULTIPLATFORM now use PPC64 && PPC_BOOK3S.
- A separate and selectable CONFIG_PPC_OF_BOOT_TRAMPOLINE option is now used to control the use of prom_init.c.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-01-13  powerpc/powermac: Fix occasional SMP boot failure  (Benjamin Herrenschmidt)
The PowerMac kernel occasionally fails to bring up the secondary CPUs on SMP; the trigger seems to be fairly random and related to the location of code and data. This appears to be due to the initial loading of the TOC value by the secondary processor, which now happens before we clear HID4:RM_CI (Real Mode Cache Invalidate). This bit should really be cleared before we do any load or store other than fetching code. The fix relies on the assumption that all SMP 64-bit PowerMacs use variants of the 970 (which fortunately is true): it explicitly clears that bit, adding an slbia for good measure, as RM_CI mode is known to create bogus ERAT entries. I also removed some spurious debug output that was left enabled by mistake while at it. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-31  powerpc/ppc64/kdump: Better flag for running relocatable  (Milton Miller)
The __kdump_flag ABI is overly constraining for future development. As of 2.6.27, the kernel entry point has 4 constraints: offset 0 is the starting point for the master (boot) cpu (entered with r3 pointing to the device tree structure), offset 0x60 is code for the slave cpus (entered with r3 set to their device tree physical id), offset 0x20 is used by the iSeries hypervisor, and secondary cpus must be well behaved when the first 256 bytes are copied to address 0. Placing the __kdump_flag at 0x18 is bad because:
- It was taking the last 8 bytes before the iSeries hypervisor data.
- It was 8 bytes for a boolean flag.
- It had no way of identifying that the flag was present.
- It does not leave any room for the master to add additional code before branching, which hurts debugging.
- It makes it unnecessarily hard for 32-bit code to be common (8 bytes).
Now that we have eliminated the use of __kdump_flag in favor of the standard is_kdump_kernel(), this flag only controls whether we run without relocating the kernel to PHYSICAL_START (0), so rename it __run_at_load. Move the flag to 0x5c, one word before the secondary cpu entry point at 0x60. Initialize it with "run0" to say it will run at 0 unless it is set to 1. It only exists if we are relocatable. Signed-off-by: Milton Miller <miltonm@bga.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-22  powerpc: Support for relocatable kdump kernel  (Mohan Kumar M)
This adds relocatable kernel support for kdump. With this, one can use the same regular kernel as the kdump capture kernel. A signature (0xfeed1234) is passed in r6 from the panic code to the next kernel through the kexec_sequence and purgatory code. The signature is used to differentiate between kdump and non-kdump kernels. The purgatory code compares the signature and sets the __kdump_flag in head_64.S. During boot, the kernel code checks __kdump_flag, and if it is set, the kernel behaves as a relocatable kdump kernel: it boots at the address where it was loaded by kexec-tools, i.e. at the address reserved through the crashkernel boot parameter. CONFIG_CRASH_DUMP depends on the CONFIG_RELOCATABLE option so that the kdump kernel is built relocatable; thus the same kernel can be used as both the production and the kdump kernel. This patch incorporates the changes suggested by Paul Mackerras to avoid GOT use and to avoid two copies of the code. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Mohan Kumar M <mohan@in.ibm.com> Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
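In outline, the handshake the message describes looks like this (a C sketch of the mechanism only; the real checks are assembly in kexec-tools' purgatory and in head_64.S, and the helper below is illustrative, though the 0xfeed1234 signature and __kdump_flag come from the text):

    #define KDUMP_SIGNATURE 0xfeed1234UL  /* passed in r6 via kexec_sequence */

    /* purgatory side: decide whether the kernel being entered is a kdump kernel */
    static void purgatory_mark_kdump(unsigned long r6, unsigned long *kdump_flag)
    {
            if (r6 == KDUMP_SIGNATURE)
                    *kdump_flag = 1;  /* lands in __kdump_flag in head_64.S */
    }

    /* early-boot side: a set flag means "run where kexec-tools loaded us"
     * (the crashkernel region) instead of copying down to PHYSICAL_START */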
2008-09-15  powerpc: Make the 64-bit kernel as a position-independent executable  (Paul Mackerras)
This implements CONFIG_RELOCATABLE for 64-bit by making the kernel as a position-independent executable (PIE) when it is set. This involves processing the dynamic relocations in the image in the early stages of booting, even if the kernel is being run at the address it is linked at, since the linker does not necessarily fill in words in the image for which there are dynamic relocations. (In fact the linker does fill in such words for 64-bit executables, though not for 32-bit executables, so in principle we could avoid calling relocate() entirely when we're running a 64-bit kernel at the linked address.) The dynamic relocations are processed by a new function relocate(addr), where the addr parameter is the virtual address where the image will be run. In fact we call it twice; once before calling prom_init, and again when starting the main kernel. This means that reloc_offset() returns 0 in prom_init (since it has been relocated to the address it is running at), which necessitated a few adjustments. This also changes __va and __pa to use an equivalent definition that is simpler. With the relocatable kernel, PAGE_OFFSET and MEMORY_START are constants (for 64-bit) whereas PHYSICAL_START is a variable (and KERNELBASE ideally should be too, but isn't yet). With this, relocatable kernels still copy themselves down to physical address 0 and run there. Signed-off-by: Paul Mackerras <paulus@samba.org>
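The core of that relocation pass can be pictured as follows (a minimal C sketch, assuming a .rela.dyn section containing only R_PPC64_RELATIVE entries; the bounds symbols and link base are assumptions, and the kernel's actual relocate() is assembly):

    #include <elf.h>
    #include <stdint.h>

    extern Elf64_Rela __rela_dyn_start[], __rela_dyn_end[]; /* hypothetical linker symbols */
    #define LINK_ADDR 0xc000000000000000UL                  /* assumed link-time base */

    void relocate(unsigned long addr)  /* addr: virtual address the image will run at */
    {
            unsigned long delta = addr - LINK_ADDR;

            for (Elf64_Rela *r = __rela_dyn_start; r < __rela_dyn_end; r++) {
                    if (ELF64_R_TYPE(r->r_info) != R_PPC64_RELATIVE)
                            continue;  /* a PIE kernel link should emit only RELATIVE relocs */
                    /* patch the word so it points into the run-time address range */
                    *(uint64_t *)(r->r_offset + delta) = r->r_addend + delta;
            }
    }

When delta is zero this merely rewrites each word with the value the linker already filled in (for 64-bit executables), which is why calling it at the linked address is harmless.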
2008-09-15  powerpc: Use LOAD_REG_IMMEDIATE only for constants on 64-bit  (Paul Mackerras)
Using LOAD_REG_IMMEDIATE to get the address of kernel symbols generates 5 instructions where LOAD_REG_ADDR can do it in one, and will generate R_PPC64_ADDR16_* relocations in the output when we get to making the kernel as a position-independent executable, which we'd rather not have to handle. This changes various bits of assembly code to use LOAD_REG_ADDR when we need to get the address of a symbol, or to use suitable position-independent code for cases where we can't access the TOC for various reasons, or if we're not running at the address we were linked at. It also cleans up a few minor things; there's no reason to save and restore SRR0/1 around RTAS calls, __mmu_off can get the return address from LR more conveniently than the caller can supply it in R4 (and we already assume elsewhere that EA == RA if the MMU is on in early boot), and enable_64b_mode was using 5 instructions where 2 would do. Signed-off-by: Paul Mackerras <paulus@samba.org>
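For comparison, the two macros looked roughly like this (simplified from the era's arch/powerpc/include/asm/ppc_asm.h):

    /* five instructions, each carrying an absolute relocation on the symbol */
    #define LOAD_REG_IMMEDIATE(reg, expr)           \
            lis     reg,(expr)@highest;             \
            ori     reg,reg,(expr)@higher;          \
            rldicr  reg,reg,32,31;                  \
            oris    reg,reg,(expr)@h;               \
            ori     reg,reg,(expr)@l

    /* one instruction, resolved through the TOC, hence no absolute relocation */
    #define LOAD_REG_ADDR(reg, name)                \
            ld      reg,name@got(r2)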
2008-09-15  powerpc: Make it possible to move the interrupt handlers away from the kernel  (Paul Mackerras)
This changes the way that the exception prologs transfer control to the handlers in 64-bit kernels with the aim of making it possible to have the prologs separate from the main body of the kernel. Now, instead of computing the address of the handler by taking the top 32 bits of the paca address (to get the 0xc0000000........ part) and ORing in something in the bottom 16 bits, we get the base address of the kernel by doing a load from the paca and add an offset. This also replaces an mfmsr and an ori to compute the MSR value for the handler with a load from the paca. That makes it unnecessary to have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit mode. We can no longer use direct branches in the exception prolog code, which means that the SLB miss handlers can't branch directly to .slb_miss_realmode any more. Instead we have to compute the address and do an indirect branch. This is conditional on CONFIG_RELOCATABLE; for non-relocatable kernels we use a direct branch as before. (A later change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.) Since the secondary CPUs on pSeries start execution in the first 0x100 bytes of real memory and then have to get to wherever the kernel is, we can't use a direct branch to get there. Instead this changes __secondary_hold_spinloop from a flag to a function pointer. When it is set to a non-NULL value, the secondary CPUs jump to the function pointed to by that value. Finally this eliminates one code difference between 32-bit and 64-bit by making __secondary_hold be the text address of the secondary CPU spinloop rather than a function descriptor for it. Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15  powerpc: Rearrange head_64.S to move interrupt handler code to the beginning  (Paul Mackerras)
This rearranges head_64.S so that we have all the first-level exception prologs together starting at 0x100, followed by all the second-level handlers that are invoked from the first-level prologs, followed by other code. This doesn't make any functional change but will make following changes for relocatable kernel support easier. Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-15  powerpc: Don't spin on sync instruction at boot time  (Sonny Rao)
Push the sync below the secondary smp init hold loop and comment its purpose. This should speed up boot by reducing global traffic during the single-threaded portion of boot. Signed-off-by: Sonny Rao <sonnyrao@us.ibm.com> Signed-off-by: Milton Miller <miltonm@bga.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-01  powerpc: Add VSX context save/restore, ptrace and signal support  (Michael Neuling)
This patch extends the floating point save and restore code to use the VSX load/stores when VSX is available. This will make FP context save/restore marginally slower on FP-only code when VSX is available, as it has to load/store 128 bits rather than just 64 bits. Mixing FP, VMX and VSX code will get constant architected state. The signals interface is extended to enable access to VSR 0-31 doubleword 1 after discussions with toolchain maintainers. Backward compatibility is maintained. The ptrace interface is also extended to allow access to the full VSR 0-31 registers. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01  powerpc: Make load_up_fpu and load_up_altivec callable  (Michael Neuling)
Make load_up_fpu and load_up_altivec callable so they can be reused by the VSX code. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01  powerpc: Move altivec_unavailable  (Michael Neuling)
Move the altivec_unavailable code, to make room at 0xf40 where the vsx_unavailable exception will be. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-05-09  [POWERPC] Fix bogus paca->_current initialization  (Benjamin Herrenschmidt)
When doing lockdep, I had two patches to initialize paca->_current early, one bogus, and one correct. Unfortunately both got merged as the bad one ended up being part of the main lockdep patch by mistake. This causes memory corruption at boot. This removes the offending code. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-29  [POWERPC] Add fast little-endian switch system call  (Paul Mackerras)
This adds a system call on 64-bit platforms for switching between little-endian and big-endian modes that is much faster than doing a prctl call. This system call is handled as a special case right at the start of the system call entry code, and because it is a special case, it uses a system call number which is out of the range of normal system calls, namely 0x1ebe. Measurements with lmbench on a 4.2GHz POWER6 showed no measurable change in the speed of normal system calls with this patch. Switching endianness with this new system call takes around 60ns on a 4.2GHz POWER6, compared with around 300ns to switch endian mode with a prctl. This can provide a significant performance advantage for emulators for little-endian architectures that want to switch between big-endian and little-endian mode frequently, e.g. because they are generating instruction sequences on the fly and they want to run those sequences in little-endian mode. The other thing about this system call is that it doesn't clobber as many registers as a normal system call. It only clobbers r12. Signed-off-by: Paul Mackerras <paulus@samba.org>
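Conceptually, the special case at system-call entry amounts to the following (a C rendering for clarity; the real code is a few assembly instructions in head_64.S, and the function name here is illustrative):

    #include <asm/ptrace.h>  /* struct pt_regs */
    #include <asm/reg.h>     /* MSR_LE */

    #define FAST_ENDIAN_SWITCH_NR 0x1ebe  /* out of range of normal syscalls */

    static int fast_endian_switch(struct pt_regs *regs)
    {
            if (regs->gpr[0] != FAST_ENDIAN_SWITCH_NR)
                    return 0;         /* not the special call: normal syscall path */
            regs->msr ^= MSR_LE;      /* return to userspace in the other endianness */
            return 1;                 /* handled; only r12 is clobbered by the asm */
    }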
2008-04-18  [POWERPC] irqtrace support for 64-bit powerpc  (Benjamin Herrenschmidt)
This adds the low level irq tracing hooks to the powerpc architecture needed to enable full lockdep functionality. This is partly based on Johannes Berg's initial version. I removed the asm trampoline that isn't needed (thus improving performance) and modified all sorts of bits and pieces, reworking most of the assembly, etc... Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-17  [POWERPC] Initialize paca->current earlier  (Benjamin Herrenschmidt)
Currently, we initialize the "current" pointer in the PACA (which is used by the "current" macro in the kernel) before calling setup_system(). That means that early_setup() is called with current still "NULL" which is -not- a good idea. It happens to work so far but breaks with lockdep when early code calls printk. This changes it so that all PACAs are statically initialized with __current pointing to the init task. For non-0 CPUs, this is fixed up before use. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-14  [POWERPC] Fix handling of unrecoverable SLB miss interrupts  (Paul Mackerras)
If an SLB miss interrupt happens while the RI bit of MSR is zero, we can't just return, because RI being zero indicates that SRR0/SRR1 potentially had live values in them, and the process of taking an interrupt overwrites them. This should never happen, but if it does, we try to print a nice oops message. That doesn't work, however, because the code at unrecov_slb assumes that the MMU has been turned on, but we call it with the MMU off (and have done so since the SLB miss handler was rewritten to run without turning the MMU on) -- except on iSeries, where everything runs with the MMU on. This fixes it by adding the necessary code to turn the MMU on if necessary. Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-03  [POWERPC] Fix iSeries hard irq enabling regression  (Benjamin Herrenschmidt)
A subtle bug sneaked into iSeries recently. On this platform, we must not normally clear MSR:EE (the hardware external interrupt enable) except for short periods of time. Taking an interrupt while soft-disabled doesn't cause us to clear it, for example. The iSeries kernel expects to mostly run with MSR:EE enabled at all times except in a few exception entry/exit code paths. Thus local_irq_enable() doesn't check if it needs to hard-enable, as it expects this to be unnecessary on iSeries. However, hard_irq_disable() _does_ cause MSR:EE to be cleared, including on iSeries. A call to it was recently added to the context switch code, thus causing interrupts to become disabled for long periods of time, causing the iSeries watchdog to kick in under some circumstances, among other nasty things. This patch fixes it by making local_irq_enable() properly re-enable MSR:EE on iSeries. It basically removes a return statement to make iSeries use the same code path as everybody else. That does mean that we might occasionally get spurious decrementer interrupts, but I don't think that matters. Another option would have been to make hard_irq_disable() a nop on iSeries, but I didn't like it much, in case we have good reasons to hard-disable. Part of the patch consists of fixes to make sure the hard_enabled PACA field is properly set on iSeries, as it previously wasn't, since it was mostly unused. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-24  [POWERPC] Provide a way to protect 4k subpages when using 64k pages  (Paul Mackerras)
Using 64k pages on 64-bit PowerPC systems makes life difficult for emulators that are trying to emulate an ISA, such as x86, which use a smaller page size, since the emulator can no longer use the MMU and the normal system calls for controlling page protections. Of course, the emulator can emulate the MMU by checking and possibly remapping the address for each memory access in software, but that is pretty slow. This provides a facility for such programs to control the access permissions on individual 4k sub-pages of 64k pages. The idea is that the emulator supplies an array of protection masks to apply to a specified range of virtual addresses. These masks are applied at the level where hardware PTEs are inserted into the hardware page table based on the Linux PTEs, so the Linux PTEs are not affected. Note that this new mechanism does not allow any access that would otherwise be prohibited; it can only prohibit accesses that would otherwise be allowed. This new facility is only available on 64-bit PowerPC and only when the kernel is configured for 64k pages. The masks are supplied using a new subpage_prot system call, which takes a starting virtual address and length, and a pointer to an array of protection masks in memory. The array has a 32-bit word per 64k page to be protected; each 32-bit word consists of 16 2-bit fields, for which 0 allows any access (that is otherwise allowed), 1 prevents write accesses, and 2 or 3 prevent any access. Implicit in this is that the regions of the address space that are protected are switched to use 4k hardware pages rather than 64k hardware pages (on machines with hardware 64k page support). In fact the whole process is switched to use 4k hardware pages when the subpage_prot system call is used, but this could be improved in future to switch only the affected segments. The subpage protection bits are stored in a 3 level tree akin to the page table tree. The top level of this tree is stored in a structure that is appended to the top level of the page table tree, i.e., the pgd array. Since it will often only be 32-bit addresses (below 4GB) that are protected, the pointers to the first four bottom level pages are also stored in this structure (each bottom level page contains the protection bits for 1GB of address space), so the protection bits for addresses below 4GB can be accessed with one fewer load than those for higher addresses. Signed-off-by: Paul Mackerras <paulus@samba.org>
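A user of the facility might look like this (a hedged sketch: the syscall number is an assumption, and the 2-bit field encoding follows the description above):

    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_subpage_prot
    #define __NR_subpage_prot 310  /* assumption: the powerpc syscall number */
    #endif

    /* make all 16 4k subpages of one 64k page read-only */
    static int subpages_readonly(void *addr)  /* addr must be 64k aligned */
    {
            uint32_t map[1] = { 0 };
            for (int i = 0; i < 16; i++)
                    map[0] |= 1u << (2 * i);  /* field value 1 = prevent writes */
            return syscall(__NR_subpage_prot, (unsigned long)addr,
                           0x10000UL /* one 64k page */, map);
    }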
2007-11-08  [POWERPC] Fix si_addr value on low level hash failures  (Benjamin Herrenschmidt)
If the low level MMU hash table insertion returns an error (which can happen in some rare circumstances when the hypervisor refuses the insertion of a PTE, typically if you try to access junk via /dev/mem), the generated signal had an incorrect si_addr value due to a bug in the assembly, which was loading it as a 32-bit quantity instead of a 64-bit quantity. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-12  [POWERPC] Use 1TB segments  (Paul Mackerras)
This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree. We don't currently use 1TB segments for user addresses < 1T, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here. Parts of this patch were originally written by Ben Herrenschmidt. Signed-off-by: Paul Mackerras <paulus@samba.org>
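Detection then reduces to scanning the flattened device tree for that property and looking for the 1TB encoding (a sketch; the helper signature is approximate for the era, and 0x28 (decimal 40, i.e. log2 of 1TB) as the encoding is my reading of the binding):

    /* callback for of_scan_flat_dt(): returns nonzero once 1TB support is seen */
    static int __init seg_sizes_cb(unsigned long node, const char *uname,
                                   int depth, void *data)
    {
            const unsigned int *prop;
            unsigned long size;

            prop = of_get_flat_dt_prop(node, "ibm,processor-segment-sizes", &size);
            if (!prop)
                    return 0;
            for (unsigned long i = 0; i < size / 4; i++)
                    if (prop[i] == 0x28)  /* 2^0x28 = 1TB; 0x1c would be 256MB */
                            return 1;
            return 0;
    }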
2007-09-19  [POWERPC] FWNMI is only used on pSeries  (Stephen Rothwell)
This saves 4k on non-pSeries builds (except for iSeries, where it saves almost 4k). Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19  [POWERPC] Size swapper_pg_dir correctly  (Stephen Rothwell)
David Gibson pointed out that swapper_pg_dir actually needs to be PGD_TABLE_SIZE bytes long, not PAGE_SIZE. This saves 64k in the bss for a ppc64_defconfig kernel built with CONFIG_PPC_64K_PAGES. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19  [POWERPC] Remove cmd_line from head*.S  (Stephen Rothwell)
It is just a C char array, so declare it thusly. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-14  [POWERPC] Move lowlevel runlatch calls under cpu feature control  (Olof Johansson)
There's no need to call the runlatch on/off functions on processors that don't implement them (CPU_FTR_CTRL). Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-22  [POWERPC] Move the iSeries exception vectors  (Stephen Rothwell)
out of head_64.S and into platforms/iseries/exception.S. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-22  [POWERPC] Move the exception macros into a header file  (Stephen Rothwell)
It makes head_64.S a bit more readable and will allow us to move the iSeries exceptions elsewhere. This also removes the last line of the comment:
 * The following macros define the code that appears as
 * the prologue to each of the exception handlers. They
 * are split into two parts to allow a single kernel binary
 * to be used for pSeries and iSeries.
 * LOL. One day... - paulus
Anything is possible. :-) Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-22  [POWERPC] Move iSeries startup code out of head_64.S  (Stephen Rothwell)
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-22  [POWERPC] iSeries: Clean up lparmap mess  (Stephen Rothwell)
We need to have xLparMap in head_64.S so that it is at a fixed address (because the linker will not resolve (address & 0xffffffff) for us). But the assembler miscalculates the KERNEL_VSID() expressions. So put the confusing expressions into asm-offsets.c. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-10  [POWERPC] Fix more section mismatches in head_64.S  (Stephen Rothwell)
WARNING: vmlinux.o(.text+0x8174): Section mismatch: reference to .init.text:.prom_init (between '.__boot_from_prom' and '.__after_prom_start')
WARNING: vmlinux.o(.text+0x8498): Section mismatch: reference to .init.text:.early_setup (between '.start_here_multiplatform' and '.start_here_common')
WARNING: vmlinux.o(.text+0x8514): Section mismatch: reference to .init.text:.setup_system (between '.start_here_common' and 'system_call_common')
WARNING: vmlinux.o(.text+0x8530): Section mismatch: reference to .init.text:.start_kernel (between '.start_here_common' and 'system_call_common')
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-26  [POWERPC] iSeries: Fix section mismatch warnings  (Stephen Rothwell)
WARNING: vmlinux.o(.text+0x8124): Section mismatch: reference to .init.text:.iSeries_early_setup (between '.__start_initialization_iSeries' and '.__mmu_off')
WARNING: vmlinux.o(.text+0x8128): Section mismatch: reference to .init.text:.early_setup (between '.__start_initialization_iSeries' and '.__mmu_off')
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-28  [POWERPC] Correct __secondary_hold comment  (Geoff Levand)
Remove references to pSeries and OpenFirmware in the __secondary_hold usage comment. __secondary_hold is a generic routine and can be used by other platforms. Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-24  [POWERPC] Save trap number in bad_stack  (Olof Johansson)
Save the trap number in the case of getting a bad stack in an exception handler. It is sometimes useful to know what exception it was that caused this to happen. Without this, no trap number is reported. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13  [POWERPC] Remove stale comment from head_64.S  (Sonny Rao)
This is now inaccurate because we may not have entered prom_init() and r3 is overwritten immediately anyway. Signed-off-by: Sonny Rao <sonny@burdell.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-03-09  [POWERPC] Remove some redundant isync instructions  (MOKUNO Masakazu)
Remove some redundant isync instructions. enable_64b_mode() already does an isync, so there is no need to do it again. Signed-off-by: MOKUNO, Masakazu <mokuno@sm.sony.co.jp> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-02-07  [POWERPC] Fix performance monitor exception  (Livio Soares)
To the issue: at some point during 2.6.20 development, Paul Mackerras introduced the "lazy IRQ disabling" patch (very cool work, BTW). In that patch, the performance monitor unit exception was marked as "maskable", in the sense that if interrupts were soft-disabled, that exception could be ignored. This broke my PowerPC profiling code. The symptom that I see is that a varying number of interrupts (from 0 to n, typically closer to 0) get delivered, when, in reality, it should always be very close to n. The issue stems from the way masking is being done. Masking in this fashion seems to work well with the decrementer and external interrupts, because they are raised again until "really" handled. For the PMU, however, this does not apply (at least on my Xserver machine with a 970FX processor). If the PMU exception is not handled, it will _not_ be re-raised (at least on my machine). The documentation states that the PMXE bit in MMCR0 is set to 0 when the PMU exception is raised. However, software must re-set the bit to re-enable PMU exceptions. If the exception is ignored (as currently happens), not only is that interrupt lost, but because software does not re-set PMXE, the PMU registers are "frozen" forever. [This patch means that performance monitor exceptions are taken and handled even if irqs are off, as long as some other interrupt hasn't come along and caused interrupts to be hard-disabled. In this sense the PMU exception becomes like an NMI. The oprofile code for most powerpc processors does nothing that is unsafe in an NMI context, but the Cell oprofile code does a spin_lock_irqsave. However, that turns out to be OK because Cell doesn't actually use the performance monitor exception; performance monitor interrupts come in as a regular interrupt on Cell, so will be disabled when irqs are off. -- paulus.] Signed-off-by: Paul Mackerras <paulus@samba.org>
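The software re-arm that the message says is being skipped is, in kernel terms, a one-liner (a sketch; SPRN_MMCR0 and MMCR0_PMXE are the kernel's names for the register and bit):

    #include <asm/reg.h>

    static inline void pmu_reenable_exceptions(void)
    {
            /* hardware cleared PMXE when it raised the exception;
             * setting it again is what re-enables PMU exceptions */
            mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) | MMCR0_PMXE);
    }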
2006-12-04  [POWERPC] iSeries: Eliminate "exceeds stub group size" warnings  (Stephen Rothwell)
Commit 3ccfc65c5004e5fe5cfbffe43b8acc686680b53e missed the same fixes for legacy iSeries specific code, so make some more symbols no longer global. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04  Merge branch 'linux-2.6' into for-linus  (Paul Mackerras)
2006-11-13  [PATCH] Remove occurrences of PPC_MULTIPLATFORM in head_64.S  (s.hauer@pengutronix.de)
Since iSeries has been merged into MULTIPLATFORM, there is no way to build a 64-bit kernel without MULTIPLATFORM, so PPC_MULTIPLATFORM can be removed from 64-bit-only files. Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-11-01  [PATCH] powerpc: Eliminate "exceeds stub group size" linker warning  (Paul Mackerras)
It turns out that the linker warnings on 64-bit powerpc about "section blah exceeds stub group size" were being triggered by conditional branches in head_64.S branching to global symbols, whether in head_64.S or in other files. This eliminates the warnings by making some global symbols in head_64.S no longer global, and by rearranging some branches. Signed-off-by: Paul Mackerras <paulus@samba.org> [ Yee-haa. Maybe I'll notice newly introduced real warnings now - Linus ] Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-10-26  [POWERPC] Make sure __cpu_preinit_ppc970 gets called on 970GX processors  (Olof Johansson)
Add check for 970GX for __cpu_preinit_ppc970. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-25  [POWERPC] Consolidate feature fixup code  (Benjamin Herrenschmidt)
There are currently two versions of the functions for applying the feature fixups, one for CPU features and one for firmware features. In addition, they are both in assembly and with separate implementations for 32 and 64 bits. identify_cpu() is also implemented in assembly and separately for 32 and 64 bits. This patch replaces them with a pair of C functions. The call sites are slightly moved on ppc64 as well to be called from C instead of from assembly, though it's a very small change, and thus shouldn't cause any problem. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
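The C replacement has roughly this shape (simplified to the patch-to-nops case; the field names follow the kernel's fixup tables but the layout here is illustrative):

    struct fixup_entry {
            unsigned long mask, value;  /* feature bits this section depends on */
            long start_off, end_off;    /* code range, as offsets from the entry */
    };

    static void do_feature_fixups(unsigned long features,
                                  struct fixup_entry *begin,
                                  struct fixup_entry *end)
    {
            for (struct fixup_entry *f = begin; f < end; f++) {
                    if ((features & f->mask) == f->value)
                            continue;  /* feature present: leave the code alone */
                    unsigned int *p = (unsigned int *)((unsigned long)f + f->start_off);
                    unsigned int *stop = (unsigned int *)((unsigned long)f + f->end_off);
                    for (; p < stop; p++)
                            *p = 0x60000000;  /* ppc "nop" (ori 0,0,0) */
            }
    }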
2006-10-18  [POWERPC] Make sure interrupt enable gets restored properly  (Paul Mackerras)
The lazy IRQ disable patch missed a couple of places where the interrupt enable flags need to be restored correctly. First, we weren't restoring the paca->hard_enabled flag on interrupt exit. Instead of saving it on entry, we compute it from the MSR_EE bit in the MSR we are restoring at exit. Secondly, the MMU hash miss code was clearing both paca->soft_enabled and paca->hard_enabled but not restoring them in the case where hash_page was able to resolve the miss from the Linux page tables. Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-16  [POWERPC] Lazy interrupt disabling for 64-bit machines  (Paul Mackerras)
This implements a lazy strategy for disabling interrupts. This means that local_irq_disable() et al. just clear the 'interrupts are enabled' flag in the paca. If an interrupt comes along, the interrupt entry code notices that interrupts are supposed to be disabled, and clears the EE bit in SRR1, clears the 'interrupts are hard-enabled' flag in the paca, and returns. This means that interrupts only actually get disabled in the processor when an interrupt comes along. When interrupts are enabled by local_irq_enable() et al., the code sets the interrupts-enabled flag in the paca, and then checks whether interrupts got hard-disabled. If so, it also sets the EE bit in the MSR to hard-enable the interrupts. This has the potential to improve performance, and also makes it easier to make a kernel that can boot on iSeries and on other 64-bit machines, since this lazy-disable strategy is very similar to the soft-disable strategy that iSeries already uses. This version renames paca->proc_enabled to paca->soft_enabled, and changes a couple of soft-disables in the kexec code to hard-disables, which should fix the crash that Michael Ellerman saw. This doesn't yet use a reserved CR field for the soft_enabled and hard_enabled flags. This applies on top of Stephen Rothwell's patches to make it possible to build a combined iSeries/other kernel. Signed-off-by: Paul Mackerras <paulus@samba.org>
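In C terms the strategy looks like this (a sketch; soft_enabled and hard_enabled are the paca fields named above, while the helpers around them are illustrative):

    void local_irq_disable(void)
    {
            get_paca()->soft_enabled = 0;  /* record intent; MSR:EE stays set */
    }

    void local_irq_enable(void)
    {
            get_paca()->soft_enabled = 1;
            if (!get_paca()->hard_enabled) {
                    /* an interrupt arrived while we were soft-disabled, and the
                     * entry code cleared EE in SRR1 before returning; undo that */
                    get_paca()->hard_enabled = 1;
                    hard_irq_enable();  /* set MSR:EE in the MSR again */
            }
    }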
2006-10-03  [POWERPC] implement BEGIN/END_FW_FTR_SECTION  (Stephen Rothwell)
and use it in all the obvious places in assembler code. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
2006-09-13  [POWERPC] powerpc: Reduce default cacheline size to 64 bytes  (Olof Johansson)
Reduce default cacheline size on 64-bit powerpc from 128 bytes to 64. This is the architected minimum. In most cases we'll still end up using cache line information from the device tree, but defaults are used during early boot and doing a few dcbst/icbi's too many there won't do any harm. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
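The reason a too-small default is harmless: early boot flushes the code it has copied with a loop over cache lines, so underestimating the line size only costs extra iterations. A sketch modeled on flush_icache_range():

    static void flush_icache(unsigned long start, unsigned long stop,
                             unsigned long line)  /* line: assumed cacheline size */
    {
            unsigned long p;

            for (p = start & ~(line - 1); p < stop; p += line)
                    asm volatile("dcbst 0,%0" : : "r"(p) : "memory"); /* push dirty data */
            asm volatile("sync");  /* make the stores visible to the icache fill */
            for (p = start & ~(line - 1); p < stop; p += line)
                    asm volatile("icbi 0,%0" : : "r"(p) : "memory");  /* drop stale icache */
            asm volatile("sync; isync");  /* complete before executing the new code */
    }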
2006-08-25  [POWERPC] Cleanup CPU inits  (Olof Johansson)
Clean up CPU inits a bit more; Geoff Levand already did some earlier.
* Move the CPU state save to cpu_setup, since cpu_setup is only ever done on cpu 0 on 64-bit and the save is never done more than once.
* Rename __restore_cpu_setup to __restore_cpu_ppc970 and add function pointers to the cputable to use instead. Powermac always has a 970, so there is no need to check there.
* Rename __970_cpu_preinit to __cpu_preinit_ppc970 and check the PVR before calling it instead of in it; it's too early to use the cputable.
* Rename pSeries_secondary_smp_init to generic_secondary_smp_init since everyone but powermac and iSeries uses it.
Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-07-29  [POWERPC] force 64-bit mode in fwnmi handlers to work around firmware bugs  (Olaf Hering)
The firmware of POWER4 and JS20 systems does not switch the CPU to 64-bit mode when the registered system_reset and machine_check handlers get called. If a 32-bit process is running on that CPU at the time of the event, the CPU remains in 32-bit mode. xmon and kdump cannot deal with this; the result is an error like 'Bad kernel stack pointer fff2aad0 at 3200'. xmon just loses some register info, but booting the kdump kernel usually fails. Neither handler is a hot path. Duplicate the EXCEPTION_PROLOG_PSERIES macro and add two instructions to switch to 64-bit mode: li r11,5; rldimi r10,r11,61,0; Signed-off-by: Olaf Hering <olh@suse.de> Signed-off-by: Paul Mackerras <paulus@samba.org>
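For reference, the effect of those two instructions on the MSR image in r10, rendered in C (my reading of the rldimi; MSR_SF and MSR_ISF are the kernel's names for the 64-bit-mode bits):

    #include <asm/reg.h>  /* MSR_SF (64-bit mode), MSR_ISF (64-bit interrupts) */

    static unsigned long force_64bit_msr(unsigned long msr)
    {
            /* essentially: li r11,5; rldimi r10,r11,61,0 */
            return msr | MSR_SF | MSR_ISF;
    }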