Commit log for arch/powerpc/mm/hash_utils_64.c
(Entries are newest first: date, subject, author, then the commit message.)
2008-05-15  [POWERPC] vmemmap fixes to use smaller pages  (Benjamin Herrenschmidt)

This changes vmemmap to use a different region (region 0xf) of the address space, and to configure the page size of that region dynamically at boot. The problem with the current approach of always using 16M pages is that it's not well suited to machines that have small amounts of memory, such as small partitions on pseries, or PS3s. In fact, on the PS3, failure to allocate the 16M page backing vmemmap tends to prevent hotplugging the HV's "additional" memory, thus limiting the available memory even more, from my experience down to something like 80M total, which makes it really not very usable.

The logic used by my patch to choose the vmemmap page size is:

 - If 16M pages are available and there's 1G or more RAM at boot, use that size.
 - Else if 64K pages are available, use that.
 - Else use 4K pages.

I've tested on a POWER6 (16M pages) and on an iSeries POWER3 (4K pages) and it seems to work fine. Note that I intend to change the way we organize the kernel regions & SLBs so the actual region will change from 0xf back to something else at one point, as I simplify the SLB miss handler, but that will be for a later patch.

Signed-off-by: Paul Mackerras <paulus@samba.org>
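The page-size choice described above, restated as a minimal C sketch (mmu_psize_defs[] and the MMU_PAGE_* indices are the kernel's own; the helper itself and its name are illustrative, not the literal patch):

    /* Pick the vmemmap backing page size per the rules in the commit:
     * 16M if available and >= 1G of boot RAM, else 64K, else 4K. */
    static int __init vmemmap_select_psize(unsigned long boot_ram_bytes)
    {
            if (mmu_psize_defs[MMU_PAGE_16M].shift &&
                boot_ram_bytes >= 0x40000000UL)     /* 1G or more at boot */
                    return MMU_PAGE_16M;
            if (mmu_psize_defs[MMU_PAGE_64K].shift)
                    return MMU_PAGE_64K;
            return MMU_PAGE_4K;
    }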
2008-04-07  [POWERPC] htab_remove_mapping is only used by MEMORY_HOTPLUG  (Stephen Rothwell)

This eliminates a warning in builds that don't define CONFIG_MEMORY_HOTPLUG.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-01  [POWERPC] Add error return from htab_remove_mapping()  (Badari Pulavarty)

If the platform doesn't support hpte_removebolted(), gracefully return failure rather than success.

Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-03-26  Merge branch 'linux-2.6'  (Paul Mackerras)
2008-03-24  [POWERPC] Don't use 64k pages for ioremap on pSeries  (Paul Mackerras)

On pSeries, the hypervisor doesn't let us map in the eHEA ethernet adapter using 64k pages, and thus the ehea driver will fail if 64k pages are configured. This works around the problem by always using 4k pages for ioremap on pSeries (but not on other platforms). A better fix would be to check whether the partition could ever have an eHEA adapter, and only force 4k pages if it could, but this will do for 2.6.25.

This is based on an earlier patch by Tony Breeds.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-03-13  Merge branch 'linux-2.6'  (Paul Mackerras)
2008-03-13  [POWERPC] Fix large hash table allocation on Cell blades  (Michael Ellerman)

My recent hack to allocate the hash table under 1GB on cell was poorly tested, *cough*. It turns out on blades with large amounts of memory we fail to allocate the hash table at all. This is because RTAS has been instantiated just below 768MB, and 0-x MB are used by the kernel, leaving no areas that are both large enough and also naturally-aligned.

For the cell IOMMU hack the page tables must be under 2GB, so use that as the limit instead. This has been tested on real hardware and boots happily.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-26  [POWERPC] Add code for removing HPTEs for parts of the linear mapping  (Badari Pulavarty)

For memory remove, we need to clean up htab mappings for the section of the memory we are removing. This implements support for removing htab bolted mappings for pSeries logical partitions. Other sub-archs may need to implement similar functionality for hotplug memory remove to work on them.

Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-13  [LIB]: Make PowerPC LMB code generic so sparc64 can use it too.  (David S. Miller)

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-01-31  [POWERPC] Allocate the hash table under 1G on cell  (Michael Ellerman)

In order to support the fixed IOMMU mapping (in a subsequent patch), we need the hash table to be inside the IOMMU's DMA window. This is usually 2G, but let's make sure the hash table is under 1G as that will satisfy the IOMMU requirements and also means the hash table will be on node 0.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-24  [POWERPC] Provide a way to protect 4k subpages when using 64k pages  (Paul Mackerras)

Using 64k pages on 64-bit PowerPC systems makes life difficult for emulators that are trying to emulate an ISA, such as x86, which uses a smaller page size, since the emulator can no longer use the MMU and the normal system calls for controlling page protections. Of course, the emulator can emulate the MMU by checking and possibly remapping the address for each memory access in software, but that is pretty slow.

This provides a facility for such programs to control the access permissions on individual 4k sub-pages of 64k pages. The idea is that the emulator supplies an array of protection masks to apply to a specified range of virtual addresses. These masks are applied at the level where hardware PTEs are inserted into the hardware page table based on the Linux PTEs, so the Linux PTEs are not affected. Note that this new mechanism does not allow any access that would otherwise be prohibited; it can only prohibit accesses that would otherwise be allowed. This new facility is only available on 64-bit PowerPC and only when the kernel is configured for 64k pages.

The masks are supplied using a new subpage_prot system call, which takes a starting virtual address and length, and a pointer to an array of protection masks in memory. The array has a 32-bit word per 64k page to be protected; each 32-bit word consists of 16 2-bit fields, for which 0 allows any access (that is otherwise allowed), 1 prevents write accesses, and 2 or 3 prevent any access.

Implicit in this is that the regions of the address space that are protected are switched to use 4k hardware pages rather than 64k hardware pages (on machines with hardware 64k page support). In fact the whole process is switched to use 4k hardware pages when the subpage_prot system call is used, but this could be improved in future to switch only the affected segments.

The subpage protection bits are stored in a 3-level tree akin to the page table tree. The top level of this tree is stored in a structure that is appended to the top level of the page table tree, i.e., the pgd array. Since it will often only be 32-bit addresses (below 4GB) that are protected, the pointers to the first four bottom-level pages are also stored in this structure (each bottom-level page contains the protection bits for 1GB of address space), so the protection bits for addresses below 4GB can be accessed with one fewer loads than those for higher addresses.

Signed-off-by: Paul Mackerras <paulus@samba.org>
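A hedged userspace sketch of calling subpage_prot as described above; the address, the length, and which 2-bit field maps to which 4k subpage are assumptions for illustration, and building it needs a ppc64 kernel/libc that define __NR_subpage_prot:

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
            /* One 32-bit word covers one 64k page: 16 two-bit fields,
             * 0 = allow, 1 = no write, 2/3 = no access. Here we assume
             * field N (counting from the low bits) is subpage N. */
            unsigned long addr = 0x10000000UL;    /* hypothetical 64k-aligned mapping */
            uint32_t map[1] = { 1u << (2 * 2) };  /* subpage 2 becomes read-only */

            if (syscall(__NR_subpage_prot, addr, 0x10000UL /* one 64k page */, map)) {
                    perror("subpage_prot");
                    return 1;
            }
            return 0;
    }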
2008-01-17  [POWERPC] Add hugepagesz boot-time parameter  (Jon Tollefson)

This adds the hugepagesz boot-time parameter for ppc64. It lets one pick the size for huge pages. The choices available are 64K and 16M when the base page size is 4k. It defaults to 16M (previously the only choice) if nothing or an invalid choice is specified. Tested 64K huge pages successfully with libhugetlbfs 1.2.

Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
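A hedged example of the parameter on the kernel command line, using the 64K value named in the message (the surrounding arguments are illustrative):

    root=/dev/sda2 ro hugepagesz=64K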
2007-12-11  [POWERPC] Use SLB size from the device tree  (Michael Neuling)

Currently we hardwire the number of SLBs to 64, but PAPR says we should use the ibm,slb-size property to obtain the number of SLB entries. This uses this property instead of assuming 64. If no property is found, we assume 64 entries as before.

This soft patches the SLB handler, so it shouldn't change performance at all.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
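A hedged sketch of picking the property up from the flattened device tree; the function name and call site are illustrative, though of_get_flat_dt_prop() and a mmu_slb_size variable are kernel symbols of this era:

    /* Read ibm,slb-size from a CPU node; keep the default of 64 if the
     * property is absent, as described above. */
    static void __init check_cpu_slb_size(unsigned long node)
    {
            u32 *slb_size_ptr;

            slb_size_ptr = of_get_flat_dt_prop(node, "ibm,slb-size", NULL);
            if (slb_size_ptr != NULL)
                    mmu_slb_size = *slb_size_ptr;
    }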
2007-11-08  [POWERPC] Include udbg.h when using udbg_printf  (will schmidt)

This fixes the error:

    error: implicit declaration of function "udbg_printf"

We have a few spots where we reference udbg_printf() without #including udbg.h. These are within #ifdef DEBUG blocks, so they go unnoticed until we do a #define DEBUG or #define DEBUG_LOW nearby.

Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-29  [POWERPC] powerpc: Fix demotion of segments to 4K pages  (Benjamin Herrenschmidt)

When demoting a process to use 4K HW pages (instead of 64K), which happens under various circumstances such as doing cache-inhibited mappings on machines that do not support 64K CI pages, the assembly hash code calls back into the C function flush_hash_page(). This function's prototype was recently changed to accommodate 1T segments, but the assembly call site was not updated, causing applications that do demotion to hang.

In addition, when updating the per-CPU PACA for the new sizes, we didn't properly update the slice "map", thus causing the SLB miss code to re-insert segments for the wrong size.

This fixes both and adds a warning comment next to the C implementation to try to avoid problems next time someone changes it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-17  [POWERPC] Add 1TB workaround for PA6T  (Olof Johansson)

PA6T has a bug where the slbie instruction does not honor the large segment bit. As a result, we have to always use slbia when switching context.

We don't have to worry about changing the slbie's during fault processing, since they should never be replacing one VSID with another using the same ESID. I.e. there's no risk for inserting duplicate entries due to a failed slbie of the old entry. So as long as we clear it out on context switch we should be fine.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-17  [POWERPC] Fix 1TB segment detection  (Olof Johansson)

Buglet in the 1TB detection makes it return after checking the first property word, even if it's not a match.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-12  [POWERPC] Use 1TB segments  (Paul Mackerras)

This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree.

We don't currently use 1TB segments for user addresses < 1T, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here.

Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-17  [POWERPC] Tidy up CONFIG_PPC_MM_SLICES code  (Stephen Rothwell)

This removes some of the #ifdefs from .c files.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-17  [POWERPC] Clean out a bunch of duplicate includes  (Jesper Juhl)

This removes several duplicate includes from arch/powerpc/.

Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-17  [POWERPC] Fix invalid semicolon after if statement  (Ilpo Järvinen)

A similar fix to netfilter from Eric Dumazet inspired me to look around a bit, using some grep/sed stuff, as looking for this kind of bug seemed easy to automate. This is one of the ones I found, where it looks like the semicolon is not valid.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
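The bug class being fixed, as a tiny self-contained C demo (not the actual hunk from the patch):

    #include <stdio.h>

    int main(void)
    {
            int error = 0;

            if (error);     /* BUG: this stray ';' is the entire if-body */
            {
                    /* ...so this block runs unconditionally. */
                    printf("error path taken even though error == 0\n");
            }
            return 0;
    }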
2007-08-03  [POWERPC] Fixes for the SLB shadow buffer code  (Michael Neuling)

On a machine with hardware 64kB pages and a kernel configured for a 64kB base page size, we need to change the vmalloc segment from 64kB pages to 4kB pages if some driver creates a non-cacheable mapping in the vmalloc area. However, we never updated the SLB shadow buffer. This fixes it. Thanks to paulus for finding this.

Also added some write barriers to ensure the shadow buffer contents are always consistent.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-22  [POWERPC] cell: CONFIG_SPE_BASE is a typo  (Geert Uytterhoeven)

The config symbol for SPE support is called CONFIG_SPU_BASE, not CONFIG_SPE_BASE.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-14  [POWERPC] Kill typedef-ed structs for hash PTEs and BATs  (David Gibson)

Using typedefs to rename structure types is frowned on by CodingStyle. However, we do so for the hash PTE structure on both ppc32 (where it's called "PTE") and ppc64 (where it's called "hpte_t"). On ppc32 we also have such a typedef for the BATs ("BAT").

This removes this unhelpful use of typedefs, in the process bringing ppc32 and ppc64 closer together, by using the name "struct hash_pte" in both cases.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
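The before/after shape of the change, sketched for the ppc64 flavour (the two HPTE doublewords are real; treat the exact field comments as illustrative):

    /* Before: typedef-ed struct, discouraged by CodingStyle */
    typedef struct {
            unsigned long v;    /* HPTE first doubleword  */
            unsigned long r;    /* HPTE second doubleword */
    } hpte_t;

    /* After: a plain struct tag, shared in name by ppc32 and ppc64 */
    struct hash_pte {
            unsigned long v;
            unsigned long r;
    };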
2007-05-17  [POWERPC] Correct #endif comment  (Jon Tollefson)

Fix up the comment on two #endifs to match their #ifs.

Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-05-09  [POWERPC] Add ability to 4K kernel to hash in 64K pages  (Benjamin Herrenschmidt)

This adds the ability for a kernel compiled with 4K page size to have special slices containing 64K pages and hash the right type of hash PTEs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-05-09  [POWERPC] Introduce address space "slices"  (Benjamin Herrenschmidt)

The basic issue is to be able to do what hugetlbfs does but with different page sizes for some other special filesystems; more specifically, my need is:

 - Huge pages
 - SPE local store mappings using 64K pages on a 4K base page size kernel on Cell
 - Some special 4K segments in 64K-page kernels for mapping a dodgy type of powerpc-specific infiniband hardware that requires 4K MMU mappings for various reasons I won't explain here.

The main issues are:

 - To maintain/keep track of the page size per "segment" (as we can only have one page size per segment on powerpc, which are 256MB divisions of the address space).
 - To make sure special mappings stay within their allotted "segments" (including MAP_FIXED crap)
 - To make sure everybody else doesn't mmap/brk/grow_stack into a "segment" that is used for a special mapping

Some of the necessary mechanisms to handle that were present in the hugetlbfs code, but mostly in ways not suitable for anything else. The patch relies on some changes to the generic get_unmapped_area() that just got merged. It still hijacks hugetlb callbacks here or there as the generic code hasn't been entirely cleaned up yet, but that shouldn't be a problem.

So what is a slice? Well, I re-used the mechanism used formerly by our hugetlbfs implementation, which divides the address space into "meta-segments" which I called "slices". The division is done using 256MB slices below 4G, and 1T slices above. Thus the address space is divided currently into 16 "low" slices and 16 "high" slices. (Special case: high slice 0 is the area between 4G and 1T.) Doing so simplifies significantly the tracking of segments and avoids having to keep track of all the 256MB segments in the address space.

While I used the "concepts" of hugetlbfs, I mostly re-implemented everything in a more generic way and "ported" hugetlbfs to it.

Slices can have an associated page size, which is encoded in the mmu context and used by the SLB miss handler to set the segment sizes. The hash code currently doesn't care; it has a specific check for hugepages, though I might add a mechanism to provide per-slice hash mapping functions in the future.

The slice code provides a pair of "generic" get_unmapped_area() (bottomup and topdown) functions that should work with any slice size. There is some trickiness here, so I would appreciate people having a look at the implementation of these and letting me know if I got something wrong.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
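A minimal C sketch of the slice arithmetic described above (256MB low slices below 4G, 1TB high slices above, with the 4G..1T range collapsing into high slice 0); the helper is illustrative, not the kernel's slice code:

    #define SLICE_LOW_SHIFT   28    /* 256MB */
    #define SLICE_HIGH_SHIFT  40    /* 1TB   */

    static inline int addr_to_slice(unsigned long addr, int *high)
    {
            if (addr < (1UL << 32)) {               /* below 4G: low slices 0..15 */
                    *high = 0;
                    return (int)(addr >> SLICE_LOW_SHIFT);
            }
            *high = 1;                              /* 4G..1T all lands in high slice 0 */
            return (int)(addr >> SLICE_HIGH_SHIFT);
    }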
2007-05-09  [POWERPC] Small fixes & cleanups in segment page size demotion  (Benjamin Herrenschmidt)

The code for demoting segments to 4K had some issues. For example, when using the _PAGE_4K_PFN flag, the first CPU to hit it would do the demotion, but other CPUs hitting the same page wouldn't properly flush their SLBs if mmu_ci_restriction isn't set. There are also potential issues with hash_preload not handling _PAGE_4K_PFN. All of these are non-issues on current hardware but might bite us in the future.

This patch thus fixes it by:

 - Taking the test comparing the mm and current CPU context page sizes to decide to flush SLBs out of the mmu_ci_restrictions test, since that can also be triggered by _PAGE_4K_PFN pages
 - Due to the above being done all the time, demote_segment_4k doesn't need to update the context and flush the SLB
 - demote_segment_4k can be static and doesn't need an EXPORT_SYMBOL
 - Making hash_preload ignore anything that has either _PAGE_4K_PFN or _PAGE_NO_CACHE set, thus avoiding duplication of the complicated logic in hash_page() (and possibly making hash_preload a little bit faster for the normal case)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-05-02  [POWERPC] Initialise spinlock in the DEBUG_PAGEALLOC code  (Michael Ellerman)

Fixes:

    BUG: spinlock bad magic on CPU#0, swapper/0
     lock: c00000000064ec30, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
    Call Trace:
    [c00000000062b980] [c00000000000f920] .show_stack+0x6c/0x1a0 (unreliable)
    [c00000000062ba20] [c0000000001c2b40] .spin_bug+0xb0/0xd4
    [c00000000062bab0] [c0000000001c2ed0] ._raw_spin_lock+0x44/0x184
    [c00000000062bb50] [c0000000003a42b4] ._spin_lock+0x10/0x24
    [c00000000062bbd0] [c00000000002b4dc] .kernel_map_pages+0x198/0x278
    [c00000000062bc90] [c000000000079720] .free_hot_cold_page+0x124/0x418
    [c00000000062bd70] [c000000000530278] .free_all_bootmem_core+0x14c/0x224
    [c00000000062be50] [c00000000052a178] .mem_init+0x68/0x170
    [c00000000062bee0] [c00000000051d874] .start_kernel+0x2a0/0x37c
    [c00000000062bf90] [c0000000000084c8] .start_here_common+0x54/0x8c

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13  [POWERPC] DEBUG_PAGEALLOC for 64-bit  (Benjamin Herrenschmidt)

Here's an implementation of DEBUG_PAGEALLOC for 64-bit powerpc. It applies on top of the 32-bit patch.

Unlike Anton's previous attempt, I'm not using updatepp. I'm removing the hash entries from the bolted mapping (using a map in RAM of all the slots). Expensive, but it doesn't really matter, does it? :-)

Memory hot-added doesn't benefit from this unless it's added at an address that is below end_of_DRAM() as calculated at boot time.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13  [POWERPC] Allow drivers to map individual 4k pages to userspace  (Paul Mackerras)

Some drivers have resources that they want to be able to map into userspace that are 4k in size. On a kernel configured with 64k pages we currently end up mapping the 4k we want plus another 60k of physical address space, which could contain anything. This can introduce security problems, for example in the case of an infiniband adaptor where the other 60k could contain registers that some other program is using for its communications.

This patch adds a new function, remap_4k_pfn, which drivers can use to map a single 4k page to userspace regardless of whether the kernel is using a 4k or a 64k page size. Like remap_pfn_range, it would typically be called in a driver's mmap function. It only maps a single 4k page, which on a 64k page kernel appears replicated 16 times throughout a 64k page. On a 4k page kernel it reduces to a call to remap_pfn_range.

The way this works on a 64k kernel is that a new bit, _PAGE_4K_PFN, gets set on the linux PTE. This alters the way that __hash_page_4K computes the real address to put in the HPTE. The RPN field of the linux PTE becomes the 4k RPN directly rather than being interpreted as a 64k RPN. Since the RPN field is 32 bits, this means that physical addresses being mapped with remap_4k_pfn have to be below 2^44, i.e. 0x100000000000.

The patch also factors out the code in arch/powerpc/mm/hash_utils_64.c that deals with demoting a process to use 4k pages into one function that gets called in the various different places where we need to do that. There were some discrepancies between exactly what was done in the various places, such as a call to spu_flush_all_slbs in one case but not in others.

Signed-off-by: Paul Mackerras <paulus@samba.org>
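A hedged driver-side sketch of remap_4k_pfn() in an mmap handler; the device and its MYDRV_REG_PHYS register page are hypothetical:

    #include <linux/fs.h>
    #include <linux/mm.h>

    #define MYDRV_REG_PHYS  0x3fe0001000UL   /* hypothetical 4k MMIO page */

    static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
    {
            /* Maps exactly one 4k page; on a 64k-page kernel it appears
             * replicated through the 64k page instead of exposing the
             * neighbouring 60k of physical address space. */
            return remap_4k_pfn(vma, vma->vm_start,
                                MYDRV_REG_PHYS >> 12, vma->vm_page_prot);
    }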
2007-03-10  [POWERPC] Fix spu SLB invalidations  (Benjamin Herrenschmidt)

The SPU code doesn't properly invalidate SPUs' SLBs when necessary, for example when changing a segment size from the hugetlbfs code. In addition, it saves and restores the SLB content on context switches, which makes it harder to properly handle those invalidations.

This patch removes the saving & restoring for now; something more efficient might be found later on. It also adds a spu_flush_all_slbs(mm) that can be used by the core mm code to flush the SLBs of all SPEs that are running a given mm at the time of the flush.

In order to do that, it adds a spinlock to the list of all SPEs and moves some bits & pieces from spufs to spu_base.c.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2006-12-04  [POWERPC] Fix cputable.h for combined build  (Stephen Rothwell)

Remove CPU_FTR_16M_PAGE from the cpufeatures mask at runtime on iSeries.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-30  Remove obsolete #include <linux/config.h>  (Jörn Engel)

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-06-28  [POWERPC] Use ppc_md.hpte_insert() in htab_bolt_mapping()  (Michael Ellerman)

With the ppc_md htab pointers set up earlier, we can use ppc_md.hpte_insert in htab_bolt_mapping(), rather than deciding which version to call by hand.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-28  [POWERPC] powerpc: Initialise ppc_md htab pointers earlier  (Michael Ellerman)

Initialise the ppc_md htab callbacks earlier, in the probe routines. This allows us to call htab_finish_init() from htab_initialize(), and makes it private to hash_utils_64.c.

Move htab_finish_init() and make_bl() above htab_initialize() to avoid forward declarations.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-15  powerpc: Use 64k pages without needing cache-inhibited large pages  (Paul Mackerras)

Some POWER5+ machines can do 64k hardware pages for normal memory but not for cache-inhibited pages. This patch lets us use 64k hardware pages for most user processes on such machines (assuming the kernel has been configured with CONFIG_PPC_64K_PAGES=y). User processes start out using 64k pages and get switched to 4k pages if they use any non-cacheable mappings.

With this, we use 64k pages for the vmalloc region and 4k pages for the imalloc region. If anything creates a non-cacheable mapping in the vmalloc region, the vmalloc region will get switched to 4k pages. I don't know of any driver other than the DRM that would do this, though, and these machines don't have AGP.

When a region gets switched from 64k pages to 4k pages, we do not have to clear out all the 64k HPTEs from the hash table immediately. We use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page was hashed in as a 64k page or a set of 4k pages. If hash_page is trying to insert a 4k page for a Linux PTE and it sees that it has already been inserted as a 64k page, it first invalidates the 64k HPTE before inserting the 4k HPTE. The hash invalidation routines also use the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a set of 4k HPTEs to remove. With those two changes, we can tolerate a mix of 4k and 64k HPTEs in the hash table, and they will all get removed when the address space is torn down.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-04-22  [PATCH] powerpc: Quiet page order output  (Olof Johansson)

No need to always print page orders.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-03-28  [PATCH] powerpc: Kill _machine and hard-coded platform numbers  (Benjamin Herrenschmidt)

This removes statically assigned platform numbers and reworks the powerpc platform probe code to use a better mechanism. With this, board support files can simply declare a new machine type with a macro, and implement a probe() function that uses the flattened device-tree to detect if they apply for a given machine.

We now have a machine_is() macro that replaces the comparisons of _machine with the various PLATFORM_* constants. This commit also changes various drivers to use the new macro instead of looking at _machine.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
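A hedged sketch of the new-style board support this enables; the board name, compatible string, and probe details are hypothetical, and the exact device-tree helper names vary by kernel version:

    static int __init myboard_probe(void)
    {
            unsigned long root = of_get_flat_dt_root();

            return of_flat_dt_is_compatible(root, "myvendor,myboard");
    }

    define_machine(myboard) {
            .name  = "MyBoard",
            .probe = myboard_probe,
    };

    /* Elsewhere, "_machine == PLATFORM_MYBOARD" style tests become:
     *         if (machine_is(myboard)) ...                          */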
2006-03-22  [PATCH] powerpc: Replace platform_is_lpar() with a firmware feature  (Michael Ellerman)

It has been decreed that platform numbers are evil, so as a step in that direction, replace platform_is_lpar() with a FW_FEATURE_LPAR bit.

Currently FW_FEATURE_LPAR really means i/pSeries LPAR, in the future we might have to clean that up if we need to be more specific about what LPAR actually means. But that's another patch ...

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-03-22  [PATCH] powerpc: Unconfuse htab_bolt_mapping() callers  (Michael Ellerman)

htab_bolt_mapping() takes a vstart and pstart parameter, but all but one of its callers actually pass it vstart and vstart. Luckily, before it passes paddr (calculated from pstart) to the hpte_insert routines, it calls virt_to_abs() (aka. __pa()) on the address, so there isn't actually a bug.

map_io_page() however does pass pstart properly, so currently it's broken AFAICT, because we're calling __pa(paddr) which will get us something very large. Presumably no one's calling map_io_page() in the right context.

Anyway, change htab_bolt_mapping() callers to properly pass pstart, and then use it properly in htab_bolt_mapping(), i.e. don't call __pa() on it again.

Booted on p5 LPAR, iSeries and Power3.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
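The calling-convention fix described above, sketched before/after (the trailing arguments are abbreviated and illustrative, not the exact signature):

    /* Before: vstart passed as pstart too; htab_bolt_mapping() papered
     * over it by running the address through virt_to_abs(). */
    err = htab_bolt_mapping(vstart, vend, vstart, mode, psize);

    /* After: callers hand over a real physical address, and
     * htab_bolt_mapping() uses it directly, with no extra __pa(). */
    err = htab_bolt_mapping(vstart, vend, __pa(vstart), mode, psize);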
2006-02-28  Merge ../powerpc-merge  (Paul Mackerras)
2006-02-28  [PATCH] powerpc/iseries: Fix double phys_to_abs bug in htab_bolt_mapping  (Michael Ellerman)

Before the merge I updated create_pte_mapping() to work for iSeries, by calling iSeries_hpte_bolt_or_insert (4c55130b2aa93370f1bf52d2304394e91cf8ee39). Later we changed iSeries_hpte_insert to cope with the bolting case, and called that instead from create_pte_mapping() (which was renamed to htab_bolt_mapping) (3c726f8dee6f55e96475574e9f645327e461884c).

Unfortunately that change introduced a subtle bug, where we pass an absolute address to iSeries_hpte_insert() where it expects a physical address. This leads to us calling phys_to_abs() twice on the physical address, which is seriously bogus. This only causes a problem if the absolute address from the first translation can be looked up again in the chunk_map, which depends on the size and layout of memory. I've seen it fail on one box, but not others.

The minimal fix is to pass the physical address to iSeries_hpte_insert(). For 2.6.17 we should make phys_to_abs() BUG if we try to double-translate an address.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-02-24  Merge ../powerpc-merge  (Paul Mackerras)
2006-02-24  [PATCH] powerpc: Only calculate htab_size in one place for kexec  (Michael Ellerman)

For kexec we need to know the size of the MMU hash table. Currently we calculate the size once in the htab code, and then twice more in the kexec code, once using htab_hash_mask and once using ppc64_pft_size. On some machines the ppc64_pft_size calculation is broken because ppc64_pft_size is not set.

So we need to fix the second calculation, but better still we should just calculate the size once and use it everywhere else.

Tested on Power5 LPAR, Power4 non-LPAR and Power3.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-02-07  [PATCH] powerpc: Always panic if lmb_alloc() fails  (Michael Ellerman)

Currently most callers of lmb_alloc() don't check if it worked or not; if it ever fails, weird bad things will probably happen. The few callers who do check just panic or BUG_ON.

So make lmb_alloc() panic internally, to catch bugs at the source. The few callers who did check the result no longer need to.

The only caller that did anything interesting with the return result was careful_allocation(). For it we create __lmb_alloc_base() which _doesn't_ panic automatically; a little messy, but passable.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
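A sketch of the policy described above, close in shape to the described change (the checking wrapper panics; __lmb_alloc_base() leaves the check to careful_allocation()); treat the details as illustrative:

    unsigned long __init lmb_alloc_base(unsigned long size, unsigned long align,
                                        unsigned long max_addr)
    {
            unsigned long alloc;

            alloc = __lmb_alloc_base(size, align, max_addr);
            if (alloc == 0)
                    panic("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
                          size, max_addr);
            return alloc;
    }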
2006-01-10  spelling: s/retreive/retrieve/  (Adrian Bunk)

Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-01-09  [PATCH] powerpc: Separate usage of KERNELBASE and PAGE_OFFSET  (Michael Ellerman)

This patch separates usage of KERNELBASE and PAGE_OFFSET. I haven't looked at any of the PPC32 code; if we ever want to support Kdump on PPC we'll have to do another audit, ditto for iSeries.

This patch makes PAGE_OFFSET the constant; it'll always be 0xC * 1 gazillion for 64-bit.

To get a physical address from a virtual one you subtract PAGE_OFFSET, _not_ KERNELBASE.

KERNELBASE is the virtual address of the start of the kernel; it's often the same as PAGE_OFFSET, but _might not be_. If you want to know something's offset from the start of the kernel you should subtract KERNELBASE.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
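The two rules in the message, restated as C (the __pa() definition matches ppc64's; KERNEL_OFFSET is a made-up name for this illustration):

    /* Virtual to physical: always subtract PAGE_OFFSET... */
    #define __pa(x)           ((unsigned long)(x) - PAGE_OFFSET)

    /* ...but an offset from the start of the kernel image subtracts
     * KERNELBASE, which only usually equals PAGE_OFFSET. */
    #define KERNEL_OFFSET(x)  ((unsigned long)(x) - KERNELBASE)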
2006-01-09  [PATCH] spufs: The SPU file system, base  (Arnd Bergmann)

This is the current version of the spu file system, used for driving SPEs on the Cell Broadband Engine. This release is almost identical to the version for the 2.6.14 kernel posted earlier, which is available as part of the Cell BE Linux distribution from http://www.bsc.es/projects/deepcomputing/linuxoncell/.

The first patch provides all the interfaces for running spu applications, but does not have any support for debugging SPU tasks or for scheduling. Both these functionalities are added in the subsequent patches. See Documentation/filesystems/spufs.txt on how to use spufs.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-12-29  [PATCH] ppc64: htab_initialize_secondary cannot be marked __init  (Anton Blanchard)

Sonny has noticed hotplug CPU on ppc64 is broken in 2.6.15-*. One of the problems is that htab_initialize_secondary is called when a cpu is being brought up, but it is marked __init.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>