path: root/include/asm-sh/page.h
Age  Commit message  Author
2008-07-29  sh: migrate to arch/sh/include/  (Paul Mundt)
This follows the sparc changes a439fe51a1f8eb087c22dd24d69cebae4a3addac. Most of the moving about was done with Sam's directions at: http://marc.info/?l=linux-sh&m=121724823706062&w=2 with subsequent hacking and fixups entirely my fault. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-07-28  sh: Add support for 16kB PAGE_SIZE.  (Paul Mundt)
16kB is a useful size on nommu, while 64kB still tends to be too big to be useful. Newer MMUs are likely to support this as well, so plug it in in anticipation of those, too. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-07-24  PAGE_ALIGN(): correctly handle 64-bit values on 32-bit architectures  (Andrea Righi)
On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit boundary. For example:

    u64 val = PAGE_ALIGN(size);

always returns a value < 4GB even if size is greater than 4GB.

The problem resides in the PAGE_MASK definition (from include/asm-x86/page.h, for example):

    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (_AC(1,UL) << PAGE_SHIFT)
    #define PAGE_MASK       (~(PAGE_SIZE-1))
    ...
    #define PAGE_ALIGN(addr)        (((addr)+PAGE_SIZE-1)&PAGE_MASK)

The "~" is performed on a 32-bit value, so anything greater than 4GB ANDed with PAGE_MASK is truncated to the 32-bit boundary. Using the ALIGN() macro is the right way, because it uses typeof(addr) for the mask.

Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h into include/linux/mm.h.

See also the lkml discussion: http://lkml.org/lkml/2008/6/11/237

[akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
[akpm@linux-foundation.org: fix v850]
[akpm@linux-foundation.org: fix powerpc]
[akpm@linux-foundation.org: fix arm]
[akpm@linux-foundation.org: fix mips]
[akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
[akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
[akpm@linux-foundation.org: fix powerpc]

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
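To make the truncation concrete, here is a minimal user-space sketch of the two macro styles; the names and the 32-bit assumption are illustrative only, not the kernel's actual headers. The typeof()-based ALIGN() widens the mask to the type of the value being aligned, so 64-bit inputs survive even on a 32-bit build:

    #include <stdint.h>
    #include <stdio.h>

    /* Old style: the "~" is evaluated on an unsigned long, 32 bits wide here. */
    #define PAGE_SHIFT              12
    #define PAGE_SIZE               (1UL << PAGE_SHIFT)
    #define PAGE_MASK               (~(PAGE_SIZE - 1))
    #define OLD_PAGE_ALIGN(addr)    (((addr) + PAGE_SIZE - 1) & PAGE_MASK)

    /* typeof()-based ALIGN(), in the spirit of include/linux/kernel.h:
     * the mask takes the type of the value being aligned. */
    #define ALIGN(x, a)             (((x) + ((typeof(x))(a) - 1)) & ~((typeof(x))(a) - 1))
    #define NEW_PAGE_ALIGN(addr)    ALIGN(addr, PAGE_SIZE)

    int main(void)
    {
            uint64_t size = 0x123456789ULL;         /* > 4GB */

            /* On a 32-bit build the old form masks off the upper 32 bits. */
            printf("old: %#llx\n", (unsigned long long)OLD_PAGE_ALIGN(size));
            printf("new: %#llx\n", (unsigned long long)NEW_PAGE_ALIGN(size));
            return 0;
    }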
2008-02-14  sh: Get SH-5 caches working again post-unification.  (Paul Mundt)
A number of cleanups to get the SH-5 cache management code in line with the rest of the SH backend. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-02-08  CONFIG_HIGHPTE vs. sub-page page tables.  (Martin Schwidefsky)
Background: I've implemented 1K/2K page tables for s390. These sub-page page tables are required to properly support the s390 virtualization instruction with KVM. The SIE instruction requires that the page tables have 256 page table entries (pte) followed by 256 page status table entries (pgste). The pgstes are only required if the process is using the SIE instruction. The pgstes are updated by the hardware and by the hypervisor for a number of reasons, one of them being dirty and reference bit tracking. To avoid wasting memory, the standard pte table allocation should return 1K/2K (31/64 bit), and 2K/4K if the process is using SIE.

Problem: Page size on s390 is 4K, page table size is 1K or 2K. That means the s390 version of pte_alloc_one cannot return a pointer to a struct page. The trouble is that with the CONFIG_HIGHPTE feature on x86, pte_alloc_one cannot return a pointer to a pte either, since that would require more than 32 bits for the return value of pte_alloc_one (and the pte * would not be accessible anyway, since it's not kmapped).

Solution: The only solution I found to this dilemma is a new typedef: pgtable_t. For s390, pgtable_t will be a (pte *), to be introduced with a later patch. For everybody else it will be a (struct page *). The additional problem of initializing the ptl lock and the NR_PAGETABLE accounting is solved with a constructor, pgtable_page_ctor, and a destructor, pgtable_page_dtor. The page table allocation and free functions need to call these two whenever a page table page is allocated or freed. pmd_populate will get a pgtable_t instead of a struct page pointer. To get the pgtable_t back from a pmd entry that has been installed with pmd_populate, a new function pmd_pgtable is added. It replaces the pmd_page call in free_pte_range and apply_to_pte_range.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
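For illustration, a minimal sketch of an allocation/free pair under the pgtable_t scheme described above, assuming the common case where pgtable_t is a struct page *; the example_ function names are hypothetical, not the literal code of any architecture:

    #include <linux/mm.h>
    #include <asm/pgalloc.h>

    /* Hypothetical pte_alloc_one()-style helper for the struct page * case. */
    static pgtable_t example_pte_alloc_one(struct mm_struct *mm)
    {
            struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

            if (!page)
                    return NULL;

            /* Initialise the split ptlock and the NR_PAGETABLE accounting. */
            pgtable_page_ctor(page);
            return page;            /* pgtable_t is a struct page * here */
    }

    static void example_pte_free(struct mm_struct *mm, pgtable_t pte)
    {
            pgtable_page_dtor(pte); /* undo the constructor before freeing */
            __free_page(pte);
    }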
2008-02-07  Cleanup asm/{elf,page,user}.h: #ifdef __KERNEL__ is no longer needed  (Kirill A. Shutemov)
asm/elf.h, asm/page.h and asm/user.h are no longer exported to userspace, so we can drop #ifdef __KERNEL__ for them. [k.shutemov@gmail.com: remove #ifdef __KERNEL__] Signed-off-by: Kirill A. Shutemov <k.shutemov@gmail.com> Reviewed-by: David Woodhouse <dwmw2@infradead.org> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Kirill A. Shutemov <k.shutemov@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-01-28  sh: Clean up places that make 29-bit physical assumptions.  (Stuart Menefy)
Signed-off-by: Stuart Menefy <stuart.menefy@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28  sh: Bump up ARCH_KMALLOC_MINALIGN for DMA cases.  (Paul Mundt)
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28  sh: Move PXSEG comments to addrspace.h.  (Paul Mundt)
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28  sh: Set HPAGE_SHIFT for 512MB hugetlb pages.  (Paul Mundt)
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28  sh: Tidy up various clear_page()/copy_page() definitions.  (Paul Mundt)
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28  sh: Split out pgtable.h into _32 and _64 variants.  (Paul Mundt)
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28  sh: Consolidate slab/kmalloc minalign values.  (Paul Mundt)
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-11-07  sh: Kill off __{copy,clear}_user_page().  (Paul Mundt)
Now that copy_to_user_page()/copy_from_user_page() are wired up, we can drop the old __copy_xxx() implementations. Now that the page colouring scheme has changed via kmap_coherent(), we can avoid the flush in these specific helpers. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-11-07  sh: Wire up clear_user_highpage().  (Paul Mundt)
With the kmap_coherent() API in place, this is trivial to implement, and lets us avoid the cache flush in certain cases. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-10-30  sh: Correct pte_page() breakage.  (Paul Mundt)
As noted by David, pte_page() is a macro defined as follows:

    include/asm-sh/pgtable.h
    #define pte_page(x)         phys_to_page(pte_val(x)&PTE_PHYS_MASK)

    include/asm-sh/page.h
    #define phys_to_page(phys)  (pfn_to_page(phys >> PAGE_SHIFT))

As you can see, the phys_to_page() macro doesn't wrap the 'phys' parameter in parentheses, so we end up with:

    pte_val(x)&PTE_PHYS_MASK >> PAGE_SHIFT

which is not what we wanted, as '>>' has a higher precedence than bitwise AND. I dug into the git repository and I believe this bug was added with this commit (104b8deaa5c0144cccfc7d914413ff80c7176af1):

    2006-03-27 KAMEZAWA Hiroyuki [PATCH] unify pfn_to_page: sh pfn_to_page

    -#define phys_to_page(phys)  (mem_map + (((phys)-__MEMORY_START) >> PAGE_SHIFT))
    -#define page_to_phys(page)  (((page - mem_map) << PAGE_SHIFT) + __MEMORY_START)
    +#define phys_to_page(phys)  (pfn_to_page(phys >> PAGE_SHIFT))
    +#define page_to_phys(page)  (page_to_pfn(page) << PAGE_SHIFT)

Reported-by: David ADDISON <david.addison@st.com>
Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
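A stand-alone illustration of the precedence problem and the parenthesised fix; the mask and PTE values below are made up purely for the example:

    #include <stdio.h>

    #define PAGE_SHIFT          12
    #define PTE_PHYS_MASK       0x1ffff000UL        /* illustrative value */

    /* Broken: '>>' binds tighter than '&', so the mask gets shifted,
     * not the masked result. */
    #define PHYS_TO_PFN_BROKEN(phys)    (phys >> PAGE_SHIFT)
    /* Fixed: parenthesise the argument so the AND happens first. */
    #define PHYS_TO_PFN_FIXED(phys)     ((phys) >> PAGE_SHIFT)

    int main(void)
    {
            unsigned long pteval = 0x8c012345UL;    /* illustrative PTE value */

            printf("broken: %#lx\n", PHYS_TO_PFN_BROKEN(pteval & PTE_PHYS_MASK));
            printf("fixed:  %#lx\n", PHYS_TO_PFN_FIXED(pteval & PTE_PHYS_MASK));
            return 0;
    }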
2007-09-21  sh: Fix up extended mode TLB for SH-X2+ cores.  (Paul Mundt)
The extended mode TLB requires both 64-bit PTEs and a 64-bit pgprot, correspondingly, the PGD also has to be 64-bits, so fix that up. The kernel and user permission bits really are decoupled in early cuts of the silicon, which means that we also have to set corresponding kernel permissions on user pages or we end up with user pages that the kernel simply can't touch (!). Finally, with those things corrected, really enable MMUCR.ME and correct the PTEA value (this simply needs to be the upper 32-bits of the PTE, with the size and protection bit encoding). Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-09-21  sh: Support explicit L1 cache disabling.  (Paul Mundt)
This reworks the cache mode configuration in Kconfig, and allows for explicit selection of write-back/write-through/off configurations. All of the cache flushing routines are optimized away for the off case. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-06-08  sh: Default to 4-byte alignment for SLUB objects.  (Paul Mundt)
Slub currently defaults to 8-byte alignment for the kmalloc and slab minalign values, where 4 will suffice. In the slab case BYTES_PER_WORD == 4 already, so defining the minalign values outright doesn't cause any regressions there either. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-06-08  sh: pfn_valid() depends on flatmem.  (Paul Mundt)
pfn_valid() is already defined in the sparsemem case, so we only need to define this for CONFIG_FLATMEM. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-06-08  sh: __user annotations for __get/__put_user().  (Paul Mundt)
This adds in some more __user annotations. These weren't being handled properly in some of the __get_user and __put_user paths, so tidy those up. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-05-07  sh: bootmem tidying for discontig/sparsemem preparation.  (Paul Mundt)
This reworks some of the node 0 bootmem initialization in preparation for discontigmem and sparsemem support. ARCH_POPULATES_NODE_MAP is switched to as a result of this. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-02-13  sh: Move __KERNEL__ up in asm/page.h.  (Paul Mundt)
This was breaking the uClibc build, which triggered the bogus page size error. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-12-06  sh: Preliminary support for SH-X2 MMU.  (Paul Mundt)
This adds some preliminary support for the SH-X2 MMU, used by newer SH-4A parts (particularly SH7785). This MMU implements a 'compat' mode with SH-X MMUs and an 'extended' mode for SH-X2 extended features. Extended features include additional page sizes (8kB, 4MB, 64MB), as well as the addition of page execute permissions. The extended mode attributes are placed in a second data array, which requires us to switch to 64-bit PTEs when in X2 mode. With the addition of the exec perms, we also overhaul the mmap prots somewhat, now that it's possible to handle them more intelligently. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Calculate shm alignment at runtime.  (Paul Mundt)
Set the SHM alignment at runtime, based off of probed cache desc. Optimize get_unmapped_area() to only colour align shared mappings. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Initial vsyscall page support.  (Paul Mundt)
This implements initial support for the vsyscall page on SH. At the moment we leave it configurable due to having nommu to support from the same code base. We hook it up for the signal trampoline return at present, with more to be added later, once uClibc catches up. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Clean up PAGE_SIZE definition for assembly use.  (Paul Mundt)
We want to be able to use PAGE_SIZE all over the place; this is the same approach adopted by other architectures. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
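The usual pattern for an assembly-safe PAGE_SIZE, also visible in the asm-x86 excerpt quoted under the PAGE_ALIGN entry above, is the _AC() helper: in C the constant gets its UL suffix, under __ASSEMBLY__ it stays a bare number. A sketch of that style, not necessarily the literal sh header:

    #ifdef __ASSEMBLY__
    #define _AC(X, Y)       X
    #else
    #define __AC(X, Y)      (X##Y)
    #define _AC(X, Y)       __AC(X, Y)
    #endif

    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (_AC(1, UL) << PAGE_SHIFT)
    #define PAGE_MASK       (~(PAGE_SIZE - 1))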
2006-09-27  sh: stack debugging support.  (Paul Mundt)
This adds a DEBUG_STACK_USAGE and DEBUG_STACKOVERFLOW for SH. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Various nommu fixes.  (Yoshinori Sato)
This fixes up some of the various outstanding nommu bugs on SH. Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: Make PAGE_OFFSET configurable.  (Paul Mundt)
nommu needs to be able to shift PAGE_OFFSET, so we switch it to a non-user-visible CONFIG_PAGE_OFFSET and use that in the few places where it matters. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: page table alloc cleanups and page fault optimizations.  (Paul Mundt)
Cleanup of page table allocators, using generic folded PMD and PUD helpers. TLB flushing operations are moved to a more sensible spot. The page fault handler is also optimized slightly, we no longer waste cycles on IRQ disabling for flushing of the page from the ITLB, since we're already under CLI protection by the initial exception handler. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27  sh: hugetlb updates.  (Paul Mundt)
For some of the larger sizes we permitted spanning pages across several PTEs, but this turned out to not be generally useful. This reverts the sh hugetlbpage interface to something more sensible using huge pages at single PTE granularity. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-21  Fix 'make headers_check' on sh  (Paul Mundt)
Cleanup for user headers, as noted:

    asm-sh/page.h requires asm-generic/memory_model.h, which does not exist in exported headers
    asm-sh/ptrace.h requires asm/ubc.h, which does not exist in exported headers

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2006-09-08  [PATCH] sh: fix FPN_START typo  (Alexey Dobriyan)
Not that it passes allmodconfig without it... Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp> Cc: Mark Haverkamp <markh@osdl.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-04-26  Don't include linux/config.h from anywhere else in include/  (David Woodhouse)
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2006-03-27  [PATCH] unify pfn_to_page: sh pfn_to_page  (KAMEZAWA Hiroyuki)
sh can use generic funcs. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07  [PATCH] sh: Drop hp690 discontig support  (Paul Mundt)
There was only one board using this (hp690 specifically), and it just so happens that it's only physically discontiguous at the "normal" P1 offset. If we bump up the P1 offset, it's possible to hit a shadowed region of memory where we suddenly become magically contiguous. As people have been using this shadowed region workaround for quite some time (and without any adverse effects), it's time to drop the left over discontig bits that no longer have any practical use (it was always very much hp690-centric to begin with). Signed-off-by: Paul Mundt <lethal@linux-sh.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-05  [PATCH] mm: consolidate get_order  (Stephen Rothwell)
Someone mentioned that almost all the architectures used basically the same implementation of get_order. This patch consolidates them into asm-generic/page.h and includes that in the appropriate places. The exceptions are ia64 and ppc which have their own (presumably optimised) versions. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
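A sketch of the consolidated generic helper, assuming the asm-generic/page.h loop form: it returns the smallest order such that (PAGE_SIZE << order) covers size.

    #define PAGE_SHIFT 12   /* assumed 4kB pages for this sketch */

    /* Smallest order such that (1UL << (PAGE_SHIFT + order)) >= size. */
    static inline int get_order(unsigned long size)
    {
            int order;

            size = (size - 1) >> (PAGE_SHIFT - 1);
            order = -1;
            do {
                    size >>= 1;
                    order++;
            } while (size);

            return order;
    }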
2005-06-21  [PATCH] Hugepage consolidation  (David Gibson)
A lot of the code in arch/*/mm/hugetlbpage.c is quite similar. This patch attempts to consolidate it across the architectures, putting the combined version in mm/hugetlb.c. There are a couple of uglyish hacks needed to convert all the hugepage architectures, but the result is a very large reduction in the total amount of code. It also means things like hugepage lazy allocation could be implemented in one place, instead of six.

Tested, at least a little, on ppc64, i386 and x86_64.

Notes:
- this patch changes the meaning of set_huge_pte() to be more analogous to set_pte()
- does SH4 need a special huge_ptep_get_and_clear()?

Acked-by: William Lee Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-16  Linux-2.6.12-rc2  (Linus Torvalds)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!