path: root/include/asm-x86/page_64.h
2008-07-09  x86: map UV chipset space - pagetable  (Jack Steiner)

Add boot-time function for creating additional 2MB page table entries for mapping chipset specific cached/uncached ranges.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Cc: linux-mm@kvack.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
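For orientation, the hook this adds to page_64.h amounts to a pair of boot-time mapping helpers; the names below (init_extra_mapping_uc / init_extra_mapping_wb) match what later mainline trees carry, but take this as an illustrative sketch rather than the exact diff:

    /* Sketch: create uncached / write-back 2MB kernel mappings for a
     * chipset-specific physical range during early boot (illustrative;
     * prototypes taken from later mainline headers). */
    extern void init_extra_mapping_uc(unsigned long phys, unsigned long size);
    extern void init_extra_mapping_wb(unsigned long phys, unsigned long size);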
2008-07-08  paravirt/x86, 64-bit: move __PAGE_OFFSET to leave a space for hypervisor  (Eduardo Habkost)

Set __PAGE_OFFSET to the most negative possible address + 16*PGDIR_SIZE. The gap is to allow a space for a hypervisor to fit. The gap is more or less arbitrary, but it's what Xen needs.

When booting native, kernel/head_64.S has a set of compile-time generated pagetables used at boot time. This patch removes their absolutely hard-coded layout, and makes it parameterised on __PAGE_OFFSET (and __START_KERNEL_map).

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
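For reference, with 4-level paging a PGD entry spans 512 GB, so the new base works out as follows; this is a worked sketch of the arithmetic described above, not the patch itself:

    /* 16 PGD entries * 512 GB = 8 TB hole left for a hypervisor. */
    #define PGDIR_SHIFT   39
    #define PGDIR_SIZE    (1UL << PGDIR_SHIFT)                   /* 512 GB */
    #define __PAGE_OFFSET (0xffff800000000000UL + 16 * PGDIR_SIZE)
    /* = 0xffff880000000000UL, the direct-mapping base */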
2008-07-08  x86: remove end_pfn in 64bit  (Yinghai Lu)

and use max_pfn directly.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-08  x86: introduce initmem_init for 64 bit  (Yinghai Lu)

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17  x86: account overlapped mappings in max_pfn_mapped  (Andi Kleen)

When end_pfn is not aligned to 2MB (or 1GB) then the kernel might map more memory than end_pfn. Account this in max_pfn_mapped.

Signed-off-by: Andi Kleen <ak@suse.de>
Cc: andreas.herrmann3@amd.com
Cc: mingo@elte.hu
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
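The idea is simply that the mapped high-water mark has to be rounded up to the large-page boundary rather than taken from end_pfn. A minimal sketch, assuming 2MB pages and a hypothetical local constant name (not the literal patch):

    /* Illustrative: the final 2MB PMD entry maps past `end` when `end` is
     * not 2MB-aligned, so account the rounded-up end in max_pfn_mapped. */
    #define LARGE_PAGE_BYTES  (2UL << 20)        /* hypothetical local name */
    unsigned long mapped_end = (end + LARGE_PAGE_BYTES - 1) & ~(LARGE_PAGE_BYTES - 1);
    max_pfn_mapped = mapped_end >> PAGE_SHIFT;   /* PAGE_SHIFT = 12 on x86 */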
2008-04-17  x86: implement true end_pfn_mapped for 32bit  (Andi Kleen)

Even on 32bit, 2MB pages can map more memory than is in the true max_low_pfn if end_pfn is not highmem and not aligned to 2MB. Add an end_pfn_map similar to x86-64 that accounts for this fact. This is important for code that really needs to know about all mapping aliases.

Signed-off-by: Andi Kleen <ak@suse.de>
Cc: andreas.herrmann3@amd.com
Cc: mingo@elte.hu
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17  include/asm-x86/page_64.h: checkpatch cleanups - formatting only  (Joe Perches)

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17  x86: increase the kernel text limit to 512 MB  (Ingo Molnar)

People sometimes do crazy stuff like building really large static arrays into their kernels or building allyesconfig kernels. Give more space to the kernel and push modules up a bit: the kernel gets 512 MB and modules get 1.5 GB. Should be enough for a few years ;-)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
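The resulting layout, in header form, is roughly the following sketch (the 512 MB figure is from the commit text; the MODULES_* spelling is how later mainline headers express the split):

    /* Sketch of the split described above (illustrative). */
    #define KERNEL_IMAGE_SIZE  (512 * 1024 * 1024)                      /* 512 MB for the image */
    #define MODULES_VADDR      (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
    #define MODULES_LEN        (MODULES_END - MODULES_VADDR)            /* roughly 1.5 GB */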
2008-02-26  x86: rename KERNEL_TEXT_SIZE => KERNEL_IMAGE_SIZE  (Ingo Molnar)

The KERNEL_TEXT_SIZE constant was mis-named, as we map not only the kernel text but the data, bss and init sections as well. That name led me down the wrong path with the KERNEL_TEXT_SIZE regression: I knew how much _text_ my images had and I knew about the 40 MB "text" limit, so I wrongly assumed I was on the safe side of the 40 MB limit with my 29 MB of text, while the total image size was slightly above 40 MB.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-02-26  x86: fix spontaneous reboot with allyesconfig bzImage  (Ingo Molnar)

Recently the 64-bit allyesconfig bzImage kernel started spontaneously rebooting during early bootup.

After a few fun hours spent with early init debugging, it turns out that we've got this rather annoying limit on the size of the kernel image:

    #define KERNEL_TEXT_SIZE (40*1024*1024)

which limit my vmlinux just happened to pass:

       text    data     bss      dec     hex  filename
   29703744 4222751 8646224 42572719 2899baf  vmlinux

40 MB is 41943040 bytes, so my 42572719-byte vmlinux was just ~1.5% above this limit :-/

So it happily crashed right in head_64.S, which - as we all know - is the most debuggable code in the whole architecture ;-)

So increase the limit to allow an up to 128 MB kernel image to be mapped (should anyone be that crazy or lazy). We have a full 4K of pagetable (level2_kernel_pgt) allocated for these mappings already, so there's no RAM overhead and the old limit was rather pointless and arbitrary.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-02-08  CONFIG_HIGHPTE vs. sub-page page tables.  (Martin Schwidefsky)

Background: I've implemented 1K/2K page tables for s390. These sub-page page tables are required to properly support the s390 virtualization instruction with KVM. The SIE instruction requires that the page tables have 256 page table entries (pte) followed by 256 page status table entries (pgste). The pgstes are only required if the process is using the SIE instruction. The pgstes are updated by the hardware and by the hypervisor for a number of reasons, one of them being dirty and reference bit tracking. To avoid wasting memory, the standard pte table allocation should return 1K/2K (31/64 bit), and 2K/4K if the process is using SIE.

Problem: Page size on s390 is 4K, page table size is 1K or 2K. That means the s390 version of pte_alloc_one cannot return a pointer to a struct page. The trouble is that with the CONFIG_HIGHPTE feature on x86, pte_alloc_one cannot return a pointer to a pte either, since that would require more than 32 bits for the return value of pte_alloc_one (and the pte * would not be accessible since it's not kmapped).

Solution: The only solution I found to this dilemma is a new typedef: pgtable_t. For s390, pgtable_t will be a (pte *) - to be introduced with a later patch. For everybody else it will be a (struct page *). The additional problem of initializing the ptl lock and the NR_PAGETABLE accounting is solved with a constructor pgtable_page_ctor and a destructor pgtable_page_dtor. The page table allocation and free functions need to call these two whenever a page table page is allocated or freed. pmd_populate will get a pgtable_t instead of a struct page pointer. To get the pgtable_t back from a pmd entry that has been installed with pmd_populate, a new function pmd_pgtable is added. It replaces the pmd_page call in free_pte_range and apply_to_pte_range.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
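On architectures where pgtable_t stays a struct page *, the allocation/free pair described above ends up looking roughly like this sketch (error handling and gfp details trimmed; not the literal patch):

    /* Illustrative pte_alloc_one()/pte_free() using the new ctor/dtor. */
    static pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
    {
        struct page *pte = alloc_pages(GFP_KERNEL | __GFP_ZERO, 0);
        if (pte)
            pgtable_page_ctor(pte);    /* init split-ptl lock, NR_PAGETABLE */
        return pte;
    }

    static void pte_free(struct mm_struct *mm, pgtable_t pte)
    {
        pgtable_page_dtor(pte);
        __free_page(pte);
    }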
2008-02-04  x86: add PUD_PAGE_SIZE  (Andi Kleen)

a PUD entry covers 1GB of virtual memory.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
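The constant itself is a one-liner; roughly as it appears in the header (PUD_SHIFT is 30 on x86-64):

    /* One PUD entry maps 2^30 bytes = 1 GB of virtual address space. */
    #define PUD_PAGE_SIZE  (_AC(1, UL) << PUD_SHIFT)
    #define PUD_PAGE_MASK  (~(PUD_PAGE_SIZE - 1))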
2008-01-30  x86: page.h: make pte_t a union to always include  (Jeremy Fitzhardinge)

Make sure pte_t, whatever its definition, has a pte element with type pteval_t. This allows common code to access it without needing to be specifically parameterised on what pagetable mode we're compiling for.

For 32-bit, this means that pte_t becomes a union with "pte" and "{ pte_low, pte_high }" (PAE) or just "pte_low" (non-PAE).

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
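For the PAE case, for example, the union ends up looking like the sketch below (based on the eventual mainline types; non-PAE keeps only pte_low, and 64-bit has a single pte field):

    typedef u64 pteval_t;                      /* pte value type under PAE */

    typedef union {
        struct {
            unsigned long pte_low, pte_high;   /* the historical two halves */
        };
        pteval_t pte;                          /* what common code accesses */
    } pte_t;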
2008-01-30  x86: page.h: move things back to their own files  (Jeremy Fitzhardinge)

# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199321648 28800
# Node ID 22f6a5902285b58bfc1fbbd9e183498c9017bd78
# Parent bba9287641ff90e836d090d80b5c0a846aab7162
x86: page.h: move things back to their own files

Oops, asm/page.h has turned into an #ifdef hellhole. Move 32/64-specific things back to their own headers to make it somewhat comprehensible...

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: page.h: move remaining bits and pieces  (Jeremy Fitzhardinge)

# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199319657 28800
# Node ID bba9287641ff90e836d090d80b5c0a846aab7162
# Parent d617b72a0cc9d14bde2087d065c36d4ed3265761
x86: page.h: move remaining bits and pieces

Move the remaining odds and ends into page.h.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: page.h: move pa and va related things  (Jeremy Fitzhardinge)

# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199319656 28800
# Node ID d617b72a0cc9d14bde2087d065c36d4ed3265761
# Parent 3bd7db6e85e66e7f3362874802df26a82fcb2d92
x86: page.h: move pa and va related things

Move and unify the virtual<->physical address space conversion functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
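The unified conversions are thin offset helpers over the direct mapping; on 64-bit they boil down to roughly:

    /* Kernel virtual <-> physical for the direct mapping (illustrative;
     * __phys_addr() hides the __START_KERNEL_map special case). */
    #define __pa(x)  __phys_addr((unsigned long)(x))
    #define __va(x)  ((void *)((unsigned long)(x) + PAGE_OFFSET))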
2008-01-30  x86: page.h: move and unify types for pagetable entry, #5  (Ingo Molnar)

based on:

  Subject: x86: page.h: move and unify types for pagetable entry
  From: Jeremy Fitzhardinge <jeremy@goop.org>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: page.h: move and unify types for pagetable entry, #3  (Ingo Molnar)

based on:

  Subject: x86: page.h: move and unify types for pagetable entry
  From: Jeremy Fitzhardinge <jeremy@goop.org>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: page.h: unify page copying and clearing  (Jeremy Fitzhardinge)

# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199317362 28800
# Node ID 4d9a413a0f4c1d98dbea704f0366457b5117045d
# Parent ba0ec40a50a7aef1a3153cea124c35e261f5a2df
x86: page.h: unify page copying and clearing

Move, and to some extent unify, the various page copying and clearing functions. The only unification here is that both architectures use the same function for copying/clearing user and kernel pages.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: page.h: unify constants  (Jeremy Fitzhardinge)

# HG changeset patch
# User Jeremy Fitzhardinge <jeremy@xensource.com>
# Date 1199317360 28800
# Node ID ba0ec40a50a7aef1a3153cea124c35e261f5a2df
# Parent c45c263179cb78284b6b869c574457df088027d1
x86: page.h: unify constants

There are many constants which are shared by 32 and 64-bit.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
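The shared pieces are mostly the basic page-size trio, e.g.:

    /* Constants common to 32- and 64-bit, as unified into page.h. */
    #define PAGE_SHIFT  12
    #define PAGE_SIZE   (_AC(1, UL) << PAGE_SHIFT)
    #define PAGE_MASK   (~(PAGE_SIZE - 1))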
2008-01-30  x64/page.h: convert some macros to inlines  (Randy Dunlap)

Convert clear_page/copy_page macros to inline functions for type-checking. Andrew wants to extirpate these ugly macros. (Ingo too. Thomas as well. Please send us more "kill ugly macros" patches! :-)

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
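The conversion is mechanical; for instance, the user-page wrappers turn from untyped macros into inlines the compiler can check, roughly:

    /* Before: #define clear_user_page(page, vaddr, pg) clear_page(page)
     * After (sketch): real parameter types, same generated code. */
    static inline void clear_user_page(void *page, unsigned long vaddr,
                                       struct page *pg)
    {
        clear_page(page);
    }

    static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
                                      struct page *topage)
    {
        copy_page(to, from);
    }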
2008-01-30  x86: 64-bit, make sparsemem vmemmap the only memory model  (Christoph Lameter)

Use sparsemem as the only memory model for UP, SMP and NUMA. Measurements indicate that DISCONTIGMEM has a higher overhead than sparsemem, and FLATMEM's benefits are minimal. So I think it's best to simply standardize on sparsemem.

Results of page allocator tests (the test can be had via git from the slab git tree, branch "tests"). Measurements are in cycle counts: 1000 allocations were performed and then the average cycle count was calculated.

    Order  FlatMem  Discontig  SparseMem
      0      639       665       641
      1      567       647       593
      2      679       774       692
      3      763       967       781
      4      961      1501       962
      5     1356      2344      1392
      6     2224      3982      2336
      7     4869      7225      5074
      8    12500     14048     12732
      9    27926     28223     28165
     10    58578     58714     58682

(Note that FlatMem is an SMP config and the rest are NUMA configurations.)

Memory use:

SMP Sparsemem
-------------

Kernel size:

       text    data     bss     dec    hex filename
    3849268  397739 1264856 5511863 541ab7 vmlinux

                 total     used      free  shared  buffers  cached
    Mem:       8242252    41164   8201088       0      352   11512
    -/+ buffers/cache:    29300   8212952
    Swap:      9775512        0   9775512

SMP Flatmem
-----------

Kernel size:

       text    data     bss     dec    hex filename
    3844612  397739 1264536 5506887 540747 vmlinux

So 4.5k growth in text size vs. FLATMEM.

                 total     used      free  shared  buffers  cached
    Mem:       8244052    40544   8203508       0      352   11484
    -/+ buffers/cache:    28708   8215344

2k growth in overall memory use after boot.

NUMA discontig:

       text    data     bss     dec    hex filename
    3888124  470659 1276504 5635287 55fcd7 vmlinux

                 total     used      free  shared  buffers  cached
    Mem:       8256256    56908   8199348       0      352   11496
    -/+ buffers/cache:    45060   8211196
    Swap:      9775512        0   9775512

NUMA sparse:

       text    data     bss     dec    hex filename
    3896428  470659 1276824 5643911 561e87 vmlinux

8k text growth. Given that we fully inline virt_to_page and friends now, that is rather good.

                 total     used      free  shared  buffers  cached
    Mem:       8264720    57240   8207480       0      352   11516
    -/+ buffers/cache:    45372   8219348
    Swap:      9775512        0   9775512

The total available memory is increased by 8k.

This patch makes sparsemem the default and removes discontig and flatmem support from x86.

[ akpm@linux-foundation.org: allnoconfig build fix ]
Acked-by: Andi Kleen <ak@suse.de>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
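The reason the inlined virt_to_page and friends stay cheap under SPARSEMEM_VMEMMAP is that the struct page array sits at a fixed virtual base, so the conversions are plain pointer arithmetic, roughly:

    /* vmemmap model (sketch): pfn <-> struct page is pure arithmetic
     * against a compile-time base address. */
    #define vmemmap            ((struct page *)VMEMMAP_START)
    #define __pfn_to_page(pfn) (vmemmap + (pfn))
    #define __page_to_pfn(pg)  ((unsigned long)((pg) - vmemmap))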
2008-01-30  x86: move page related declaration  (Thomas Gleixner)

end_pfn is in page.h, so end_pfn_map has a place there as well.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-10-16  x86_64: SPARSEMEM_VMEMMAP 2M page size support  (Christoph Lameter)

x86_64 uses 2M page table entries to map its 1-1 kernel space. We also implement the virtual memmap using 2M page table entries, so there is no additional runtime overhead over FLATMEM; only initialisation is slightly more complex. Since FLATMEM still references memory to obtain the mem_map pointer while SPARSEMEM_VMEMMAP uses a compile-time constant, SPARSEMEM_VMEMMAP should be superior.

With this, SPARSEMEM becomes the most efficient way of handling virt_to_page, pfn_to_page and friends for UP, SMP and NUMA on x86_64.

[apw@shadowen.org: code resplit, style fixups]
[apw@shadowen.org: vmemmap x86_64: ensure end of section memmap is initialised]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: Andi Kleen <ak@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-11  i386/x86_64: move headers to include/asm-x86  (Thomas Gleixner)

Move the headers to include/asm-x86 and fixup the header install make rules.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>