author     Jeremy Fitzhardinge <jeremy@goop.org>      2008-06-06 10:21:39 +0100
committer  Ingo Molnar <mingo@elte.hu>                2008-06-19 10:08:44 +0200
commit     ad524d46f36bbc32033bb72ba42958f12bf49b06 (patch)
tree       6147475fc20e81345dcf04486d8ea422df141951 /include
parent     9bedbcb207ed9a571b239231d99c8fd4a34ae24d (diff)
x86: set PAE PHYSICAL_MASK_SHIFT to 44 bits.
When a 64-bit x86 processor runs in 32-bit PAE mode, a pte can
potentially have the same number of physical address bits as the
64-bit host ("Enhanced Legacy PAE Paging"). This means, in theory,
we could have up to 52 bits of physical address in a pte.
The 32-bit kernel uses a 32-bit unsigned long to represent a pfn.
This means it can only represent physical addresses up to 32+12 = 44
bits wide (32 bits of pfn plus the 12-bit page offset). Rather than
widening pfns everywhere, just set 2^44 as the Linux x86_32-PAE
architectural limit for physical address size.
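As a minimal illustration of that arithmetic (a userspace sketch, not
kernel code; PAGE_SHIFT and PHYSICAL_MASK_SHIFT below simply restate
the 12- and 44-bit values from the text):

/* Illustrative only: a 32-bit pfn shifted by PAGE_SHIFT (12) tops out
 * just below 2^44, so a 44-bit mask is the natural limit. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT          12
#define PHYSICAL_MASK_SHIFT 44   /* mirrors the new __PHYSICAL_MASK_SHIFT */

int main(void)
{
	uint32_t max_pfn   = UINT32_MAX;                      /* largest 32-bit pfn */
	uint64_t max_phys  = (uint64_t)max_pfn << PAGE_SHIFT; /* base of highest page */
	uint64_t phys_mask = (1ULL << PHYSICAL_MASK_SHIFT) - 1;

	printf("base of highest representable page: %#llx\n",
	       (unsigned long long)max_phys);
	printf("44-bit physical mask:               %#llx\n",
	       (unsigned long long)phys_mask);   /* 0xfffffffffff = 2^44 - 1 */
	return 0;
}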
This is a bugfix for two cases:
 1. running a 32-bit PAE kernel on a machine with more than 64GB RAM.
 2. running a 32-bit PAE Xen guest on a host machine with more than
    64GB RAM.
In both cases, a pte could need to hold more than 36 bits of physical
address, and masking it to 36 bits will cause fairly severe havoc.
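For example (again an illustrative userspace sketch; the pte address
below is a made-up value above the 64GB = 2^36 boundary, not taken
from a real system):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t mask36   = (1ULL << 36) - 1;  /* old __PHYSICAL_MASK_SHIFT = 36 */
	uint64_t mask44   = (1ULL << 44) - 1;  /* new __PHYSICAL_MASK_SHIFT = 44 */
	uint64_t pte_phys = 0x1123456000ULL;   /* hypothetical frame above 64GB */

	/* The 36-bit mask silently drops the high bits, so the pte ends up
	 * pointing at a completely different frame below 64GB. */
	printf("36-bit mask: %#llx -> %#llx\n",
	       (unsigned long long)pte_phys,
	       (unsigned long long)(pte_phys & mask36));

	/* The 44-bit mask keeps the address intact. */
	printf("44-bit mask: %#llx -> %#llx\n",
	       (unsigned long long)pte_phys,
	       (unsigned long long)(pte_phys & mask44));
	return 0;
}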
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include')
-rw-r--r--	include/asm-x86/page_32.h	3
1 file changed, 2 insertions, 1 deletion
diff --git a/include/asm-x86/page_32.h b/include/asm-x86/page_32.h
index 424e82f8ae2..ccf0ba3c3ab 100644
--- a/include/asm-x86/page_32.h
+++ b/include/asm-x86/page_32.h
@@ -14,7 +14,8 @@
 #define __PAGE_OFFSET		_AC(CONFIG_PAGE_OFFSET, UL)
 
 #ifdef CONFIG_X86_PAE
-#define __PHYSICAL_MASK_SHIFT	36
+/* 44=32+12, the limit we can fit into an unsigned long pfn */
+#define __PHYSICAL_MASK_SHIFT	44
 #define __VIRTUAL_MASK_SHIFT	32
 #define PAGETABLE_LEVELS	3
 