From bead9a3abd15710b0bdfd418daef606722d86282 Mon Sep 17 00:00:00 2001
From: Ingo Molnar
Date: Wed, 16 Apr 2008 01:40:00 +0200
Subject: mm: sparsemem memory_present() fix

Fix memory corruption and crash on 32-bit x86 systems.

If a !PAE x86 kernel is booted on a 32-bit system with more than 4GB
of RAM, then we call memory_present() with a start/end that goes
outside the scope of MAX_PHYSMEM_BITS.

That causes this loop to happily walk over the limit of the sparse
memory section map:

	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
		unsigned long section = pfn_to_section_nr(pfn);
		struct mem_section *ms;

		sparse_index_init(section, nid);
		set_section_nid(section, nid);

		ms = __nr_to_section(section);
		if (!ms->section_mem_map)
			ms->section_mem_map = sparse_encode_early_nid(nid) |
							SECTION_MARKED_PRESENT;

'ms' will be out of bounds and we'll corrupt a small amount of memory by
encoding the node ID and writing SECTION_MARKED_PRESENT (==0x1) over it.

The corruption might happen when encoding a non-zero node ID, or due to
the SECTION_MARKED_PRESENT which is 0x1:

	mmzone.h:#define SECTION_MARKED_PRESENT	(1UL<<0)

The fix is to sanity check anything the architecture passes to
sparsemem.

This bug seems to be rather old (as old as sparsemem support itself),
but the exact incarnation depended on random details like configs,
which made this bug more prominent in v2.6.25-to-be.

An additional enhancement might be to print a warning about ignored or
trimmed memory ranges.

Signed-off-by: Ingo Molnar
Tested-by: Christoph Lameter
Cc: Pekka Enberg
Cc: Mel Gorman
Cc: Nick Piggin
Cc: Andrew Morton
Cc: Rafael J. Wysocki
Cc: Yinghai Lu
Cc: KAMEZAWA Hiroyuki
Signed-off-by: Linus Torvalds
---
 mm/sparse.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

(limited to 'mm')

diff --git a/mm/sparse.c b/mm/sparse.c
index f6a43c09c32..98d6b39c347 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -149,8 +149,18 @@ static inline int sparse_early_nid(struct mem_section *section)
 /* Record a memory area against a node. */
 void __init memory_present(int nid, unsigned long start, unsigned long end)
 {
+	unsigned long max_arch_pfn = 1UL << (MAX_PHYSMEM_BITS-PAGE_SHIFT);
 	unsigned long pfn;
 
+	/*
+	 * Sanity checks - do not allow an architecture to pass
+	 * in larger pfns than the maximum scope of sparsemem:
+	 */
+	if (start >= max_arch_pfn)
+		return;
+	if (end >= max_arch_pfn)
+		end = max_arch_pfn;
+
 	start &= PAGE_SECTION_MASK;
 	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
 		unsigned long section = pfn_to_section_nr(pfn);
-- 
cgit v1.2.3
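
Not part of the patch: a minimal standalone sketch of the clamp the fix
introduces, assuming !PAE x86 constants (MAX_PHYSMEM_BITS == 32 and
PAGE_SHIFT == 12, so max_arch_pfn == 1 << 20, the pfn of the 4GB
boundary). The demo_memory_present() helper and its printf diagnostics
are illustrative only; the printed messages correspond to the
"additional enhancement" the changelog mentions, not to anything the
kernel itself emits.

	/*
	 * Illustrative userspace demo -- not kernel code. Mimics the new
	 * clamp in memory_present() with assumed !PAE x86 constants.
	 */
	#include <stdio.h>

	#define MAX_PHYSMEM_BITS	32	/* assumption: !PAE x86 */
	#define PAGE_SHIFT		12	/* 4K pages */

	static void demo_memory_present(unsigned long start, unsigned long end)
	{
		unsigned long max_arch_pfn = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);

		if (start >= max_arch_pfn) {
			printf("pfn range [%#lx, %#lx) ignored entirely\n", start, end);
			return;
		}
		if (end >= max_arch_pfn) {
			printf("pfn range end trimmed from %#lx to %#lx\n",
			       end, max_arch_pfn);
			end = max_arch_pfn;
		}
		printf("recording pfns [%#lx, %#lx)\n", start, end);
	}

	int main(void)
	{
		/* 0..5GB on a !PAE kernel: end pfn exceeds max_arch_pfn, gets trimmed */
		demo_memory_present(0x0, 0x140000);
		/* 4GB..5GB: entirely above the limit, ignored */
		demo_memory_present(0x100000, 0x140000);
		return 0;
	}

Without the clamp, pfns at or above max_arch_pfn would be fed to
pfn_to_section_nr() and index past the end of the sparse section map,
which is exactly the out-of-bounds write the changelog describes.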