On our raw spinlocks, we currently attempt to take the lock, and if we do
not get it we enter a spin loop. This spin loop will likely continue for
a while, yet we predict the branch that keeps us spinning as likely.
Shouldn't we instead predict that we will get out of the loop, so that the
instructions after it are already prefetched? Even when we mispredict because
the lock is still held, it won't matter, since we are waiting anyway.
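As a rough sketch (hypothetical helper names, not the actual ppc64 routines), the change amounts to marking the busy-wait condition with unlikely() so the instructions after the loop are the ones prefetched:
static inline void my_raw_spin_lock(volatile unsigned int *lock)
{
	while (!my_spin_trylock(lock)) {	/* my_spin_trylock(): hypothetical acquire attempt */
		do {
			cpu_relax();		/* e.g. HMT_low() while the lock is contended */
		} while (unlikely(*lock));	/* predict that we fall out of the spin loop */
	}
}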
I did a couple of quick benchmarks, but the results are inconclusive.
16-way 690 running specjbb with original code
# ./specjbb 3000 16 1 1 19 30 120
...
Valid run, Score is 59282
16-way 690 running specjbb with unlikely code
# ./specjbb 3000 16 1 1 19 30 120
...
Valid run, Score is 59541
I saw a small increase on a JS20 as well (~1.6%)
JS20 specjbb w/ original code
# ./specjbb 400 2 1 1 19 30 120
...
Valid run, Score is 20460
JS20 specjbb w/ unlikely code
# ./specjbb 400 2 1 1 19 30 120
...
Valid run, Score is 20803
Anton said:
Mispredicting the spinlock busy loop also means we slow down the rate at which
we do the loads, which can be good for heavily contended locks.
Note: There are some gcc issues with our default build and branch prediction,
but a CONFIG_POWER4_ONLY build should emit them correctly. I'm working with
Alan Modra on it now.
Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
We no longer use any ppcdebug stuff in a.out.h, so remove the define.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
There were a few issues with the ppc64 noexec support:
- The 64bit ABI has a non-executable stack by default, but at the moment 64bit apps require a PT_GNU_STACK section in order to actually get one.
- Disable the read-implies-exec workaround on the 64bit ABI. The 64bit toolchain has never had problems with incorrect mmap permissions (the 32bit one has, which is why we retain the workaround there).
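As a sketch of the second point (assuming the generic binfmt_elf hook and ppc64's TIF_32BIT flag; the real macro may differ), restricting the workaround to the 32bit ABI looks roughly like:
/* Keep READ_IMPLIES_EXEC only for 32bit tasks; 64bit gets strict permissions. */
#define elf_read_implies_exec(ex, exec_stk) \
	(test_thread_flag(TIF_32BIT) ? (exec_stk != EXSTACK_DISABLE_X) : 0)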
With these fixes, as well as a gcc fix from Alan Modra (that was recently
committed), 64bit apps work as expected.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
This patch converts ppc64 to use the generic pgtable-nopud.h instead of the
"fixup" header.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Replace the misleading definition of FIRST_USER_PGD_NR 0 with a definition of
FIRST_USER_ADDRESS 0 in all the MMU architectures beyond arm and arm26.
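In effect, per architecture (a sketch; exact headers and values vary):
/* Before: a pgd index, which is misleading about what callers want. */
#define FIRST_USER_PGD_NR	0
/* After: the lowest user virtual address, which is what callers use. */
#define FIRST_USER_ADDRESS	0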
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer being
called, and it wasn't obvious what to do about them.
The ppc64 case turns out to be easy: the associated tables are noted elsewhere
and freed later, so it is safe either to skip its hugetlb areas or to go through
the motions of freeing nothing. Since ia64 does need a special case anyway,
restore to ppc64 the special case of skipping them.
The ia64 hugetlb case has been broken since pgd_addr_end went in, though it
probably appeared to work okay if you just had one such area; in fact it's
been broken much longer if you consider a long munmap spanning from another
region into the hugetlb region.
In the ia64 hugetlb region, more virtual address bits are available than in
the other regions, yet the page tables are structured the same way: the page
at the bottom is larger. Here we need to scale down each addr before passing
it to the standard free_pgd_range. I was about to write a hugely_scaled_down
macro, but found that htlbpage_to_page already exists for just this purpose.
Fixed an off-by-one in ia64's is_hugepage_only_range.
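Sketched out (assuming ia64's existing htlbpage_to_page() helper and this series' free_pgd_range() signature; the real code also has to decide whether floor and ceiling lie inside the hugetlb region before scaling them), the ia64 wrapper amounts to:
void hugetlb_free_pgd_range(struct mmu_gather **tlb, unsigned long addr,
			    unsigned long end, unsigned long floor,
			    unsigned long ceiling)
{
	/* Scale hugetlb virtual addresses down to the normal page-table
	 * geometry before handing them to the generic walker. */
	addr = htlbpage_to_page(addr);
	end  = htlbpage_to_page(end);
	free_pgd_range(tlb, addr, end, floor, ceiling);
}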
Uninline free_pgd_range to make it available to ia64. Make sure the
vma-gathering loop in free_pgtables cannot join a hugepage_only_range to any
other (safe to join huges? probably but don't bother).
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
There's only one usage of MM_VM_SIZE(mm) left, and it's a troublesome macro
because mm doesn't contain the (32-bit emulation?) info needed. But it too is
only needed because we ignore the end from the vma list.
We could make flush_pgtables return that end, or unmap_vmas. Choose the
latter, since it's a natural fit with unmap_mapping_range_vma needing to know
its restart addr. This does make more than a minimal change, but if unmap_vmas
had returned the end before, this is how we'd have done it, rather than
storing the break_addr in zap_details.
unmap_vmas used to return the count of vmas scanned, but that's just debug
information which hasn't been useful in a while; and if we want the
map_count-is-zero-on-exit check back, it can easily come from the final
remove_vm_struct loop.
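Very roughly, the exit_mmap() flow after the change (a sketch; argument lists abbreviated and may not match the tree exactly):
unsigned long end;
/* unmap_vmas() now hands back the highest address it unmapped... */
end = unmap_vmas(&tlb, mm, vma, 0, -1, &nr_accounted, NULL);
free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
/* ...so we no longer need MM_VM_SIZE(mm) to bound the TLB flush. */
tlb_finish_mmu(tlb, 0, end);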
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
For prefetches of NULL (as when walking a short linked list), PPC64 will in
some cases take a performance hit. The hardware needs to do the TLB walk,
and said walk will always miss, which means (up to) two L2 misses as
penalty. This seems to hurt overall performance, so for NULL pointers skip
the prefetch altogether.
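The change boils down to an early-out in the prefetch helpers, roughly (a sketch using the ppc64 dcbt data-cache-touch instruction; the real inline asm may differ in detail):
static inline void prefetch(const void *x)
{
	if (unlikely(!x))	/* a NULL prefetch would only buy us a TLB miss */
		return;
	__asm__ __volatile__ ("dcbt 0,%0" : : "r" (x));
}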
Signed-off-by: Olof Johansson <olof@austin.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
This patch reworks the way the ppc64 vDSO is mapped into user memory by the kernel
to make it more robust against possible collisions with executable
segments. Instead of just whacking a VMA at 1Mb, I now use
get_unmapped_area() with a hint, and I moved the mapping of the vDSO to
after the mapping of the various ELF segments and of the interpreter, so
that conflicts get caught properly (it still has to be before
create_elf_tables, since the latter will fill AT_SYSINFO_EHDR with the
proper address).
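A sketch of the mapping step (hypothetical variable names around the real get_unmapped_area() call; the surrounding vDSO setup code is omitted):
/* Ask the generic allocator for a slot near the preferred base
 * instead of hard-wiring the vDSO at 1Mb. */
vdso_base = get_unmapped_area(NULL, vdso_base /* hint */,
			      vdso_pages << PAGE_SHIFT, 0, 0);
if (IS_ERR_VALUE(vdso_base))
	return vdso_base;	/* no room: fail the exec cleanly */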
While I was at it, I also changed the 32-bit and 64-bit vDSOs to link at
their "natural" address of 1Mb instead of 0. This is the address where
they are normally mapped in the absence of a conflict. By doing so, it should be
possible to properly prelink them once it's been verified to work with glibc.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!
|