author     Nick Piggin <npiggin@suse.de>                          2009-01-06 14:40:44 +1100
committer  Lachlan McIlroy <lachlan@redback.melbourne.sgi.com>    2009-01-06 14:40:44 +1100
commit     d2859751cd0bf586941ffa7308635a293f943c17
tree       24f5f4ba78bf3722609e20a9346976226b95878a /include/asm-m32r/cache.h
parent     195ec037ff8f6fa800616e0dad8d57a98b6fb37e
[XFS] remove old vmap cache
XFS's vmap batching simply defers a number (up to 64) of vunmaps, and keeps
track of them in a list. To purge the batch, it just goes through the list and
calls vunmap on each one. This is pretty poor: a global TLB flush is generally
still performed on each vunmap, with the most expensive parts of the operation
being the broadcast IPIs and locking involved in the SMP callouts, and the
locking involved in the vmap management -- none of these are avoided by just
batching up the calls. I'm actually surprised it ever made much difference.

(Now that the lazy vmap allocator is upstream, this description is not quite
right, but the vunmap batching still doesn't seem to do much.)
Rip all this logic out of XFS completely. I will improve vmap performance
and scalability directly in a subsequent patch.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Reviewed-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
Diffstat (limited to 'include/asm-m32r/cache.h')
0 files changed, 0 insertions, 0 deletions