| author | Marcelo Tosatti <mtosatti@redhat.com> | 2009-07-15 15:34:41 -0300 |
|---|---|---|
| committer | Avi Kivity <avi@redhat.com> | 2009-09-10 08:33:14 +0300 |
| commit | 6a1ac77110ee3e8d8dfdef8442f3b30b3d83e6a2 (patch) | |
| tree | 8ca7be75dcdd59b271869b8a002faaf8a4fa8857 /arch/x86/kvm | |
| parent | 3662cb1cd6ed26873ca808f3e16cc54246ad40ca (diff) | |
KVM: MMU: fix missing locking in alloc_mmu_pages
n_requested_mmu_pages/n_free_mmu_pages are used by
kvm_mmu_change_mmu_pages to calculate the number of pages to zap.
alloc_mmu_pages, called from the vcpu initialization path, modifies these
variables without proper locking, which can result in a negative value
in kvm_mmu_change_mmu_pages (for example, with CPU hotplug).
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
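
For illustration, below is a minimal user-space sketch of the locking pattern this patch introduces. It is not KVM code: fake_kvm_arch, vcpu_init_thread and change_pages_thread are made-up stand-ins, and a pthread mutex plays the role of kvm->mmu_lock. One thread mimics alloc_mmu_pages() resetting n_free_mmu_pages on the vcpu initialization path, the other mimics kvm_mmu_change_mmu_pages() reading the counters to size its zap; holding the lock around both keeps the reader from seeing a half-updated state.

/*
 * Minimal user-space sketch of the locking pattern added by this patch.
 * Everything here is a simplified stand-in, not the real KVM code:
 * fake_kvm_arch mimics the n_*_mmu_pages counters in struct kvm_arch,
 * and a pthread mutex plays the role of kvm->mmu_lock.
 */
#include <pthread.h>
#include <stdio.h>

struct fake_kvm_arch {
	int n_requested_mmu_pages;
	int n_free_mmu_pages;
	int n_alloc_mmu_pages;
};

static struct fake_kvm_arch arch = {
	.n_requested_mmu_pages = 0,
	.n_free_mmu_pages      = 64,
	.n_alloc_mmu_pages     = 256,
};

/* Stand-in for kvm->mmu_lock. */
static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mimics alloc_mmu_pages() on the vcpu init path: reset the free count. */
static void *vcpu_init_thread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&mmu_lock);          /* the locking the patch adds */
	if (arch.n_requested_mmu_pages)
		arch.n_free_mmu_pages = arch.n_requested_mmu_pages;
	else
		arch.n_free_mmu_pages = arch.n_alloc_mmu_pages;
	pthread_mutex_unlock(&mmu_lock);
	return NULL;
}

/* Mimics kvm_mmu_change_mmu_pages() reading the counters to size a zap. */
static void *change_pages_thread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&mmu_lock);
	int used = arch.n_alloc_mmu_pages - arch.n_free_mmu_pages;
	pthread_mutex_unlock(&mmu_lock);
	printf("used pages seen under the lock: %d\n", used);
	return NULL;
}

int main(void)
{
	pthread_t init, change;

	pthread_create(&init, NULL, vcpu_init_thread, NULL);
	pthread_create(&change, NULL, change_pages_thread, NULL);
	pthread_join(init, NULL);
	pthread_join(change, NULL);
	return 0;
}

Build with gcc -pthread to experiment; dropping the mutex calls reintroduces the kind of unsynchronized access the commit message describes.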
Diffstat (limited to 'arch/x86/kvm')
| -rw-r--r-- | arch/x86/kvm/mmu.c | 2 |
1 file changed, 2 insertions, 0 deletions
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 87c67f44927..86c2551fe13 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2728,12 +2728,14 @@ static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
 
 	ASSERT(vcpu);
 
+	spin_lock(&vcpu->kvm->mmu_lock);
 	if (vcpu->kvm->arch.n_requested_mmu_pages)
 		vcpu->kvm->arch.n_free_mmu_pages =
 					vcpu->kvm->arch.n_requested_mmu_pages;
 	else
 		vcpu->kvm->arch.n_free_mmu_pages =
 					vcpu->kvm->arch.n_alloc_mmu_pages;
+	spin_unlock(&vcpu->kvm->mmu_lock);
 	/*
 	 * When emulating 32-bit mode, cr3 is only 32 bits even on x86_64.
 	 * Therefore we need to allocate shadow page tables in the first