path: root/drivers/pci/intel-iommu.c
2009-08-06  intel-iommu: Fix enabling snooping feature by mistake  (Sheng Yang)
Two defects combined to make KVM device passthrough fail at random:
1. iommu_snooping is not initialized to zero when vm_iommu_init() is called, so it could end up with a random value.
2. One line added by commit 2c2e2c38 ("IOMMU Identity Mapping Support") changed the code path so that it bypasses domain_update_iommu_cap() and also misses the increment of the domain iommu reference count.
The latter is also likely to cause a leak of domains on repeated VMM assignment and deassignment.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-08-05  intel-iommu: Mask physical address to correct page size in intel_map_single()  (Fenghua Yu)
The physical address passed to domain_pfn_mapping() should be rounded down to the start of the MM page, not the VT-d page. This issue causes a kernel panic on platforms where PAGE_SIZE > VTD_PAGE_SIZE, e.g. ia64.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
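A minimal sketch of the difference the fix is about, assuming the kernel's PAGE_SIZE/VTD_PAGE_SIZE constants; the variable names here are illustrative only:

    /* Sketch: on ia64, PAGE_SIZE (e.g. 16KiB) is larger than the 4KiB VT-d
     * page, so the address handed to domain_pfn_mapping() must be rounded
     * down to the MM page, not merely to the VT-d page. */
    u64 mm_aligned  = paddr & ~((u64)PAGE_SIZE - 1);      /* correct rounding */
    u64 vtd_aligned = paddr & ~((u64)VTD_PAGE_SIZE - 1);  /* wrong when PAGE_SIZE > VTD_PAGE_SIZE */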
2009-08-05  intel-iommu: Correct sglist size calculation.  (Fenghua Yu)
In domain_sg_mapping(), use aligned_nrpages() instead of hand-coded rounding code for calculating the size of each sg elem. This means that on IA64 we correctly round up to the MM page size, not just to the VT-d page size. Also remove the incorrect mm_to_dma_pfn() when intel_map_sg() calls domain_sg_mapping() -- the 'size' variable is in VT-d pages already. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
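For reference, a hedged sketch of what an aligned_nrpages()-style helper computes: round the host address plus length up to whole MM pages and return the count in VT-d pages. The exact upstream implementation may differ in detail:

    /* Sketch: how many VT-d pages does [host_addr, host_addr + size) span,
     * once rounded out to whole MM pages? */
    static inline unsigned long aligned_nrpages(unsigned long host_addr, size_t size)
    {
            host_addr &= ~PAGE_MASK;   /* keep only the offset within the MM page */
            return PAGE_ALIGN(host_addr + size) >> VTD_PAGE_SHIFT;
    }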
2009-07-08  intel-iommu: Fix intel_iommu_unmap_range() with size 0  (Sheng Yang)
After an API change, intel_iommu_unmap_range() introduced an assumption that the size parameter is never 0; otherwise dma_pte_clear_range() would be given an overflowed argument. But callers such as KVM did not previously make this assumption, so some BUG()s were triggered. Fix it by ignoring calls with size 0.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
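A sketch of the kind of guard the fix describes; the signature follows the iommu_ops range-unmap callback of that era, and the body shown here is illustrative:

    static void intel_iommu_unmap_range(struct iommu_domain *domain,
                                        unsigned long iova, size_t size)
    {
            /* Sketch: callers such as KVM may legitimately pass size == 0;
             * return early instead of handing an overflowed range downstream. */
            if (!size)
                    return;

            /* ... existing unmap and IOTLB-flush logic ... */
    }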
2009-07-04  intel-iommu: Don't use identity mapping for PCI devices behind bridges  (David Woodhouse)
Our current strategy for pass-through mode is to put all devices into the 1:1 domain at startup (which is before we know what their dma_mask will be), and only _later_ take them out of that domain, if it turns out that they really can't address all of memory.

However, when there are a bunch of PCI devices behind a bridge, they all end up with the same source-id on their DMA transactions, and hence in the same IOMMU domain. This means that we _can't_ easily move them from the 1:1 domain into their own domain at runtime, because there might be DMA in-flight from their siblings.

So we have to adjust our pass-through strategy: For PCI devices not on the root bus, and for the bridges which will take responsibility for their transactions, we have to start up _out_ of the 1:1 domain, just in case.

This fixes the BUG() we see when we have 32-bit-capable devices behind a PCI-PCI bridge, and use the software identity mapping.

It does mean that we might end up using 'normal' mapping mode for some devices which could actually live with the faster 1:1 mapping -- but this is only for PCI devices behind bridges, which presumably aren't the devices for which people are most concerned about performance.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-04  intel-iommu: Use iommu_should_identity_map() at startup time too.  (David Woodhouse)
At boot time, the dma_mask won't have been set on any devices, so we assume that all devices will be 64-bit capable (and thus get a 1:1 map). Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-04  intel-iommu: No mapping for non-PCI devices  (David Woodhouse)
This should fix kernel.org bug #11821, where the dcdbas driver makes up a platform device and then uses dma_alloc_coherent() on it, in an attempt to get memory < 4GiB. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-04  intel-iommu: Restore DMAR_BROKEN_GFX_WA option for broken graphics drivers  (David Woodhouse)
We need to give people a little more time to fix the broken drivers. Re-introduce this, but tied in properly with the 'iommu=pt' support this time. Change the config option name and make it default to 'no' too. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-04  intel-iommu: Add iommu_should_identity_map() function  (David Woodhouse)
We do this twice, and it's about to get more complicated. This makes the code slightly clearer about what it's doing, too. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-04  intel-iommu: Fix reattaching of devices to identity mapping domain  (David Woodhouse)
When we reattach a device to the si_domain (because it's been removed from a VM), we weren't calling domain_context_mapping() to actually tell the hardware about that. We should really put the call to domain_context_mapping() into domain_add_dev_info() -- we never call the latter without also doing the former, and we can keep the error paths simple that way. But that's a cleanup which can wait for 2.6.32 now. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-04  intel-iommu: Don't set identity mapping for bypassed graphics devices  (David Woodhouse)
We should check iommu_dummy() _first_, because that means it's attached to an iommu that we've just disabled completely. At the moment, we might try to put the device into the identity mapping domain. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
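The ordering the commit insists on can be sketched roughly as below; iommu_dummy() and the identity-mapping path are named in this log, while the surrounding decision function is abridged and partly assumed:

    /* Sketch: a device hanging off an IOMMU we have disabled entirely must
     * be bypassed first; only then consider the identity-mapping domain. */
    if (iommu_dummy(pdev))
            return 1;               /* no translation at all for this device */

    if (!iommu_identity_mapping)
            return 0;               /* identity mapping not in use */

    /* ... decide whether to attach pdev to the si_domain ... */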
2009-07-04  intel-iommu: Fix dma vs. mm page confusion with aligned_nrpages()  (David Woodhouse)
The aligned_nrpages() function rounds up to the next VM page, but returns its result as a number of DMA pages. Purely theoretical except on IA64, which doesn't boot with VT-d right now anyway. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-02  intel-iommu: Don't keep freeing page zero in dma_pte_free_pagetable()  (David Woodhouse)
Check dma_pte_present() and only free the page if there _is_ one. Kind of surprising that there was no warning about this. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
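A sketch of the guard being described; dma_pte_present() and the other helpers are existing names in this file, but the loop context is abridged:

    /* Sketch: only free the next-level page-table page if the PTE actually
     * points at one; otherwise we would keep "freeing" physical page zero. */
    if (dma_pte_present(pte)) {
            free_pgtable_page(phys_to_virt(dma_pte_addr(pte)));
            dma_clear_pte(pte);
    }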
2009-07-02  intel-iommu: Introduce first_pte_in_page() to simplify PTE-setting loops  (David Woodhouse)
On Wed, 2009-07-01 at 16:59 -0700, Linus Torvalds wrote:
> I also _really_ hate how you do
>
>    (unsigned long)pte >> VTD_PAGE_SHIFT ==
>    (unsigned long)first_pte >> VTD_PAGE_SHIFT

Kill this, in favour of just looking to see if the incremented pte pointer has 'wrapped' onto the next page. Which means we have to check it _after_ incrementing it, not before.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
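A hedged sketch of such a helper: after pte++, ask whether the pointer now sits at the very start of a VT-d-page-sized block of PTEs, i.e. whether it has wrapped onto the next page-table page:

    /* Sketch: true when pte points at the first entry of a page-table page. */
    static inline bool first_pte_in_page(struct dma_pte *pte)
    {
            return !((unsigned long)pte & (VTD_PAGE_SIZE - 1));
    }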
2009-07-01  intel-iommu: Use cmpxchg64_local() for setting PTEs  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-01  intel-iommu: Warn about unmatched unmap requests  (David Woodhouse)
This would have found the bug in i386 pci_unmap_addr() a long time ago. We shouldn't just silently return without doing anything. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-07-01  intel-iommu: Kill superfluous mapping_lock  (David Woodhouse)
Since we're using cmpxchg64() anyway (because that's the only way to do an atomic 64-bit store on i386), we might as well ditch the extra locking and just use cmpxchg64() to ensure that we don't add the page twice. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
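A rough sketch of the lock-free insert being described, with illustrative variable names; cmpxchg64() both performs the atomic 64-bit store and tells us whether another CPU won the race:

    /* Sketch: install a freshly allocated page-table page behind 'pte'.
     * If another CPU already installed one, cmpxchg64() leaves theirs in
     * place and returns a non-zero old value, so we just free ours. */
    void *new_page = alloc_pgtable_page();
    u64 pteval = virt_to_phys(new_page) | DMA_PTE_READ | DMA_PTE_WRITE;

    if (cmpxchg64(&pte->val, 0ULL, pteval))
            free_pgtable_page(new_page);    /* lost the race; no lock required */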
2009-07-01  intel-iommu: Ensure that PTE writes are 64-bit atomic, even on i386  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-30  intel-iommu: Performance improvement for dma_pte_free_pagetable()  (David Woodhouse)
As with other functions, batch the CPU data cache flushes and don't keep recalculating PTE addresses. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-30  intel-iommu: Don't free too much in dma_pte_free_pagetable()  (David Woodhouse)
The loop condition was wrong -- we should free a PMD only if its _entire_ range is within the range we're intending to clear. The early-termination condition was right, but not the loop. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
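The containment test being described can be sketched as follows; the pfn-range names and the freeing helper here are purely illustrative:

    /* Sketch: a page-directory entry covering [level_start, level_end] may
     * only be freed when that whole span lies inside the range being cleared. */
    if (level_start >= start_pfn && level_end <= last_pfn)
            free_pagetable_level(pte);      /* hypothetical helper for this sketch */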
2009-06-30  intel-iommu: dump mappings but don't die on pte already set  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-30  intel-iommu: Combine domain_pfn_mapping() and domain_sg_mapping()  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-30  intel-iommu: Introduce domain_sg_mapping() to speed up intel_map_sg()  (David Woodhouse)
Instead of calling domain_pfn_mapping() repeatedly with single or small numbers of pages, just pass the sglist in. It can optimise the number of cache flushes like domain_pfn_mapping() does, and gives a huge speedup for large scatterlists. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
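A rough sketch of the shape of such a helper: walk the scatterlist once and hand each element's page run to the PTE-setting code, rather than making one domain_pfn_mapping() call per element. The inner mapping helper is an assumption for illustration; the scatterlist accessors are standard kernel API:

    /* Sketch: map a whole scatterlist into the domain starting at iov_pfn. */
    static int domain_sg_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
                                 struct scatterlist *sglist, int nelems, int prot)
    {
            struct scatterlist *sg;
            int i;

            for_each_sg(sglist, sg, nelems, i) {
                    unsigned long nr_pages = aligned_nrpages(sg->offset, sg->length);
                    unsigned long phys_pfn = sg_phys(sg) >> VTD_PAGE_SHIFT;

                    /* set nr_pages PTEs in one pass, batching the cache flushes */
                    map_pfn_run(domain, iov_pfn, phys_pfn, nr_pages, prot);  /* illustrative */
                    iov_pfn += nr_pages;
            }
            return 0;
    }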
2009-06-29  intel-iommu: Simplify __intel_alloc_iova()  (David Woodhouse)
There's no need for the separate iommu_alloc_iova() function, and certainly not for it to be global. Remove the underscores while we're at it. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Performance improvement for domain_pfn_mapping()  (David Woodhouse)
As with dma_pte_clear_range(), don't keep flushing a single PTE at a time. And also micro-optimise the setting of PTE values rather than using the helper functions to do all the masking. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Performance improvement for dma_pte_clear_range()  (David Woodhouse)
It's a bit silly to repeatedly call domain_flush_cache() for each PTE individually, as we clear it. Instead, batch them up and flush a whole range at a time. We might as well refrain from recalculating the PTE address from scratch each time round the loop too. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
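A hedged sketch of the batching pattern: clear consecutive PTEs within one page-table page, then flush the whole run with a single domain_flush_cache() call. pfn_to_dma_pte(), dma_clear_pte(), first_pte_in_page() and domain_flush_cache() are names used elsewhere in this log; the real code also has to handle holes where no PTE page exists:

    /* Sketch: clear PTEs for [start_pfn, last_pfn], one flush per PTE page. */
    while (start_pfn <= last_pfn) {
            struct dma_pte *first_pte, *pte;

            first_pte = pte = pfn_to_dma_pte(domain, start_pfn);
            do {
                    dma_clear_pte(pte);
                    start_pfn++;
                    pte++;
            } while (start_pfn <= last_pfn && !first_pte_in_page(pte));

            domain_flush_cache(domain, first_pte,
                               (void *)pte - (void *)first_pte);
    }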
2009-06-29  intel-iommu: Clean up iommu_domain_identity_map()  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Remove last use of PHYSICAL_PAGE_MASK, for reserving PCI BARs  (David Woodhouse)
This is fairly broken anyway -- it doesn't take hotplug into account. We should probably be checking page_is_ram() instead. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Make iommu_flush_iotlb_psi() take pfn as argument  (David Woodhouse)
Most of its callers are having to shift for themselves anyway, so we might as well do it in iommu_flush_iotlb_psi(). Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Change aligned_size() to aligned_nrpages()  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Clean up intel_map_sg(), remove domain_page_mapping()  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Use domain_pfn_mapping() in intel_iommu_map_range()  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Use domain_pfn_mapping() in __intel_map_single()  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Introduce domain_pfn_mapping()  (David Woodhouse)
... and use it in the trivial cases; the other callers want individual (and bisectable) attention, since I screwed them up the first time... Make the BUG_ON() happen on too-large virtual address rather than physical address, too. That's the one we care about. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Clean up address handling in domain_page_mapping()  (David Woodhouse)
No more masking and alignment; just use pfns. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Change addr_to_dma_pte() to pfn_to_dma_pte()  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Clean up intel_iommu_unmap_range()  (David Woodhouse)
Use unaligned address for domain->max_addr. That algorithm isn't ideal anyway -- we should probably just look at the last iova in the tree. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Make dma_pte_free_pagetable() take pfns as argument  (David Woodhouse)
With some cleanup of intel_unmap_page(), intel_unmap_sg() and vm_domain_exit() to no longer play with 64-bit addresses. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Make dma_pte_free_pagetable() use pfns  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Make dma_pte_clear_range() take pfns as argument  (David Woodhouse)
Noting that this is now an _inclusive_ range. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Make dma_pte_clear_range() use pfns  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Don't just mask out too-big physical addresses; BUG() instead  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Make dma_pte_clear_one() take pfn not address  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Change dma_addr_level_pte() to dma_pfn_level_pte()  (David Woodhouse)
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Change address_level_offset() to pfn_level_offset()  (David Woodhouse)
We're shifting the inputs for now, but that'll change... Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Change dma_set_pte_addr() to dma_set_pte_pfn()  (David Woodhouse)
Add some helpers for converting between VT-d and normal system pfns, since system pages can be larger than VT-d pages. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
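The page-size conversion can be sketched like this; mm_to_dma_pfn() is mentioned elsewhere in this log, and the second helper is shown for illustration:

    /* Sketch: a system (MM) page may span several 4KiB VT-d pages, e.g. a
     * 16KiB ia64 page covers four VT-d pages. */
    static inline unsigned long mm_to_dma_pfn(unsigned long mm_pfn)
    {
            return mm_pfn << (PAGE_SHIFT - VTD_PAGE_SHIFT);
    }

    static inline unsigned long page_to_dma_pfn(struct page *pg)
    {
            return mm_to_dma_pfn(page_to_pfn(pg));
    }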
2009-06-29  intel-iommu: Clean up identity mapping code, remove CONFIG_DMAR_GFX_WA  (David Woodhouse)
There's no need for the GFX workaround now we have 'iommu=pt' for the cases where people really care about performance. There's no need to have a special case for just one type of device. This also speeds up the iommu=pt path and reduces memory usage by setting up the si_domain _once_ and then using it for all devices, rather than giving each device its own private page tables. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Create new iommu_domain_identity_map() function  (David Woodhouse)
We'll want to do this to a _domain_ (the si_domain) rather than a PCI device. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29  intel-iommu: Only avoid flushing device IOTLB for domain ID 0 in caching mode  (Yu Zhao)
In caching mode, domain ID 0 is reserved for non-present to present mapping flush. Device IOTLB doesn't need to be flushed in this case. Previously we were avoiding the flush for domain zero, even if the IOMMU wasn't in caching mode and domain zero wasn't special. Signed-off-by: Yu Zhao <yu.zhao@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
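A sketch of the corrected condition; cap_caching_mode() is the standard VT-d capability accessor, while the flush call and its arguments are abridged here:

    /* Sketch: only the caching-mode "non-present to present" flush, which
     * uses the reserved domain ID 0, may skip the device IOTLB flush. */
    if (!cap_caching_mode(iommu->cap) || did != 0)
            iommu_flush_dev_iotlb(domain, addr, mask);   /* illustrative arguments */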
2009-06-26  intel-iommu: fix Identity Mapping to be arch independent  (Chris Wright)
Drop the e820 scanning and use the existing function for finding valid RAM regions to add to the 1:1 mapping.
Signed-off-by: Chris Wright <chrisw@redhat.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>