x86/mm/p2m: stop checking for IOMMU shared page tables in mmio_order()
Now that the iommu_map() and iommu_unmap() operations take an order
parameter and elide flushing, there's no strong reason why modifying MMIO
ranges in the p2m should be restricted to a 4k granularity simply because
the IOMMU is enabled but shared page tables are not in operation.

Signed-off-by: Paul Durrant <[email protected]>
Reviewed-by: Jan Beulich <[email protected]>
Paul Durrant authored and andyhhp committed Jan 3, 2019
1 parent e8afe11 commit a5b0eb3
Showing 1 changed file with 2 additions and 3 deletions.
5 changes: 2 additions & 3 deletions xen/arch/x86/mm/p2m.c
@@ -2210,13 +2210,12 @@ static unsigned int mmio_order(const struct domain *d,
                                unsigned long start_fn, unsigned long nr)
 {
     /*
-     * Note that the !iommu_use_hap_pt() here has three effects:
-     * - cover iommu_{,un}map_page() not having an "order" input yet,
+     * Note that the !hap_enabled() here has two effects:
      * - exclude shadow mode (which doesn't support large MMIO mappings),
      * - exclude PV guests, should execution reach this code for such.
      * So be careful when altering this.
      */
-    if ( !iommu_use_hap_pt(d) ||
+    if ( !hap_enabled(d) ||
          (start_fn & ((1UL << PAGE_ORDER_2M) - 1)) || !(nr >> PAGE_ORDER_2M) )
         return PAGE_ORDER_4K;
 
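For context, the order choice in the hunk above reduces to an alignment and size test: a range only qualifies for a 2M mapping order when its start frame is 2M-aligned and it spans at least one full 2M superpage (512 4k frames). The following is a minimal standalone sketch of that test, not the actual Xen function; the example_mmio_order() name and the EX_PAGE_ORDER_* constants are stand-ins used purely for illustration.

#include <stdbool.h>

/* Stand-in constants: a 2M superpage covers 2^9 (512) 4k frames on x86. */
#define EX_PAGE_ORDER_4K 0
#define EX_PAGE_ORDER_2M 9

/*
 * Illustrative only: fall back to 4k mappings unless HAP is in use, the
 * start frame is 2M-aligned and the range holds at least one full 2M
 * superpage worth of frames; otherwise allow a 2M mapping order.
 */
static unsigned int example_mmio_order(bool hap, unsigned long start_fn,
                                       unsigned long nr)
{
    if ( !hap ||
         (start_fn & ((1UL << EX_PAGE_ORDER_2M) - 1)) ||
         !(nr >> EX_PAGE_ORDER_2M) )
        return EX_PAGE_ORDER_4K;

    return EX_PAGE_ORDER_2M;
}

With hap_enabled() as the gate instead of iommu_use_hap_pt(), a suitably aligned range on a HAP domain now takes the 2M path even when the IOMMU maintains separate page tables, since iommu_map() and iommu_unmap() can be handed the larger order directly.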
