1. 04 Jul, 2009 2 commits
  2. 02 Jul, 2009 7 commits
    • Merge git://git.infradead.org/iommu-2.6 · 405d7ca5
      Linus Torvalds authored
      * git://git.infradead.org/iommu-2.6: (38 commits)
        intel-iommu: Don't keep freeing page zero in dma_pte_free_pagetable()
        intel-iommu: Introduce first_pte_in_page() to simplify PTE-setting loops
        intel-iommu: Use cmpxchg64_local() for setting PTEs
        intel-iommu: Warn about unmatched unmap requests
        intel-iommu: Kill superfluous mapping_lock
        intel-iommu: Ensure that PTE writes are 64-bit atomic, even on i386
        intel-iommu: Make iommu=pt work on i386 too
        intel-iommu: Performance improvement for dma_pte_free_pagetable()
        intel-iommu: Don't free too much in dma_pte_free_pagetable()
        intel-iommu: dump mappings but don't die on pte already set
        intel-iommu: Combine domain_pfn_mapping() and domain_sg_mapping()
        intel-iommu: Introduce domain_sg_mapping() to speed up intel_map_sg()
        intel-iommu: Simplify __intel_alloc_iova()
        intel-iommu: Performance improvement for domain_pfn_mapping()
        intel-iommu: Performance improvement for dma_pte_clear_range()
        intel-iommu: Clean up iommu_domain_identity_map()
        intel-iommu: Remove last use of PHYSICAL_PAGE_MASK, for reserving PCI BARs
        intel-iommu: Make iommu_flush_iotlb_psi() take pfn as argument
        intel-iommu: Change aligned_size() to aligned_nrpages()
        intel-iommu: Clean up intel_map_sg(), remove domain_page_mapping()
        ...
      405d7ca5
    • x86: add boundary check for 32bit res before expand e820 resource to alignment · 7c5371c4
      Yinghai Lu authored
      Fix a hang with HIGHMEM_64G and a 32-bit resource.  Per hpa and
      Linus, use (resource_size_t)-1 to fend off big ranges.
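
      The guard amounts to doing the arithmetic in a type wide enough to hold the
      aligned end, then rejecting any range that cannot be represented in a 32-bit
      resource_size_t. A minimal sketch; the typedef and helper name are
      illustrative, not the kernel's:

```c
#include <stdint.h>

typedef uint32_t resource_size_t;   /* 32-bit resources (illustrative) */

/* Compute in 64 bits, then compare against the all-ones sentinel
 * so an expanded end that overflows 32 bits is caught, not wrapped. */
static int range_fits(uint64_t start, uint64_t end)
{
    return start <= end && end <= (resource_size_t)-1;
}
```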
      
      Analyzed by hpa
      Reported-and-tested-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7c5371c4
    • x86: fix power-of-2 round_up/round_down macros · 43644679
      Linus Torvalds authored
      These macros had two bugs:
       - the type of the mask was not expanded to the full size of the
         argument, resulting in possible loss of high bits when mixing
         types.
       - the alignment argument was evaluated twice, even though the macro
         looks like a fancy function (it really does need to be a macro,
         since it works on arbitrary integer types).
      
      Noticed by Peter Anvin, and with a fix that is a modification of his
      suggestion (bug noticed by Yinghai Lu).
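
      The fix casts the mask to the type of the value being rounded (so high
      bits survive mixed-type use) and touches the alignment argument only
      once. Roughly as merged:

```c
/* Mask has the type of x, so ~mask keeps x's high bits intact. */
#define __round_mask(x, y) ((__typeof__(x))((y) - 1))
/* y must be a power of two; y is evaluated exactly once. */
#define round_up(x, y)   ((((x) - 1) | __round_mask(x, y)) + 1)
#define round_down(x, y) ((x) & ~__round_mask(x, y))
```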
      
      Cc: Peter Anvin <hpa@zytor.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      43644679
    • intel-iommu: Don't keep freeing page zero in dma_pte_free_pagetable() · 6a43e574
      David Woodhouse authored
      Check dma_pte_present() and only free the page if there _is_ one.
      Kind of surprising that there was no warning about this.
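
      The guard is a one-line presence test before the free. A minimal
      self-contained sketch, assuming the VT-d convention that the low two
      bits of a PTE are the read/write permission bits:

```c
#include <stdint.h>

#define DMA_PTE_READ  (1ULL << 0)   /* assumed bit positions */
#define DMA_PTE_WRITE (1ULL << 1)

struct dma_pte { uint64_t val; };

/* A PTE is present iff at least one permission bit is set; a zero PTE
 * maps nothing, so there is no page-table page below it to free. */
static inline int dma_pte_present(struct dma_pte *pte)
{
    return (pte->val & (DMA_PTE_READ | DMA_PTE_WRITE)) != 0;
}
```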
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      6a43e574
    • intel-iommu: Introduce first_pte_in_page() to simplify PTE-setting loops · 75e6bf96
      David Woodhouse authored
      On Wed, 2009-07-01 at 16:59 -0700, Linus Torvalds wrote:
      > I also _really_ hate how you do
      >
      >         (unsigned long)pte >> VTD_PAGE_SHIFT ==
      >         (unsigned long)first_pte >> VTD_PAGE_SHIFT
      
      Kill this, in favour of just looking to see if the incremented pte
      pointer has 'wrapped' onto the next page — which means we have to
      check it _after_ incrementing it, not before.
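
      The wrap test reduces to an alignment check on the pointer itself: once
      the incremented pte pointer's in-page offset is zero, it has stepped off
      the end of the previous page-table page. A sketch, assuming 4 KiB
      page-table pages and 8-byte PTEs:

```c
#include <stdint.h>

#define VTD_PAGE_SHIFT 12
#define VTD_PAGE_MASK  (~((1UL << VTD_PAGE_SHIFT) - 1))

struct dma_pte { uint64_t val; };   /* 8 bytes -> 512 PTEs per 4 KiB page */

/* True when pte sits at offset 0 of a page-table page.  Because it is
 * called on the pointer *after* the increment, a true result means the
 * pointer just wrapped onto the next page. */
static inline int first_pte_in_page(struct dma_pte *pte)
{
    return !((uintptr_t)pte & ~VTD_PAGE_MASK);
}
```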
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      75e6bf96
    • FRV: Add basic performance counter support · 42ca4fb6
      David Howells authored
      Add basic performance counter support to the FRV arch.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      42ca4fb6
    • FRV: Implement atomic64_t · 00460f41
      David Howells authored
      Implement atomic64_t and its ops for FRV.  Tested with the following patch:
      
      	diff --git a/arch/frv/kernel/setup.c b/arch/frv/kernel/setup.c
      	index 55e4fab..086d50d 100644
      	--- a/arch/frv/kernel/setup.c
      	+++ b/arch/frv/kernel/setup.c
      	@@ -746,6 +746,52 @@ static void __init parse_cmdline_early(char *cmdline)
      
      	 } /* end parse_cmdline_early() */
      
      	+static atomic64_t xxx;
      	+
      	+static void test_atomic64(void)
      	+{
      	+	atomic64_set(&xxx, 0x12300000023LL);
      	+
      	+	mb();
      	+	BUG_ON(atomic64_read(&xxx) != 0x12300000023LL);
      	+	mb();
      	+	if (atomic64_inc_return(&xxx) != 0x12300000024LL)
      	+		BUG();
      	+	mb();
      	+	BUG_ON(atomic64_read(&xxx) != 0x12300000024LL);
      	+	mb();
      	+	if (atomic64_sub_return(0x36900000050LL, &xxx) != -0x2460000002cLL)
      	+		BUG();
      	+	mb();
      	+	BUG_ON(atomic64_read(&xxx) != -0x2460000002cLL);
      	+	mb();
      	+	if (atomic64_dec_return(&xxx) != -0x2460000002dLL)
      	+		BUG();
      	+	mb();
      	+	BUG_ON(atomic64_read(&xxx) != -0x2460000002dLL);
      	+	mb();
      	+	if (atomic64_add_return(0x36800000001LL, &xxx) != 0x121ffffffd4LL)
      	+		BUG();
      	+	mb();
      	+	BUG_ON(atomic64_read(&xxx) != 0x121ffffffd4LL);
      	+	mb();
      	+	if (atomic64_cmpxchg(&xxx, 0x123456789abcdefLL, 0x121ffffffd4LL) != 0x121ffffffd4LL)
      	+		BUG();
      	+	mb();
      	+	BUG_ON(atomic64_read(&xxx) != 0x121ffffffd4LL);
      	+	mb();
      	+	if (atomic64_cmpxchg(&xxx, 0x121ffffffd4LL, 0x123456789abcdefLL) != 0x121ffffffd4LL)
      	+		BUG();
      	+	mb();
      	+	BUG_ON(atomic64_read(&xxx) != 0x123456789abcdefLL);
      	+	mb();
      	+	if (atomic64_xchg(&xxx, 0xabcdef123456789LL) != 0x123456789abcdefLL)
      	+		BUG();
      	+	mb();
      	+	BUG_ON(atomic64_read(&xxx) != 0xabcdef123456789LL);
      	+	mb();
      	+}
      	+
      	 /*****************************************************************************/
      	 /*
      	  *
      	@@ -845,6 +891,8 @@ void __init setup_arch(char **cmdline_p)
      	 //	asm volatile("movgs %0,timerd" :: "r"(10000000));
      	 //	__set_HSR(0, __get_HSR(0) | HSR0_ETMD);
      
      	+	test_atomic64();
      	+
      	 } /* end setup_arch() */
      
      	 #if 0
      
      Note that this doesn't cover all the trivial wrappers, but does cover all the
      substantial implementations.
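
      For reference, the semantics the test patch exercises can be mirrored
      portably with C11 atomics. This is only a sketch of the expected
      behaviour, not the FRV implementation (which needs arch-specific code);
      the helper names are illustrative:

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic int64_t counter;

/* add and return the *new* value, like atomic64_add_return() */
static int64_t a64_add_return(int64_t i)
{
    return atomic_fetch_add(&counter, i) + i;
}

/* kernel cmpxchg returns the value actually observed; on failure,
 * compare_exchange writes that value into 'old', so returning 'old'
 * matches the semantics in both cases. */
static int64_t a64_cmpxchg(int64_t old, int64_t newv)
{
    atomic_compare_exchange_strong(&counter, &old, newv);
    return old;
}

/* unconditional exchange, returning the previous value */
static int64_t a64_xchg(int64_t newv)
{
    return atomic_exchange(&counter, newv);
}
```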
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      00460f41
  3. 01 Jul, 2009 31 commits