  1. 19 Apr, 2007 1 commit
  2. 12 Feb, 2007 1 commit
  3. 22 Sep, 2006 2 commits
  4. 24 Jul, 2006 1 commit
  5. 30 Mar, 2006 1 commit
  6. 20 Mar, 2006 1 commit
    • IB/umad: Add support for large RMPP transfers · f36e1793
      Jack Morgenstein authored
      Add support for sending and receiving large RMPP transfers.  The old
      code supports transfers only as large as a single contiguous kernel
      memory allocation.  This patch uses a linked list of memory buffers when
      sending and receiving data to avoid needing contiguous pages for
      larger transfers.
      
        Receive side: copy the arriving MADs in chunks instead of coalescing
        to one large buffer in kernel space.
      
        Send side: split a multipacket MAD buffer into a list of segments
        (multipacket_list) and send these using a gather list of size 2.
        Also, save a pointer to the last sent segment, and retrieve requested
        segments by walking the list starting at the last sent segment.
        Finally, save a pointer to the last-acked segment.  When retrying,
        retrieve segments for resending relative to this pointer.  When
        updating the last ack, start at this pointer.
      Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
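      The send-side scheme in this commit can be sketched as a toy C model:
      a payload is split into a linked list of fixed-size segments, and a
      requested segment is retrieved by walking forward from a saved cursor
      (the last segment touched) instead of rescanning from the head.  All
      names (`seg`, `split_payload`, `seg_at`) and the 256-byte segment size
      are illustrative only, not the kernel's actual ib_mad structures.

      ```c
      #include <assert.h>
      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical segment node: one chunk of a multipacket RMPP send. */
      struct seg {
          struct seg *next;
          size_t len;
          unsigned char data[256];   /* one segment's payload (toy size) */
      };

      /* Split a contiguous payload into a singly linked list of segments,
       * avoiding any need for one large contiguous allocation. */
      struct seg *split_payload(const unsigned char *buf, size_t total)
      {
          struct seg *head = NULL, **tail = &head;
          size_t off = 0;

          while (off < total) {
              struct seg *s = calloc(1, sizeof(*s));
              s->len = total - off < 256 ? total - off : 256;
              memcpy(s->data, buf + off, s->len);
              off += s->len;
              *tail = s;
              tail = &s->next;
          }
          return head;
      }

      /* Retrieve segment number `want` (1-based) by walking forward from a
       * cursor known to be segment `cursor_num` -- the "start at last sent
       * segment" trick the commit message describes. */
      struct seg *seg_at(struct seg *cursor, int cursor_num, int want)
      {
          while (cursor && cursor_num < want) {
              cursor = cursor->next;
              cursor_num++;
          }
          return cursor;
      }
      ```

      A 600-byte payload, for example, becomes three segments of 256, 256,
      and 88 bytes; a retry that needs segment 3 walks one hop from a cursor
      at segment 2 rather than two hops from the head.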
  7. 09 Dec, 2005 1 commit
  8. 28 Nov, 2005 1 commit
  9. 18 Nov, 2005 1 commit
    • IB/umad: make sure write()s have sufficient data · eabc7793
      Roland Dreier authored
      Make sure that userspace passes in enough data when sending a MAD.  We
      always copy at least sizeof (struct ib_user_mad) + IB_MGMT_RMPP_HDR
      bytes from userspace, so anything less is definitely invalid.  Also,
      if the length is less than this limit, it's possible for the second
      copy_from_user() to get a negative length and trigger a BUG().
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
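      The added check can be modeled in a few lines of C.  The header sizes
      below are placeholders (the kernel uses sizeof (struct ib_user_mad)
      and IB_MGMT_RMPP_HDR); the point is that rejecting short writes up
      front also prevents the subtraction that would otherwise hand the
      second copy_from_user() a negative (or hugely underflowed) length.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Placeholder sizes -- stand-ins for sizeof (struct ib_user_mad)
       * and IB_MGMT_RMPP_HDR, not the real values. */
      enum { USER_MAD_HDR = 64, RMPP_HDR = 36 };

      /* The fix: anything shorter than the headers we unconditionally
       * copy from userspace is definitely invalid. */
      int umad_write_len_ok(size_t count)
      {
          return count >= USER_MAD_HDR + RMPP_HDR;
      }

      /* Without the check above, this subtraction underflows for small
       * counts, turning the second copy's length into garbage. */
      size_t second_copy_len(size_t count)
      {
          return count - USER_MAD_HDR - RMPP_HDR;
      }
      ```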
  10. 10 Nov, 2005 3 commits
  11. 06 Nov, 2005 1 commit
  12. 03 Nov, 2005 1 commit
  13. 28 Oct, 2005 4 commits
  14. 25 Oct, 2005 1 commit
    • [IB] Fix MAD layer DMA mappings to avoid touching data buffer once mapped · 34816ad9
      Sean Hefty authored
      The MAD layer was violating the DMA API by touching data buffers used
      for sends after the DMA mapping was done.  This causes problems on
      non-cache-coherent architectures, because the device doing DMA won't
      see updates to the payload buffers that exist only in the CPU cache.
      
      Fix this by having all MAD consumers use ib_create_send_mad() to
      allocate their send buffers, and moving the DMA mapping into the MAD
      layer so it can be done just before calling send (and after any
      modifications of the send buffer by the MAD layer).
      
      Tested on a non-cache-coherent PowerPC 440SPe system.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
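      The DMA-API rule this commit enforces can be illustrated with a toy
      model (struct send_buf, cpu_write, map_for_send are invented for this
      sketch, not kernel API): any CPU write to the buffer after it has been
      mapped is flagged as a violation, since a non-cache-coherent device
      may never see it.  The fixed ordering does all modifications first,
      then maps just before posting the send.

      ```c
      #include <assert.h>
      #include <string.h>

      /* Toy model of a send buffer and the "no CPU writes after mapping"
       * rule; nothing here is real DMA-API code. */
      struct send_buf {
          int mapped;    /* 1 after the (simulated) dma_map_single() */
          int dirty;     /* CPU wrote after mapping: a DMA-API violation */
          unsigned char payload[256];
      };

      /* A CPU store into the payload; legal only while still unmapped. */
      void cpu_write(struct send_buf *b, size_t off, unsigned char v)
      {
          b->payload[off] = v;
          if (b->mapped)
              b->dirty = 1;   /* device may see stale cache lines */
      }

      /* Simulated DMA mapping, done just before posting the send. */
      void map_for_send(struct send_buf *b)
      {
          b->mapped = 1;
      }

      /* A send is safe only if nothing touched the buffer after mapping. */
      int send_ok(const struct send_buf *b)
      {
          return b->mapped && !b->dirty;
      }
      ```

      In this model, the pre-patch behavior (write, map, write again) fails
      send_ok(), while the patched ordering (all writes, then map, then
      send) passes -- mirroring why moving the mapping into the MAD layer,
      after its last buffer modification, fixes non-cache-coherent systems.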
  15. 20 Oct, 2005 2 commits
  16. 19 Sep, 2005 1 commit
  17. 27 Aug, 2005 3 commits
  18. 27 Jul, 2005 2 commits
  19. 25 May, 2005 1 commit
  20. 16 Apr, 2005 2 commits