1. 27 Aug, 2008 21 commits
  2. 25 Aug, 2008 12 commits
  3. 24 Aug, 2008 6 commits
    • c4560318
    • mv643xx_eth: enforce multiple-of-8-bytes receive buffer size restriction · abe78717
      Lennert Buytenhek authored
      The mv643xx_eth hardware ignores the lower three bits of the buffer
      size field in receive descriptors, causing the reception of full-sized
      packets to fail at some MTUs.  Fix this by rounding the size of
      allocated receive buffers up to a multiple of eight bytes.
      
      While we are at it, add a bit of extra space to each receive buffer so
      that we can handle multiple vlan tags on ingress.
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
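      A minimal sketch of the size computation described above, using a
      hypothetical helper and the standard ETH_HLEN/ETH_FCS_LEN/VLAN_HLEN
      constants; this is illustrative, not the literal patch:

          #include <linux/netdevice.h>
          #include <linux/if_ether.h>
          #include <linux/if_vlan.h>

          static int rx_skb_size(struct net_device *dev)
          {
                  int size;

                  /*
                   * Frame payload plus Ethernet header and FCS, plus room
                   * for two stacked VLAN tags on ingress.
                   */
                  size = dev->mtu + ETH_HLEN + ETH_FCS_LEN + 2 * VLAN_HLEN;

                  /*
                   * The hardware ignores the low three bits of the buffer
                   * size field, so round up to a multiple of 8 bytes.
                   */
                  return (size + 7) & ~7;
          }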
    • mv643xx_eth: fix NULL pointer dereference in rxq_process() · 9e1f3772
      Lennert Buytenhek authored
      When we are low on memory, the assumption that every descriptor in the
      receive ring will have an skbuff associated with it does not hold.
      
      rxq_process() assumed that if the receive descriptor it was working
      on was not owned by the hardware, it could safely be processed and
      handed to the networking stack.  But a descriptor in the receive
      ring can also end up not owned by the hardware when we are low on
      memory and did not manage to refill the receive ring fully.
      
      This patch changes rxq_process()'s bailout condition from "the first
      receive descriptor to be processed is owned by the hardware" to "the
      first receive descriptor to be processed is owned by the hardware OR
      the number of valid receive descriptors in the ring is zero".
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
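      A sketch of the revised bail-out logic.  The field and flag names
      (rx_desc_count, rx_curr_desc, BUFFER_OWNED_BY_DMA and so on) follow
      the driver's conventions, but the loop body is abbreviated here:

          while (rx < budget && rxq->rx_desc_count) {
                  struct rx_desc *desc = &rxq->rx_desc_area[rxq->rx_curr_desc];

                  /* Stop if the DMA engine still owns the next descriptor. */
                  if (desc->cmd_sts & BUFFER_OWNED_BY_DMA)
                          break;

                  /*
                   * Only refilled slots are counted in rx_desc_count, so
                   * every descriptor reached here has an skbuff attached
                   * and can be unmapped and passed up the stack.
                   */
                  rxq->rx_desc_count--;
                  rxq->rx_curr_desc = (rxq->rx_curr_desc + 1) % rxq->rx_ring_size;
                  rx++;
          }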
    • mv643xx_eth: fix inconsistent lock semantics · 8e0b1bf6
      Lennert Buytenhek authored
      Nicolas Pitre noted that mv643xx_eth_poll was incorrectly using
      non-IRQ-safe locks while checking whether to wake up the netdevice's
      transmit queue.  Convert the locking to *_irq() variants, since we
      are running from softirq context where interrupts are enabled.
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
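      Roughly, the change amounts to the pattern below; mp->lock and the
      hypothetical txq_has_room() check are illustrative, and the point is
      only the switch to the _irq primitives:

          /*
           * Before (problematic): a plain spin_lock() taken from softirq
           * context can deadlock if the interrupt handler takes the same
           * lock on this CPU while we hold it.
           */
          spin_lock(&mp->lock);
          if (netif_queue_stopped(mp->dev) && txq_has_room(txq))
                  netif_wake_queue(mp->dev);
          spin_unlock(&mp->lock);

          /*
           * After: interrupts are enabled in softirq context, so keep
           * them masked across the critical section.
           */
          spin_lock_irq(&mp->lock);
          if (netif_queue_stopped(mp->dev) && txq_has_room(txq))
                  netif_wake_queue(mp->dev);
          spin_unlock_irq(&mp->lock);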
    • mv643xx_eth: fix double add_timer() on the receive oom timer · 92c70f27
      Lennert Buytenhek authored
      Commit 12e4ab79 ("mv643xx_eth: be
      more agressive about RX refill") changed the condition for the receive
      out-of-memory timer to be scheduled from "the receive ring is empty"
      to "the receive ring is not full".
      
      This can lead to a situation where the receive out-of-memory timer
      is already pending because a previous rxq_refill() failed to refill
      the receive ring entirely due to being out of memory.  If
      rxq_refill() is then called again as a side effect of a packet
      receive interrupt, and that call again fails to fill the entire
      ring with fresh empty skbuffs because we are still out of memory,
      it ends up calling add_timer() on the already scheduled
      out-of-memory timer.
      
      This patch fixes this issue by changing the add_timer() call in
      rxq_refill() to a mod_timer() call.  If the OOM timer was not already
      scheduled, this will behave as before, whereas if it was already
      scheduled, this patch will push back its firing time a bit, which is
      safe because we've (unsuccessfully) attempted to refill the receive
      ring just before we do this.
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
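      The gist of the change, with assumed field names (rx_oom being the
      out-of-memory timer) and an illustrative 100 ms delay:

          /*
           * The ring could not be filled completely: (re)arm the OOM
           * timer.  mod_timer() is safe whether or not the timer is
           * already pending, whereas add_timer() on a pending timer
           * is a bug.
           */
          if (rxq->rx_desc_count < rxq->rx_ring_size)
                  mod_timer(&mp->rx_oom, jiffies + (HZ / 10));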
    • mv643xx_eth: fix NAPI 'rotting packet' issue · 819ddcaf
      Lennert Buytenhek authored
      When a receive interrupt occurs, mv643xx_eth would first process the
      receive descriptors and then ACK the receive interrupt, instead of the
      other way round.
      
      This would leave a small race window between processing the last
      receive descriptor and clearing the receive interrupt status, in
      which a new packet could arrive; that packet would then 'rot' in
      the receive ring until the next receive interrupt came in.
      
      Fix this by ACKing (clearing) the receive interrupt condition before
      processing the receive descriptors.
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
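      Schematically (wrl(), INT_CAUSE() and the RX mask are driver-internal
      names used here for illustration only):

          /* ACK (clear) the receive interrupt condition first ... */
          wrl(mp, INT_CAUSE(mp->port_num), ~int_rx_mask);

          /*
           * ... and only then walk the descriptor ring.  A packet that
           * arrives after the last descriptor we process re-asserts the
           * interrupt instead of sitting unnoticed until the next one.
           */
          rx += rxq_process(rxq, budget - rx);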
  4. 23 Aug, 2008 1 commit
    • ipv6: protocol for address routes · f410a1fb
      Stephen Hemminger authored
      This fixes a problem spotted with Zebra, though it is not
      necessarily a kernel problem.  With IPv6, when an address is added
      to an interface, Zebra creates a duplicate RIB entry: one as a
      connected route and the other as a kernel route.
      
      When an address is added to an interface, the RTM_NEWADDR message
      causes Zebra to create a connected route.  In IPv4, when an address
      is added to an interface, an RTM_NEWROUTE message is sent to user
      space with the protocol RTPROT_KERNEL.  Zebra ignores these
      messages because it already has the connected route.
      
      The problem is that the route created for an IPv6 address has route
      protocol == RTPROT_BOOT.  Was this a design decision or a bug?
      Either way, this patch fixes it by marking the route with
      RTPROT_KERNEL, as IPv4 does.  The same patch applies to both
      net-2.6 and stable.
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
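      The shape of the fix, as a sketch rather than the literal diff: when
      the kernel installs the route for a newly added IPv6 address, tag it
      with RTPROT_KERNEL the way IPv4 does.  Field names follow struct
      fib6_config; dev and plen are assumed to come from the surrounding
      address-configuration code:

          struct fib6_config cfg = {
                  .fc_protocol = RTPROT_KERNEL,   /* previously left as RTPROT_BOOT */
                  .fc_metric   = IP6_RT_PRIO_ADDRCONF,
                  .fc_ifindex  = dev->ifindex,
                  .fc_dst_len  = plen,
                  .fc_flags    = RTF_UP,
                  /* fc_dst, the prefix itself, omitted here for brevity */
          };

          ip6_route_add(&cfg);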