- 18 May, 2009 36 commits
-
-
Peter P Waskiewicz Jr authored
This patch adds the generic XAUI device support for 82599 controllers. Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
The performance of hardware RSC is greatly reduced if the maximum number of RSC descriptors multiplied by the buffer size is greater than 65535. To prevent this we need to adjust the max RSC descriptors appropriately. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
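A rough illustration of that constraint (a hypothetical helper, not the driver's actual code): clamp the descriptor count so that count times buffer size stays at or below 65535.

#define RSC_BYTE_LIMIT 65535U	/* hardware limit on total RSC bytes */

/*
 * Hypothetical helper: clamp the requested number of RSC descriptors so
 * that descriptors * rx_buf_len never exceeds the 65535-byte limit.
 */
static unsigned int example_max_rsc_descs(unsigned int rx_buf_len,
					  unsigned int requested)
{
	unsigned int cap = RSC_BYTE_LIMIT / rx_buf_len;

	return requested > cap ? cap : requested;
}

For a 4096-byte buffer this caps the count at 15 descriptors (15 * 4096 = 61440, while 16 * 4096 = 65536 would exceed the limit).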
-
Peter P Waskiewicz Jr authored
When running in DCB mode, switching between link flow control and priority flow control shouldn't need to reset the hardware. This removes that reset. This also extends the set_all() dcbnl callback to return a value indicating that the HW config changed but a reset was not required. Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter P Waskiewicz Jr authored
Ethtool should report that link flow control is disabled when in priority flow control mode. Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter P Waskiewicz Jr authored
82599 supports using either link flow control or priority flow control when in DCB mode. The dcbnl interface already supports sending down configurations through rtnetlink that can enable LFC when DCB is enabled, so the driver should take advantage of this. 82598 does not support using LFC when DCB is enabled, so explicitly disable it when we're in DCB mode. This means we always run in PFC mode when DCB is enabled. Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter P Waskiewicz Jr authored
This sets the low water threshold for priority flow control for 82598 and 82599 controllers in DCB mode. Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yi Zou authored
Enable jumbo frames when the FCoE feature is enabled in 82599. Use 3KB as the buffer size for receive queues used by FCoE, to accommodate the maximum Fibre Channel frame size of 2148 bytes (with a maximum of 2112 bytes of payload). Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yi Zou authored
Enable use of the FCoE redirection table feature in 82599. The FCoE redirection table has a maximum of eight entries, corresponding to a maximum of eight receive queues used for distributing incoming FCoE packets. This patch sets up the FCoE redirection table when multiple receive queues are available for FCoE. Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yi Zou authored
Add a ring feature for FCoE to make use of the FCoE redirection table in 82599. The FCoE redirection table is a receive-side scaling feature for Fibre Channel over Ethernet in 82599, enabling distribution of incoming FCoE packets to different receive queues based on the exchange ID. Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
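A minimal sketch of the distribution idea (the helper and constants are illustrative, not the ixgbe register programming): the low bits of the Fibre Channel exchange ID index the eight-entry table, and each entry names one of the FCoE receive queues.

#include <linux/types.h>

#define EXAMPLE_FCOE_REDIR_ENTRIES 8	/* eight table entries, as described above */

/*
 * Illustrative only: map an exchange ID (OX_ID) to an Rx queue via an
 * 8-entry redirection table filled with the available FCoE queue indices.
 */
static u16 example_fcoe_rx_queue(u16 oxid, const u16 *redir_table)
{
	return redir_table[oxid & (EXAMPLE_FCOE_REDIR_ENTRIES - 1)];
}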
-
Vasu Dev authored
If a MAC address of type NETDEV_HW_ADDR_T_SAN can be found on the corresponding netdev for an fcoe interface, set the fc->ctlr.spma flag and store the SPMA mode address in ctl_src_addr. When the spma flag is set:
1. Add the SPMA mode MAC address in ctl_src_addr as a secondary MAC address; the FLOGI for FIP and pre-FIP will go out using this address.
2. Clean up the stored SPMA MAC address in ctl_src_addr in fcoe_netdev_cleanup.
3. Set the SPMA bit in fip_flags for FIP solicitations along with the existing FPMA bit setting.
4. Initialize the FLOGI FIP MAC descriptor to the SPMA MAC address stored in ctl_src_addr. This is used as the proposed FCoE MAC address from the initiator; with both the SPMA and FPMA bits set in the FIP solicitation, the switch may grant either an FPMA or SPMA mode MAC address in response.
Also removes the FIP descriptor type check against ELS type ELS_FLOGI in fcoe_ctlr_encaps when updating a FIP MAC descriptor; it now checks against FIP_DT_FLOGI instead. This has been tested with an available FPMA-only FCoE switch; since data_src_addr is updated by the same existing code for both FPMA and SPMA modes with FIP or pre-FIP links, the added SPMA mode should also work with an SPMA-only switch, provided the switch grants a valid MAC address. Signed-off-by: Vasu Dev <vasu.dev@intel.com> Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vasu Dev authored
Currently fcoe_netdev_config adds the netdev packet handler for FCoE packets and fcoe_if_create adds the netdev packet handler for FIP packets; a secondary MAC address is also added by fcoe_netdev_config. Cleanup of this netdev-related configuration is done only during fcoe_if_destroy, and no cleanup is done when an error occurs during fcoe interface creation after the netdev configuration in fcoe_if_create. This patch adds a single function for that cleanup, fcoe_netdev_cleanup, and calls it both on fcoe interface destroy and when exiting fcoe_if_create due to an error after the FCoE/FIP netdev configuration has been done. The code block adding the packet handler for FIP packets is moved next to the similar block for FCoE packets in fcoe_netdev_config, so that fcoe_netdev_cleanup can also be called on error from fcoe_netdev_config to undo both additions. This move required a reference to fcoe_fip_recv in fcoe_netdev_config, so the FIP-related functions fcoe_fip_recv, fcoe_fip_send and fcoe_update_src_mac are moved above fcoe_netdev_config. This consolidation fixes the current lack of cleanup on error and lets the next patch easily add or delete the SPMA mode MAC address. Signed-off-by: Vasu Dev <vasu.dev@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Waskiewicz Jr, Peter P authored
After acquiring the SAN MAC address from the EEPROM, we need to program it into one of the RARs. Also, DCB will use this MAC address to run DCBX commands, so it doesn't have to play musical MAC addresses when things like bonding enter the picture. So we need to return the MAC address through the netlink interface to userspace. This also moves the init_rx_addrs() call out of start_hw() and into reset_hw(). We shouldn't try to read any of the RAR information before initializing our internal accounting of the RAR table, which was what was happening. Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
PJ Waskiewicz authored
This patch implements the Storage Address entry point from the net device. It reads the SAN MAC addresses from the EEPROM of the 82599 hardware and makes them available to the FCoE stack through the net device. It also adds/deletes the SAN MAC address to/from the netdev dev_addr_list via the kernel API dev_addr_add()/dev_addr_del() when the HW provides a valid SAN MAC. Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
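A hedged sketch of the netdev side of this (the helper name is made up; only dev_addr_add()/dev_addr_del() and NETDEV_HW_ADDR_T_SAN come from the description above):

#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/*
 * Illustrative helper: publish a SAN MAC address read from the EEPROM to
 * the netdev address list so the FCoE stack can find it by type.
 */
static int example_register_san_mac(struct net_device *netdev, u8 *san_mac)
{
	if (!is_valid_ether_addr(san_mac))
		return -EADDRNOTAVAIL;

	/* paired with dev_addr_del(netdev, san_mac, NETDEV_HW_ADDR_T_SAN)
	 * on teardown */
	return dev_addr_add(netdev, san_mac, NETDEV_HW_ADDR_T_SAN);
}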
-
Alexander Beregalov authored
Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Beregalov authored
There are no users of it. Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Beregalov authored
Also remove DE620_DEBUG and de620_debug. Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Beregalov authored
Also remove de600_debug as it is not needed. Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Beregalov authored
It seems it was always here. Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
These registers were originally defined for XENPAK modules, but are also implemented by many other 10G PHYs. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
These do not have an in-kernel user but may be useful to user-space. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter Korsgaard authored
The smsc95xx driver was forwarding the trailing FCS on received frames up the stack, leading to confusion in tcpdump. Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk> Tested-by: Steve Glendinning <steve.glendinning@smsc.com> Acked-by: Steve Glendinning <steve.glendinning@smsc.com> Signed-off-by: David S. Miller <davem@davemloft.net>
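A minimal sketch of the fix pattern (not the driver's exact code):

#include <linux/skbuff.h>
#include <linux/if_ether.h>

/*
 * Illustrative fix pattern: strip the 4-byte Ethernet FCS from a received
 * frame before handing it up the stack, so tools like tcpdump see only the
 * actual frame contents.
 */
static void example_strip_fcs(struct sk_buff *skb)
{
	if (skb->len > ETH_FCS_LEN)
		skb_trim(skb, skb->len - ETH_FCS_LEN);
}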
-
Peter Korsgaard authored
The comments describing the rx/tx headers used a combination of zero- and 1-based indexing, leading to confusion. Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
struct net_device's trans_start field is a hot spot on SMP and high performance devices, particularly multiqueue ones, because every transmitter dirties it. Its main use is the tx watchdog and bonding alive checks. But as most devices don't use NETIF_F_LLTX, we have to lock a netdev_queue before calling their ndo_start_xmit(). So it makes sense to move trans_start from net_device to netdev_queue. Its update will occur on an already present (and in exclusive state) cache line, for free. We can do this transition smoothly: an old driver continues to update dev->trans_start, while an updated one updates txq->trans_start. Further patches could also put tx_bytes/tx_packets counters in netdev_queue to avoid dirtying dev->stats (vlan devices come to mind). Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: David S. Miller <davem@davemloft.net>
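A minimal sketch of what the "updated driver" side looks like under this scheme (illustrative function, assuming the per-queue field described above):

#include <linux/netdevice.h>
#include <linux/jiffies.h>

/*
 * Illustrative ndo_start_xmit: stamp the per-queue trans_start instead of
 * dev->trans_start, so the write lands on the cache line the queue lock
 * already owns.
 */
static int example_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq =
		netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

	/* ... post the frame to hardware here ... */

	txq->trans_start = jiffies;	/* per-queue watchdog timestamp */
	return NETDEV_TX_OK;
}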
-
Tilman Schmidt authored
The B channel data structure member rcvbytes was never set to anything but zero, so drop it. Impact: cleanup Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Drop the kernel config option GIGASET_UNDOCREQ, permanently activating the code it controlled, as there have been no reports of problems caused by its activation but many problems caused by it being disabled. Also fix a few bad comments while we're at it. Impact: cleanup Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
In preparation for porting to kernel CAPI subsystem, include the Gigaset driver's Kconfig directly from ISDN's instead of I4L's. Impact: Kconfig reorganisation, no functional change Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Mention handling of unregistered DECT wireless datasets in README.gigaset. Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
gigaset_register_to_LL() is expected to print a message and return 0 on failure. Make it do so consistently. Impact: error handling bugfix Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Don't generate the hex representation of the payload data if it isn't actually used afterwards. Impact: optimization Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Use pr_warning() / pr_err() instead of dev_warn() / dev_err() in two places where the dev pointer isn't guaranteed to be valid. Impact: error handling bugfix Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
The separation of state tables for base and M10x has long been removed. Clean up remaining traces of it. Impact: cleanup Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
When using bnx2 under high transmit load, bnx2_tx_int() cost is pretty high. There are two reasons: an expensive call to bnx2_get_hw_tx_cons(bnapi) for each freed skb, and cpu stalls when accessing skb_is_gso(skb) / skb_shinfo(skb)->nr_frags because of two cache line misses (one to get skb->end/head to compute skb_shinfo(skb), one to get is_gso/nr_frags). This patch:
1) avoids calling bnx2_get_hw_tx_cons(bnapi) too many times;
2) makes bnx2_start_xmit() cache is_gso & nr_frags into the sw_tx_bd descriptor. This uses a little more RAM (256 longs per device on x86), but helps a lot;
3) uses a prefetch(&skb->end) to speed up dev_kfree_skb(), bringing in the cache line that will be needed in skb_release_data().
The result is a 5% bandwidth increase in benchmarks involving UDP or TCP receive & transmit, when a cpu is dedicated to ksoftirqd for bnx2; bnx2_tx_int goes from 3.33% cpu to 0.5% cpu in oprofile. Note: skb_dma_unmap() is still very expensive, but that is for another patch, not related to bnx2 (2.9% of cpu, while it does nothing on x86_32). Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: David S. Miller <davem@davemloft.net>
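The caching idea in point 2) above, sketched with made-up field names (not the actual bnx2 sw_tx_bd layout):

#include <linux/skbuff.h>

/* Illustrative software descriptor: the extra fields are assumptions. */
struct example_sw_tx_bd {
	struct sk_buff	*skb;
	unsigned short	nr_frags;	/* cached skb_shinfo(skb)->nr_frags */
	unsigned short	is_gso;		/* cached skb_is_gso(skb) */
};

/*
 * Called at transmit time: copy the values the completion path needs so
 * the tx-int handler never has to dereference skb_shinfo(skb).
 */
static void example_cache_tx_info(struct example_sw_tx_bd *tx_buf,
				  struct sk_buff *skb)
{
	tx_buf->skb = skb;
	tx_buf->nr_frags = skb_shinfo(skb)->nr_frags;
	tx_buf->is_gso = skb_is_gso(skb);
}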
-
John Dykstra authored
When TCP frees up write buffer space, avoid waking up tasks that have done a poll() or select() on the same socket specifying read-side events. This is an extension of a read-side patch by Eric Dumazet. Signed-off-by: John Dykstra <john.dykstra1@gmail.com> Acked-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 May, 2009 4 commits
-
-
Stephen Rothwell authored
netdev->dev_addr changed from being an array to being a pointer, so we should not take its address for memcpy(). Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
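A minimal sketch of the class of bug being fixed (illustrative function name):

#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/* Copy the device MAC address out of the netdev. */
static void example_copy_mac(struct net_device *netdev, u8 *mac)
{
	/*
	 * Wrong now that dev_addr is a pointer -- this would copy the
	 * pointer's own bytes:
	 *	memcpy(mac, &netdev->dev_addr, ETH_ALEN);
	 */
	memcpy(mac, netdev->dev_addr, ETH_ALEN);
}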
-
Yi Zou authored
This adds FCoE-related statistics to 82599, including the number of Rx-ed and Tx-ed FCoE packets, the number of Rx-ed and Tx-ed FCoE dwords, the number of bad Fibre Channel CRCs detected in FCoE packets, and the number of FCoE packets dropped on the Rx side. Signed-off-by: Yi Zou <yi.zou@intel.com> Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yi Zou authored
This patch implements the FCoE Rx side offload feature for 82599 in ixgbe_main.c, using the Rx offload infrastructure code added in the previous patch. Large receive offload by Direct Data Placement (DDP) for FCoE is achieved by implementing ndo_fcoe_ddp_setup and ndo_fcoe_ddp_done in net_device_ops. It is up to the ULD, i.e., fcoe and libfc, to query and set up large receive offload accordingly through the corresponding netdev upon creating fcoe instances. Signed-off-by: Yi Zou <yi.zou@intel.com> Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
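A hedged sketch of the ULD side (the wrapper is illustrative; only the two ndo hooks are from the description above, and they are only present in net_device_ops when the kernel is built with FCoE support):

#include <linux/netdevice.h>
#include <linux/scatterlist.h>

/*
 * Illustrative wrapper: ask the netdev to set up DDP for an FCoE exchange.
 * Returns 0 when the offload is unavailable so the caller falls back to
 * software handling.
 */
static int example_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
				  struct scatterlist *sgl, unsigned int sgc)
{
	const struct net_device_ops *ops = netdev->netdev_ops;

	if (!ops->ndo_fcoe_ddp_setup)
		return 0;

	return ops->ndo_fcoe_ddp_setup(netdev, xid, sgl, sgc);
}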
-
Yi Zou authored
This adds infrastructure code for the FCoE Rx side offload feature to 82599, which provides large receive offload for FCoE by Direct Data Placement (DDP). The ixgbe_fcoe_ddp_get() and ixgbe_fcoe_ddp_put() pair backs the netdev FCoE support exposed through the net_device_ops function pointers ndo_fcoe_ddp_setup and ndo_fcoe_ddp_done. The implementation of these in ixgbe is shown in the next patch. Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-