- 06 Jul, 2005 21 commits
-
-
Linus Torvalds authored
-
Deepak Saxena authored
Patch from Deepak Saxena. This patch implements the iomap API for Intel IXP4xx NPU systems. We need to implement our own version of the API functions because the PCI host bridge does not provide the capability to map PCI I/O space into the CPU's physical memory space. In addition, if a system has more than 64M of PCI memory-mapped BARs, PCI memory must also be accessed indirectly. This patch changes the assignment of PCI I/O resources to fall into the 0x0000:0xffff range so that we can trap I/O accesses in our ioread/iowrite macros. Signed-off-by: Deepak Saxena Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
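A minimal sketch of the idea, assuming a hypothetical helper for the indirect PCI I/O cycle (this is not the actual IXP4xx implementation): cookies that fall inside the trapped 0x0000:0xffff range are routed through indirect access, while everything else is treated as a normal memory-mapped address.

```c
#define EXAMPLE_PIO_LIMIT	0x10000UL	/* trapped I/O cookie range */

static inline unsigned int example_ioread8(const void __iomem *addr)
{
	unsigned long cookie = (unsigned long)addr;

	if (cookie < EXAMPLE_PIO_LIMIT)
		/* hypothetical helper: issue an indirect PCI I/O read cycle */
		return example_indirect_pci_io_read8(cookie);

	return readb(addr);		/* ordinary memory-mapped access */
}
```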
-
Todd Poynor authored
Patch from Todd Poynor Fix module versioning for 3 ARM symbols that do not have CRCs added, to avoid "disagrees about version of symbol struct_module" errors at module load time. From David Singleton. Signed-off-by: Todd Poynor <tpoynor@mvista.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Stefan Sorensen authored
Patch from Stefan Sorensen On the ixdp425 and coyote platforms, the plat_serial8250_port arrays are missing the terminating entry required by serial8250_probe. Signed-off-by: Stefan Sorensen <ssoe@kirktelecom.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
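The fix pattern looks roughly like the following sketch (the address, IRQ and clock values are made up for illustration): serial8250_probe() walks the array until it hits an entry whose .flags field is zero, so the array must end with an empty terminator.

```c
static struct plat_serial8250_port example_serial_ports[] = {
	{
		.mapbase	= 0xc8000000,	/* hypothetical base address */
		.irq		= 15,		/* hypothetical IRQ */
		.uartclk	= 14745600,	/* hypothetical clock */
		.regshift	= 2,
		.iotype		= UPIO_MEM,
		.flags		= UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
	},
	{ },	/* terminating entry: .flags == 0 stops serial8250_probe() */
};
```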
-
Catalin Marinas authored
Patch from Catalin Marinas The VFP instructions trigger undefined exceptions because the access to CP11 is disabled (only CP10 is currently enabled by the kernel). The patch fixes this problem. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
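For illustration, a minimal sketch (not the kernel's actual VFP init code) of granting full access to both coprocessors via the CP15 Coprocessor Access Control Register, where bits [21:20] control CP10 and bits [23:22] control CP11:

```c
static void example_enable_vfp_access(void)
{
	unsigned int cpacr;

	asm volatile("mrc p15, 0, %0, c1, c0, 2" : "=r" (cpacr));
	cpacr |= (3 << 20) | (3 << 22);		/* full access to CP10 and CP11 */
	asm volatile("mcr p15, 0, %0, c1, c0, 2" : : "r" (cpacr));
}
```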
-
Linus Torvalds authored
"ack_code" is assigned (and tested against) negative numbers, but was declared as "char". Which only works if "char" is signed - which it necessarily isn't. So make that signedness assumption specific.
-
Jeff Mahoney authored
This adds the hotplug routine for generating hotplug events when devices are seen on the macio bus. It uses the attributes created by the sysfs nodes to generate the hotplug environment variables for userspace. Since the strings inside the 'compatible' field are NUL-separated, they are exported as individual OF_COMPATIBLE_# variables, with OF_COMPATIBLE_N holding the count. In order for hotplug to work with macio devices, patches to module-init-tools and hotplug must be applied. Those patches are available at: ftp://ftp.suse.com/pub/people/jeffm/linux/macio-hotplug/ Signed-off-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
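A standalone sketch of the idea (this is not the in-kernel hotplug interface; names and buffer sizes are arbitrary): split the NUL-separated 'compatible' buffer into numbered environment strings plus a count.

```c
#include <stdio.h>
#include <string.h>

static int fill_compatible_env(char env[][64], int max_vars,
			       const char *compat, int compat_len)
{
	int off = 0, n = 0;

	while (off < compat_len && n < max_vars - 1) {
		snprintf(env[n], sizeof(env[n]), "OF_COMPATIBLE_%d=%s",
			 n, compat + off);
		off += strlen(compat + off) + 1;	/* skip past the NUL */
		n++;
	}
	snprintf(env[n], sizeof(env[n]), "OF_COMPATIBLE_N=%d", n);
	return n + 1;				/* number of variables written */
}
```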
-
Jeff Mahoney authored
This adds sysfs nodes that the hotplug userspace can use to load the appropriate modules. In order for hotplug to work with macio devices, patches to module-init-tools and hotplug must be applied. Those patches are available at: ftp://ftp.suse.com/pub/people/jeffm/linux/macio-hotplug/ Changes: The previous versions were built on 2.6.12. 2.6.13-rcX introduced a device_attribute parameter to the show functions. Since that parameter was treated as the output buffer, memory corruption would result, causing Oopsen very quickly. Signed-off-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
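For reference, a minimal sketch of the 2.6.13-era show() signature the text refers to (the attribute and its contents are hypothetical); the point is that 'buf' is the PAGE_SIZE output buffer, while the new 'attr' argument only identifies the attribute and must not be treated as output.

```c
static ssize_t example_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	return scnprintf(buf, PAGE_SIZE, "example\n");
}
static DEVICE_ATTR(example, 0444, example_show, NULL);
```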
-
Jeff Mahoney authored
This converts the usage of struct of_match to struct of_device_id, similar to pci_device_id. This allows a device table to be generated, which can be parsed by depmod(8) to generate a map file for module loading. In order for hotplug to work with macio devices, patches to module-init-tools and hotplug must be applied. Those patches are available at: ftp://ftp.suse.com/pub/people/jeffm/linux/macio-hotplug/ Signed-off-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
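A hedged sketch of the resulting pattern (the match strings are made up): a struct of_device_id table terminated by an empty entry and exported with MODULE_DEVICE_TABLE() so depmod can build module alias information from it.

```c
static const struct of_device_id example_of_match[] = {
	{ .name		= "example-dev" },
	{ .compatible	= "example,some-controller" },
	{ },				/* terminating entry */
};
MODULE_DEVICE_TABLE(of, example_of_match);
```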
-
Dave Jones authored
Just the declaration fix wasn't enough to fix things in bt78x.c Signed-off-by: Dave Jones <davej@redhat.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Christoph Lameter authored
This patch used to be in Andrew's tree before the NUMA slab allocator went in. Either this patch or the NUMA slab allocator is needed in order for kmalloc_node to work correctly. pcibus_to_node may be used to generate the node information passed to kmalloc_node. pcibus_to_node returns -1 if it was not able to determine on which node a pcibus is located. For that case kmalloc_node must work like kmalloc. Signed-off-by: Christoph Lameter <christoph@lameter.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
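A minimal sketch of the intended usage (the function name is hypothetical): pcibus_to_node() supplies the node hint, and a return value of -1 simply means kmalloc_node() should behave like plain kmalloc().

```c
static void *example_alloc_near_bus(struct pci_bus *bus, size_t size)
{
	int node = pcibus_to_node(bus);	/* -1 if the node is unknown */

	/* with node == -1, this must fall back to ordinary kmalloc() behaviour */
	return kmalloc_node(size, GFP_KERNEL, node);
}
```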
-
Greg KH authored
Missing forward declaration Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Greg KH authored
Here's a patch to fix the build issue when CONFIG_HOTPLUG is not enabled in 2.6.13-rc2. Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Linus Torvalds authored
-
Linus Torvalds authored
-
David S. Miller authored
The membar changes made the size of __cheetah_flush_tlb_pending grow by one instruction, but the boot-time code patching was not updated to match. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rusty Lynch authored
The following renames arch_init, a kprobes function for performing any architecture specific initialization, to arch_init_kprobes in order to cleanup the namespace. Also, this patch adds arch_init_kprobes to sparc64 to fix the sparc64 kprobes build from the last return probe patch. Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Eugene Surovegin authored
Add explicit disabling of 440GP IRQ compatibility mode when configuring the 440GX interrupt controller. This helps when board firmware for some reason uses this compatibility mode and leaves it enabled. That breaks the 440GX interrupt code, because it assumes native 440GX IRQ mode. People seem to be continually bitten by this. Signed-off-by: Eugene Surovegin <ebs@ebshome.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
john stultz authored
As part of my timeofday rework, I've been looking at the NTP code and I noticed that the PPC architecture is apparently misusing NTP's time_offset (it is a terrible name!) value as some form of timezone offset. This could cause problems when time_offset is changed by the NTP code. This patch changes the PPC code so it uses a more clearly named local variable: timezone_offset. Signed-off-by: John Stultz <johnstul@us.ibm.com> Acked-by: Tom Rini <trini@kernel.crashing.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Andrei Konovalov authored
This patch adds Freescale MPC86xADS board support. The supported devices are the SMC UART and 10Mbit Ethernet on SCC1. The manual for the board says that it "is compatible with the MPC8xxFADS for software point of view". That's why this patch extends FADS instead of introducing a new platform. FEC is not supported, as the "combined FCC/FEC ethernet driver" by Pantelis Antoniou should replace the current FEC driver. Signed-off-by: Gennadiy Kurtsman <gkurtsman@ru.mvista.com> Signed-off-by: Andrei Konovalov <akonovalov@ru.mvista.com> Acked-by: Tom Rini <trini@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Linus Torvalds authored
-
- 05 Jul, 2005 19 commits
-
-
Robert Olsson authored
Signed-off-by: Robert Olsson <Robert.Olsson@data.slu.se> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Congestion window recovery after loss depends upon the fact that if we have a full MSS-sized frame at the head of the send queue, we will send it. TSO deferral can defeat the ACK clocking necessary to exit cleanly from recovery. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Useful in combination with classful qdiscs to drop or temporarily disable certain flows; e.g. one could block specific ds flows with dsmark. Unlike the noop qdisc it can be controlled by the user, and statistics accounting is done. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Make TSO segment transmit size decisions at send time, not earlier. The basic scheme is that we try to build as large a TSO frame as possible when pulling in the user data, but the size of the TSO frame output to the card is determined at transmit time. This is guided by tp->xmit_size_goal. It is always set to a multiple of MSS and tells sendmsg/sendpage how large an SKB to try and build. Later, tcp_write_xmit() and tcp_push_one() chop up the packet if necessary and if conditions warrant. These routines can also decide to "defer" in order to wait for more ACKs to arrive and thus allow larger TSO frames to be emitted. A general observation is that TSO elongates the pipe, thus requiring a larger congestion window and larger buffering, especially at the sender side. Therefore, it is important that applications 1) get a large enough socket send buffer (this is accomplished by our dynamic send buffer expansion code) and 2) do large enough writes. Signed-off-by: David S. Miller <davem@davemloft.net>
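A toy illustration of the xmit_size_goal idea (this is not the kernel function): round a device or deferral limit down to a multiple of the current MSS, never going below a single MSS.

```c
static unsigned int example_xmit_size_goal(unsigned int mss_now,
					   unsigned int limit_bytes)
{
	unsigned int goal = limit_bytes;

	if (goal > mss_now)
		goal -= goal % mss_now;	/* round down to a multiple of MSS */
	else
		goal = mss_now;		/* never below a single MSS */

	return goal;
}
```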
-
David S. Miller authored
This makes it easier to understand, and allows easier tweaking of the heuristic later on. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
In tcp_clean_rtx_queue(), if the TSO packet is not even partially acked, do not waste time calling tcp_tso_acked(). Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Everything stated there is out of date. tcp_trim_skb() does adjust the available socket send buffer space and skb->truesize now. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Only put user data purely into pages when doing TSO. The extra page allocations cause two problems: 1) they add the overhead of the page allocations themselves, and 2) they make us do small user copies when we get to the end of the TCP socket cache page. It is still beneficial to use pages exclusively for TSO, so we will do it for that case. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
tcp_snd_test() is run for every packet output by a single call to tcp_write_xmit(), but this is not necessary. For one, the congestion window space needs to be calculated only once, then used throughout the duration of the loop. This cleanup also makes experimenting with different TSO packetization schemes much easier. Signed-off-by: David S. Miller <davem@davemloft.net>
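Schematically, the loop structure described above looks like the following sketch (hypothetical names, not the kernel code): the congestion-window quota is computed once up front and decremented as segments go out.

```c
static void example_write_xmit(unsigned int cwnd, unsigned int in_flight,
			       unsigned int segs_queued)
{
	unsigned int cwnd_quota = (cwnd > in_flight) ? cwnd - in_flight : 0;

	while (segs_queued && cwnd_quota) {
		/* ... transmit one queued segment here ... */
		segs_queued--;
		cwnd_quota--;
	}
}
```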
-
David S. Miller authored
tcp_snd_test() does several different things; use inline functions to express this more clearly. 1) It initializes the TSO count of the SKB, if necessary. 2) It performs the Nagle test. 3) It makes sure the congestion window is adhered to. 4) It makes sure the SKB fits into the send window. This cleanup also sets things up so that things like the available packets in the congestion window do not need to be calculated multiple times by packet sending loops such as tcp_write_xmit(). Signed-off-by: David S. Miller <davem@davemloft.net>
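A rough, self-contained sketch of that decomposition (the names and simplified conditions are hypothetical): each question becomes a small predicate that callers can reuse without recomputing the others.

```c
static inline int example_nagle_ok(unsigned int len, unsigned int mss,
				   unsigned int packets_out, int push)
{
	/* Nagle: a partial segment may only go out if nothing is in flight
	 * or the caller forces a push (TCP_NAGLE_PUSH-like behaviour). */
	return len >= mss || packets_out == 0 || push;
}

static inline int example_cwnd_ok(unsigned int in_flight, unsigned int cwnd)
{
	return in_flight < cwnd;	/* room left in the congestion window? */
}

static inline int example_snd_wnd_ok(unsigned int end_seq, unsigned int wnd_end)
{
	return end_seq <= wnd_end;	/* fits within the receiver's window? */
}
```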
-
David S. Miller authored
'nonagle' should be passed to the tcp_snd_test() function as 'TCP_NAGLE_PUSH' if we are checking an SKB not at the tail of the write_queue. This is because Nagle does not apply to such frames since we cannot possibly tack more data onto them. However, while doing this __tcp_push_pending_frames() makes all of the packets in the write_queue use this modified 'nonagle' value. Fix the bug and simplify this function by just calling tcp_write_xmit() directly if sk_send_head is non-NULL. As a result, we can now make tcp_data_snd_check() just call tcp_push_pending_frames() instead of the specialized __tcp_data_snd_check(). Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
tcp_write_xmit() uses tcp_current_mss(), but some of its callers, namely __tcp_push_pending_frames(), already have this value available. While we're here, fix the "cur_mss" argument to be "unsigned int" instead of plain "unsigned". Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Put the main basic block of work at the top-level of tabbing, and mark the TCP_CLOSE test with unlikely(). Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
The tcp_cwnd_validate() function should only be invoked if we actually send some frames, yet __tcp_push_pending_frames() will always invoke it. tcp_write_xmit() does the call for us, so the call here can simply be removed. Also, tcp_write_xmit() can be marked static. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
When we add any new packet to the TCP socket write queue, we must call skb_header_release() on it in order for the TSO sharing checks in the drivers to work. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
It reimplements portions of tcp_snd_check(), so if we move it to tcp_output.c we can consolidate its logic much more easily in a later change. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
This just moves the code into tcp_output.c, no code logic changes are made by this patch. Using this as a baseline, we can begin to untangle the mess of comparisons for the Nagle test et al. We will also be able to reduce all of the redundant computation that occurs when outputting data packets. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
On each packet output, we call tcp_dec_quickack_mode() if the ACK flag is set. It drops tp->ack.quick until it hits zero, at which time we deflate the ATO value. When doing TSO, we are emitting multiple packets with ACK set, so we should decrement tp->ack.quick by that many segments. Note that, unlike this case, tcp_enter_cwr() should not take tcp_skb_pcount(skb) into consideration. That function, one time, readjusts tp->snd_cwnd and moves into TCP_CA_CWR state. Signed-off-by: David S. Miller <davem@davemloft.net>
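Conceptually (hypothetical helper, not the kernel function), the quick-ack budget should shrink by the TSO segment count rather than by one:

```c
static void example_dec_quickack(unsigned int *quick, unsigned int pcount)
{
	if (*quick > pcount)
		*quick -= pcount;	/* one TSO SKB carries pcount ACKs */
	else
		*quick = 0;		/* budget exhausted: deflate the ATO next */
}
```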
-
David S. Miller authored
The ideal and most optimal layout for an SKB when doing scatter-gather is to put all the headers at skb->data, and all the user data in the page array. This makes SKB splitting and combining extremely simple, especially before a packet goes onto the wire the first time. So, when sk_stream_alloc_pskb() is given a zero size, make sure there is no skb_tailroom(). This is achieved by applying SKB_DATA_ALIGN() to the header length used here. Next, make select_size() in TCP output segmentation use a length of zero when NETIF_F_SG is true on the outgoing interface. Signed-off-by: David S. Miller <davem@davemloft.net>
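A toy version of the select_size() decision described above (hypothetical name): when the outgoing device supports scatter-gather, request a zero-size linear area so headers stay at skb->data and all user data lands in the page array.

```c
static unsigned int example_select_size(unsigned int mss_now, int dev_has_sg)
{
	return dev_has_sg ? 0 : mss_now;	/* 0 => no tailroom, pages only */
}
```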
-