Commit 9bffc4ac authored by Neil Horman, committed by David S. Miller

[SCTP]: Fix sctp to not return erroneous POLLOUT events.

Make sctp_writeable() use sk_wmem_alloc rather than sk_wmem_queued to
determine the sndbuf space available. Also remove all modifications to
sk_wmem_queued, as it is not currently used in SCTP.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 399c180a
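
For illustration, a minimal userspace sketch of the accounting model this change moves to follows: the writeable space is derived from the bytes charged to the socket (what sk_wmem_alloc tracks), clamped at zero, and write readiness (POLLOUT) is reported only while space remains. The types and helpers below (model_sock, model_writeable, model_poll_mask) are hypothetical stand-ins for this sketch, not kernel code or kernel APIs.

#include <poll.h>
#include <stdio.h>

/* Hypothetical stand-in for the two socket fields involved in this patch. */
struct model_sock {
	int sndbuf;      /* configured send buffer limit (sk->sk_sndbuf)    */
	int wmem_alloc;  /* bytes charged to the socket (sk->sk_wmem_alloc) */
};

/* Same shape as sctp_writeable() after this patch:
 * remaining sndbuf space, clamped at zero.
 */
static int model_writeable(const struct model_sock *sk)
{
	int amt = sk->sndbuf - sk->wmem_alloc;

	if (amt < 0)
		amt = 0;
	return amt;
}

/* Report write readiness only while sndbuf space remains. */
static short model_poll_mask(const struct model_sock *sk)
{
	return model_writeable(sk) ? POLLOUT : 0;
}

int main(void)
{
	struct model_sock sk = { .sndbuf = 4096, .wmem_alloc = 0 };

	/* Charge chunks to the socket until the send buffer fills up. */
	while (model_poll_mask(&sk) & POLLOUT)
		sk.wmem_alloc += 1500;	/* pretend each queued chunk costs 1500 bytes */

	printf("space left: %d, POLLOUT: %s\n",
	       model_writeable(&sk),
	       (model_poll_mask(&sk) & POLLOUT) ? "yes" : "no");
	return 0;
}

The point, per the commit message above, is that the buffer charging and the writeable check now read the same counter, so poll should no longer signal POLLOUT for a socket whose send buffer is actually full.
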
@@ -156,10 +156,6 @@ static inline void sctp_set_owner_w(struct sctp_chunk *chunk)
 				sizeof(struct sk_buff) +
 				sizeof(struct sctp_chunk);
 
-	sk->sk_wmem_queued += SCTP_DATA_SNDSIZE(chunk) +
-				sizeof(struct sk_buff) +
-				sizeof(struct sctp_chunk);
-
 	atomic_add(sizeof(struct sctp_chunk), &sk->sk_wmem_alloc);
 }
 
@@ -4426,7 +4422,7 @@ cleanup:
  * tcp_poll(). Note that, based on these implementations, we don't
  * lock the socket in this function, even though it seems that,
  * ideally, locking or some other mechanisms can be used to ensure
- * the integrity of the counters (sndbuf and wmem_queued) used
+ * the integrity of the counters (sndbuf and wmem_alloc) used
  * in this place. We assume that we don't need locks either until proven
  * otherwise.
  *
@@ -4833,10 +4829,6 @@ static void sctp_wfree(struct sk_buff *skb)
 				sizeof(struct sk_buff) +
 				sizeof(struct sctp_chunk);
 
-	sk->sk_wmem_queued -= SCTP_DATA_SNDSIZE(chunk) +
-				sizeof(struct sk_buff) +
-				sizeof(struct sctp_chunk);
-
 	atomic_sub(sizeof(struct sctp_chunk), &sk->sk_wmem_alloc);
 
 	sock_wfree(skb);
@@ -4920,7 +4912,7 @@ void sctp_write_space(struct sock *sk)
 
 /* Is there any sndbuf space available on the socket?
  *
- * Note that wmem_queued is the sum of the send buffers on all of the
+ * Note that sk_wmem_alloc is the sum of the send buffers on all of the
  * associations on the same socket. For a UDP-style socket with
  * multiple associations, it is possible for it to be "unwriteable"
  * prematurely. I assume that this is acceptable because
@@ -4933,7 +4925,7 @@ static int sctp_writeable(struct sock *sk)
 {
 	int amt = 0;
 
-	amt = sk->sk_sndbuf - sk->sk_wmem_queued;
+	amt = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
 	if (amt < 0)
 		amt = 0;
 	return amt;