Commit graph

9191 commits

Author SHA1 Message Date
Arnaldo Carvalho de Melo 2ad69c55a2 [NET] rename struct tcp_listen_opt to struct listen_sock
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-18 22:48:55 -07:00
Arnaldo Carvalho de Melo 0e87506fcc [NET] Generalise tcp_listen_opt
This chunks out the accept_queue and tcp_listen_opt code and moves
them to net/core/request_sock.c and include/net/request_sock.h, to
make it useful for other transport protocols, DCCP being the first one
to use it.

Next patches will rename tcp_listen_opt to accept_sock and remove the
inline tcp functions that just call a reqsk_queue_ function.

Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-18 22:47:59 -07:00
Arnaldo Carvalho de Melo 60236fdd08 [NET] Rename open_request to request_sock
Ok, this one just renames some stuff to have a better namespace and to
disassociate it from TCP:

struct open_request  -> struct request_sock
tcp_openreq_alloc    -> reqsk_alloc
tcp_openreq_free     -> reqsk_free
tcp_openreq_fastfree -> __reqsk_free

With this most of the infrastructure closely resembles a struct
sock methods subset.

Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-18 22:47:21 -07:00
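
A minimal sketch of how the renamed helpers above line up; the prototypes
below are approximations of the resulting request_sock interface, not
verbatim kernel declarations:

    /* Approximate post-rename prototypes (illustrative only): */
    struct request_sock *reqsk_alloc(struct request_sock_ops *ops); /* was tcp_openreq_alloc   */
    void reqsk_free(struct request_sock *req);                      /* was tcp_openreq_free    */
    void __reqsk_free(struct request_sock *req);                    /* was tcp_openreq_fastfree */
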
Arnaldo Carvalho de Melo 2e6599cb89 [NET] Generalise TCP's struct open_request minisock infrastructure
Kept this first changeset minimal, without changing existing names to
ease peer review.

Basically, tcp_openreq_alloc now receives the or_calltable, which in turn
has two new members:

->slab, that replaces tcp_openreq_cachep
->obj_size, to indicate the size of the openreq descendant for
  a specific protocol

The protocol-specific fields in struct open_request were moved to a
class hierarchy, with the things that are common to all connection-oriented
PF_INET protocols in struct inet_request_sock, and the TCP-specific ones in
tcp_request_sock, which is an inet_request_sock, which in turn is an
open_request.

I.e. this uses the same approach as the struct sock class
hierarchy, with sk_prot indicating whether the protocol wants to use the
open_request infrastructure by filling in sk_prot->rsk_prot with an
or_calltable.

Results? Performance is improved and TCP v4 now uses only 64 bytes per
open request minisock, down from 96 without this patch :-)

The next changeset will rename some of the structs, fields and functions
mentioned above: struct or_calltable is way too unclear a name, so it will
become struct request_sock_ops, s/struct open_request/struct request_sock/g,
etc.

Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-18 22:46:52 -07:00
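
A hedged sketch of the class hierarchy and the extended or_calltable
described above; the member layout follows the commit text and is
illustrative, not the exact content of include/net/request_sock.h or
include/net/tcp.h:

    struct or_calltable {                 /* per-protocol request-sock ops */
            int             family;
            int             obj_size;     /* size of the protocol's openreq descendant */
            kmem_cache_t    *slab;        /* replaces the global tcp_openreq_cachep */
            /* ... per-protocol methods (SYN-ACK retransmit, send ACK/reset, ...) */
    };

    struct open_request {                 /* generic connection-request minisock */
            struct or_calltable     *class;
            /* retries, timers, pointer back to the listening sock, ... */
    };

    struct inet_request_sock {            /* common to connection-oriented PF_INET protocols */
            struct open_request     req;
            /* local/remote addresses and ports, IP options, ... */
    };

    struct tcp_request_sock {             /* TCP-specific state */
            struct inet_request_sock req;
            /* sequence numbers, timestamps, window scaling, ... */
    };
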
David S. Miller bcfff0b471 [NETFILTER]: ipt_recent: last_pkts is an array of "unsigned long" not "u_int32_t"
This fixes various crashes on 64-bit when using this module.

Based upon a patch by Juergen Kreileder <jk@blackdown.de>.

Signed-off-by: David S. Miller <davem@davemloft.net>
ACKed-by: Patrick McHardy <kaber@trash.net>
2005-06-15 20:51:14 -07:00
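
A tiny user-space illustration (not kernel code) of why the type matters:
on 64-bit, unsigned long is wider than u_int32_t, so sizing or indexing a
last_pkts-style array with the wrong type corrupts memory:

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
            /* Prints 8 vs 4 on x86-64: an array declared as unsigned long[]
             * but allocated or walked as u_int32_t[] is half the needed size. */
            printf("sizeof(unsigned long) = %zu, sizeof(u_int32_t) = %zu\n",
                   sizeof(unsigned long), sizeof(u_int32_t));
            return 0;
    }
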
Patrick McHardy a96aca88ac [NETFILTER]: Advance seq-file position in exp_next_seq()
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-13 18:27:13 -07:00
J. Simonetti 1c2fb7f93c [IPV4]: Sysctl configurable icmp error source address.
This patch allows you to change the source address of ICMP error
messages. It applies cleanly to 2.6.11.11 and retains the default
behaviour.

With the old (default) behaviour, ICMP error messages are sent with the IP
of the exiting (outgoing) interface.

With the new behaviour (when the sysctl variable is toggled on), the message
is sent with the IP of the interface that received the packet that caused
the ICMP error. This is the behaviour network administrators expect from a
router, and it makes debugging complicated network layouts much easier.
Also, all 'vendor routers' I know of have the latter behaviour.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-13 15:19:03 -07:00
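
A hedged sketch of the source-address selection described above; the sysctl
variable and helper names are assumptions based on the commit text, not a
verbatim quote of net/ipv4/icmp.c:

    /* Sketch only: pick the source address for an outgoing ICMP error. */
    static u32 icmp_error_saddr(struct sk_buff *skb_in, struct iphdr *iph,
                                struct rtable *rt)
    {
            u32 saddr = iph->daddr;

            if (!(rt->rt_flags & RTCF_LOCAL)) {
                    if (sysctl_icmp_errors_use_inbound_ifaddr)
                            /* new behaviour: an address of the interface
                             * that received the offending packet */
                            saddr = inet_select_addr(skb_in->dev, 0, RT_SCOPE_LINK);
                    else
                            /* old behaviour: 0 lets routing pick the address
                             * of the exiting interface */
                            saddr = 0;
            }
            return saddr;
    }
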
Neil Horman cdac4e0774 [SCTP] Add support for ip_nonlocal_bind sysctl & IP_FREEBIND socket option
Signed-off-by: Neil Horman <nhorman@redhat.com>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-13 15:12:33 -07:00
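
For reference, a minimal user-space sketch of the per-socket IP_FREEBIND
option mentioned above (the system-wide knob is the ip_nonlocal_bind
sysctl); error handling is trimmed:

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Allow binding the socket to an address that is not (yet) configured
     * locally; per the commit, SCTP sockets now honour this too. */
    static int set_freebind(int fd)
    {
            int one = 1;

            return setsockopt(fd, IPPROTO_IP, IP_FREEBIND, &one, sizeof(one));
    }
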
Randy Dunlap 6efd8455cf [IPV4]: Multipath modules need a license to prevent kernel tainting.
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-13 14:29:06 -07:00
Andi Kleen e7626486c3 [TCP]: Adjust TCP mem order check to new alloc_large_system_hash
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-13 14:24:52 -07:00
Adrian Bunk 64a6c7aa38 [IPVS]: remove net/ipv4/ipvs/ip_vs_proto_icmp.c
ip_vs_proto_icmp.c was never finished.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-02 13:02:25 -07:00
Edgar E Iglesias 36839836e8 [IPSEC]: Fix esp_decap_data size verification in esp4.
Signed-off-by: Edgar E Iglesias <edgar@axis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-31 17:08:05 -07:00
Herbert Xu 208d89843b [IPV4]: Fix BUG() in 2.6.x, udp_poll(), fragments + CONFIG_HIGHMEM
Steven Hand <Steven.Hand@cl.cam.ac.uk> wrote:
> 
> Reconstructed forward trace: 
> 
>   net/ipv4/udp.c:1334   spin_lock_irq() 
>   net/ipv4/udp.c:1336   udp_checksum_complete() 
> net/core/skbuff.c:1069   skb_shinfo(skb)->nr_frags > 1
> net/core/skbuff.c:1086   kunmap_skb_frag()
> net/core/skbuff.h:1087   local_bh_enable()
> kernel/softirq.c:0140   WARN_ON(irqs_disabled());

The receive queue lock is never taken in IRQs (and should never be) so
we can simply substitute bh for irq.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-30 15:50:15 -07:00
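
A hedged sketch of the substitution (not the verbatim diff): the
receive-queue lock only needs bottom halves disabled, because
kunmap_skb_frag() ends in local_bh_enable(), which must not run with hard
IRQs off:

    /* was: spin_lock_irq(&sk->sk_receive_queue.lock); */
    spin_lock_bh(&sk->sk_receive_queue.lock);
    /* ... udp_checksum_complete() on the queued skb ... */
    spin_unlock_bh(&sk->sk_receive_queue.lock);
    /* was: spin_unlock_irq(&sk->sk_receive_queue.lock); */
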
Harald Welte 9bb7bc942d [NETFILTER]: Fix deadlock with ip_queue and tcp local input path.
When ip_queue is used from LOCAL_IN, we end up with a situation where
the verdicts coming back from userspace traverse the TCP input path from
syscall context.  While this seems to work most of the time, there's an
ugly deadlock:

syscall context is interrupted by the timer interrupt.  When the timer
interrupt leaves, the timer softirq gets scheduled and calls
tcp_delack_timer() and the like.  They themselves do bh_lock_sock(sk),
which is already held from somewhere else -> boom.

I've now tested the suggested solution by Patrick McHardy and Herbert Xu to
simply use local_bh_{en,dis}able().

Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-30 15:35:26 -07:00
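
A hedged sketch of the tested fix; the function and structure names
approximate the 2.6-era ip_queue code rather than quoting the patch:

    static void ipq_issue_verdict(struct ipq_queue_entry *entry, int verdict)
    {
            /* Keep softirqs (e.g. tcp_delack_timer) off this CPU while the
             * reinjected packet traverses the TCP input path from syscall
             * context, so nothing can race for bh_lock_sock(sk). */
            local_bh_disable();
            nf_reinject(entry->skb, entry->info, verdict);
            local_bh_enable();
    }
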
Pravin B. Shelar 37e20a66db [IPV4]: Kill MULTIPATHHOLDROUTE flag.
It cannot work properly, so just ignore it in the drr
and rr multipath algorithms, like the random
multipath algorithm already does.

Suggested by Herbert Xu.

Signed-off-by: Pravin B. Shelar <pravins@calsoftinc.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-29 20:26:44 -07:00
Harald Welte 8f937c6099 [IPV4]: Primary and secondary addresses
Add an option to make secondary IP addresses get promoted
when primary IP addresses are removed from the device.
It defaults to off to preserve existing behavior.

Signed-off-by: Harald Welte <laforge@gnumonks.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-29 20:23:46 -07:00
David S. Miller 314324121f [TCP]: Fix stretch ACK performance killer when doing ucopy.
When we are doing ucopy, we try to defer the ACK generation to
cleanup_rbuf().  This works very well most of the time, but if the
ucopy prequeue is large, this ACKing behavior kills performance.

With TSO, it is possible to fill the prequeue so large that by the
time the ACK is sent and gets back to the sender, most of the window
has emptied of data and performance suffers significantly.

This behavior does help in some cases, so we should think about
re-enabling this trick in the future, using some kind of limit in
order to avoid the bug case.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-23 12:03:06 -07:00
David S. Miller 8be58932ca [NETFILTER]: Do not be clever about SKB ownership in ip_ct_gather_frags().
Just do an skb_orphan() and be done with it.
Based upon discussions with Herbert Xu on netdev.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-19 12:36:33 -07:00
Julian Anastasov d9fa0f392b [IP_VS]: Remove extra __ip_vs_conn_put() for incoming ICMP.
Remove extra __ip_vs_conn_put for incoming ICMP in direct routing
mode. Mark de Vries reports that IPVS connections are not leaked anymore.

Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-19 12:29:59 -07:00
Herbert Xu 2fdba6b085 [IPV4/IPV6] Ensure all frag_list members have NULL sk
Having frag_list members which hold the wmem of an sk leads to nightmares
with partially cloned frag skbs.  The reason is that once you unleash
an skb with a frag_list that has individual sk ownerships into the stack,
you can never undo those ownerships safely, as they may have been cloned
by things like netfilter.  Since we have to undo them in order to make
skb_linearize happy, this approach leads to a dead end.

So let's go the other way and make this an invariant:

	For any skb on a frag_list, skb->sk must be NULL.

That is, the socket ownership always belongs to the head skb.
It turns out that the implementation is actually pretty simple.

The above invariant is actually violated in the following patch
for a short duration inside ip_fragment.  This is OK because the
offending frag_list member is either destroyed at the end of the
slow path without being sent anywhere, or it is detached from
the frag_list before being sent.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-18 22:52:33 -07:00
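
A hedged helper that states the invariant in code; it is illustrative and
not part of the patch itself:

    /* Every skb hanging off a frag_list must be unowned: the socket
     * reference (and wmem accounting) stays with the head skb. */
    static inline void assert_frag_list_unowned(struct sk_buff *head)
    {
            struct sk_buff *frag;

            for (frag = skb_shinfo(head)->frag_list; frag; frag = frag->next)
                    BUG_ON(frag->sk != NULL);
    }
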
Jesper Juhl 02c30a84e6 [PATCH] update Ross Biro bouncing email address
Ross moved.  Remove the bad email address so people will find the correct
one in ./CREDITS.

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-05 16:36:49 -07:00
Patrick McHardy 60d5306553 [IPV4]: multipath_wrandom.c GPF fixes
multipath_wrandom needs to use GFP_ATOMIC.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-05 14:30:15 -07:00
Herbert Xu aabc9761b6 [IPSEC]: Store idev entries
I found a bug that stopped IPsec/IPv6 from working.  About
a month ago IPv6 started using rt6i_idev->dev on the cached socket dst
entries.  If the cached socket dst entry is IPsec, then rt6i_idev will
be NULL.

Since we want to look at the rt6i_idev of the original route in this
case, the easiest fix is to store rt6i_idev in the IPsec dst entry just
as we do for a number of other IPv6 route attributes.  Unfortunately
this means that we need some new code to handle the references to
rt6i_idev.  That's why this patch is bigger than it would otherwise be.

I've also done the same thing for IPv4, since it is conceivable that
once these idev attributes start getting used for accounting, we will
need to dereference them for IPv4 IPsec entries too.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-03 16:27:10 -07:00
Patrick McHardy bd96535b81 [NETFILTER]: Drop conntrack reference in ip_dev_loopback_xmit()
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-03 16:21:37 -07:00
Herbert Xu 2a0a6ebee1 [NETLINK]: Synchronous message processing.
Let's recap the problem.  The current asynchronous netlink kernel
message processing is vulnerable to these attacks:

1) Hit and run: Attacker sends one or more messages and then exits
before they're processed.  This may confuse/disable the next netlink
user that gets the netlink address of the attacker since it may
receive the responses to the attacker's messages.

Proposed solutions:

a) Synchronous processing.
b) Stream mode socket.
c) Restrict/prohibit binding.

2) Starvation: Because various netlink rcv functions were written
to not return until all messages have been processed on a socket,
it is possible for these functions to execute for an arbitrarily
long period of time.  If this is successfully exploited it could
also be used to hold rtnl forever.

Proposed solutions:

a) Synchronous processing.
b) Stream mode socket.

Firstly, let's cross off solution c).  It only solves the first
problem, and it has user-visible impacts.  In particular, it'll
break user-space applications that expect to bind or communicate
with specific netlink addresses (pids).

So we're left with a choice of synchronous processing versus
SOCK_STREAM for netlink.

For the moment I'm sticking with the synchronous approach as
suggested by Alexey since it's simpler and I'd rather spend
my time working on other things.

However, it does have a number of deficiencies compared to the
stream mode solution:

1) User-space to user-space netlink communication is still vulnerable.

2) Inefficient use of resources.  This is especially true for rtnetlink
since the lock is shared with other users such as networking drivers.
The latter could hold the rtnl while communicating with hardware, which
causes the rtnetlink user to wait when it could be doing other things.

3) It is still possible to DoS all netlink users by flooding the kernel
netlink receive queue.  The attacker simply fills the receive socket
with a single netlink message that fills up the entire queue.  The
attacker then continues to call sendmsg with the same message in a loop.

Point 3) can be countered by retransmissions in user-space code; however,
it is pretty messy.

In light of these problems (in particular, point 3), we should implement
stream mode netlink at some point.  In the meantime, here is a patch
that implements synchronous processing.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-03 14:55:09 -07:00
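
For point 3, a hedged user-space sketch of the retransmission workaround
mentioned above: when the kernel-side receive queue is full, sendmsg()
fails and the sender backs off and retries. The error codes and back-off
interval are assumptions:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>
    #include <linux/netlink.h>

    /* Retry a netlink request until the kernel accepts it. */
    static int nl_send_retry(int fd, struct nlmsghdr *nlh)
    {
            struct sockaddr_nl kernel = { .nl_family = AF_NETLINK }; /* nl_pid 0 */
            struct iovec iov = { .iov_base = nlh, .iov_len = nlh->nlmsg_len };
            struct msghdr msg = {
                    .msg_name = &kernel, .msg_namelen = sizeof(kernel),
                    .msg_iov  = &iov,    .msg_iovlen  = 1,
            };

            while (sendmsg(fd, &msg, 0) < 0) {
                    if (errno != EAGAIN && errno != ENOBUFS)
                            return -1;         /* real error */
                    usleep(10 * 1000);         /* queue full: back off, retransmit */
            }
            return 0;
    }
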
Folkert van Heusden 0b2531bdc5 [TCP]: Optimize check in port-allocation code.
Signed-off-by: Folkert van Heusden <folkert@vanheusden.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-03 14:36:08 -07:00
Thomas Graf db46edc6d3 [RTNETLINK] Cleanup rtnetlink_link tables
Converts the remaining rtnetlink_link tables to use C99 designated
initializers, to make grepping a little bit easier.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-03 14:29:39 -07:00
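
An illustrative example of the conversion (the handler names are
placeholders, and struct rtnetlink_link is the 2.6.12-era table type):
designated initializers make it obvious which message type each slot
serves, which is what makes grepping easier:

    static struct rtnetlink_link example_rtnetlink_table[RTM_MAX - RTM_BASE + 1] = {
            [RTM_NEWROUTE - RTM_BASE] = { .doit   = example_newroute  },
            [RTM_DELROUTE - RTM_BASE] = { .doit   = example_delroute  },
            [RTM_GETROUTE - RTM_BASE] = { .dumpit = example_dumproute },
    };
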
Patrick McHardy 31da185d81 [NETFILTER]: Don't checksum CHECKSUM_UNNECESSARY skbs in TCP connection tracking
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-03 14:23:50 -07:00
Patrick McHardy b433095784 [NETFILTER]: Missing owner-field initialization in iptable_raw
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-03 14:23:13 -07:00
Olaf Rempel 5bec0039f4 [NET]: /proc/net/stat/* header cleanup
Signed-off-by: Olaf Rempel <razzor@kopf-tisch.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-28 12:16:08 -07:00
Dave Jones 7e3e0360b7 [IPV4]: Incorrect permissions on route flush sysctl
This has been brought up before (http://lkml.org/lkml/2000/1/21/116)
but didn't seem to get resolved.  This morning someone filed a bugzilla
about it breaking sysctl(8).

Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-28 12:11:03 -07:00
Al Viro 5523662c4c [NET]: kill gratitious includes of major.h
A lot of places in there are including major.h for no reason
whatsoever.  Removed.  And yes, it still builds.

	The history of that stuff is often amusing.  E.g. for net/core/sock.c
the story goes like this, as far as I've been able to reconstruct it: we used
to need major.h in net/socket.c circa 1.1.early.  In 1.1.13 that need had
disappeared, along with register_chrdev(SOCKET_MAJOR, "socket", &net_fops)
in sock_init().  The include had not.  When the 1.2 -> 1.3 reorg of net/*
moved a lot of stuff from net/socket.c to net/core/sock.c, this crap followed...

Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-25 21:40:39 -07:00
James Morris 088dd3a45f [TCP]: Trivial tcp_data_queue() cleanup
This patch removes a superfluous initialization from tcp_data_queue().

Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-25 21:39:29 -07:00
Patrick McHardy b31e5b1bb5 [NETFILTER]: Drop conntrack reference when packet leaves IP
In the event a raw socket is created for sending purposes only, the creator
never bothers to check the socket's receive queue.  But we continue to
add skbs to its queue until it fills up.

Unfortunately, if ip_conntrack is loaded on the box, each skb we add to the
queue potentially holds a reference to a conntrack.  If the user attempts
to unload ip_conntrack, we will spin around forever since the queued skbs
are pinned.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-25 12:01:07 -07:00
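
A hedged sketch of the idea; the helper names approximate the 2.6.12-era
interface and the exact place the patch does this may differ:

    /* Release the conntrack reference once the packet has left IP, so skbs
     * parked on a never-read raw-socket queue no longer pin ip_conntrack. */
    nf_conntrack_put(skb->nfct);
    skb->nfct = NULL;
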
Yasuyuki KOZAKAI f649a3bfd1 [NETFILTER]: Fix truncated sequence numbers in FTP helper
Signed-off-by: Yasuyuki KOZAKAI <yasuyuki.kozkaai@toshiba.co.jp>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-25 12:00:04 -07:00
David S. Miller d5ac99a648 [TCP]: skb pcount with MTU discovery
The problem is that when doing MTU discovery, the too-large segments in
the write queue will be calculated as having a pcount of >1.  When
tcp_write_xmit() is trying to send, tcp_snd_test() fails the cwnd test
when pcount > cwnd.

The segments are eventually transmitted one at a time by keepalive, but
this can take a long time.

This patch checks if TSO is enabled when setting pcount.

Signed-off-by: John Heffner <jheffner@psc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-24 19:12:33 -07:00
Patrick McHardy 3b2d59d1fc [NETFILTER]: Ignore PSH on SYN/ACK in TCP connection tracking
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-24 18:42:39 -07:00
Patrick McHardy e281e3ac2b [NETFILTER]: Fix NAT sequence number adjustment
The NAT changes in 2.6.11 changed the position where helpers
are called and perform packet mangling. Before 2.6.11, a NAT
helper was called before the packet was NATed and had its
sequence number adjusted. Since 2.6.11, the helpers get packets
with already adjusted sequence numbers.

This breaks sequence number adjustment: adjust_tcp_sequence()
needs the original sequence number to determine whether
a packet was a retransmission and to store it for further
corrections. It can't be reconstructed without more information
than is available, so this patch restores the old order by
calling helpers from a new conntrack hook two priorities
below ip_conntrack_confirm() and adjusting the sequence number
from a new NAT hook one priority below ip_conntrack_confirm().

Tracked down by Phil Oester <kernel@linuxace.com>

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-24 18:41:38 -07:00
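
A hedged sketch of how such ordering is expressed; the handler, hook number
and priority arithmetic are illustrative, not the exact values used by the
patch. Netfilter hooks run in ascending priority order, so registering just
below ip_conntrack_confirm() makes the helper and the sequence-number
adjustment run right before confirmation:

    static struct nf_hook_ops ip_nat_seq_adjust_ops = {
            .hook     = ip_nat_seq_adjust_hook,           /* placeholder handler */
            .pf       = PF_INET,
            .hooknum  = NF_IP_POST_ROUTING,               /* one of the confirm hooks */
            .priority = NF_IP_PRI_CONNTRACK_CONFIRM - 1,  /* run just before confirm */
    };
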
Herbert Xu 4d78b6c78a [IPSEC]: COW skb header in UDP decap
The following patch just makes the header part of the skb writeable.
This is needed since we modify the IP headers just a few lines below.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-19 22:48:59 -07:00
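
A hedged sketch of what "making the header part writeable" amounts to; the
exact primitive the patch uses may differ:

    /* Take a private copy of the header before the IP header is modified. */
    if (skb_cloned(skb) && pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
            goto drop;
    iph = skb->nh.iph;      /* 2.6.12-era accessor (today: ip_hdr(skb)) */
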
Stephen Hemminger 9c2b3328f7 [NET]: skbuff: remove old NET_CALLER macro
Here is a revised alternative that uses BUG_ON/WARN_ON
(as suggested by Herbert Xu) to eliminate NET_CALLER.

Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-04-19 22:39:42 -07:00
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00