Noritoshi Demizu | 5 Jul 10:41 2005

Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c

Thanks for your reply.

>     But I will note that the saved blocks are in fact the number of
>     discontiguous segment ranges, not single segments.  That should make it
>     fairly independent of the bandwidth delay product on a real network.
>     If you use dummynet to inject random errors... well, that isn't really
>     a characteristic of a real network.

I agree that my network environment is unusual. :-)  I just thought
that saying "I want xxx because..." is better than saying "I think
some people might need xxx because..."

BTW, I use dummynet to set delays and router queue length.  I do not
use it to inject random losses.  Even in such a case, if both send and
receive buffers are large enough, packets are lost when the queue on
my router becomes full.  In that case, my TCP receiver receives tens
of discontiguous data ranges.  In my environment, the number of
discontiguous data ranges depends on the value of HZ on the router.
The larger the value of HZ, the bigger the number of discontiguous
ranges.  Currently, I'm using HZ=10000 on my router.
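For concreteness, my dummynet setup is along these lines (the numbers
below are illustrative, not my exact configuration):

```shell
# Illustrative ipfw/dummynet pipe: a fixed delay plus a short router
# queue, so the queue overflows once both TCP buffers are large.
# (Bandwidth, delay, and queue length are made-up example values.)
ipfw pipe 1 config bw 10Mbit/s delay 50ms queue 20
# Push traffic through the pipe.
ipfw add 100 pipe 1 ip from any to any
```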

>     I still believe that there are
>     certain situations where one might need to bump the number up, which
>     is why I like the sysctl idea.  I'm not sure the default needs to be
>     increased, though.

I guess the reason for having MAXSAVEDBLOCKS is to protect against
attacks that inject many small SACK blocks.  If so, I do not think
there would be a problem if the value were increased to, say, 64.
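If it were made tunable, I would imagine something like the following
(the sysctl name below is purely hypothetical; no such knob exists):

```shell
# Hypothetical sysctl for the SACK scoreboard size -- only a sketch
# of what the proposed tunable might look like, not an existing knob.
sysctl net.inet.tcp.sack_maxsavedblocks=64
```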


Matthew Dillon | 5 Jul 11:08 2005

Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c


:Thanks for your reply.
:
:>     But I will note that the saved blocks are in fact the number of
:>     discontiguous segment ranges, not single segments.  That should make it
:>     fairly independent of the bandwidth delay product on a real network.
:>     If you use dummynet to inject random errors... well, that isn't really
:>     a characteristic of a real network.
:
:I agree that my network environment is unusual. :-)  I just thought
:that saying "I want xxx because..." is better than saying "I think
:some people might need xxx because..."
:
:BTW, I use dummynet to set delays and router queue length.  I do not
:use it to inject random losses.  Even in such a case, if both send and
:receive buffers are large enough, packets are lost when the queue on
:my router becomes full.  In that case, my TCP receiver receives tens
:of discontiguous data ranges.  In my environment, the number of
:discontiguous data ranges depends on the value of HZ on the router.
:The larger the value of HZ, the bigger the number of discontiguous
:ranges.  Currently, I'm using HZ=10000 on my router.

    Ouch.  Dummynet is probably not the best solution.  Actually, what
    I would do is buy a cisco with a relatively recent IOS and run
    fair-queue or RED, but if that isn't in the cards then I recommend 
    playing around with Packet Filter (pf).  It has a number of queueing
    solutions.  I haven't used PF much myself so I don't know if it can
    do RED, but I believe it does have a fair-queueing mechanism.

    In any case, RED or fair-queueing tends to do a much better job 
    reducing the number of fragmented ranges.  SACK running through a
    RED router (which is most of the routers on the internet) is a good
    combination.

Noritoshi Demizu | 5 Jul 11:55 2005

Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c

>     In any case, RED or fair-queueing tends to do a much better job 
>     reducing the number of fragmented ranges.  SACK running through a
>     RED router (which is most of the routers on the internet) is a good
>     combination.

Thanks.  If my purpose were to transfer data as fast as possible,
that would be a good solution.  But what I want to do now is to observe
TCP behavior in the slow-start phase after retransmission timeouts.
So I think my environment is quite good :-) for me.

>     If you have a lot of outgoing bandwidth and the servers are running
>     FreeBSD or DragonFly, you can turn on the inflight bandwidth limiting
>     sysctl (net.inet.tcp.inflight_enable).  This only works on the machines
>     doing the actual initiation of the packets, it won't work on the 
>     routers.  It does a fairly good job reducing queue lengths.

Sorry, actually, I always set net.inet.tcp.inflight_enable to zero
on both DragonFlyBSD and FreeBSD in my experiments (I know it is zero
on DragonFlyBSD by default, but I want to make sure), because
unfortunately it reduces throughput in an unexpected way.

Regards,
Noritoshi Demizu

P. de Boer | 5 Jul 19:16 2005

Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c

Noritoshi Demizu wrote:

> Sorry, actually, I always set net.inet.tcp.inflight_enable to zero
> on both DragonFlyBSD and FreeBSD in my experiments (I know it is zero
> on DragonFlyBSD by default, but I want to make sure), because
> unfortunately it reduces throughput in an unexpected way.
Just some extra input: I can confirm this.  During TCP performance tests (on
FreeBSD 5.4 machines) I saw a drop from around 700 Mbit/s to 300 Mbit/s just
by turning on 'inflight'.  Quite substantial, I'd say.

-- 
Pieter

Matthew Dillon | 5 Jul 19:34 2005

Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c


:Noritoshi Demizu wrote:
:
:> Sorry, actually, I always set net.inet.tcp.inflight_enable to zero
:> on both DragonFlyBSD and FreeBSD in my experiments (I know it is zero
:> on DragonFlyBSD by default, but I want to make sure), because
:> unfortunately it reduces throughput in an unexpected way.
:Just some extra input: I can confirm this.  During TCP performance tests (on
:FreeBSD 5.4 machines) I saw a drop from around 700 Mbit/s to 300 Mbit/s just
:by turning on 'inflight'.  Quite substantial, I'd say.
:
:-- 
:Pieter

    Well, you can mess around with the other related controls, but inflight
    is not designed to do well on performance tests; it is designed to manage
    an overloaded network in real-life situations.  So if your network is
    overloaded, turning on inflight helps, and nobody complains about it,
    then it has served its purpose.
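    For example (knob names as I recall them from FreeBSD 5.x-era
    sysctls; check sysctl -a on your system, they may vary by version):

```shell
# Turn inflight limiting on and inspect the related tuning knobs
# (names from FreeBSD 5.x-era sysctls; verify with sysctl -a).
sysctl net.inet.tcp.inflight_enable=1
sysctl net.inet.tcp.inflight_min
sysctl net.inet.tcp.inflight_max
sysctl net.inet.tcp.inflight_stab
```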

					-Matt
					Matthew Dillon 
					<dillon <at> backplane.com>

Joerg Sonnenberger | 5 Jul 14:47 2005

Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c

On Tue, Jul 05, 2005 at 02:08:50AM -0700, Matthew Dillon wrote:
>     Ouch.  Dummynet is probably not the best solution.  Actually, what
>     I would do is buy a cisco with a relatively recent IOS and run
>     fair-queue or RED, but if that isn't in the cards then I recommend 
>     playing around with Packet Filter (pf).  It has a number of queueing
>     solutions.  I haven't used PF much myself so I don't know if it can
>     do RED, but I believe it does have a fair-queueing mechanism.

It's not PF, but ALTQ.  ALTQ certainly supports RED, and it's very easy to
set up.  PF is only used as the interface for the queue setup and the
tagging to decide which queue discipline to use.
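A minimal pf.conf sketch of that split (the interface name, scheduler,
and percentages are only an example):

```
# pf.conf sketch: PF creates the queues and tags packets; ALTQ does
# the actual queueing.  fxp0 and all numbers are example values.
altq on fxp0 cbq bandwidth 10Mb queue { std, bulk }
queue std bandwidth 70% cbq(default red)
queue bulk bandwidth 30% cbq(red)
# Tagging: assign matching traffic to a specific queue.
pass out on fxp0 proto tcp from any to any port 80 keep state queue bulk
```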

Joerg

Noritoshi Demizu | 5 Jul 16:24 2005

Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c

> On Tue, Jul 05, 2005 at 02:08:50AM -0700, Matthew Dillon wrote:
> >     Ouch.  Dummynet is probably not the best solution.  Actually, what
> >     I would do is buy a cisco with a relatively recent IOS and run
> >     fair-queue or RED, but if that isn't in the cards then I recommend
> >     playing around with Packet Filter (pf).  It has a number of queueing
> >     solutions.  I haven't used PF much myself so I don't know if it can
> >     do RED, but I believe it does have a fair-queueing mechanism.
>
> It's not PF, but ALTQ.  ALTQ certainly supports RED, and it's very easy to
> set up.  PF is only used as the interface for the queue setup and the
> tagging to decide which queue discipline to use.

According to the man page, dummynet also supports RED.
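From my reading of the man page, it would look something like this
(the four RED parameters are w_q/min_th/max_th/max_p; the values below
are just an example, not a tested configuration):

```shell
# Enable RED on a dummynet pipe (example values only).
ipfw pipe 1 config bw 10Mbit/s queue 50 red 0.002/5/15/0.1
```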

Regards,
Noritoshi Demizu

Dunceor . | 9 Jul 16:52 2005

Fwd: MAXSAVEDBLOCKS in netinet/tcp_sack.c

>
>
>---------- Forwarded message ----------
>From: Joerg Sonnenberger <joerg <at> britannica.bec.de>
>Date: Jul 5, 2005 2:47 PM
>Subject: Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c
>To: kernel <at> crater.dragonflybsd.org
>
>
>On Tue, Jul 05, 2005 at 02:08:50AM -0700, Matthew Dillon wrote:
>>     Ouch.  Dummynet is probably not the best solution.  Actually, what
>>     I would do is buy a cisco with a relatively recent IOS and run
>>     fair-queue or RED, but if that isn't in the cards then I recommend
>>     playing around with Packet Filter (pf).  It has a number of queueing
>>     solutions.  I haven't used PF much myself so I don't know if it can
>>     do RED, but I believe it does have a fair-queueing mechanism.
>
>It's not PF, but ALTQ.  ALTQ certainly supports RED, and it's very easy to
>set up.  PF is only used as the interface for the queue setup and the
>tagging to decide which queue discipline to use.
>
>Joerg

Just a note: since OpenBSD 3.3, ALTQ has been integrated into PF, so I guess
all later ports of PF on Free/DF should have ALTQ integrated as well.
I think it supports RED too.
Read more here: http://www.openbsd.org/faq/pf/queueing.html

Sorry, a few days late, but better late than never. :)


Joerg Sonnenberger | 9 Jul 22:43 2005

Re: Fwd: MAXSAVEDBLOCKS in netinet/tcp_sack.c

On Sat, Jul 09, 2005 at 04:52:26PM +0200, Dunceor . wrote:
> >It's not PF, but ALTQ.  ALTQ certainly supports RED, and it's very easy to
> >set up.  PF is only used as the interface for the queue setup and the
> >tagging to decide which queue discipline to use.
> >
> >Joerg
> 
> Just a note: since OpenBSD 3.3, ALTQ has been integrated into PF, so I guess
> all later ports of PF on Free/DF should have ALTQ integrated as well.
> I think it supports RED too.
> Read more here: http://www.openbsd.org/faq/pf/queueing.html

They are different things. Basically PF does two things for ALTQ:
(a) It provides the interface to create queues.
(b) It provides the interface to tag packets for specific queues.

That's all the interaction between PF and ALTQ. In the research
implementation of ALTQ, it was possible to do the matching
directly in ALTQ too, but I didn't include that code because it is
very messy and clearly doesn't belong there.

Joerg

Hiten Pandya | 5 Jul 14:45 2005

Re: MAXSAVEDBLOCKS in netinet/tcp_sack.c

Matthew Dillon wrote:
>     Ouch.  Dummynet is probably not the best solution.  Actually, what
>     I would do is buy a cisco with a relatively recent IOS and run
>     fair-queue or RED, but if that isn't in the cards then I recommend 
>     playing around with Packet Filter (pf).  It has a number of queueing
>     solutions.  I haven't used PF much myself so I don't know if it can
>     do RED, but I believe it does have a fair-queueing mechanism.
> 

	RED is one of many queueing mechanisms supported by ALTQ.

				Hiten Pandya

