This is the mail archive of the ecos-patches@sources.redhat.com mailing list for the eCos project.



Re: RedBoot tcp hack


On Mon, 2003-12-22 at 17:15, John Newlin wrote:
> > On Mon, 2003-12-22 at 09:18, Mark Salter wrote:
> > > >>>>> Gary Thomas writes:
> > >
> > > +	* src/net/tcp.c (tcp_send): Add [restore] delay into TCP write
> > > +	path.  Sadly, there seems to be some issue where some ACK packets
> > > +	get lost unless this is present (at least on some hardware).
> > > +	n.b. a small delay here is definitely preferable to the horrendous
> > > +	delays imposed by TCP retries if this condition occurs.
> > >
> > > We can't leave this in. It creates a 2ms overhead for every transmitted
> > > tcp segment. The actual problem and a proper fix needs to be found. I've
> > > never seen anything like described above. Could you elaborate on how to
> > > reproduce, etc.
> >
> > Actually, it's nothing new!  This delay used to live in the ethernet
> > driver layer (io/eth/current/src/stand/eth_drv.c), in the "write block"
> > path.  When I took it out (8/19/03), certain platforms started giving
> > terrible performance :-(  I put it back where it is now, with the
> > comment, to get things back to where they were.
> >
> > I've only seen it on certain hardware, but all it takes is to connect
> > via GDB and then download a file.  Without the change, it can take
> > minutes to download something that takes seconds with it.
> >
> > I could probably generate a TCP dump if you wanted to analyze it.
> 
> 
> Sounds like the ethernet devices/drivers are dropping frames.  I've seen
> this on hardware where the ring buffer size is not large enough; on a busy
> segment you can fill up your Rx queue quite fast, especially when
> you are running in polled mode.
> 
> I'm curious which hardware exhibits this behavior; do you know off the top
> of your head?  If I get bored I might look at it.  ;)

The particular case is a PowerPC 860T based system (A&M Viper).  It is
configured for 16 buffers each (Tx, Rx).  I can't see how it could be
missing any buffers: I've done a tcpdump analysis of a failure, and there
is no other traffic between the packets I know it receives and the ones
that go missing.

-- 
Gary Thomas <gary@mlbassoc.com>
MLB Associates

