This is the mail archive of the ecos-bugs mailing list for the eCos project.
[Bug 1001522] Array index out of bounds in tftp_server.c
- From: bugzilla-daemon at bugs dot ecos dot sourceware dot org
- To: ecos-bugs at ecos dot sourceware dot org
- Date: Sun, 11 Mar 2012 03:56:20 +0000
- Subject: [Bug 1001522] Array index out of bounds in tftp_server.c
- Auto-submitted: auto-generated
- References: <email@example.com/>
Please do not reply to this email. Use the web interface provided at:
--- Comment #7 from Jonathan Larmour <firstname.lastname@example.org> 2012-03-11 03:56:15 GMT ---
(In reply to comment #6)
> > > Wouldn't that leak sockets if more than one is open? The loop would
> > > set all of them to 0 [which is also a bug, see below], but it only
> > > remembers the last one for closing later.
> > Oh true. Yes you have to close them all _except_ the one that
> > matched select.
> Then you can have parallel operations only when requests are coming in
> simultaneously on different protocols.
Well, my idea was (theoretically) that you could still open a new socket while
still using the previous one to talk to the remote host. But anyway, we happily
seem to have moved onto getting rid of this closing sockets malarkey.
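The "close them all except the one that matched select" idea, together with the earlier observation that resetting entries to 0 is itself a bug (0 is a valid file descriptor), could look roughly like this. This is a sketch, not the actual tftp_server.c code; the function and parameter names are illustrative:

```c
#include <unistd.h>

/* Close every listening socket except the one select() reported as
 * ready, keeping only the matched descriptor for the transfer.
 * Closed slots are marked with -1, not 0, because 0 is a valid
 * file descriptor. */
static void close_all_except(int socks[], int num_socks, int keep)
{
    for (int i = 0; i < num_socks; i++) {
        if (socks[i] >= 0 && socks[i] != keep) {
            close(socks[i]);
            socks[i] = -1;      /* -1 means "no socket", unlike fd 0 */
        }
    }
}
```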
> > Or we make a new socket to use just to send the error.
> That would work, and the overhead is only there when the error
> condition happens.
> It seems to me it would be simpler to check the opcode and send the
> error response before closing the sockets. It's simple and keeps the
> parallelism, but it doesn't fix the port-unreachable problem (if it is
> indeed a problem in the real world).
It's a small window, but no less present. That's arguably the worst sort of
failure - stuff that works 99% of the time, but inexplicably not the other 1%.
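For reference, the error reply in question is a TFTP ERROR packet (opcode 5, per RFC 1350). A helper to build one, so it could be sent with sendto() on a fresh socket before the listening sockets are torn down, might look like this; the helper name is ours, not something from tftp_server.c:

```c
#include <stdint.h>
#include <string.h>

/* Build a TFTP ERROR packet: 2-byte opcode (5), 2-byte error code,
 * NUL-terminated message.  Returns the packet length in bytes. */
static int build_tftp_error(uint8_t *buf, uint16_t code, const char *msg)
{
    buf[0] = 0;
    buf[1] = 5;                        /* opcode 5 = ERROR */
    buf[2] = (uint8_t)(code >> 8);     /* error code, network byte order */
    buf[3] = (uint8_t)(code & 0xff);
    size_t len = strlen(msg);
    memcpy(buf + 4, msg, len + 1);     /* message plus trailing NUL */
    return (int)(4 + len + 1);
}
```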
> > A proper solution would involve a master thread, and a number of
> > child threads.
> That's the first solution that occurred to me, but it's a lot of
> rework. I don't use the tftp support and don't use filesystem
> support, so it would be pretty far down my list of things to do (to be
> honest, it would probably never make it to the top).
> > But maybe a good enough solution which isn't too far from current is
> > to make the socket non-blocking, and just get threads to loop back
> > to the select if they wake up and there's no data after all
> > (recvfrom() returns -1 with errno set to EWOULDBLOCK).
> That's probably the easiest way to fix the port-unreachable problem,
> but it does have the overhead of waking all the threads for every packet.
I think that's a small price to pay. Most likely the number of threads would be
very small anyway.
> Another solution that occurred to me is to have a set of threads for
> each socket (IOW for each protocol). Leave the sockets blocking and
> keep them open the whole time. All the threads make blocking read
> calls, and only one wakes up for each packet. It's all nice and
> simple, but you need more thread instances _if_ ipv6 is enabled.
That's not too bad. Although the code may get messy when still retaining the
non-multi-threaded case. But if this was implemented, it wouldn't be a huge
deal to insist in CDL that the number of threads is an even number if ipv6 is
enabled (in the multi-threaded case, i.e. >1). Or even have the user set the
number of ipv4 threads and ipv6 threads separately.
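The CDL constraint could look something like the fragment below. The option name and exact expression are invented for illustration (and may not match what is in the eCos sources), but this is the general shape:

```
cdl_option CYGNUM_NET_TFTPD_THREAD_COUNT {
    display       "Number of TFTP server threads"
    flavor        data
    default_value 1
    # Hypothetical constraint: with IPv6 enabled and more than one
    # thread, insist on an even count so the v4 and v6 sockets get
    # the same number of threads.
    requires      { !CYGPKG_NET_INET6 || CYGNUM_NET_TFTPD_THREAD_COUNT == 1 ||
                    (CYGNUM_NET_TFTPD_THREAD_COUNT % 2) == 0 }
}
```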
I think I would prefer the non-blocking socket and blocking select() option
though, as IMO the overall penalty of occasionally waking multiple threads
unnecessarily is much less than the overall penalty of having extra threads.
> Having a single thread doing the select()/read() and passing the
> incoming packets to a pool of worker threads would be a bit more
> elegant, but it's also a bit more complicated, and you pay that extra
> overhead even when you only have one protocol enabled.
Indeed. Let's not go that route.
Configure bugmail: http://bugs.ecos.sourceware.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.