This is the mail archive of the ecos-bugs@sourceware.org mailing list for the eCos project.



[Bug 1001522] Array index out of bounds in tftp_server.c


Please do not reply to this email. Use the web interface provided at:
http://bugs.ecos.sourceware.org/show_bug.cgi?id=1001522

--- Comment #8 from Grant Edwards <grant.b.edwards@gmail.com> 2012-03-12 14:31:57 GMT ---
(In reply to comment #7)
> (In reply to comment #6)


>>> Oh true. Yes you have to close them all _except_ the one that
>>> matched select.
>> 
>> Then you can have parallel operations only when requests are coming in
>> simultaneously on different protocols.
> 
> Well, my idea was (theoretically) that you could still open a new
> socket while still using the previous one to talk to the remote
> host.

Ah.  I had assumed that wasn't possible (and that's why the sockets
were being closed in the first place).


>>> But maybe a good enough solution which isn't too far from current
>>> is to make the socket non-blocking, and just get threads to loop
>>> back to the select if they wake up and there's no data after all
>>> (recvfrom() returns -1 with errno set to EWOULDBLOCK).
>> 
>> That's probably the easiest way to fix the port-unreachable
>> problem, but it does have the overhead of waking all the threads
>> for every packet.
> 
> I think that's a small price to pay. Most likely the number of
> threads would be very small anyway.

True.  Efficient but broken is a bit of a false economy.

>> Another solution that occurred to me is to have a set of threads for
>> each socket (IOW for each protocol).  Leave the sockets blocking and
>> keep them open the whole time.  All the threads make blocking read
>> calls, and only one wakes up for each packet.  It's all nice and
>> simple, but you need more thread instances _if_ ipv6 is enabled.
> 
> That's not too bad, although the code may get messy if it still has to
> retain the non-multi-threaded case. But if this were implemented, it
> wouldn't be a huge deal to insist in CDL that the number of threads is
> an even number if ipv6 is enabled (in the multi-threaded case, i.e. >1).
> Or even have the user set the number of ipv4 threads and ipv6 threads
> separately.

Right.  I had assumed that the minimal case would be one thread per
socket (protocol) -- the thread code would be the same for all cases
(do a blocking read and handle the received packet).  How the threads
are apportioned amongst the protocols would be a new question.
Letting the user set the number of threads for each protocol would
probably be the simplest.

> I think I would prefer the non-blocking socket and blocking select()
> option though as IMO the overall penalty for waking multiple threads
> unnecessarily occasionally is much less than the overall penalty of
> having extra threads around unused.

Compared to the pool-per-socket idea, that approach doesn't require
any CDL changes and would be more transparent to current users, so it
sounds like the best option.

-- 
Grant

-- 
Configure bugmail: http://bugs.ecos.sourceware.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.

