This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project.
- From: Linus Torvalds <torvalds@transmeta.com>
- To: "Thomas Bushnell, BSG" <tb@becket.net>
- Cc: Roland McGrath <roland@frob.com>, Kaz Kylheku <kaz@ashi.footprints.net>, Russ Allbery <rra@stanford.edu>, <libc-alpha@sources.redhat.com>
- Date: Wed, 9 Jan 2002 19:02:37 -0800 (PST)
- Subject: Re: [libc-alpha] Re: [open-source] Re: Wish for 2002
On 9 Jan 2002, Thomas Bushnell, BSG wrote:
> Linus Torvalds <torvalds@transmeta.com> writes:
>
> > Oh, lots of people have. A number of the people who have not upgraded from
> > the "old" library are refusing to upgrade to glibc exactly because it
> > makes their systems slower.
>
> I'm not sure we can compare libc5 to libc6 on the basis that libc6 has
> more functions, or a larger memory footprint.
I'm saying that the two go hand in hand.
If you add functions eagerly because adding functions is "cheap" (like you
claimed), then you WILL have a big footprint. And a big footprint is bad
on small machines because it slows the whole machine down, both from a
loading standpoint and from a resource-use standpoint (TLB, cache, memory,
etc.).
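As a rough illustration (a Linux-specific sketch, not anything from glibc
itself), you can see that baseline cost by printing how many pages a
do-nothing program already has mapped and resident before it has done
anything useful:

#include <stdio.h>

/* Sketch: the first two fields of /proc/self/statm are the total
 * mapped size and the resident set size of this process, in pages.
 * At this point the program has done no real work, so those pages
 * are essentially the cost of being linked against its libraries. */
int main(void)
{
	unsigned long size, resident;
	FILE *f = fopen("/proc/self/statm", "r");

	if (!f || fscanf(f, "%lu %lu", &size, &resident) != 2) {
		fprintf(stderr, "could not read /proc/self/statm\n");
		return 1;
	}
	fclose(f);
	printf("mapped: %lu pages, resident: %lu pages\n", size, resident);
	return 0;
}

Link the same trivial program against different libraries and the
difference in those two numbers is roughly the footprint being argued
about.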
> (Someone with really old hardware may have independent reasons for not
> wanting a large library, for example, because it no longer fits in
> core at once; but that's not relevant here, I think. It's a separate
> issue.)
No, it's the same issue. And it happens with _new_ hardware too. The
palmtops of today are not actually all that different from the desktops of
5-10 years ago.
If you think "performance" is just CPU cycles, you're very wrong.
Even on the kinds of machines _I_ have (i.e. multiple CPUs, gigabytes of
memory, 700-2000 MHz), the single slowest thing for many programs I run is
actually program startup and tear-down, which is very much proportional to
the size of the thing, especially the number of pages touched.
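A crude way to see that (again just a sketch, using nothing more exotic
than getrusage()) is to count how many page faults a do-nothing program
has already taken by the time it reaches main():

#include <stdio.h>
#include <sys/resource.h>

/* Sketch: ru_minflt counts the minor (soft) page faults taken so far,
 * i.e. roughly the pages that had to be touched just to get this
 * process from execve() to main().  A fatter library means more pages
 * touched before any useful work happens. */
int main(void)
{
	struct rusage ru;

	if (getrusage(RUSAGE_SELF, &ru) != 0) {
		perror("getrusage");
		return 1;
	}
	printf("minor faults before main(): %ld\n", ru.ru_minflt);
	return 0;
}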
A lot of library "tuning" is done by timing loops of something for
millions of iterations. But the UNIX way of doing things (and the GUI way
of doing things) tends to be the other way around - you load something,
you do something, you unload it.
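Here's a throwaway sketch (not from any real benchmark suite, and the
numbers are only illustrative) that times things both ways - once as a
tuning-style hot loop, once as a load/run/unload cycle through fork+exec
of a trivial program:

#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static double now(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
	char buf[256];
	volatile size_t total = 0;
	double t;
	int i;

	memset(buf, 'x', sizeof buf - 1);
	buf[sizeof buf - 1] = '\0';

	/* "Library tuning" style: a million iterations, warm caches.
	 * (A serious benchmark would also have to stop the compiler
	 * from hoisting the call out of the loop.) */
	t = now();
	for (i = 0; i < 1000000; i++)
		total += strlen(buf + (i & 1));
	printf("hot loop:  %.3f us per call\n", (now() - t) * 1e6 / 1000000);

	/* Real-world style: start a process, let it run, tear it down.
	 * /bin/true here is just a stand-in for any tiny program. */
	t = now();
	for (i = 0; i < 100; i++) {
		pid_t pid = fork();
		if (pid == 0) {
			execl("/bin/true", "true", (char *)NULL);
			_exit(127);
		}
		waitpid(pid, NULL, 0);
	}
	printf("fork+exec: %.3f ms per run\n", (now() - t) * 1e3 / 100);
	return 0;
}

The first number looks free; the second one is where the pages actually
get touched.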
Cache misses, page-fault handlers, etc. are quite noticeable. It gets
worse if you have to do I/O.
Those milli- and micro-seconds really add up. And they do not do _any_
real work.
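Just to put order-of-magnitude numbers on it (these are assumed, not
measured):

  300 extra pages touched x ~2 us per soft fault  =  ~0.6 ms extra per exec
  a script or build doing 5000 execs              =  ~3 seconds of pure overhead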
> > I see embedded people complaining quite often, and there are at least
> > three different "small libc" projects going on exactly because glibc
> > simply is too big for many people (ulibc, dietlibc and something I
> > forget).
>
> But that suggests, indeed, that glibc and ulibc (et al) simply fulfil
> separate niches.
My point is that it is NOT free to just add code to libraries. There's a
huge cost involved. And glibc _is_ bloated already, so I suspect your
strongest argument is "it's already too damn fat for people who care, so
why not ignore it?".
I just don't believe in that kind of argument.
Linus