


eCos and Quality [was: How do you like eCos]


Hi Lewin,

You mentioned concerns about the quality of code you just pick up off the net,
and about where to start development in an actively developed source base.
That is a valid question, so this seems like a good point for us at Red Hat to
share some of our long-term plans with you, the eCos users, and to bounce some
ideas off you, so that we can provide something useful and some assurance of
the quality of the code you are getting.

Testing
=======
This is no secret, but it is not particularly publicised.  We have an eCos
test farm consisting of numerous development platforms; some are publicly
available development or evaluation boards, and the rest are
customer-confidential platforms.  These boards run tests 24x7, currently
covering roughly 80,000 tests per day across almost 50 platforms.  They are
the same tests we ship with the sources, but are built and run in almost 200
different configurations.  Around 15 PCs constantly build and run these tests
automatically, from the latest sources or from customer releases.
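
If you want to reproduce a small slice of this yourself, the tests we ship
can be built from any eCos checkout with the ecosconfig tool.  A rough
sketch, where TARGET stands in for your board's target name ("ecosconfig
list" shows what is available):

    $ ecosconfig new TARGET     # create a configuration for your board
    $ ecosconfig tree           # generate the build tree
    $ make -s                   # build the eCos library
    $ make -s tests             # build the test executables under install/tests/

The resulting executables are then downloaded to the board and run via the
GDB stubs or RedBoot, which is essentially what the test farm automates.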

Releases
========
As you would expect, we use a version control system internally.  No guesses
there.  We have a main code "trunk" that is used for development and
contributions, so the trunk can be unstable at times when there is a lot of
development going on or when radical changes (such as EL/IX support) are
happening.  For paying customers, we work off a branch.  This branch is
derived from the trunk, but is stabilised to zero unknown test failures before
it is released to a customer.  In addition, paying customers get the benefit
of manual testing and quality assurance (QA): we have a 25-page checklist that
has to be completed for each release, covering everything from documentation
and installation, through configuring and building eCos, to running tests and
example applications, and building, installing and testing the GDB stubs or
RedBoot.

For anonymous CVS, the code you get comes straight out of the trunk, normally
on a weekly basis, and it does not have the level of testing and QA that you
would see in the commercially supported versions.  On the other hand,
anonymous CVS gives you code that is normally no more than a week old and is
the same code our engineers are developing against.  The cost, as you pointed
out, is that you run the risk of instability: there is no guarantee that what
you download will work on the platform of your choice.
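
For reference, an anonymous checkout looks something like the following; the
server, repository path and module name here should match the instructions
published on the eCos web pages, but do check those pages in case anything
has moved (the password for the anoncvs account is simply "anoncvs"):

    $ cvs -d :pserver:anoncvs@sources.redhat.com:/cvs/ecos login
    $ cvs -d :pserver:anoncvs@sources.redhat.com:/cvs/ecos checkout ecos

A later "cvs update" in the checked-out tree pulls in whatever has been
released since.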

The quality of anonymous CVS is not as bleak as I paint it above.  Our eCos
test farm results are monitored daily for new, unexpected problems, and
normally an engineer is dispatched to fix any that appear.  Depending on that
engineer's workload and schedule, the problem may or may not be fixed by the
time the code is released into anonymous CVS.  So one week CVS may be
unstable, and for the next four it may be stable again.  We do our best to
release only stable code into anonymous CVS, but there can be no guarantees.

Naturally, for customer releases, an engineer is assigned time to fix a
problem.  This is done on a priority basis: first into the customer's release
branch, and then back into the trunk (if the fix is generic and applicable).

Test Results
============
To get around this problem, we are thinking of publishing the test farm
results for publicly supported platforms against the anonymous CVS releases.
We have a fair way to go before these can be made public, since our "result
tabulator" is also automated and needs some work to keep customer platforms
confidential :-)  And of course we need some development time to make these
modifications, since we are not getting paid for publishing the results
either :-/  This is more of a "public service".

Our aim will then be to keep a set of test results for every CVS release, so
you can pick whichever release you feel is most stable for your platform.
For example, the ARM port may be stable in one CVS release while the PowerPC
port is unstable due to new enhancements that have yet to be debugged.  You
can then decide which tagged CVS release is best for you.  And of course you
are welcome, indeed encouraged, to fix any problems thrown up by the automated
testing yourselves.
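
Picking a release would then just be a matter of checking out the tag the
results recommend.  The tag name below is purely hypothetical; the real names
would come from the published results:

    $ cvs -d :pserver:anoncvs@sources.redhat.com:/cvs/ecos \
          checkout -r SOME_WEEKLY_TAG ecos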

While we are providing this as a public service, we unfortunately cannot
guarantee the service.  For example, we cannot specify which platforms'
results will be published, when, or for how long.  The reasons are simple.
Development and evaluation platforms come and go, as do customers, and our
emphasis has to be on our customers and supporting them.  So if the resources
in the eCos test farm have to be used exclusively for customer releases, we
can withdraw them from "public service" ;-).  Similarly, if we need a board
for internal training, testing etc., we reserve the right to withdraw it from
service.  The same applies if a board dies and we cannot get or fund a
replacement (these boards cost money, you know) :-(

Public access
=============
In the even longer term, we are thinking of making some publicly available
development and evaluation platforms accessible remotely.  This means you
would be able to evaluate eCos not only by playing with the sources and the
tools, but also by running and debugging executables on real hardware.  Of
course, most of you can do this on the Linux synthetic target already, but
that is more a development target than real embedded hardware.
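
One nice property of the synthetic target is that the whole test cycle runs
on the host.  A sketch, assuming the target is still called "linux" in your
checkout:

    $ ecosconfig new linux      # configure for the synthetic target
    $ ecosconfig tree
    $ make -s && make -s tests
    $ ./install/tests/...       # the tests are ordinary Linux executables

No download, no serial cables; the test executables just run, and can be
debugged with native gdb.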

As with the Test Results, this service will not be guaranteed.

Toolchains
==========
Of course, this level of testing does not provide any assurance that the
toolchain you download off the net is doing the right thing, nor say anything
about how stable that toolchain is.  At Red Hat, we do have similar procedures
and testing in place for our commercially supported toolchains, which are run
through a similar quality assurance process.  As with eCos, the GNU toolchain
includes numerous tests that you can run yourself to verify its stability.  I
have no idea whether the FSF runs a similar scheme in which toolchain test
results are generated automatically and published on the net.
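
For the GNU tools, those tests are driven by the DejaGnu framework.  From a
configured and built toolchain tree, running the testsuite is normally just:

    $ make check                # or "make check-gcc" for the compiler alone
    $ less gcc/testsuite/*.sum  # PASS/FAIL/XFAIL summaries

You will need DejaGnu (plus Tcl and Expect) installed, and the exact location
of the .sum summary files varies between releases.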

Hope this helps explain a few things.

Cheers
-- Alex
