Voting Booth/Poll

Bill Maltby, LFS Organizational bill at nospam.dot
Sun Nov 9 06:44:32 PST 2003


On Sun, 9 Nov 2003, Jeroen Coumans wrote:

> Hi Bill Maltby, LFS Organizational. You said the following on 11/08/03
> 16:04:
> > Not sure if there would be 1) enough interest by the dev'ers, or 2)
> > enough use from the user community.
>
> With the QA team and toolchain team formalized, wouldn't they know best
> when a release is ready? It seems to me you're problematising something
> (a release) which has never been a problem before.

Only if you ignore standard procedures that are somewhat pervasive in
the industry, and the fact that we have a very diverse environment.

Before the "theoretical" considerations, just look at the reality of
posts after "release" from both LFS and BLFS. Considering only LFS,
where we have QA and test teams, we still have typos, misunderstandings
by the user, and things that fail to work because of the age of the host
or whatever.

And we can't tell if they are important because we have no idea how
many people have tried or are trying the product. "Should we address
them or not" may be partly a function of "what percentage of our users
seem to have an issue with this?". Well, we can't tell.

Now the "theoretical" side. We, as editors, testers, regular readers of
the list/CVS/chat, ... are constantly exposed to the concepts, specifics
and solutions to various issues. By the time we are considering
"release", our judgements are influenced by all that has gone before. We
assess the "readiness" through the "filter" of all this recent
experience. We now "cannot see the forest for the trees". We are too
close to the product to be assured our judgement is sound.

Think of how many times you have reviewed a sentence and failed to see
an error that is noticed in a half-second by someone else. You see what
you meant to write rather than what you actually wrote. The more complex
the item, the more likely this is. That's why formal "code reviews"
have proven so useful in the software industry.

The development model that we follow now is much less formal and
rigorous in terms of advanced planning, design and review than it was
when computers were expensive and the 'net was non-existent. We depend
on a "slash and burn and release and wait for feedback" model. We plan
only for near-term goals, ignoring long-term consequences. We are
subject to rapidly changing demands imposed by the environment of our
product (fast application changes, user-base changes, other technology
changes). True QC is near impossible in this environment (W. Edwards
Deming's process would probably not work well here).

The response to this, to try and get some sort of genuine "quality", is
to play the numbers game. For each of these steps, we depend on having a
"large number" of people participating: planning, reviewing, designing,
implementing, testing, posting "results" and concerns.

What is a "large number"? Does it matter? Maybe not. But what does
matter is that if we choose to follow the paradigm I've described, we
should *at least* be aware (roughly) of what numbers are involved. This
allows us to make some kind of reasoned estimate of "have we got enough
input to have some confidence in our judgement?".
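
To illustrate the kind of reasoned estimate I mean (purely a
back-of-the-envelope sketch with made-up numbers, not anything we have
in place): if a poll told us how many people tested a release and how
many of them hit a given problem, even a rough confidence interval
around that fraction would show how much our "confidence" depends on
the number of responses.

  import math

  def issue_rate_interval(hit, tested, z=1.96):
      """Normal-approximation 95% interval for the fraction of
      testers reporting an issue. Rough, but fine for a SWAG."""
      p = hit / tested
      half_width = z * math.sqrt(p * (1 - p) / tested)
      return max(0.0, p - half_width), min(1.0, p + half_width)

  # Hypothetical numbers: 6 of 20 testers vs. 60 of 200 testers.
  print(issue_rate_interval(6, 20))    # wide interval, low confidence
  print(issue_rate_interval(60, 200))  # same rate, tighter interval

With 20 responses the interval is roughly 0.10 to 0.50; with 200 it
narrows to roughly 0.24 to 0.36. Same observed rate, very different
grounds for a decision; that is all the poll numbers would buy us.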

With *no* numbers of any kind, we live on our "gut instincts". Those
with more experience may or may not have better "instincts". Even so,
there will be instances of mis-judgement that might have been prevented
by having some numbers to look at and say "do these seem to lend support
to my judgement?". With *some* numbers, we move from "gut instinct" to
SWAG (Scientific Wild-Assed Guess) status, which is marginally better
than "gut instinct".

Regardless of all that, numbers may prove useful in another way. If we
have some idea of how many have tested, we may want to accelerate or
decelerate a release, or some other activity, because we see a
large/small number of folks have tested.

Do I make a problem out of what was not a problem? Could be. But maybe I
only see a *potential* problem that has not been publicly discussed yet.
This might be due to different POVs, background, objectives or
experiences. Or it could be different standards of "acceptable level of
quality", whatever that means to each of us.

-- 
Bill Maltby,
LFS Organizational
billATlinuxfromscratchDOTorg
Use fixed above line to mail me direct


