New design, what is the Goal here?

pak_lfs at
Wed Nov 30 01:52:41 PST 2005

rl at wrote:
> On 2005-11-29 18:24:02 +0000, Kendrick wrote:
> > rl at wrote:
> > >I cannot see the advantage of creating yet another network
> > >authentication protocol.

We (at least I) cannot see such an advantage either. That's why I stated in
the requirements:

"4. The protocol must be largely based on existing solutions as much as 
possible, in order to be implementable. We don't want to reinvent TLS, as I 
don't think we would improve it. On the other hand, we want to keep the 
number of external dependencies as small as possible (most probably, at most 

OTOH, if we document and agree on a set of special simplifying
assumptions/conditions that hold in our case, but not in the general
case these protocols were designed for, maybe we can make the
*paraphernalia* of these protocols (e.g. key/cert management for SSL/TLS)
 *much* easier.
> openssh is open source. It uses openssl for authentication and
> encryption. Asymmetric algorithms are used to exchange a secret
> that allows less computationally intensive symmetric algorithms
> to transport the protocol.

Of course, the protocols SSH and SSL/TLS have *nothing to do with each other*.
SSH is not layered over SSL; they sit at the same level and are not
interoperable. They just have so much in common that openssh is layered over
openssl, even though the protocols themselves don't share that relationship.
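To make the hybrid idea concrete, here is a toy sketch in Python (stdlib only) of what both protocols do at heart: an asymmetric exchange (Diffie-Hellman here, with deliberately tiny parameters, absolutely not secure) agrees on a secret, and a cheap symmetric cipher (an XOR stream as a stand-in for AES) carries the bulk data:

```python
# Toy illustration of the hybrid scheme SSH and TLS both use:
# an asymmetric exchange agrees on a secret, then cheap symmetric
# crypto carries the bulk traffic.  Tiny numbers, NOT secure.
import hashlib

P = 0xFFFFFFFB  # small prime modulus, for illustration only
G = 5

def dh_public(private: int) -> int:
    return pow(G, private, P)

def dh_shared(private: int, other_public: int) -> bytes:
    secret = pow(other_public, private, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # stand-in for a real symmetric cipher such as AES
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

a_priv, b_priv = 1234, 5678
a_pub, b_pub = dh_public(a_priv), dh_public(b_priv)
key_a = dh_shared(a_priv, b_pub)
key_b = dh_shared(b_priv, a_pub)
assert key_a == key_b            # both sides derive the same key
msg = b"hello, transport layer"
assert xor_stream(key_b, xor_stream(key_a, msg)) == msg
```

The point being: the expensive asymmetric step happens once, then everything rides on the symmetric key.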

> The symmetric algorithms would be susceptible to plain text attack:
> Before ssh got more complicated, each time you pressed a button,
> one character is echoed back - unless you pressed enter, in which
> case lots comes back. This told crackers which bytes in the stream
> were 'enter', so they had some knowledge of the plain text. Over
> time they could collect enough knowledge to calculate the secret used
> for the session. Ssh now sends some random data at random times
> so crackers cannot tell when you pressed enter.

All this stuff (random padding, HMAC etc.) and most other low-level
transport details are mostly shared between SSH/SSL and are *library code*.
There has been no suggestion (at least none I've seen) to make any change
to this code.
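As a sketch of the kind of library-level mechanics being discussed, random-length padding plus an HMAC over each packet looks roughly like this (an illustration of the idea only, not the actual SSH wire format):

```python
import hashlib
import hmac
import os
import struct

MAC_KEY = os.urandom(32)  # per-session MAC key, illustrative

def frame(payload: bytes) -> bytes:
    # Pad to a random length so packet sizes leak nothing about
    # keystrokes, then append an HMAC over the whole body.
    pad = os.urandom(os.urandom(1)[0] % 16 + 4)
    body = struct.pack(">HH", len(payload), len(pad)) + payload + pad
    return body + hmac.new(MAC_KEY, body, hashlib.sha256).digest()

def unframe(packet: bytes) -> bytes:
    body, mac = packet[:-32], packet[-32:]
    good = hmac.new(MAC_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, good):
        raise ValueError("bad MAC")       # tampered or corrupted
    plen, _padlen = struct.unpack(">HH", body[:4])
    return body[4:4 + plen]

assert unframe(frame(b"\r")) == b"\r"     # a lone 'enter' keystroke
```

Two frames of the same one-byte payload come out with different lengths, which is exactly what defeats the keystroke analysis described above.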

> Are you beginning to see why I would have no confidence in the
> security of a home brewed protocol? Instead of re-inventing ssh, or
> ripping bits of its source code into your project, just use ssh.

I still don't see in what way a system that took the SSL/TLS transport layer
100% with absolutely no change and just added RSA-like authentication on
top, plus easier key/cert management, would be a "home brewed protocol".
Nobody "reinvents" anything, nobody even "modifies" anything. We are just
taking ready-made blocks and *integrating* them into our application.
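A rough stdlib sketch of that layering: the TLS transport is taken as-is from Python's ssl module, and a challenge-response authentication step sits on top of it. HMAC with a pre-shared per-node secret is used here as a stand-in for the RSA-like step, and all the names are mine, not a proposal:

```python
import hashlib
import hmac
import os
import ssl

# 1. Transport: stock TLS from the ssl module, used unmodified.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# 2. Authentication layered on top: challenge-response.
#    (HMAC as a stdlib stand-in for an RSA signature.)
node_key = os.urandom(32)  # hypothetical pre-shared per-node secret

def make_challenge() -> bytes:
    return os.urandom(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    return hmac.compare_digest(response, respond(challenge, key))

c = make_challenge()
assert verify(c, respond(c, node_key), node_key)          # right key
assert not verify(c, respond(c, os.urandom(32)), node_key)  # wrong key
```

Nothing in the transport layer is touched; the application merely refuses to talk business until the challenge is answered.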

> I think this adds a huge amount of pointless complexity. Imagine that
> three of the 90 machines you want to update are turned off. An email
> based system would use e-mail's built-in store-and-forward abilities.
> An http/cron solution would also let those machines catch up in their
> own time.

So, you are just saying that there are well-known solutions to well-known
problems. There is nothing stopping us from choosing the solutions that best
fit our use cases. Please mention all your (potential or actual) issues so
that they are documented and eventually accounted for. From the user's
perspective, it really doesn't matter much how things are done "under the
hood" as long as the job gets done.
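For instance, store-and-forward is exactly such a well-known problem with a small well-known solution; a toy spool directory per target host might look like this (hypothetical names, just to show the shape):

```python
import json
import os
import tempfile
import time

class Spool:
    """Queue one job file per target host; offline hosts catch up later."""

    def __init__(self, root: str):
        self.root = root

    def enqueue(self, host: str, job: dict) -> None:
        d = os.path.join(self.root, host)
        os.makedirs(d, exist_ok=True)
        name = "%d.json" % time.time_ns()   # monotonic-ish file name
        with open(os.path.join(d, name), "w") as f:
            json.dump(job, f)

    def drain(self, host: str) -> list:
        d = os.path.join(self.root, host)
        jobs = []
        for name in sorted(os.listdir(d)) if os.path.isdir(d) else []:
            path = os.path.join(d, name)
            with open(path) as f:
                jobs.append(json.load(f))
            os.remove(path)                 # delivered: drop from spool
        return jobs

root = tempfile.mkdtemp()
s = Spool(root)
s.enqueue("host-17", {"action": "update", "pkg": "glibc"})
assert s.drain("host-17") == [{"action": "update", "pkg": "glibc"}]
assert s.drain("host-17") == []             # spool empty after catch-up
```

A machine that was off simply drains its directory when it comes back; no mail system is required for that behaviour.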

> ssh is fine on a home network. Maintaining the keys over a large
> network does not sound like fun.

But perhaps we could use some special simplifying conditions to make key
management easier here too.
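One such simplifying condition could be a single cluster-wide admin key, which collapses key management to idempotently appending one known line on each node (a hypothetical sketch, not a proposal for the actual layout):

```python
def merge_key(authorized_keys: str, admin_pubkey: str) -> str:
    """Add the single cluster-wide admin key, idempotently.

    Simplifying assumption: all managed nodes trust one admin key,
    so 'key management' reduces to appending one known line.
    """
    lines = [l for l in authorized_keys.splitlines() if l.strip()]
    if admin_pubkey not in lines:
        lines.append(admin_pubkey)
    return "\n".join(lines) + "\n"

KEY = "ssh-rsa AAAAB3...example admin-key"  # placeholder key text
once = merge_key("", KEY)
twice = merge_key(once, KEY)
assert once == twice == KEY + "\n"          # safe to run repeatedly
```

Re-running the merge never duplicates the key, so pushing it from cron is harmless.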

> By including authentication in alfs you are either adding a restriction
> like "Use kerberos", or you are adding complexity: learn how to
> configure alfs to work with PAM.

You are just forgetting that not everybody uses PAM (and sometimes
for good reason, since if you don't actually study the beast carefully
you might be making things *less* secure with PAM). Every action that
adds convenience or a feature to anything takes away some flexibility.
It's a fact of life and one of the most important lessons I've learnt
in engineering school.

Amoebae and bacteria are *much* more flexible organisms than
a human, but they don't have much functionality, do they?
At the end of the day, only the environment ("users") through natural
selection can decide whether a feature, or the flexibility it removes
(or both!) are more important.

> If instead, alfs is just a program for installing programs on the local
> machine, you can use ssh methods to extend it to work on a remote
> machine - and the knowledge you gain from ssh can be applied to control
> other tools with ssh. Likewise, a set of mailing-list sending signed
> scripts teaches people how to run a mailing list, how to use gpg, and
> how to deliver email to programs.

I acknowledge the importance of education. After all, the central LFS project
is a *book* not a set of scripts. Yet, what's the educational value of needing
to set up a mailing list just to do system administration (a conceptually 
different task), especially if the administered system has no need for
mailing lists? Also, please don't forget the requirement to minimize 
dependencies. Several LFSers may not like the idea of *having* to install
a bunch of unneeded programs to do a *seemingly unrelated* task.
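For what it's worth, the verify-before-run control flow itself is tiny whatever tool provides the signatures; a sketch with HMAC standing in for gpg's detached signatures (the key handling here is purely illustrative):

```python
import hashlib
import hmac

TRUSTED_KEY = b"shared-admin-secret"   # stand-in for a gpg keyring

def run_if_signed(script: bytes, signature: bytes) -> bool:
    """Execute the profile only if its detached signature checks out.

    HMAC is used here as a stdlib stand-in for gpg's detached
    signatures; the control flow is the same either way.
    """
    expected = hmac.new(TRUSTED_KEY, script, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        return False           # reject unsigned / tampered input
    # ... hand the script to the alfs engine here ...
    return True

script = b"<alfs><unpack>gcc-4.0.2.tar.bz2</unpack></alfs>"
good = hmac.new(TRUSTED_KEY, script, hashlib.sha256).digest()
assert run_if_signed(script, good)          # valid signature: run
assert not run_if_signed(script + b" ", good)  # one byte off: reject
```

So the signature check does not, by itself, force a mailing list into the picture.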

> You can make alfs into an emacs/netscape product with a built-in mail
> user agent, a built-in editor, a built-in coffee maker.

No way, or over my dead body (kidding :).

> I think you 
> would be better off using unix philosophy: a program should do one
> thing, and do it well. Make alfs an excellent tool for installing
> software on the local machine. Then you can use another tool that
> is specialised at working remotely, and another tool that is
> specialised at maintaining lists, and another tool that is specialised
> at handling store and forward and so on. That way you can pick the
> best components and glue them together to meet your needs - instead of
> having to work around a built-in feature that is alright, but not the
> best in the field.

Reusing code from available *libraries* is just as much "unix philosophy",
IMO. Also, it does not seem (implementation-wise) feasible to decouple
the network part of alfs into a separate *executable*. What seems more
doable is to implement parts like the ui, engine and network as
loosely-coupled "components" that can be composed at compile time to build
your own "dream" alfs (standalone or client/server).
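A toy sketch of what such composition could look like, with entirely hypothetical component names (Python here just to show the shape; the real thing would be compile-time selection):

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical component layout: the engine is always present; the
# network layer is an optional wrapper chosen when building.

def engine(profile: str) -> str:
    return "ran " + profile          # stand-in for the real engine

def network_client(send: Callable[[str], str], profile: str) -> str:
    return send(profile)             # forwards the job to a server

@dataclass
class Alfs:
    run: Callable[[str], str]

def build(networked: bool, send: Optional[Callable] = None) -> Alfs:
    if networked:
        return Alfs(run=lambda p: network_client(send, p))
    return Alfs(run=engine)

standalone = build(networked=False)
clientserver = build(networked=True, send=engine)  # loopback "server"
assert standalone.run("base-system") == "ran base-system"
assert clientserver.run("base-system") == "ran base-system"
```

Either composition presents the same `run` interface, so the rest of the program never cares which variant was built.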



More information about the alfs-discuss mailing list