al593181 at mail.mty.itesm.mx
Fri Jan 25 22:00:53 PST 2002
Well, after all the discussions I've been in, I've come up with some thoughts;
maybe you'd like to hear them.
In summary these are my ideas:
* Grouping
I think grouping is essential in order to control exactly where we
are, which commands are going to be executed, how to save the current
state, how to restore from where we left off, etc. No scattered
commands should lie around; it would be like seeing just a bunch of
commands from the LFS book without knowing which chapter, and which
part of the chapter, they belong to: you wouldn't know whether to
type those commands or not.
* The Profile's Limits
In my opinion the XML profile should not be a bible of everything to
do with a package; there should be some limits, with tasks
distributed between the profile and the implementation. If we add
tags for each and every feature we want to implement, like <if> or
<remote> or stuff like that, the usefulness of XML is going to be
misunderstood and we'll end up with another project like a bash
script that assembles .asm files.
If we can agree here, then lots of stuff can move from profile
space to code space; it will all depend on the kind of
functionality, whether it's data or code. Something very important
that I think the current profiles and implementations are lacking is
the relationship between data and code. For example, a global <base>
tag for the whole profile would mean you don't have to type it
everywhere, only where a package's <base> is not the profile's one;
the code would take this information and act in a specified way.
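As a concrete sketch of that data-code relationship, a profile-wide <base> with a per-package override might look like the following. The tag and attribute names here are my own invention for illustration, not an agreed-upon schema:

```xml
<profile base="/usr/src">
  <!-- no base attribute: inherits the profile-wide /usr/src -->
  <package name="binutils"/>
  <!-- states a base only because it differs from the global one -->
  <package name="kernel" base="/usr/src/linux"/>
</profile>
```

The code reading the profile would resolve each package's base by falling back to the profile-wide value, so the data stays minimal and the fallback logic lives in code space.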
I think a single XML profile for everything is not good. Right now
it seems to be the best solution, and most things can be done fine
with just that information. But if we envision the future, we'll
probably see that a single place for everything is not good: it's
hard to maintain, extend, design, and even read. Splitting up the
package information seems natural to me; I mean, you have your
sources, the setup instructions, and some miscellaneous information
like author, homepage, license, etc. Obviously the code to handle
each kind of information is different.
Think about the preparation part, where all the sources are
extracted, patched, etc. What if I already have the sources and just
want to run the setup instructions, for example because I downloaded
the code from CVS? We are assuming that we should always <unpack>
something, and if not, then we have to change the profile, just
because I want an alternative mode of operation.
So I think the preparation part (building the source tree) should be
handled differently from the setup instructions; I consider them
separate stages.
* Possible Implementations
From my point of view there are four possible kinds of implementation:
1. The XML profile is translated to bash, then executed.
2. The profile is translated to shell commands, and each one is
piped to a coshell that executes it and returns the status.
3. Each command is translated to a shell command and simply executed
with system() or something similar.
4. Each command is handled directly by a system call or library function.
Each one of them has its pros and cons, and it all depends on the
language of the implementation. The implementations will probably
have to use some sort of combination, since, for example, the 4th
approach alone cannot handle "./configure"; there is no way to do
that without system().
So, if we want to make a good profile, we should keep these four
approaches to executing commands in mind.
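To make the 3rd approach concrete, here is a minimal sketch in Python: each profile element is translated to a shell command string and executed one at a time (subprocess playing the role of C's system()). The tag names, attributes, and helper functions are made up for illustration; they are not the real ALFS schema.

```python
# Sketch of approach 3: translate each profile element to a shell
# command and run the commands one by one, stopping on failure.
import subprocess
import xml.etree.ElementTree as ET

# Hypothetical mini-profile; the tags are illustrative only.
PROFILE = """\
<package name="hello">
  <unpack archive="hello-1.0.tar.gz"/>
  <configure args="--prefix=/usr"/>
  <make target="install"/>
</package>
"""

def translate(elem):
    """Map one profile element to an equivalent shell command string."""
    if elem.tag == "unpack":
        return "tar xf " + elem.get("archive")
    if elem.tag == "configure":
        return "./configure " + elem.get("args", "")
    if elem.tag == "make":
        return "make " + elem.get("target", "")
    raise ValueError("unknown profile tag: " + elem.tag)

def run_profile(xml_text, execute=lambda cmd: subprocess.call(cmd, shell=True)):
    """Translate and execute every command in the profile, stopping at
    the first non-zero exit status. (The coshell variant, approach 2,
    would instead keep one shell alive and pipe commands into it.)"""
    package = ET.fromstring(xml_text)
    for elem in package:
        status = execute(translate(elem))
        if status != 0:
            return status
    return 0
```

Passing a custom execute function also gives a dry-run mode for free: collect the translated commands instead of running them, which is handy for testing the translation separately from the execution.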
There is more to discuss, but I agree with Jesse: we should start with
something now. If these ideas are at least considered, I'll be happy
with the path ALFS takes.