Alfs Update - have time before reading.........

Bryan Dumm bdumm at bobby.bcpub.com
Sat Dec 23 02:53:57 PST 2000


Howdy,

Welp, I've got to stop for now. The Holidays are upon us, 
and I have family matters to attend to... I will be back
late Tuesday, but should be raring to go and wanting to 
finish off this first revision of alfs. I get excited 
just watching it go by.....

OK, some ideas on the action needed, where I suggest we go, our plans....

Things Needed - 

The profile needs updating to 2.4.3 (I say we hold there and 
make no further upgrades to the profile until Gerard has
3.0 out and the new big infrastructure changes (i.e. 2.4/glibc 2.2)).

The profile as it stands now is attached in
the following pieces:

Chap4
Chap5 up to before GCC
GCC-Linux-Glibc (*umm, I didn't get this together*)
Rest of Chap5
Chap6

Should be fairly self-explanatory. I have not added Gerard's 
new entities, which you will find out about later and which show up in 
the latter half of Chap6. So none of the other stuff has them. Also I 
have not added unpack and remove tags to the first part of 
Chap6 up through bzip2. The profile needs more work....
I think if we divide it up by chapters it will be easier to 
debug, and we can always include a "complete profile". 

newalfs is the new code I wrote, partly integrated with Neven's.
I want to continue to do this, but we need to talk about issues and guidelines,
decide who wants to do what, etc.

The code is about 313 lines and has quite a few comments in it. I have
purposely tried to make the leanest skeleton that I can. The reason is
that even though error handling, etc. makes a good program, we have to have
the xml syntax working in an easy-to-handle way for these backends. Otherwise
I think every backend coder, and probably frontend coders as well, would 
want to mangle the tags to make them fit into their coding styles.

To put it another way, look at what we had before in an xml element:

<cd dir="&LFS;/usr" />
<mkdir dirs="bin etc include lib sbin share src tmp var" />
<link source="share/man" destination="man" type="symbolic" />

Now looking at these commands, they make no sense by themselves.
Without the cd command you don't know where to run the other two
commands. If you look at the link command, you also have no idea,
without prior knowledge of link, what the attributes mean or what they
really do. Not that we can teach people through the structure per
se, but look at this change:

<mkdir dir="&LFS;/usr">bin etc include lib sbin share src var</mkdir>
<link dir="&LFS;/usr" source="share/man" type="symbolic">man</link>  

Not only did we get rid of the cd element altogether, but we have
also eliminated the idea of "being somewhere in the filestructure"
and, as a bonus, made each command above its own separate entity.
Plus the design of each command can be similar to

<command param1="whatever" param2="this thing">thing the command is acting 
on</command>

Doing it this way clears up how some attributes might be called
"dirs" in one element but "dir" in another, and so on. We might also want
to come up with some guidelines on present and future attribute naming, etc. 
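
For example, here is roughly what a handler for that mkdir element could 
look like (just a sketch -- the handler name and the warn message are 
placeholders, not what is in newalfs):

sub mkdir_handler {
    my ($twig, $element) = @_;
    my $dir  = $element->att('dir');         # base directory, e.g. &LFS;/usr
    my @dirs = split ' ', $element->text;    # the things the command acts on
    foreach my $d (@dirs) {
        mkdir "$dir/$d", 0755
            or warn "Could not create $dir/$d: $!\n";
    }
}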

If you look through my code, you will see that I have taken this command
structure above and used it throughout the code. I've tried to assume that
the backend has 

1. No idea of the tag, or whether it even has a subroutine for it. It just
tries to process it and responds accordingly, hence my reasoning for using

&{$egi} ($element, $e_element);

instead of calling each subroutine based on what the tag reads (there is a
rough sketch of what I mean just after point 2)....

2. Each command/subroutine also assumes nothing. I am still working 
this out in some of the commands, but we really need to be prepared for
any stupid thing people might throw at it in their profile, and have some 
way to bow out gracefully.
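
Here is a rough sketch of what I mean by both points -- the %dispatch hash 
and the sub names are placeholders, not the way newalfs does it today:

my %dispatch = (
    mkdir => \&mkdir_handler,
    # link => \&link_handler, unpack => \&unpack_handler, and so on,
    # one entry per element the backend actually knows about
);

sub process_element {
    my ($twig, $element) = @_;
    my $tag = $element->gi;                  # the element's name
    if (exists $dispatch{$tag}) {
        $dispatch{$tag}->($twig, $element);  # known tag, run its handler
    } else {
        # unknown tag: say so and keep going instead of dying
        warn "Skipping unknown element <$tag>\n";
    }
}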

I've left some ideas in the code, and think most of it should be 
readable and usable. One of Gerard's "first" complaints about alfs was
the whole ability to "debug" my output. Right now, a lot of information
just flows by... :) I tried to leave out anything beyond me saying "I am 
doing this command, this is what the command itself does, and I am finishing
up this command". I do, though, want to create a whole message(s) architecture,
as I think that the way the backend responds will be important.
I think we can at least create something like the following....


$message = "Unable to open $file";
open (FILE, "$file")|| error($message);


That way we can have error, or even message/output-type, routines that
process output to the user. I say this now because at some point most
of the messaging coming from the backend will be xml-based messages
(think xml-rpc) that are then accepted by the frontend. If the frontend
told the backend what type of client it was, we could even format messages
appropriately? I know that for now, since we will be using the console, the
appropriate way to do it will be to dump text back in the error subroutine. But I 
want to stir up some discussion on this method of doing things. 
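
To give that discussion something concrete to poke at, something as simple 
as this would already let us swap the formatting later (the sub names and 
the xml wrapping are only guesses, nothing is settled):

sub error {
    my ($message) = @_;
    # for now, just dump plain text back to the console
    print STDERR "ERROR: $message\n";
}

sub message {
    my ($text) = @_;
    # later this could wrap the text in xml for the frontend,
    # e.g. print "<message>$text</message>\n";
    print "$text\n";
}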

We also need to develop the graceful exits for all the things
alfs doesn't know how to do, like unknown tags. I've left this 
area blank in the profile-parsing section, but I am looking 
for ideas on that, and on the error messages. I added to_continue
and system_return from Neven's original code, as well as unpack, 
remove and 

chomp(my $curr_dir = `pwd`);
print "Executing $command in $curr_dir:\n";

in some places in the code. I only did this while I was debugging, and
it is inconsistent throughout the code, as I had these whole messaging
issues in mind. I wanted to get responses before you or I implement 
them. 

I also think that we can use this final profile we mangle into reality
as the basis for the upcoming LFS infosystem. If you think about it, 
we have already built the "database" by building this profile. To build
that database, it should not be much more in XML::Twig than adding a package
handler to get all the package tags, and then a simple package subroutine
that, say, loads up all the xml information in a big hash and then dumps 
the entire hash to database table(s).... We could also use this code for
later package/profile additions to the infosystem.
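
As a rough sketch of that idea (the profile file name and handler name are 
made up, and the actual dump to the database is left out):

use XML::Twig;

my %packages;

my $twig = XML::Twig->new(
    twig_handlers => { package => \&package_handler },
);
$twig->parsefile('profile.xml');

sub package_handler {
    my ($twig, $element) = @_;
    # stash every attribute of the <package> tag in one big hash,
    # keyed on the package name
    my $name = $element->att('name');
    $packages{$name} = { %{ $element->atts || {} } };
    $twig->purge;    # don't keep the whole profile in memory
}

# ...then one more pass over %packages to dump it into database table(s)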

Another helper script that I think we need is one that does the 
following:

Untars a package
Looks for an INSTALL file
Reads it and looks for "This is a standard Install file"
If found, then we run the ./configure --help and capture that output
And based on that output, and the package info, 
we make up an instant xml profile for that package

Not all packages are standard, and not all packages will work with
our instant output. But I bet I could drain the archives, say over 
at freshmeat, run this script on the archives I collected, and 
probably end up with a fair number of "working", ready-to-use profiles. Ideas
on this are welcome. 
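
A rough sketch of that helper, with everything about it still up for grabs 
(the arguments, the INSTALL test, and the tags it spits out are all guesses):

my ($archive, $name, $version) = @ARGV;
my $toplevel = "$name-$version";

system("tar", "xjf", $archive) == 0 or die "untar of $archive failed\n";

# only go on if the package ships an INSTALL file that looks standard
open(INSTALL, "$toplevel/INSTALL") or die "no INSTALL file\n";
my $standard = grep { /This is a standard Install file/i } <INSTALL>;
close(INSTALL);
die "not a standard INSTALL, skipping\n" unless $standard;

# capture ./configure --help; later we could pick options out of it
my $confhelp = `cd $toplevel && ./configure --help 2>&1`;

# dump an instant profile for the package
print <<"EOF";
<package name="$name" version="$version">
  <unpack archive="&packages_dir;/$name-$version.tar.bz2">/usr/src</unpack>
  <configure dir="$toplevel" />
  <build dir="$toplevel" />
  <install dir="$toplevel" />
</package>
EOF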

I think that is all of my thoughts for the moment. Neven, please fix my 
broken or horribly done code :). If anyone wants to take the initiative
and implement anything I mentioned, please do. I'll be back late Monday 
but am going out that evening, so I will just be checking in until late
Tuesday.... 

Looking forward to reading more....


I included Gerard's last email about the Chap6 he did, and my comments...
BTW Gerard, is there a backend list for alfs? I would like to put anyone 
who has written a script for automating LFS on it.......

<Gerard's Email>
A few notes:

Chapter3 is outdated in the lfs book. The profile I have attached contains 
newer packages than chapter3 has (a newer lilo, modutils, util-linux and 
possibly another one or two I can't remember offhand). 

We use:

<unpack archive="&packages_dir;/console-tools-0.2.3.tar.bz2">/usr/src</unpack>

but in <package> we declare a version and name. So can we use a construction 
like the following:

<package name="console-tools" version="0.2.3" toplevel="&name;-&version;">
         and

<unpack archive="&packages_dir;/&name;-&version;.tar.bz2">/usr/src</unpack>

I have assumed this in my work. I hope it wasn't in vain; otherwise just let me 
know and I will change it again. But that was the whole point of those vars 
in the <package> tag, so they can be used in the commands in that section.


<agreed>

also, &packages_dir; must change after we enter chroot because &LFS; won't be 
valid anymore. We can do a quick and dirty fix for that by running these commands 
immediately after we enter chroot:

cd /mnt
ln -s / lfs

that way &LFS; will point to / so &LFS;/usr/src is just /usr/src

Or, we do it the proper way and redefine a few things.


<Yeah, let's work out the chroot element and the proper way; we don't want kludges
if we can avoid them.>

I assume the copy tag is still used? It's needed to manually copy a few files 
that aren't installed by a make install.

<Yes, both copy and move are still there. BTW, you can look at the code
and check out the Subroutines section to see which commands I axed.>

Since we use <system_command>make</system_command> and make install so often, 
and the same with ./configure, perhaps we can make default tags for these so we 
don't have to make them system_commands.

<agreed, getting away from system_command dependence is a _good_ thing>

For example, if you just want to run make && make install, perhaps this is cleaner:

         <build>
                 &def-build;
         </build>

         <install>
                 &def-install;
         </install>

just an idea of course. You can blow it off ;)


<
So then, if I am following right, those would translate out to
&def-build=make
&def-install=make install

?? It would solve a problem (err, bug) I had in my code where 
attributes came before install, and if install is an attribute
I have no way of "knowing", as we have no guidelines in general
on attributes. This could make for things like

make param1 install param2
or 
make install param1 param2 

and so on. Maybe it is just my illogical code. :)
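
One convention that would sidestep this, as a sketch only (the sub name is 
made up and none of this is in my code yet): anything like install lives in 
the element text, and paramN attributes always get tacked on after it, in 
attribute-name order.

sub build_command {
    my ($element) = @_;
    my $command = $element->text;              # e.g. "make install"
    my @params;
    foreach my $att (sort keys %{ $element->atts || {} }) {
        push @params, $element->att($att) if $att =~ /^param\d+$/;
    }
    return join ' ', $command, @params;        # "make install param1 param2 ..."
}
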
>


About the gcc problem:
I don't fully understand why it doesn't work. What wouldn't work with this:
         <build>
                 <cd>&builddir;</cd>
                 <system_command>make</system_command>
         </build>

I'm sure there are problems with that, so I haven't included a few packages 
that require a cd to a different directory before building. Though with the 
tar patch I did this quick fix:

<system_command dir="&toplevel;/src" param1="-i 
.../../gnutarpatch.txt">patch</system_command>

Tell me what you think of that (I know it's not a clean way, but I'm not going 
to 'invent' new tags right now).

< I think we need two new tags/elements (what should I call them?) to deal with
this, and those are <patch> and <builddir> or something like that. Those 
are really the two issues that seem to keep popping up, causing us headaches
and so on.>

<I also think that as we go through more packages and build more profiles, we 
will have to add new elements which help out with other such concepts, like 
how a patch is applied or how the builddir concept works....>
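
<Just to make the <patch> idea concrete, here is a rough sketch, assuming an 
element shape like <patch dir="&toplevel;/src" param="-i" file="gnutarpatch.txt" />
-- the shape and the handler are only a guess:

sub patch_handler {
    my ($twig, $element) = @_;
    my $dir   = $element->att('dir');
    my $param = $element->att('param') || '-i';
    my $file  = $element->att('file');
    system("cd $dir && patch $param $file") == 0
        or warn "patch $file failed in $dir\n";
}
>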

textdump tag: was it <textdump file="filename"> or <textdump target="filename">?
I assumed file="filename".

<I got file= but I think Neven used target=, and coming up with some 
guidelines for attributes would end such questions, or minimize them...> 
-------------------------------------
Bryan Dumm 
who is Sore Loserman, I want him for president :)
-------------------------------------


