new ssp patch sysctl-erandom

Robert Connolly robert at
Sat Apr 24 20:48:46 PDT 2004

I have attached two patches; they both depend on this one:

Here is my new logic for __guard_setup:

This glibc ssp patch tries to use sysctl random_erandom first. If that fails 
it falls back on libarc4random. Arc4random first tries to open /dev/
erandom, and if that fails it tries sysctl random_erandom again; I know this 
check is done twice, but the delay caused by checking again will add to the 
entropy, since this is called 8 times to fill the elements of __guard. /dev/
erandom can still work if frandom was built as a module; the sysctl only works 
when it is built in. If that fails, /dev/urandom is checked, and if that fails, 
sysctl random_uuid is checked. In every case, libarc4random also mixes 
gettimeofday and getpid into the stir. Only when you run a program inside a 
chroot, with a grsec kernel, without the frandom patch, and without urandom in 
the chroot, is gettimeofday+getpid the sole source for the stir. Random_uuid is 
readable only by root on a grsec kernel. This entropy is run through the 
arcfour stream cipher, which returns 65536-byte strings for __guard[i]. If 
arc4random fails, and it shouldn't, ssp will still fall back on the terminator 
canary.

The arc4random patch installs a header that can be used by other software. If 
the system is running sysctl erandom, the only fallback that will be checked 
is the final one, with the terminator canary. It's about 8 syscalls to fill 
the __guard array. If all the fallbacks are used, that is, if erandom and 
urandom are missing and sysctl doesn't work, it's about 30 syscalls. During 
normal operation with sysctl erandom it should work smoothly.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: glibc-2.3.3-arc4random-1.patch
Type: text/x-diff
Size: 8678 bytes
Desc: not available
URL: <>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: glibc-2.3.3-ssp-functions-2.patch
Type: text/x-diff
Size: 10070 bytes
Desc: not available
URL: <>

More information about the hlfs-dev mailing list