Thanks for the reply. I've switched over to 2.1b and confirmed sighting a triangle.
I'm a bit shaky on the terminology, and I don't understand the notation [0.0, 1.0), so I looked up double-precision floats on Wikipedia. Is this right, or do I have the precision wrong?
"On a typical 32-bit computer system, using double-precision floating point (64 bits in total: a 52-bit mantissa, an 11-bit exponent, and 1 sign bit), floating-point numbers have an approximate range of 10^-308 to 10^308." -- Range of Floating Point Numbers
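To double-check those figures myself, I ran a quick C snippet (standard <float.h>/<math.h> constants; nothing Context-Free-specific here):

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    printf("DBL_MAX     = %g\n", DBL_MAX);      // ~1.8 * 10^308
    printf("DBL_MIN     = %g\n", DBL_MIN);      // ~2.2 * 10^-308 (smallest normal)
    printf("DBL_EPSILON = %g\n", DBL_EPSILON);  // ~2.2 * 10^-16
    printf("largest double below 1.0 = %.17g\n", nextafter(1.0, 0.0));
    return 0;
}

So the 10^+/-308 range above checks out, and doubles just below 1.0 are spaced about 1.1*10^-16 apart - far finer than the 10^-7 weights in question.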
Re: erand48. What the CFDG (pre)parser does, if I understand you correctly, is normalize the weights it finds, then at each decision point generate an erand48() value that is compared against the normalized table... but you aren't saying that erand48 never generates anything above 10^+/-7 precision, right? Instead, you're saying that there is a particular blind spot between .999999 and 1 (but not a corresponding blind spot between .000001 and 0), and that the order of rules in the table follows their order in the source code, which is why reversing the rule order changes the behavior from a single edge case to normal probability.
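For reference, here's the mental model I'm working from - a made-up C sketch, not the actual Context Free source (pick_rule and the table layout are my inventions):

#define _XOPEN_SOURCE 700  // for erand48() on glibc
#include <stdlib.h>

// Sketch: weights are normalized into cumulative bands in source order,
// then one erand48() draw in [0.0, 1.0) selects the first band covering it.
int pick_rule(const double *weights, int n, unsigned short xsubi[3]) {
    double total = 0.0, cum = 0.0;
    for (int i = 0; i < n; i++) total += weights[i];
    double r = erand48(xsubi);      // uniform in [0.0, 1.0), never 1.0 itself
    for (int i = 0; i < n; i++) {
        cum += weights[i] / total;  // normalized cumulative weight
        if (r < cum) return i;      // first band that covers r wins
    }
    return n - 1;                   // r fell above the last sum (rounding slack)
}

If that's roughly right, the last rule's band runs all the way up to 1.0 and absorbs any rounding slack in the sums, which would be why reversing the rule order moves the edge case around.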
I'm still a bit unclear on the edge case. If erand48 only goes as high as 0.999999 (i.e., stops 10^-6 short of 1), then anything on the order of 10^-8 should fail to match 100% of the time, right? E.g., given two rules:
rule always 0.99999999 { }
rule never 0.00000001 { }
...never would never match against erand48's maximum of .999999? Or is there some rounding up/down behavior in matching against the erand48 value that I don't understand?
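(Trying to work out the granularity myself, assuming erand48 really is a 48-bit integer divided by 2^48: its step size would be 2^-48 ≈ 3.55*10^-15, so a band of width 10^-8 still covers roughly 10^-8 / 2^-48 ≈ 2.8 million distinct erand48 outputs - rare, but not unreachable.)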
I've written a test CFDG that just crunches 10,000,000 empty shapes in 2.1b trying to match on a 10^-7 rule weight. It has the nice feature that it stays ahead of needing temp expansion files, and I tested it at higher-magnitude weights (.9999/.0001) to make sure that the matches all display correctly as a nice rainbow burst... then left it running in Context Free 2.1b:
startshape weight_test
// 100 rows x 100,000 tries = 10,000,000 tries; stops on first result
rule weight_test { 100 * { } row { } }
rule row { 100000 * { h 5 r 2 } try { } }
rule try 0.9999999 { }
rule try 0.0000001 { TRIANGLE { z 1 b 1 sat 1 hue 0 } } // erand48's 0.999999 should never match...?
But eventually (after ~20 minutes) it matched.
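(In hindsight, that's about what the arithmetic predicts: 10,000,000 tries * a normalized weight of 10^-7 ≈ 1 expected match.)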
Of course, never matching isn't a bug for me, but a *feature*. I'm really interested in creating rules that mask other rules completely, because it lets me effectively define rules with weight 0 - that is, they run if they are the only rule defined, but as soon as the rule is defined elsewhere, they *never* run (rather than running 0.00001% of the time, etc.). This makes rich CFDG library interactions possible without configuration (e.g. defaults, stub rules, overriding rules). I've got sets of two dozen CFDG files interacting right now, and libraries with 10^-7 weight rules behave normally when loaded independently, but "defer" correctly when their rules are redefined by the parent file.
Of course, perhaps the more elegant way to do this would be a special case in the weight parser: allow rules of weight 0 through normalization, then discard them before writing the table unless weight-zero rules are the only entries.
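Something like this rough sketch in C - entirely hypothetical (build_table and its shape are my invention, not the real parser code):

// Hypothetical sketch: keep weight-0 rules only when nothing else exists.
// Returns how many rules survive into the table.
int build_table(const double *weights, int n, double *table_out) {
    int nonzero = 0;
    for (int i = 0; i < n; i++)
        if (weights[i] > 0.0) nonzero++;
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (weights[i] > 0.0)
            table_out[count++] = weights[i];  // normal rule, keep it
        else if (nonzero == 0)
            table_out[count++] = 1.0;         // all-zero case: share equally
        // otherwise: the weight-0 rule is masked and dropped entirely
    }
    return count;  // normalization then runs over the survivors
}

That would turn weight 0 into an explicit stub/default marker, instead of leaning on 10^-7 weights and erand48's blind spots.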
[Note: I generally leave the high-probability rules at the default weight (1) rather than 0.9999999 - but this means a low-probability rule ends up with a slightly different value after normalization, e.g. 0.00000001 / 1.00000001 ≈ 0.0000000099999999.]