Compile CFDG to PostScript?

Here you can discuss and share functionality improvements and helper programs to make Context Free better.

Moderators: MtnViewJohn, chris, mtnviewmark

mtnviewmark
Site Admin
Posts: 81
Joined: Wed May 04, 2005 12:46 pm
Location: Mountain View, CA

Post by mtnviewmark »

You'll want to render rendering_tests.cfdg (in the source distribution) to tune your PS implementation. The comments in the CFDG file should tell you what to look for in the rendered output.

Each of our implementations had to have the graphics rendering tuned to get it all to work. You might want to look at how we expanded the size of squares slightly. See the functions at the very start of aggCanvas.cpp.
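The kind of adjustment meant here, sketched very roughly in C++ (made-up names and a made-up epsilon; the real code in aggCanvas.cpp may differ): pad each square by a small fraction of its size, presumably so that abutting squares overlap slightly and antialiasing doesn't leave hairline seams between them.

Code: Select all

// Illustrative sketch only; see the functions at the top of aggCanvas.cpp
// for what Context Free really does. The idea: grow each square slightly
// so adjacent squares overlap instead of showing seams.
struct Rect { double x0, y0, x1, y1; };

Rect expandSquare(const Rect& r, double epsilon = 0.005)   // epsilon value is made up
{
    const double padX = (r.x1 - r.x0) * epsilon;
    const double padY = (r.y1 - r.y0) * epsilon;
    return Rect{ r.x0 - padX, r.y0 - padY, r.x1 + padX, r.y1 + padY };
}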
I'm the "m" in "mtree.cfdg"

futrelle
Posts: 15
Joined: Sat Jun 11, 2005 5:59 pm

Post by futrelle »

Thanks! I have a question about recursive vs. iterative rendering. It seems to me that both techniques behave identically when the CFDG grammar is monotonically decreasing in scale--in other words, where the scaling factors are all <= 1.0 in any rule that involves infinite recursion.

But when scaling factors > 1.0 are involved, the recursive technique can't simply prune the expansion at a given scaling factor without, in some cases, generating output that looks very different from a theoretical "complete" expansion of the infinite recursion.

And neither can the iterative technique; if a too-small-to-render rule occasionally invokes itself at a significantly larger scaling factor (enough larger that the invocation would be big enough to render), the iterative technique will also prune the expansion "too soon". Take the following grammar, whose appearance in Context Free depends on the resolution at which it's rendered:

Code: Select all

startshape foo
// yrf

rule foo {
 CIRCLE {}
 foo { x 0.3 s 0.99 }
}

rule foo 0.04 {
 CIRCLE {}
 foo { x 0.3 s 0.9 r 85 }
}

rule foo 0.04 {
 CIRCLE {}
 foo { x 0.3 s 0.9 r -85 }
}

rule foo 0.04 {
 CIRCLE {}
 foo { x 0.3 s 0.9 r -90 y -0.3  }
 foo { x 0.3 s 0.9 r 90 y 0.3  }
}

rule foo 0.0007 {
 foo { y -20 s 5 }
}
Is the pruning problem for CFDG equivalent to the halting problem, or can it be solved by traversing the grammar and looking at the scaling factors in each cycle in the rule network?
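For concreteness, the grammar-traversal option could be sketched as a graph check (rough C++ with hypothetical types, not anything from Context Free itself): one node per rule, one directed edge per shape invocation labelled with the log of its scale factor, then ask whether any cycle has a scale product greater than 1. A Bellman-Ford-style relaxation can answer that:

Code: Select all

#include <vector>

// One directed edge per shape invocation inside a rule body, labelled
// with the natural log of its scale factor.
struct Edge { int from, to; double logScale; };

// Returns true if some cycle of rule invocations has a product of scale
// factors greater than 1 (total log-scale > 0), i.e. the expansion can
// grow back above any fixed pruning threshold. Works like Bellman-Ford
// negative-cycle detection with the sign flipped. Cycles whose product
// is exactly 1 (pure translation/rotation loops) are not flagged here.
bool hasGrowingCycle(int ruleCount, const std::vector<Edge>& edges)
{
    if (edges.empty()) return false;
    std::vector<double> best(ruleCount, 0.0);   // best known log-scale reaching each rule
    for (int pass = 0; pass < ruleCount; ++pass) {
        bool changed = false;
        for (const Edge& e : edges) {
            if (best[e.from] + e.logScale > best[e.to] + 1e-12) {
                best[e.to] = best[e.from] + e.logScale;
                changed = true;
            }
        }
        if (!changed)
            return false;                       // values settled: every cycle shrinks
    }
    return true;                                // still improving after ruleCount passes
}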
--
Joe Futrelle

MtnViewJohn
Site Admin
Posts: 882
Joined: Fri May 06, 2005 2:26 pm
Location: Mountain View, California

Post by MtnViewJohn »

An iteratively rendered CFDG will look different from a recursively rendered one for two reasons:
  1. The random number generator will make different rule expansion decisions because the decisions will be made in a different order (you did port drand48 to PostScript rather than use the native random number generator, right? See the sketch below.)
  2. The final list of shapes will be rendered in a different order. If they are all black then this does not change the end result. But if any are gray then rendering order matters.
I think that the decision to stop expanding a rule because of its size only introduces differences between the two rendering methods because of the random sequence issue, not because the halting decision is itself different for the two methods. We can 'solve' this by adding a recursive rendering option to the Context Free clients.
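On the drand48 point above: drand48 is just a 48-bit linear congruential generator with fixed constants, so a port can reproduce the exact same sequence on any platform rather than relying on the host's random operator. A rough C++ sketch of the recurrence (illustrative only, not the Context Free source):

Code: Select all

#include <cstdint>

// drand48-compatible generator: X(n+1) = (a*X(n) + c) mod 2^48,
// with a = 0x5DEECE66D and c = 0xB, returning X(n+1)/2^48 in [0.0, 1.0).
// Reimplementing it keeps a given variation's rule-expansion decisions
// identical across platforms.
class Drand48 {
public:
    explicit Drand48(uint32_t seed)
        : state_(((uint64_t)seed << 16) | 0x330Eu) {}   // same seeding as srand48()

    double next()
    {
        state_ = (state_ * 0x5DEECE66Dull + 0xBull) & ((1ull << 48) - 1);
        return (double)state_ / (double)(1ull << 48);
    }

private:
    uint64_t state_;
};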

MtnViewJohn
Site Admin
Posts: 882
Joined: Fri May 06, 2005 2:26 pm
Location: Mountain View, California

Random number sequence problem

Post by MtnViewJohn »

Actually, changes in the random number sequence will be rife in recursive rendering. In iterative rendering a given variation looks pretty much the same at different resolutions because the larger features are rendered first, so they always happen exactly the same whatever the resolution. With recursive rendering you don't start expanding a shape until you have completely rendered all of the descendant shapes of its predecessor in a rule. A change in resolution therefore changes the random number sequence used for the next shape in a rule.

The solution for this is pretty simple: stop using a simple sequential random number generator and use a generator that is aware of the tree structure of the shape expansion. Then the random number used to expand a shape would be the same whether rendering was recursive or iterative. Rather than having a single, global random number seed, each shape in the shape tree would have its own random seed.
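One way such a tree-aware generator could be structured (a sketch of the idea with made-up names, not the actual implementation): give each shape a 64-bit seed and derive every child's seed from the parent's seed and the child's position in the rule body, e.g. with a splitmix64-style mixer, so the value used to expand a shape no longer depends on expansion order.

Code: Select all

#include <cstdint>

// Sketch of a tree-structured RNG: each shape carries its own seed and
// children derive theirs from (parent seed, child index), so the random
// value used to expand a shape is the same whether its siblings were
// expanded first (recursive) or later (iterative).
static uint64_t mix64(uint64_t x)               // splitmix64-style finalizer
{
    x += 0x9E3779B97F4A7C15ull;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ull;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBull;
    return x ^ (x >> 31);
}

struct ShapeSeed {
    uint64_t seed;

    ShapeSeed child(unsigned index) const        // seed for the index-th shape in the rule body
    {
        return ShapeSeed{ mix64(seed ^ (index + 1)) };
    }

    double uniform() const                       // rule-choice value in [0.0, 1.0)
    {
        return (mix64(seed) >> 11) * (1.0 / 9007199254740992.0);   // 53 bits / 2^53
    }
};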

Using a tree-like random number generator would also free us to implement multi-threaded rendering for the multi-core and multi-threaded systems that are soon to be ubiquitous.

root661

Design

Post by root661 »

Yeah, that would totally work; I need to familiarize myself with PostScript's matrix and array operators.
Very funky, I love it.

jeremydouglass
Posts: 17
Joined: Fri Mar 30, 2007 9:10 pm

How are very large and small rule weights handled?

Post by jeremydouglass »

1. Put the alternatives for a rule in some fixed order: R = { r0, r1 ... rn }
2. Compute S = sum(R.weight-as-written, i = 0..n)


What is the behavior of rule weights at very high or low limits? Is there rounding involved, and if so where? For example:

Code: Select all

  startshape grid
  rule grid { 100 * [ x +1 ] row {} }
  rule row { 100 * [ y -1 ] this {} }
  rule this 1 { CIRCLE {  b 1 sat 1 h 0 } }
  rule this 0.000001 { TRIANGLE { s 100 } }
This prints a field of 10000 red circles, with a one in a million chance (per unit) of printing a huge black triangle. Yet no matter how many times I've refreshed it, I've never seen one.

I can write the same display this way, using a very large number:

Code: Select all

  startshape grid
  rule grid { 100 * [ x +1 ] row {} }
  rule row { 100 * [ y -1 ] this {} }
  rule this 1000000 { CIRCLE {  b 1 sat 1 h 0 } }
  rule this 1 { TRIANGLE { s 100 } }
...again, I've never seen a black triangle. Is the reason the same, or different?

My reason for asking: I'm currently doing some very interesting things in Context Free using extremely low and high probability rules. If this is just me getting extremely (un)lucky, I'd like to know. If this is rounding by design, I'd like the behavior to stick around (and I'd like to know what the precise cutoff is). If this is a behavior that could just go away when Context Free switches to 64bit or whatever, I'd like to request support for a rule weight of 0.

MtnViewJohn
Site Admin
Posts: 882
Joined: Fri May 06, 2005 2:26 pm
Location: Mountain View, California

Post by MtnViewJohn »

The rule weights are normalized at parse time so that they sum to 1.0 and they are stored as double-precision floats. So your two code samples should look exactly the same internally. I ran this on CF 2.1beta and saw some triangles, but not as many as you would expect. I think that the random number generator is at fault. erand48() is supposed to generate numbers in the range [0.0, 1.0), but it never seems to generate numbers >= 0.999999.

If you re-order your two this rules then you see the triangle more often. You should put the extremely low weight rules first or in the middle. I don't think that this behavior will go away, since we don't want to change the random number generator if we don't have to.
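To make the ordering effect concrete, here is a rough sketch of cumulative selection (illustrative only, not the actual parser code): the normalized weights partition [0.0, 1.0) into consecutive intervals in source order, and the random value picks the interval it falls into. A rule listed last with normalized weight 1e-8 only wins when the value lands in [1 - 1e-8, 1.0), which a generator that never reaches 0.999999 can never produce; listed first, its interval is [0.0, 1e-8), which is reachable.

Code: Select all

#include <vector>

// Sketch of weight normalization and cumulative selection. Weights are
// normalized to sum to 1.0; a uniform value r in [0.0, 1.0) picks the
// first rule whose cumulative normalized weight exceeds r. With weights
// {1, 1e-8} in that order, the second rule only matches for
// r >= 1 - 1e-8; reversing the order gives it [0, 1e-8) instead.
int chooseRule(const std::vector<double>& weights, double r)
{
    double total = 0.0;
    for (double w : weights) total += w;

    double cumulative = 0.0;
    for (size_t i = 0; i < weights.size(); ++i) {
        cumulative += weights[i] / total;        // normalized weight
        if (r < cumulative) return (int)i;
    }
    return (int)weights.size() - 1;              // guard against rounding
}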

jeremydouglass
Posts: 17
Joined: Fri Mar 30, 2007 9:10 pm

Effectively zero-weight rule - bug AND feature!

Post by jeremydouglass »

Thanks for the reply. I've switched over to 2.1b and confirmed sighting a triangle.

I'm a bit shaky on the terminology, and I don't understand the notation [0.0, 1.0), so I looked up double-precision floats on Wikipedia. Is this right, or do I have the precision wrong?
On a typical 32 bit computer system, using double precision floating point, which utilizes 64 bits in total, a mantissa of 52 bits, an exponent of 11 bits and 1 bit for the sign, floating point numbers have an approximate range of 10^-308 to 10^308 -- Range of Floating Point Numbers
Re: erand48. What the CFDG (pre)parser does, if I understand you correctly, is normalize the weights it finds, and then at each decision point it generates an erand48() value that is compared against the normalized table. But you aren't saying that erand48 never generates anything above precision 10^+/-7, right? Instead, you are saying that there is a particular blind spot between 0.999999 and 1 (but not a corresponding blind spot between 0.000001 and 0), and that the order of rules in the table corresponds to their order in the source code, which is why reversing the rule order changes the behavior from a single edge case to normal probability.

I'm still a bit unclear on the edge case. If erand48 never gets above 0.999999, then a rule listed last with a weight on the order of 10^-8 should fail to match 100% of the time, right? E.g., given these two rules:

rule always .99999999
rule never 0.00000001

...the never rule would never be selected, because erand48 would have to return at least 0.99999999? Or is there rounding up/down behavior in matching against the erand48 value that I don't understand?

I've written a test CFDG that just crunches 10,000,000 empty shapes in 2.1b trying to match on a 10^-8 rule weight. It has the nice feature that it stays ahead of needing temp expansion files, and I tested it at higher-magnitude weights (.9999/.0001) to make sure that the matches all display correctly as a nice rainbow burst... then left it running in Context Free 2.1b:

startshape weight_test
// 10 million tries, stops on first result
rule weight_test { 100000 * { } row { } }
rule row { 100000 * { h 5 r 2 } try {} }
rule try 0.9999999 { }
rule try 0.0000001 { TRIANGLE { z 1 b 1 sat 1 hue 0 } } // erand48's 0.999999 should never match...?

But eventually (after ~20 minutes) it matched.

Of course, never matching isn't a bug for me, but a *feature*. I'm really interested in creating rules that mask other rules completely, because it allows me to effectively define rules with weight 0 - that is, they run if they are the only rule defined, but as soon as they are defined elsewhere, they *never* run (rather than only running 0.00001% of the time, etc.). This makes possible rich CFDG library interactions without configuration (e.g. defaults, stub rules, overriding rules). I've got sets of two dozen CFDG files interacting right now, and libraries with 10^-7 weight rules behave normally when loaded independently, but "defer" correctly when the rules are redefined by the parent file.

Of course, perhaps the more elegant way to do this would be to just make a special case in the weight parser allowing rules of weight 0 during normalization, then discarding them before writing the table unless weight zero rules are the only entries.
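For concreteness, a rough sketch of what that special case might look like (hypothetical types and names, not a patch against the real parser):

Code: Select all

#include <algorithm>
#include <vector>

struct RuleDef { double weight; /* rule body omitted */ };

// Hypothetical handling of weight-0 rules for one shape name: they act
// as pure "defaults" that are discarded whenever any positively
// weighted alternative exists, instead of relying on a 1e-7 weight
// that the RNG (almost) never hits.
void dropZeroWeightRules(std::vector<RuleDef>& alternatives)
{
    const bool anyPositive = std::any_of(alternatives.begin(), alternatives.end(),
                                         [](const RuleDef& r) { return r.weight > 0.0; });
    if (!anyPositive)
        return;                                  // only weight-0 rules exist: keep them

    alternatives.erase(std::remove_if(alternatives.begin(), alternatives.end(),
                                      [](const RuleDef& r) { return r.weight == 0.0; }),
                       alternatives.end());
}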

[Note: I generally leave the high-probability rules at the default weight (1) rather than 0.9999999 - but this means that the low-probability rules end up with a slightly different value after normalization, e.g. 0.00000001 normalizes to roughly 0.0000000099999999.]
