Sorry for the late reply

Preston, effects definitely “should” be attached to the timeline. Objects should be part of the effect, but I feel they should not be the focus of it, since adding/removing lights or changing an object’s location should not change the final goal of the effect much.
I’m sure in 2011 we will have a lot of news about sequencing software, so I’m not worried that we won’t have applications to control Smart String. I want to focus Prancer on the hard problems, even if they take a while to become a product; basically, I want to invest in intellectual property instead of releasing a product soon.
I think getting rid of the grid is already a huge step in the right direction, since it allows us to “manage or use” 5000+ channels, but I still feel it is only half of the solution, since we are limited to creating effects for objects instead of taking the whole display as one big canvas.
Unless the software takes into consideration the physical location of every single node, it can’t create effects automatically; instead, it is the user who has to keep in mind how to properly combine several effects to obtain the desired goal.
A simple example:
I want to make a mega tree. On a Smart String every node is separated by 3.5”, but I want less separation between nodes, about 1.75”, so instead of running 64 nodes per string I’ll be using 128 nodes per string: the string will go up to the tip of the tree with 64 nodes and come back down with another 64 nodes along the same line. So I want to manage a 128-node string differently. In my case, if we use virtual channels they will be something like channel 1, 2, 3, 4, 5, 6 … 128, but the physical channels are 1, 128, 2, 127, 3, 126, etc., since the nodes are intercalated.
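To make the intercalated mapping concrete, here is a minimal sketch (the function name is mine, not from any existing tool) of how a sequencer could translate the user-facing virtual channel into the physical channel on such a folded string:

    // Minimal sketch: virtual channels count up the tree in display order;
    // physical channels follow the wire, 1..64 going up and 65..128 coming
    // back down, so adjacent virtual channels alternate between the two
    // halves: 1, 128, 2, 127, 3, 126, ...
    int VirtualToPhysical(int virtualChannel, int nodesPerString = 128)
    {
        if (virtualChannel % 2 == 1)
            return (virtualChannel + 1) / 2;              // odd  -> the "up" half
        return nodesPerString + 1 - virtualChannel / 2;   // even -> the "down" half
    }

With something like that in place the user only ever thinks in virtual channels 1…128 laid out along the line; the fold is handled once, when the data is sent to the hardware.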
And here is where the physical location has a lot to offer. From the user’s point of view he doesn’t care how virtual channels map to physical channels; he just cares about optimizing the string length and the spacing between nodes. He knows he has 128 nodes to lay out in any direction he wants; hell, he may even want to use a lot more nodes at the bottom of the tree, where there is more surface to cover, and fewer nodes at the top. Because of that, sequencing by hand could be a total nightmare; instead, the application should be “physical location aware”, and the creation of the effects should be independent of the location of the nodes until the effect is “rendered”.
I imagine creating an effect using a start point and vectors to define the direction of the effect, so the user does not interact “directly” with the object; instead, he overlays an effect at that physical location.
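A minimal sketch of that idea, assuming a node list with physical coordinates (all the names here are hypothetical, not from any existing tool): the effect only knows its start point and direction vector, and each node’s brightness is computed at render time from its position.

    #include <cmath>

    // Hypothetical sketch: the effect never references channels; it is defined
    // by a start point and a direction vector in display space. At render time
    // each node's physical (x, y, z) position is projected onto that vector and
    // the resulting distance drives the effect (here a simple moving wave).
    struct Point3 { double x, y, z; };

    struct Node {
        Point3 pos;      // physical location of the node in the display
        int    channel;  // physical output channel it drives
    };

    // Distance of a node along the effect's direction, measured from the start point.
    double DistanceAlongEffect(const Point3& start, const Point3& dir, const Point3& p)
    {
        double dx = p.x - start.x, dy = p.y - start.y, dz = p.z - start.z;
        double len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        return (dx * dir.x + dy * dir.y + dz * dir.z) / len;
    }

    // Brightness 0..255 for one node at time t: a wave sweeping along the vector.
    int RenderWave(const Node& node, const Point3& start, const Point3& dir,
                   double t, double speed, double waveLength)
    {
        double d = DistanceAlongEffect(start, dir, node.pos);
        double phase = (d - speed * t) / waveLength;
        return static_cast<int>(127.5 * (1.0 + std::cos(2.0 * 3.14159265 * phase)));
    }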
For example, I take an AVI file and define a start location, a vector with the direction, and the width of the surface to cover. If the object behind the effect is a mega tree, the tree should render the AVI file and NOT map pixels to lights one-to-one, because the tree is a conical surface: looking at the tree from a 2D position, say standing 30 ft away, we can observe a lot more nodes on the left and right of the tree than along the center line, and unless the AVI file is mapped taking the location of the nodes into consideration, it won’t be seen properly.
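As a hedged illustration of that kind of mapping (the frame access and node fields are assumptions for illustration, not any real API): each node looks up the pixel at its apparent position as seen from in front of the tree, computed from its angle around the trunk, instead of from a flat one-to-one grid.

    #include <cmath>

    // Hypothetical sketch of "physical location aware" sampling of a video frame
    // onto a conical mega tree. Nodes crowded near the left/right silhouette pull
    // from the correct pixel columns because the column is derived from the
    // node's angle around the trunk, not from its string number.
    struct RGB { unsigned char r, g, b; };

    struct TreeNode {
        double angleDeg;  // angle around the trunk, 0 = facing the viewer
        double height;    // 0.0 at the base of the tree, 1.0 at the tip
    };

    template <typename Frame>   // Frame is assumed to expose width, height, GetPixel(x, y)
    RGB SampleFrameForNode(const Frame& frame, const TreeNode& node)
    {
        // Apparent horizontal offset of the node, -1..1: nodes at +/-90 degrees
        // sit on the tree's silhouette, nodes at 0 degrees sit on the center line.
        double apparentX = std::sin(node.angleDeg * 3.14159265 / 180.0);

        int px = static_cast<int>((apparentX + 1.0) * 0.5 * (frame.width - 1));
        int py = static_cast<int>((1.0 - node.height) * (frame.height - 1));
        return frame.GetPixel(px, py);
    }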
I hope I managed to explain myself.
CSF,
You took on one of the hardest problems with xLights, which is abstracting from the hardware. I don’t have ANY problem at all if you create a sequencer for it, but you might be trying to do too much for you alone. Something that every sequencer, old and new, needs is interaction with the hardware, and that is where I think you have the HUGE opportunity to create something unique for many people/applications. If you focus on that single problem of hardware interaction and basically provide APIs to render a file or render in real time on any hardware, then you solve a HUGE problem and everyone will want to use your layer. But if, for example, xLights can process .VIX files but can’t take real-time input, then I’ll be forced to create my own hardware layer for Prancer in the future; and if I have my own layer, then I won’t need xLights anymore. If I don’t need xLights, then xLights will probably create its own sequencer, since it won’t be compatible with Prancer’s output, and that is where the applications won’t be compatible anymore.
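To illustrate the kind of hardware layer I mean (purely hypothetical names, not an existing xLights API): a single interface with both a file playback entry point and a real-time frame entry point, so any sequencer could target the same layer.

    #include <cstdint>
    #include <string>
    #include <vector>

    // Hypothetical sketch of the hardware layer described above. One interface,
    // two entry points: playing back an already-sequenced file, and accepting
    // frames in real time from a live application such as Prancer. Any sequencer
    // that targets this layer gets every supported controller for free.
    class IHardwareOutput {
    public:
        virtual ~IHardwareOutput() {}

        // Batch path: render an existing sequence file (.VIX and the like).
        virtual bool PlayFile(const std::string& path) = 0;

        // Real-time path: push one frame of channel data, one byte per channel.
        virtual bool SendFrame(const std::vector<uint8_t>& channelData) = 0;
    };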
I think what you did is awesome, and it has a lot of potential to offer even more for rendering the data on the hardware. Again, it is pretty cool if you do a sequencer as well, since it will run on Linux/Mac, but keep an eye on, and the main focus on, the hardware interaction. Who knows… by the time you finish making xLights manage any hardware, Preston may already have something released for Mac/Linux and you won’t need to take care of that.
Cas.