
Lynx Fastlife – Behind the Scenes – Part 2

In my previous post I explained how I generated the sequences used in the Lynx Fastlife app – which you can read about here. Now I’m going to run through how I did the overlays and data insertion for the app – the part that I think is really cool.

Content, tracking and data insertion

The content came in three flavours – videos, stills and SWF animations. They were all slightly different, but shared a lot of common elements, so I created an abstract segment class to handle all the base elements.

This abstract class was set up in such a way that it would automatically load all of the resources you specified in the XML. This allowed me to load in the main resources for images, videos or SWFs, as well as any audio tracks, tracking data, support elements, etc. The basic XML structure for a segment looked a bit like this:

<segment type="video" duration="10"> 
    <resources> 
        <item id="main">url_to_video>/item> 
        <item id="audio">url_to_audio>/item> 
        <item id="other">url_to_other_item>/item> 
    </resources> 
</segment>

Basically, I could get it to preload anything I wanted. Once I had this set up, I simply extended it to suit each of the different types of segments, each of which I will go into now.
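
As a rough illustration, the abstract class might have looked something like this sketch (all names here are my own guesses, and real video/SWF resources would load through NetStream or Loader rather than URLLoader – this just shows the preload-from-XML pattern):

package
{
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.net.URLLoader;
    import flash.net.URLRequest;

    // Base class for all segment types: reads the <resources> list from
    // the segment XML and preloads each item before playback starts.
    public class AbstractSegment extends Sprite
    {
        public var duration:Number;
        protected var resources:Object = {}; // id -> loaded data
        private var pending:int = 0;

        public function AbstractSegment(xml:XML)
        {
            duration = Number(xml.@duration);
            for each (var item:XML in xml.resources.item)
            {
                loadResource(String(item.@id), String(item));
            }
        }

        private function loadResource(id:String, url:String):void
        {
            pending++;
            var loader:URLLoader = new URLLoader();
            loader.addEventListener(Event.COMPLETE, function(e:Event):void
            {
                resources[id] = URLLoader(e.target).data;
                if (--pending == 0)
                {
                    dispatchEvent(new Event(Event.COMPLETE)); // all loaded
                }
            });
            loader.load(new URLRequest(url));
        }

        // Subclasses (video, image, SWF) override this to start playback.
        public function play():void {}
    }
}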

Video Segments

Video segments were the most interesting type. A plain video segment was simple in that it just had to be able to load and play the required file. The fun part was the videos that had content dynamically overlaid.

To do an overlay took a number of steps.

Firstly, our video guru (JB) tracked the required areas in the video (and also gave them a clean up while he was at it). This dumped out a massive txt file with all the co-ordinates for the tracking rectangle’s corners. I think he used an app called Mocha, though I’m not sure about the name.

Next we ran this text file through a Python script (that Rob created) that reformatted it into an XML document we could use in Flash.

Alrighty – now that we had it in a format that Flash could understand, we could hook it up in the segment XML using a method similar to this:

<segment type="video" duration="10"> 
    <resources> 
        <item id="main">url_to_video</item> 
        <item id="audio">url_to_audio</item> 
        <item id="trackdata">url_to_overlay_data</item> 
    </resources> 
    <overlays> 
        <item userdata="user" trackingdata="trackdata"/> 
    </overlays> 
</segment>

From the above XML you should be able to see that I had an ‘overlays’ set of nodes that linked to a tracking XML file loaded as a resource for this segment.

I wrote a class that would take in the XML and parse it into a Vector of Point objects (because that was faster than working with the raw XML). I would then divide the current time of the video by the video duration to get a percentage, which I used to figure out which set of points to use for the overlay. I had to do it that way because videos don’t have a frame property, so we had to kinda fudge it using percentages.

For the most part this worked well, but on some videos there was a slight jitter, so I set it up so that when the XML was parsed into Point objects, I would interpolate an extra Point between each node to smooth out the motion a bit.
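
As a hedged sketch of how that lookup and smoothing could work (the tracking XML schema and all names here are my own invention, not the production format):

import flash.geom.Point;

// frames[i] holds the four corner Points for tracked frame i.
var frames:Vector.<Vector.<Point>> = new Vector.<Vector.<Point>>();

function parseTrackingData(xml:XML):void
{
    var previous:Vector.<Point> = null;
    for each (var frameNode:XML in xml.frame)
    {
        var corners:Vector.<Point> = new Vector.<Point>();
        for each (var corner:XML in frameNode.corner)
        {
            corners.push(new Point(Number(corner.@x), Number(corner.@y)));
        }

        // Anti-jitter: insert an interpolated midpoint frame between
        // each pair of tracked frames.
        if (previous != null)
        {
            var mid:Vector.<Point> = new Vector.<Point>();
            for (var i:int = 0; i < corners.length; i++)
            {
                mid.push(Point.interpolate(corners[i], previous[i], 0.5));
            }
            frames.push(mid);
        }

        frames.push(corners);
        previous = corners;
    }
}

// Videos have no frame property, so map playback time to an index instead.
function cornersAt(time:Number, duration:Number):Vector.<Point>
{
    var percent:Number = time / duration;
    var index:int = int(Math.min(frames.length - 1, percent * frames.length));
    return frames[index];
}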

Because the rectangle shapes were never perfectly square, I had to use the drawTriangles() method to distort the BitmapData. I wrote a small class to handle the actual drawing of the BitmapData where I could set any of the four corners of the rectangle (based on the Points we looked up) and the number of horizontal and vertical segments to use, and it would spit out all the data I needed for a drawTriangles() call, including the vertices, indices and uvtData info.
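
A rough approximation of that helper, using the standard Graphics.drawTriangles() API (the function below is my own sketch, not the original class):

import flash.display.BitmapData;
import flash.display.Sprite;
import flash.geom.Point;

// Draws bmd into target, distorted so its corners land on the four tracked
// Points (corners[0]=TL, corners[1]=TR, corners[2]=BR, corners[3]=BL),
// subdivided into a cols x rows grid of triangles.
function drawDistorted(target:Sprite, bmd:BitmapData,
                       corners:Vector.<Point>, cols:int, rows:int):void
{
    var vertices:Vector.<Number> = new Vector.<Number>();
    var indices:Vector.<int> = new Vector.<int>();
    var uvtData:Vector.<Number> = new Vector.<Number>();

    // Bilinearly interpolate each grid vertex between the four corners.
    for (var r:int = 0; r <= rows; r++)
    {
        for (var c:int = 0; c <= cols; c++)
        {
            var u:Number = c / cols;
            var v:Number = r / rows;
            var top:Point = Point.interpolate(corners[1], corners[0], u);
            var bottom:Point = Point.interpolate(corners[2], corners[3], u);
            var p:Point = Point.interpolate(bottom, top, v);
            vertices.push(p.x, p.y);
            uvtData.push(u, v);
        }
    }

    // Two triangles per grid cell.
    for (r = 0; r < rows; r++)
    {
        for (c = 0; c < cols; c++)
        {
            var i:int = r * (cols + 1) + c;
            indices.push(i, i + 1, i + cols + 1);
            indices.push(i + 1, i + cols + 2, i + cols + 1);
        }
    }

    target.graphics.clear();
    target.graphics.beginBitmapFill(bmd, null, false, true);
    target.graphics.drawTriangles(vertices, indices, uvtData);
    target.graphics.endFill();
}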

So, that’s how we did the actual overlays, but because the images weren’t always the same for each video, I had to come up with a way to process the images on a case-by-case basis.

For starters, I made it so that in the XML you could specify what data should be used to generate the overlay BitmapData: the user’s profile pic, the cropped pic, a random friend or a specific friend.

Then, to give myself some more control over how the images were displayed and help them blend in with the videos more, I added in some optional attributes that would adjust the actual BitmapData before it was displayed.

First I added in some basic width, height and rotation attributes to allow the images to be resized to a ratio that better suited the video overlay. Then I hooked up Quasimondo’s ColorMatrix class so that I could adjust the hue, brightness, etc. of the image at run time.

Adding in these properties made it really easy to adjust the overlay images so that they blended in with the videos and looked pretty damn seamless.
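
For illustration only, a similar resize-plus-brightness pass can be sketched with Flash’s built-in ColorMatrixFilter (the site itself used Quasimondo’s ColorMatrix class; this function and its signature are hypothetical):

import flash.display.BitmapData;
import flash.filters.ColorMatrixFilter;
import flash.geom.Matrix;
import flash.geom.Point;

// Resizes the source image and offsets its RGB channels by a brightness
// value (e.g. -50), returning a new BitmapData ready for the overlay.
function prepareOverlay(source:BitmapData, width:int, height:int,
                        brightness:Number):BitmapData
{
    var scaled:BitmapData = new BitmapData(width, height, true, 0);
    var m:Matrix = new Matrix();
    m.scale(width / source.width, height / source.height);
    scaled.draw(source, m, null, null, null, true);

    var b:Number = brightness;
    var matrix:Array = [
        1, 0, 0, 0, b,
        0, 1, 0, 0, b,
        0, 0, 1, 0, b,
        0, 0, 0, 1, 0
    ];
    scaled.applyFilter(scaled, scaled.rect, new Point(),
                       new ColorMatrixFilter(matrix));
    return scaled;
}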

Finally, not all of the video overlays used the profile pics of the different users – some of them required things like the user’s name or a custom design. To handle this, I came up with a ‘preprocessor’ approach that allowed me to do pretty much whatever I needed.

The way the preprocessors worked was that I set up an interface that all processors implemented. They would take in the FacebookUser object based on whatever user data they were told to use, generate a new BitmapData and then return it to the overlay object.

To make the processors more reusable, I made it so that the actual XML node was passed to the processor as well, so you could add in extra custom data for the processor to work with. (A good example of this was all the newspaper headlines that used the same processor, but passed in different titles as extra data in the XML.)
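
A guess at what that contract might have looked like (the interface and class names are mine; FacebookUser is the app’s own user-data class mentioned above):

// IOverlayProcessor.as
package
{
    import flash.display.BitmapData;

    public interface IOverlayProcessor
    {
        // Takes the resolved user data plus the overlay's own XML node,
        // and returns the BitmapData to feed into the overlay.
        function process(user:FacebookUser, node:XML):BitmapData;
    }
}

// HeadlineProcessor.as - one class reused for every newspaper headline;
// the title text rides along as extra data on the XML node.
package
{
    import flash.display.BitmapData;
    import flash.text.TextField;

    public class HeadlineProcessor implements IOverlayProcessor
    {
        public function process(user:FacebookUser, node:XML):BitmapData
        {
            var field:TextField = new TextField();
            field.autoSize = "left";
            field.text = String(node.@title); // hypothetical attribute

            var bmd:BitmapData = new BitmapData(int(field.width),
                                                int(field.height), true, 0);
            bmd.draw(field);
            return bmd;
        }
    }
}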

What you ended up with then was some XML that looked like this:

<segment type="video" duration="10"> 
    <resources> 
        <item id="main">url_to_video</item> 
        <item id="audio">url_to_audio</item> 
        <item id="trackdata">url_to_overlay_data</item> 
        <item id="processor">url_to_processor_data</item> 
    </resources> 
    <overlays> 
        <item userdata="user" trackingdata="trackdata" processor="processor" width="100, height="150" brightness="-50"/>
    </overlays> 
</segment>

This approach meant that we could basically generate any graphic or style we needed for any video overlay in the entire site – all with pretty minimal fuss 🙂

SWF Segments

The other semi-complex segment type was the customised SWF segment.

Instead of trying to overlay the data on a SWF, we found it better to insert the data into the SWF itself – that way, we could lay out any basic animations using placeholder graphics, then simply replace them at run time.

The process for this was to set the SWF up so that it implemented an interface. Using this interface, the app would pass in the data specified in the XML, in a similar fashion to the way the video overlays worked.
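
Something along these lines, though the name and signature here are my own guesses:

package
{
    import flash.display.BitmapData;

    // Implemented by each custom SWF; the app calls this after loading,
    // passing in the bitmap built from the XML's userdata attribute.
    public interface ICustomisableSwf
    {
        function setUserData(user:FacebookUser, image:BitmapData):void;
    }
}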

There was no real need for preprocessors with SWF objects because the SWFs could apply whatever effects they needed internally. I did leave in the ability to do the usual effects like resizing and brightness though, just in case.

<segment type="swf" duration="2"> 
    <resources> 
        <item id="main">url_to_swf>/item> 
    </resources> 
    <overlays> 
        <item userdata="user" width="100" height="150" brightness="-50"> 
    </overlays> 
</segment>

Image Segments

The final type of segment used in the site was an Image segment. These were the most basic type of segment in that all they did was load and display an image. There was no need for any sort of data insertion with this type of segment so I was able to keep it very simple.

<segment type="image" duration="0.15"> 
    <resources> 
        <item id="main">url_to_image</item> 
    </resources> 
</segment>

The image was simply displayed on screen using a timer that ran for the amount of time specified in the duration.
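
Something like this minimal sketch (duration comes from the segment XML; onComplete is a stand-in for whatever hands control back to the sequence player):

import flash.events.TimerEvent;
import flash.utils.Timer;

// duration is in seconds, Timer wants milliseconds; fire exactly once.
var timer:Timer = new Timer(duration * 1000, 1);
timer.addEventListener(TimerEvent.TIMER_COMPLETE, function(e:TimerEvent):void
{
    onComplete(); // advance to the next segment
});
timer.start();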

Conclusion

So, that is pretty much how the Lynx Fastlife website works. By combining the random story generation techniques described in part 1 of this post with the techniques I described above, we were able to generate a pretty cool customised experience for users that, in theory, should never be exactly the same twice.

I could never have finished this project on time, or to the standard it reached, without the help of all the guys at VJ – namely Erik, Rosy, Cookie, JB, Vincent, Michelle and Simon. It was a massive team effort to get it over the line, and hopefully the whole campaign will be successful!

If you haven’t done so already, be sure to check it out at http://www.lynxfastlife.com.au

Update

As is the way with campaigns, the website is no longer actually live. I did find this case study on Vimeo that shows off a lot of the site’s elements though.

