Motion and Synchronization

Two approaches

To move visual objects on the screen, or to trigger audio objects and synchronize them with one another, Alambik provides two approaches:

· 1 Duration-based synchronization:

Used to create ordered audiovisual projects called clips, in which interaction with the user does not affect a predefined course of events. For this kind of duration-based synchronization, Alambik offers:
· A synchronous (duration-based) sequencer.

· 2 Event-based synchronization:

Event-based synchronization is non-linear, useful for productions containing animated objects or events which are triggered by outside actions (such as the keyboard, mouse, etc.). When you want to synchronize things based on events, but you don't know precisely when they will occur, Alambik lets you use:
· An asynchronous (event-based) sequencer.
· A selection of logical statements which let you take a more traditional programming approach in controlling clip behavior.



Alambik's Duration-Based sequencer

1.1. The basic principle
1.2. Selecting a TIMELINE
1.2.1 Synchronizing images based on time
1.2.2 Synchronizing images based on audio
1.3. Automatically-triggered events
1.3.1. Triggering on images
1.3.2. Triggering on audio
1.3.2.1 The basic principle
1.3.2.2 Realtime-rendered Audio (".MOD", ".XM",".S3M", ".IT")
1.3.2.3 Pre-rendered Audio (".MP3", ".WAV",".OGG")
1.3.3. Executing procedures based on different kinds of events
1.4 Chaptering a clip
1.5 Practical example

Alambik's sequencer facilitates the creation of ordered audiovisual projects, usually linear, called "clips".
You may, for example, easily use it to build music videos, interactive movies, or even cartoons.


1.1. The basic principle

[Diagram]

A timeline (shown here in green) represents the progression of time; that is to say, the overall duration of a clip (".alg" or ".hdv").

Based on the position of the Selector (see diagram), you can partition the timeline in any one of the following ways:

· According to synchronization cues contained in any standard "MIDI" protocol file (1) (Feature not yet implemented but currently under research).
· In seconds, tenths of seconds, and hundredths of seconds, based on the internal system timer (2).
· According to audio markers, as set by any one of the three Audio Players (3, 4, 5).
· According to video frames, as set by the system Video Player (MPEG1) (6). (feature to be released)

Depending on the selected Audio Player, audio markers can be any one of the following:

· The line numbers in a musical score, set by the Realtime-rendered Audio Player (for file formats MOD, XM, S3M, or IT).
· Points of musical reference, for example, the rhythmic score of a pre-recorded song, set by the Pre-rendered Audio Player. (feature to be released)
· Or even speech markers, set by the Alambik Text to Speech Player (feature to be released).

In the future, synchronization cues will be made available through an "OUT control signal," which can be exported and used by third-party recipients like television control consoles, other computer workstations, robotic devices, lasers, and lighting systems.

Once you have chosen your desired Timeline, you define your visual tracks and audio tracks.

Each of these tracks contains a certain number of keys which refer to the Timeline, allowing you to define precisely when you would like events and effects to be applied to visual and audio objects.

A track can be assigned, according to its type, either to a visual object or to an audio object.
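
To make this structure concrete, here is a minimal sketch assembled from the instructions used in the practical example of section 1.5; the time values, positions, and handles are placeholders:

@clip_01 = clip.timeline.create (INTERNAL_TIMER)        // the Timeline, here partitioned in seconds
@track_visual = track.visual.create (TRACK_LOOP)        // a visual track
track.key.set ( time1 )                                 // a key referring to the Timeline
track.key.position.set ( pos_x_1, pos_y_1, pos_z_1 )    // the effect applied to the visual object at that key
track.end ( )
@track_audio = track.audio.create ( )                   // an audio track
track.end ( )
clip.timeline.end ( )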


1.2 Selecting a Timeline

For optimal flexibility, Alambik allows you to partition your Timeline according to the kind of production you want to create. You can therefore easily:

· Synchronize images based on time
· Synchronize images based on audio

1.2.1 Synchronizing images based on time:

This approach is best used when the clip you're building relies on animating images which move in relation to time, measured here in seconds. This works well for projects in which visual events are assigned sounds after the visual part of the animation has already been built.

When synchronizing images based on time, the Alambik sequencer Timeline will be partitioned into seconds. The definition of animation tracks and the movement of visual objects will be given priority, as these form the foundation of a scene.

In this mode, the project creator "thinks out" his or her animation in terms of time. Visual events are based on the Timeline, and sound effects, in turn, are based on the Visual events.



1.2.2 Synchronizing images based on audio (music or speech):

This approach is best adapted to projects consisting of sequences in which you want to manipulate visuals according to the audio signal.

Synchronizing images based on audio is useful because, with this approach, animated visual objects automatically respond to any change in the tempo of the music or speech.

For example, to create a music video, the Timeline can be partitioned according to the audio's tempo, i.e., its musical score. The project creator can then easily "place" all animations and visual movement directly onto the chosen music.

In this mode, the project creator "thinks out" his or her scene in terms of sound.



To create talking characters, you can partition the Timeline according to the position in time of each spoken word. The animation of your character's arms, for example, or of his body, can be defined according to the Timeline and thereby take place in precise synchronization with the words the character speaks. In addition, facial animations such as mouth movement and chin waggling, as well as emotive facial expressions, can be automatically synchronized in Alambik according to the phonemes pronounced and their amplitude, as well as by expression markers inserted into the text in the case of Text to Speech. (feature to be released)
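
As a sketch of this mode: the Timeline is created on an audio timer, and keys then reference audio markers instead of seconds, following the marker-key form used in the event-track example of section 1.5. The constant MUSIC_TIMER and the marker names are assumptions, since the exact audio-timer constants are not listed in this chapter (section 1.3.3 mentions audio, music, and speech timers):

@clip_01 = clip.timeline.create (MUSIC_TIMER)           // hypothetical audio-timer constant
@track_visual = track.visual.create (TRACK_LOOP)
track.key.set ( rythm_track, mark1 )                    // key placed on an audio marker rather than on a time in seconds
track.key.position.set ( pos_x_1, pos_y_1, pos_z_1 )
track.key.set ( rythm_track, mark2 )                    // the visual reaches its second position on the next marker
track.key.position.set ( pos_x_2, pos_y_2, pos_z_2 )
track.end ( )
clip.timeline.end ( )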


1.3 Automatically-triggered Events

The Alambik sequencer lets you automatically trigger an ordered series of events.

To speed up your work and let you build scenes as naturally as possible, the Alambik sequencer lets you trigger any number of events based on either visual or audio cues.

1.3.1. Triggering on Images (order: time > image > sound effect)

Let's imagine you're creating a simple project in which a character steps up to the front door of a house and knocks. To design this scene using a traditional approach, the project creator would have to carefully consider elapsed time (in seconds) to determine the exact duration between the appearance of the character on screen and the instant he begins knocking. At that precise instant, the sound resulting from the character's hand striking the door would then be recorded by the sound recordist with the aid of a microphone.

With Alambik, however, the project creator can script this kind of scene using a natural and coherent progression based on real phenomena. Thus, he or she would:

· Partition the Timeline in "seconds."
· Easily note the time elapsed up until the door is knocked.
· Set the knocking sound by looking at the image, then placing the sound effect exactly at the moment at which the character's hand makes contact with the door.

Thus, as in real life, the sound will be the result of an actual event - freeing the project creator from the necessity of positioning it arbitrarily in time, independent of visual content. Without Alambik's ability to trigger on images, the project creator would need to be working constantly with a stopwatch in hand, wasting time adding and subtracting in order to determine "that precise instant" at which events must happen.

As in real life, any sound effects arising from movement will automatically follow any changes to the movements' speed. Indeed, we can consider the usefulness of this feature from the point of view of a movie director. Let's say this director often likes to compare the same scene played by his actors at various different rhythms, in order to choose the take which best conveys the emotion he wants to get across. Imagine, in this case, that all sound effects had to be positioned on a Timeline measured in seconds: our director would be forced to manually reposition each effect every time he changed the rhythm of a scene. Even if he could somehow modify the speed of his "internal timer" to adjust the pace of sound effects, he would be constrained to make only linear changes in speed - he could not vary the rhythm, for example, to take on different speeds at different points. Furthermore, once he changes rhythm he would lose the reference value of his Timeline, which was originally broken down into seconds. Through triggering with images, Alambik solves all these problems.

Knocking on a door constitutes a looping movement. If the creation of this movement is to be accomplished by repeating the same animated sequence (the arm and hand in motion, for example), a project creator working in Alambik need only set the sound effect once to be triggered by the desired image (the hand coming into contact with the door) in order to produce a perfectly-sequenced animation.
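
As a sketch of the door-knocking example, using the instruction track.key.frame.sound.play as it appears in the practical example of section 1.5; the sound file, action number, and frame number are hypothetical:

@knock_sound = sound.load ("knock.wav")                 // hypothetical sound effect for the knock
@track_audio = track.audio.create ( )
track.key.set ( time1 )
track.key.frame.sound.play (1, 9, @knock_sound, CHANNEL_1, SOUND_ONE_SHOT)   // the sound is triggered by frame no. 9 of action no. 1, the image where the hand touches the door
track.end ( )

Because the sound is attached to a frame of the animation rather than to a position measured in seconds, it automatically follows the movement if the animation's speed or rhythm is later changed.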

1.3.2. Triggering on audio (order: music > image > sound effect)

1.3.2.1 The basic principle:

To illustrate triggering on audio, let's take as an example the creation of a music video.

Creating music videos is by now a time-tested practice, for which there exists a traditional approach based on the following steps:

· During shooting, a "timecode" is burned onto the tape indicating in real-time the precise position of the musical soundtrack.
· Then, during editing, special steps must be taken to re-align the images, often shot in video, with the music in the soundtrack. Using the timecode, the editor carefully matches the video footage up with the music.
· Even if the speed of the images must be changed in order to meet the needs of synchronization, the music must always be played at the same speed, serving thus as the project's consistent "Timebase."

With Alambik, the project creator benefits from tools specially designed to simplify and speed up the creation of projects in which images must be synchronized to a soundtrack. Indeed, to make the work as natural as possible, particularly if the project creator happens to be a musician, Alambik gives him or her the option of:

· Partitioning the sequencer's Timeline based on a musical score, or audio markers, by selecting the Audio Timer;
· Displaying, positioning, animating, and moving visuals according to these audio markers.

In this way, all visual content will be firmly based on the soundtrack.

The project creator is thus freed from:

· Wasting time having to listen again and again to the soundtrack in order to figure out the elapsed time in seconds between different points in the musical score.
· Loss of synchronization in the case of streaming audio.
· The impossibility of quantifying musical time in the absence of a musical timescale.
· The difficulty of calculating a desired position in the music in terms of tenths or hundredths of seconds even if it happens to fall on a simple musical measure or beat.

In other words, without Alambik's ability to trigger on audio, a musically-minded project creator would be a bit like a deaf person forced to time his dance steps with the aid of a stopwatch.

Of course, while there exists a direct correspondence between the passage of time and the progression of a musical score, the passage of time is measured in a constant and linear manner, whereas the progression of music is based on a complex division of time, ordered but nonlinear.

1.3.2.2 Synchronizing with Realtime-rendered audio files (".MOD", ".XM", ".S3M", ".IT"):

The Alambik Editor includes a utility called the Audio Synchronization Tool which generates a unique marker corresponding to every line of a realtime-rendered audio file.

1.3.2.3 Synchronizing with Pre-rendered audio files (".MP3", ".WAV", ".OGG"):

For pre-rendered audio files, a utility will soon be released which allows Alambik to automatically generate markers from any kind of sound file, be it a music recording or any other kind of audio signal (speech for synchronizing the movement of a character's lips, etc.).

It's important to remember, however, that unlike realtime-rendered audio files, the technologies behind pre-rendered audio files are not intended for musical creation, but rather for the compression of digital audio signals. Pre-rendered audio files, for example, do not contain a musical score. It is therefore impossible to extract the exact lines of a pre-rendered audio file's musical score and add corresponding markers, as it is with realtime-rendered audio files.

The utility for generating markers for pre-rendered audio files will have two modes of functioning:

· One mode which lets you automatically set markers corresponding to specific sounds, notably the tempo of a drum or of a bass element in the music.

This mode functions by digitally analyzing the sound signal in real time.

· Another mode which lets you manually set the markers you want, by means of three different kinds of peripherals:

· The computer keyboard, certain keys of which can be assigned to precise events,
· A MIDI keyboard connected via the MIDI port of any soundcard,
· A drum pad, connected via the MIDI port of any soundcard.

The project creator will have several tracks available for setting different kinds of sound markers according to need. He or she could, for example, start off by extracting the tempo of a musical measure onto the first track, then setting markers for lyrics, refrains, breaks, coda, and musical notes on distinct tracks.

Three tracks will be automatically created for each audio file, named by default "rhythm_score", "coda_score" and "note_score".
Later on, the project creator can take advantage of this "independence of tracks," most notably by dynamically changing tracks in order to alter the synchronization of visuals, or, inversely, to control the activation of each marker track for an audio file by means of the Timeline.

1.3.3. Executing procedures based on different kinds of events:

After looking at how different "series of events" can be set in a Timeline partitioned either in seconds (internal_timer) or by audio markers (audio, music, or speech_timer), we are now going to discuss how to call procedures based on different kinds of "trigger events".

A track specially-made for events, "track.event", exists in order to gather into a single sequence all the different kinds of events possible:

1.3.3.1. Temporal events:
In this case, events are triggered by a definition key representing a precise instant of time, defined in seconds, minutes, etc.

1.3.3.2. Audio events:
In this case, events are triggered according to a precise location in a musical score, an audio recording, or synthesized speech.

1.3.3.3. Events triggered by user actions:
In this case, events are triggered by a user action. Whether this be an action of the mouse, a touchscreen, or the keyboard, any interaction taking place in a visual interaction zone defined in the track "track_visual" can be programmed to trigger a procedure call, as long as it doesn't compromise the linearity of a clip.

1.3.3.4. Events triggered by a specified frame:
In this case, events are triggered by the display on screen of a specified frame in an animated visual (sprite, video, 3D object, etc.).

1.3.3.5. Events triggered by the collision of visuals (to be implemented):
In this final case, events are triggered either by:
· The collision of other visual objects against a specified visual object.
· The collision of a specified visual object against other visual objects.

Note:
Procedures can be either:
· Suspending - i.e., they halt the progression of the clip until the end of their execution;
· Continuous - i.e., they execute as the clip progresses, without halting it.
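
To summarize sections 1.3.3.1 and 1.3.3.2, here is a sketch of an event track combining a temporal event and an audio event, following the pattern of section 1.5 (proc_name, the parameters, and the marker names are placeholders):

@track_event = track.event.create ( )
track.key.set ( time1 )                                 // temporal event: the key is an instant defined in seconds
track.key.event.call (proc_name, param1, param2 )       // the procedure is called at that instant
track.key.set ( rythm_track, mark1 )                    // audio event: the key is an audio marker
track.key.event.call (proc_name, param1, param2 )       // the procedure is called when that marker is reached
track.end ( )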


1.4 Chaptering a clip (".Alg" or ".Hdv" files)

Chaptering a clip means placing markers in a chapter track which let the user move to precise marked locations during the playback of a clip.

These "jumps" can be made by using the buttons "previous chapter" and "next chapter" on the Alambik MC Player shown at right.

The rapid search buttons FAST FORWARD and REWIND can either be activated or deactivated, depending on the wishes of the project creator. It is thus possible, for example, to keep a user from arbitrarily moving the action to the middle of a sentence spoken by a character, thereby preserving the meaning and integrity of the character's lines.
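
As a sketch of a chapter track, following the pattern of section 1.5 (the chapter names and times are placeholders):

@track_index = track.chapter.create (SEEK_ON)
track.key.set ( time1 )
track.key.chapter.set ($chapter_name_01, CHAPTER_SEEK_ON)    // rapid searching allowed within this chapter
track.key.set ( time2 )
track.key.chapter.set ($chapter_name_02, CHAPTER_SEEK_OFF)   // rapid searching disabled, for example to protect a character's spoken lines
track.end ( )

During playback, the instruction clip.chapter.set ( @clip_01, CHAPTER_NUMBER ) then jumps directly to a marked location.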


1.5 Practical Example

Let's create an audiovisual clip. It will include all the different tracks which control the behavior of their assigned elements. This clip is based on a "unique Timeline," which will "synchronously" give rhythm to all of its tracks (i.e., using duration-based synchronization).
The start points for each one of these tracks are therefore predetermined by temporal definition keys based on this global Timeline.

Our first step will be to create the "support structure" for the clip, by selecting an adequate timebase for the Timeline according to which all tracks will be synchronized.

For example:

@clip_01=clip.timeline.create (INTERNAL_TIMER)
...
clip.timeline.end ()


Then we place all of the necessary tracks into this support structure:

@clip_01=clip.timeline.create (INTERNAL_TIMER)
@track_visual = track.visual.create (TRACK_LOOP)
...
track.end ()
... etc
@track_audio = track.audio.create ()
...
track.end () // 1 track, only if needed
... etc
@track_event = track.event.create ()
...
track.end () // 1 track, only if needed
... etc
@track_index= track.chapter.create (SEEK_ON)
track.end ()
clip.timeline.end ()

As mentioned above, a clip can include four different kinds of tracks:
· Visual tracks for each visual object to be animated
· Audio tracks for each audio object to be played
· An event track which includes all the event definitions for a clip
· A chapter track which includes all the chapter definitions for a clip

After having defined the basic form of our clip, we can now assign specific objects to each track.

clip.create ( @clip_01 )
clip.assign (@track_visual1, @object1)
clip.assign (@track_visual2, @object2)
clip.assign (@track_visual3, @object3)
clip.end ( )

Notes:
Neither event tracks nor chapter tracks are assignable.
These two kinds of tracks cannot be controlled with the instruction clip.play.

The tracks selected by the instruction clip.play will be controlled synchronously by the clip which contains them:

clip.play (@clip_01 )
clip.pause (@clip_01)
clip.resume (@clip_01)
clip.stop (@clip_01)
clip.time.seek (@clip_01, UP/DOWN )
clip.chapter.set ( @clip_01, CHAPTER_NUMBER )

as well as, coming soon,

clip.forward (@clip_01 )
clip.rewind (@clip_01)
clip.slow (@clip_01)
clip.speed (@clip_01, @speed)

Conventions for the track control string variable:

1 = track "activated" (ON)
0 = track "muted" (OFF)
X = track "non-controllable": linked track, chapter track, event track, …


"Global Play" mode for a clip containing six tracks:

$ALL_TRACKS = "111111" plays all tracks.

"Selected Play" mode for a clip containing six tracks, three of which are non-controllable:

$SELECTED_TRACKS = "11X1XX" plays all controllable tracks
$SELECTED_TRACKS = "10X1XX" plays only tracks 1 & 4

or, without having to use the variable at all: clip.play (@clip_01, "10X1XX")
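
Assuming the variable form of the call mirrors the literal form shown above, playback with the selection string would look like:

clip.play (@clip_01, $SELECTED_TRACKS)    // plays only the tracks marked "1" in the selection string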

[Diagram]

Let's take as an example a clip, in this case an Alamgram, in which the synchronization is duration-based. The above diagram illustrates the script which follows from our example. The Timeline will be partitioned in seconds, by selecting "internal_timer".

We will create an initial track (track 01), to define the movement of an animated 2D object (i.e., a sprite). We place the keys for this track on the Timeline so that, upon launching the Alamgram (that is to say at T = 0 seconds), the sprite will move from its starting position to its ending position in 8 seconds' time.

At the same time, still in this initial track, the animation of the sprite is activated through the selection of action No. 1, which consists of 12 images played in a loop at the speed of 8 images per second.

We then create a second track (track 02), in order to create sound effects for the footsteps of the sprite using two sounds respectively assigned to image No. 6 and No. 12 of action No. 1.

In this example, the visual (i.e., the sprite) is synchronized according to the passage of time, and its sounds are in turn synchronized according to the movement of the visual. We can thus describe it as an Alamgram based on the same kind of ordered series of events outlined in section 1.3.
Sound effects will thus be automatically re-synchronized whenever needed; that is to say, upon any change to the sprite's animation speed, as well as upon any change to the overall duration of its movement.

Let's now look at the script itself.

Script (with comments):

@spr1 = sprite.load ("blue.spr")                        // loads the sprite contained in file "blue.spr"
@step_soundG = sound.load ("step1_sound.wav")           // loads the sound effect contained in file "step1_sound.wav"
@step_soundD = sound.load ("step2_sound.wav")           // loads the sound effect contained in file "step2_sound.wav"
@step_soundG2 = sound.load ("step3_sound.mp3")          // loads the sound effect contained in file "step3_sound.mp3"
@step_soundD2 = sound.load ("step4_sound.mp3")          // loads the sound effect contained in file "step4_sound.mp3"

@clip_01 = clip.timeline.create ( INTERNAL_TIMER )      // selection of the internal timer

@track_visual = track.visual.create (TRACK_LOOP)        // Creation of the first track, to be used for the "sprite" visual
track.position.interpolation.set (INTERP_LINEAR)        // Selection of the default mode of movement
track.key.set ( time1 )                                 // Time key, expressed in seconds, or tenths or hundredths of seconds
track.key.frame.speed.set (10)                          // The number of images per second for the animation
track.key.action.set ( ACTION_NUMBER )                  // Sets the action corresponding to the desired animation
track.key.position.set ( pos_x_1, pos_y_1, pos_z_1 )    // Sets the starting position for the visual at "time1"
track.key.set ( time2 )                                 // Time key, placed at 8 seconds in the Timeline
track.key.position.set ( pos_x_2, pos_y_2, pos_z_2 )    // Sets the position for the visual at "time2"
track.end ( )

@track_audio = track.audio.create ( )                   // Creation of an audio track, for the sound of the sprite's footsteps
track.key.set ( time )                                  // Selects the instant at which the following sounds will be activated:
track.key.frame.sound.play (1, 6, @step_soundG, CHANNEL_1, SOUND_ONE_SHOT)    // Assigns a sound effect to take place on frame no. 6 of action no. 1
track.key.frame.sound.play (1, 12, @step_soundD, CHANNEL_1, SOUND_ONE_SHOT)   // Assigns another sound effect to frame no. 12 of action no. 1
track.key.set ( time +n )                               // It's possible to modify the assignment of sound effects at any time:
track.key.frame.sound.play (ACTION_NUMBER, FRAME_NUMBER, @step_soundG2, CHANNEL_1, SOUND_ONE_SHOT)   // this time with two .MP3 sound effects
track.key.frame.sound.play (ACTION_NUMBER, FRAME_NUMBER, @step_soundD2, CHANNEL_1, SOUND_ONE_SHOT)
track.end ( )

@track_event = track.event.create ( )                   // An event track can be used to trigger the execution of procedures:
track.key.set ( time1 )                                 // according to time (Timeline on internal_timer),
track.key.event.call (proc_name, param1, param2 )       // calling the given procedure and passing these parameters;
track.key.set ( rythm_track, mark1 )                    // according to an audio marker (Timeline on audio_timer),
track.key.event.call ( proc_name, param1, param2 )      // calling the given procedure and passing these parameters;
track.key.event.call (@event, proc_name )               // according to a user action (to be implemented);
track.key.collision.event.call ( @track_visual, proc_name )   // according to a collision between visual objects (to be implemented)
track.end ( )

@track_index = track.chapter.create ( )                 // Creation of an optional track for chaptering
track.key.set ( time1 )                                 // Determines the beginning of the Alamgram's first chapter
track.key.chapter.set ($chapter_name_01, CHAPTER_SEEK_ON)    // Chapter name and search mode; rapid searching allowed within this chapter
track.key.set ( time1+length )                          // Starts the second chapter
track.key.chapter.set ($chapter_name_02, CHAPTER_SEEK_OFF)   // The option for rapid searching within a chapter can be deactivated
track.end ( )

clip.timeline.end ( )                                   // Note: @track_index reflects the total number of chapters

clip.create (@clip_01 )                                 // The principal track created in this way will be assigned AVOs
clip.assign ( @track_visual, @spr1 )                    // Assignment of the sprite @spr1 to the track @track_visual
clip.assign ( @track_audio, @spr1 )                     // Assignment of the sprite @spr1 to the track @track_audio
clip.end ( )
// The assignment of an AVO to a track can also be deleted.

Alambik's Event-based Synchronization

Event-based synchronization is non-linear. It is useful for productions containing animated objects or events which are triggered by outside actions (such as the keyboard, mouse, etc.). When you want to synchronize your script based on events, but you don't know precisely when they will occur, Alambik lets you use:

· An asynchronous (event-based) sequencer.
· A selection of logical statements which let you take a more traditional programmer's approach in controlling clip behavior.

Of course, if it's simply a question of moving an element with the mouse, you would use the instruction mouse.assign ().
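
As a hedged illustration only, since this chapter does not show the signature of mouse.assign, the call presumably takes the object to be attached to the pointer (the handle @spr1 is hypothetical):

mouse.assign (@spr1)    // assumed form: the sprite @spr1 then follows the mouse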

2.1. The Event-based sequencer.
2.2. "Classic" programming-based animation.


2.1. The Event-based sequencer

Generally speaking, the event-based sequencer is preferable for creating projects in which objects are animated (or events are triggered) by pre-defined exterior events (keyboard, mouse, etc.) and when you don't know precisely when they will occur.

This mode of the Alambik sequencer lets you define independent tracks, which are not assigned specific types. Each track has its own independent Timeline. Unlike in duration-based mode, the triggering of these tracks is not set to take place at any specific time. Rather, their activation, whether "individually" or "grouped," occurs dynamically in the script, based on specific events.

Here is an example:

@track1 = track.create (TRACK_LOOP)
track.position.interpolation.set (INTERP_LINEAR)
track.key.set ( time1 )
track.key.frame.speed.set (10)
track.key.action.set ( ACTION_NUMBER )
track.key.position.set ( pos_x_1, pos_y_1, pos_z_1 )
track.key.set ( time2 )
track.key.position.set ( pos_x_2, pos_y_2, pos_z_2 )
track.end ( )
@track2 = track.create (TRACK_LOOP)
track.position.interpolation.set (INTERP_LINEAR)
...
track.end ( )
@track3 = track.create (TRACK_LOOP)
track.position.interpolation.set (INTERP_LINEAR)
...
track.end ( )
@sequence_01= sequence.group.create ( )
sequence.group (@track1, @object1)
sequence.group (@track2, @object2)
sequence.group (@track3, @object3)
sequence.group.end ( )

@sequence_02=sequence.play ( @track1,@object1)
sequence.pause ( @sequence_02 )
sequence.resume ( @sequence_02 )
sequence.seek ( @sequence_02, UP/DOWN )
sequence.stop ( @sequence_02 )

sequence.group.play ( @sequence_01)
sequence.group.pause ( @sequence_01 )
sequence.group.resume ( @sequence_01 )
sequence.group.seek ( @sequence_01, UP/DOWN )
sequence.group.stop ( @sequence_01 )

2. 2. "Classic" programming-based animation

This mode, a classic approach to animation, makes use of logical statements and mathematical operators. It should be reserved for cases in which use of the Alambik sequencer is not appropriate. For example:

· In very simple scripts you may prefer to use the instruction time.wait () instead of creating entire sequences and tracks.
· If you want to move an object at random, or if its movements require very complex calculations: for example, to orchestrate the movement of enemy characters in an action or strategy game.
· When your script already contains loops which can be used for repositioning objects. In this case it may be simpler (but not always more efficient) just to use the instruction position.set to move your object.
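
As a minimal sketch of this classic approach, assuming position.set accepts an object handle and coordinates and time.wait takes a duration in seconds (neither signature is shown in this chapter; the positions are placeholders):

@spr1 = sprite.load ("blue.spr")
position.set (@spr1, pos_x_1, pos_y_1, pos_z_1)    // place the object directly, without creating a track
time.wait (2)                                      // wait two seconds
position.set (@spr1, pos_x_2, pos_y_2, pos_z_2)    // then reposition it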