MLT developer Brian developed four new filters, available now in Git master. The fft filter uses the fast Fourier transform to produce data for other filters. The dance filter moves, scales, and rotates an image or video in reaction to the audio characteristics. The lightshow filter alters the color of the image based on the audio. Lastly, the audiowaveform filter draws a waveform over the image.
These were all sponsored by the "Learn Your Lyrics" YouTube channel. They produce their videos with MLT through shell scripts that build long melt command lines. You can see the fft, dance, and audiowaveform filters in action in this video:
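As a rough illustration of that workflow, here is a minimal sketch of a script that assembles a melt command line applying the new filters. The filter names (fft, dance, audiowaveform) come from the announcement above; the file names and the avformat consumer usage are placeholder assumptions, and real scripts would also set filter parameters.

```shell
#!/bin/sh
# Hypothetical sketch: assemble a long melt command line the way such a
# script might, then print it (swap `echo` for `eval` to actually run it).
# INPUT/OUTPUT names are placeholders, not from the announcement.
INPUT="song_video.mp4"
OUTPUT="lyric_video.mp4"

CMD="melt $INPUT"
# fft produces audio spectrum data for the visualization filters
CMD="$CMD -filter fft"
# dance moves/scales/rotates the image in reaction to the audio
CMD="$CMD -filter dance"
# audiowaveform draws a waveform over the image
CMD="$CMD -filter audiowaveform"
# encode the result (assumed avformat consumer)
CMD="$CMD -consumer avformat:$OUTPUT"

echo "$CMD"
```

A real lyric-video script would of course add many more parameters per filter, which is why the generated command lines get long.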
A major regression slipped into the 0.9.4 release, and some other good fixes rolled in just after that release, prompting this new one. Please discontinue using version 0.9.4 and upgrade.
Ushodaya Enterprises Limited of India, the original sponsor of the MLT and Melted projects, has assigned its copyrights to Daniel R. Dennedy, sole proprietor and CTO of Meltytech, LLC. What does that mean? Not much. There are no planned changes to project direction at this time, but it may provide some options in the future. It is simply good to have the copyrights under the control of active members of the project so that future options can be exercised. Many thanks to project co-founder BGa for facilitating this transaction.
Synfig Studio is a good, free, open source, cross-platform 2D animation program. Its developers wanted to add audio support, but they needed more than just a hardware abstraction layer for audio output, and more than a multi-format, multi-codec library. They also wanted something that provides timing and mixing with a succinct API. They chose MLT.