Friday, November 20, 2009

More Bearmod Datamoshing: "Set You On Fire" Remix



Head on over to www.bandeapart.fm to check out an A/V remix video for Montreal party band Think About Life. The audio remix is by Hatchmatik and the video remix is by Ian Cameron of The National Parcs, who also directed the original live session shoot and edit at Bandeapart's studios.

I was brought in to help facilitate some of the A/V synchronization between Hatch's Ableton Live session and Ian's Modul8 and Final Cut Pro sessions. Using the soon-to-be-released Bearmod AU/VST plugins that I've been developing, we were able to capture many of the nuances of Hatch's chops of the original live multitrack audio recordings and to dynamically chop up Ian's live multichannel video captures of the Bandeapart session in Modul8. In the coming weeks I'll post a video detailing a bit of our process!

Additionally, I datamoshed much of the original session footage for Ian using traditional techniques (as described here), which he then used for his final edit in Final Cut.

Please let me know what you think of the results!

Best!
-BM

Monday, September 28, 2009

Sounds for Datamoshing/Visual Flow Distortion


Purple Water is a brief experimental attempt at matching visuals, made with the newly popularized visual flow/datamosh video-processing techniques, to an appropriate sound. Since this process tends to render fluidly distorting images, the sound of splashing water seemed like an obvious match, and I found the results interesting.

The video was made by processing clips of an old video of mine, Ipanemicenema, with an adapted version of one of Andrew Benson's hsflow distortion patches for Max/Jitter. The sound was chopped and composed in Ableton Live, which in turn sends OSC timecode information through my custom-made Pluggo plugin to Max, so that the video footage is chopped in the same linear way that the original audio piece was chopped and rearranged.
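If you want to experiment with the same idea outside of Max and Pluggo, here is a minimal sketch of the underlying concept in Python using the python-osc package: the audio side broadcasts its playback position over OSC, and the video side converts that position into a frame to seek to. The OSC address, port, and frame rate here are placeholder assumptions for illustration, not the actual values my plugin uses.

# Minimal sketch of the OSC timecode idea, NOT the actual Pluggo plugin:
# the audio side broadcasts its playback position, the video side seeks
# to the matching frame. Address, port, and fps are made up for this example.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

VIDEO_FPS = 30.0

# --- sender side (stands in for the Ableton/Pluggo plugin) ---
def send_position(client: SimpleUDPClient, beats: float, bpm: float) -> None:
    seconds = beats * 60.0 / bpm  # convert musical time to seconds
    client.send_message("/timecode/seconds", seconds)

# --- receiver side (stands in for the Max/Jitter patch) ---
def on_timecode(address: str, seconds: float) -> None:
    frame = int(seconds * VIDEO_FPS)  # the frame the video player should jump to
    print(f"{address}: jump to frame {frame}")

if __name__ == "__main__":
    dispatcher = Dispatcher()
    dispatcher.map("/timecode/seconds", on_timecode)
    server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
    # In a real setup the sender runs inside the audio host; here we just
    # fire a single message at ourselves and handle it once.
    send_position(SimpleUDPClient("127.0.0.1", 9000), beats=3.0, bpm=120.0)
    server.handle_request()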

Saturday, September 26, 2009

Inspiration: Civilization Elevator Mural Installation

At New York City's Standard Hotel, while you ascend to your 20th-floor suite, you also ascend from hell to heaven via a collage of video loops from classic films.


Marco Brambilla's Civilization video collage tickles me in just about every way.  
First, there's his immaculate collaging of these moving images into a surreal and ethereal sequence that maintains a somewhat believable illusion of natural vertical panning movement across convincing but impossible architecture. Then there's his keen appropriation of beautiful, iconic, and even outright hilarious images from films, ranging from Arnold Schwarzenegger flexing his muscles while standing over Princess Leia in her classic Jabba's slave garb, to the Stay Puft Marshmallow Man from Ghostbusters stomping through a cityscape, to Michael Jackson doing his high-kick dance.

The sheer volume of appropriated video clips in Brambilla's work places me in a state of nostalgic curiosity as I try to recall where I might have seen each little chunk of imagery before. This activity of trying to place small samples of historical moving imagery seems to be the visual equivalent of recognizing and placing sound samples in much of hip hop music, which is one of my favorite parts of listening to hip hop in the first place!

Wednesday, September 16, 2009

Live Visuals update

In the last several months, I've been shooting original video and manipulating it through Modul8. One of the projects I am compiling my live VJ set for is a collaboration with the Montréal "psych-soul" band Golden Isles (formerly known as 'Crystal Moustache').

The band will be playing a live show at Ex-Centris, Montréal's premier art house cinema, which decided to stop operating as a cinema this summer in order to take a new, experimental direction and become a multimedia and live performance venue this fall.

My plan has been to develop an immersive environment for the audience, in line with the band's live performance, which they described to me as 'psychedelic'. The band has chosen the theme for the visuals: the four elements. Over my last month in Hawaii, I shot footage of the epic natural beauty of the state's dreamy islands. As I edit down my HD footage of slow-moving clouds, thunderous ocean waves, and rolling pastures, I am imagining how these images will be projected in the venue. More to come!

Friday, July 17, 2009

CHEECH AND TRON!!! WHAT IS GOING ON HERE?

Just saw this today over at Create Digital Motion and my mind is blown!!! Beyond being a brainflosser of a video on its own, I think it's really great that someone has made a mainstream piece of art/entertainment that approaches the topic of our current financial crisis in the United States. Definitely be sure to jump over to the Create Digital Motion post and read the exclusive response from creator Casey Basichis as he explains some of his intentions and process behind the video!

Speaking of Tron, I also want to point out that TR2N (the Tron sequel) is in production and coming out in 2011. You can see a really poorly captured and encoded trailer for the movie from Comic-Con 2009 here:


Friday, May 29, 2009

Technical Inspiration:

Here are some tutorials that I am particularly fond of, which have inspired and will continue to inspire the techniques I develop this summer for my live performance set. Most of these feature clever software tricks, custom software implementations, and the use of software features that are often overlooked but carry a lot of potential.

These videos are HIGHLY RECOMMENDED to anyone using Ableton Live just for their live music sets, as well as to those interested in doing live visuals. So check them out!!!

Ableton Tutorials:


This video introduces the basic concepts of Dummy Clips in Ableton Live utilizing the Chain Selector.  Dummy Clips allow you to create extensive preprogrammed (and sequenced/automated) effects that can be launched as clips!  (By the way, if you haven't figured out how to use chains within Live's Racks, you really should!  The Covert Operators have a great tutorial and explanation on the topic here!)


Part 2 goes more in-depth to show you how you can use automation envelopes in your Dummy Clips to create dynamic filter sequences. Pay special attention in the second half of the video, where they demonstrate an automation recording trick (switching between the Arrangement and Clip views) that has many uses beyond creating Dummy Clips.

I actually use this automation recording technique when working with external synths and drum machines in order to instantly record into Live all the MIDI CC parameters that I might want to automate.  I find this especially useful when recording sequences that I preprogrammed on my Elektron Machinedrum as (in addition to recording the MIDI note sequences) it allows me to instantly record the MIDI CC automations created with the special Parameter Lock/Slide functions of the machine.


Though this tutorial is a bit long, it gives a much better idea of the many cool things you can do with Dummy Clips. It also delves into using Follow Actions in combination with Dummy Clips, which looks quite useful for making a Dummy Clip automatically reset after it has played through, as well as for creating random jumps through the Dummy Clips to build generative effect sequences.

Tom Cosm also has a lot of other interesting tutorials worth checking out. If you are new to performing with Ableton Live, his tutorial Moving from the studio to live performance using Ableton Live covers some good fundamental techniques for building your P.A. Or if you want to drool over Akai's new APC 40, check out Akai APC40 - Opening The Box.


In addition to using dummy tracks to control effects and automations within Live, I plan to port these automations out to Max/MSP/Jitter or Modul8 to control effects on my video as well. For this, I am adapting strategies and custom software presented in the following tutorial.

A/V Sync Tutorial:


Please excuse the poor production of this video. I recommend going to his tutorial page here in order to fully understand what's going on in the video, as well as to download the necessary tools to make this work. For those of you fairly experienced with Max/MSP/Jitter and OSC, you should be able to quickly figure out how to port this over and receive the OSC messages in Max.

For everyone else (most of you?!), let this video serve as a preview of my own solution to this problem, for use with Modul8 or Max 5, which I will be releasing soon for free along with a tutorial on how to use it. I have built a modified and expanded version of digital funfair's Pluggo VST that will port multiple OSC parameters (either 8 or 16 per plugin instance!) from Live to Modul8 through a configurable Max 5 router (or, alternatively, OSCulator). It will allow you to send and scale Clip Automation Envelopes in high resolution from Ableton to Modul8 using completely free software (Cycling 74's Pluggo Jr. and Max 5 Runtime). So essentially, you will be able to use Ableton's Clip Automation Envelopes to automate effects in your video, just as you can with audio, all perfectly synced and at extremely high resolution (16,383 steps as opposed to MIDI's 127 steps). Additionally, you could port MIDI or live controller information this way as well... (which might seem pointless, except that it opens up the possibility of using a wireless network to sync music and visuals between multiple computers, among other cool things!)
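To make the resolution point concrete, here is a small Python sketch of the scaling step that any such Live-to-Modul8 router has to perform; the OSC address, port, and parameter range are placeholder assumptions for illustration, not the ones my plugin or Modul8 actually use.

# Sketch of the value scaling an OSC router performs. The address, port,
# and parameter range below are placeholders, not my plugin's actual ones.
from pythonosc.udp_client import SimpleUDPClient

def scale(value: int, in_max: int, out_min: float, out_max: float) -> float:
    """Map an integer in [0, in_max] onto the range [out_min, out_max]."""
    return out_min + (value / in_max) * (out_max - out_min)

modul8 = SimpleUDPClient("127.0.0.1", 8000)  # example OSC input port

# A high-resolution envelope value from Live (0..16383) vs. a MIDI CC (0..127):
envelope_value = 9000
cc_value = 70

fine = scale(envelope_value, 16383, 0.0, 1.0)    # steps of ~0.00006
coarse = scale(cc_value, 127, 0.0, 1.0)          # steps of ~0.0079

# Send the fine-grained value to a (hypothetical) video effect parameter.
modul8.send_message("/video/blur_amount", fine)

The point is simply that the envelope value can land on roughly 128 times more positions than a 7-bit controller value, which is what makes smooth video-parameter sweeps possible.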

Lemur Control Tutorial:


I was hoping to have the new APC40 today, but sadly it seems Akai promised A LOT MORE units than they actually produced in their first run, so like many others I will probably have to wait until later this summer to get my hands on one. In lieu of that, I think I will make use of this awesome new Lemur patch/interface made by The Covert Operators to control clips in Live.


I will post more tutorials in the future, including some produced by me.  Please feel free to comment and ask questions about anything I've posted.  Thanks!
-BM

Tuesday, May 19, 2009

"Put It On Ya" Music Video (2008)



Only a week after I finished this video back in May 2008, my hard drive was stolen/lost at a gig in Toronto. To my great happiness, I recently found a lost DV tape that had the final edit of the video on it, and I just encoded and uploaded it to my Vimeo page.

This video contains a lot of my original ideas of synchresis (synthesizing synchronicity between sound and vision, a term coined by Michel Chion) and includes some of the footage that I used for my live shows, especially the big Mutek show. I'd say that in a lot of ways my aesthetic has changed and expanded, but the use of layering and keyed/matted images is still one of my core concerns.

In digital video, one can create a video file that contains an alpha channel (RGB+A), which automatically sets transparency in your video, allowing your desired subject(s) to float on top of other video layers. Unfortunately, only a few codecs support this (Animation+, PNG, TGA, Sheer+), and they come at the cost of creating much larger files than normal RGB video, which can take a serious hit on the CPU. I will be exploring these formats and their use more during this project and will provide some perspective on which ones seem to work best.

A somewhat more in-depth reading on the subject can be found here: Alphas Made Transparent
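As a quick illustration of what that alpha channel actually does when layers are mixed, here is a small sketch of per-pixel "over" compositing in Python with NumPy; the frames here are tiny made-up arrays rather than real video.

# Sketch of per-pixel alpha ("over") compositing: an RGBA foreground frame
# floats on top of an RGB background frame. The frames here are made up.
import numpy as np

def composite(fg_rgba: np.ndarray, bg_rgb: np.ndarray) -> np.ndarray:
    """out = fg * alpha + bg * (1 - alpha), per pixel."""
    fg = fg_rgba[..., :3].astype(np.float32) / 255.0
    alpha = fg_rgba[..., 3:4].astype(np.float32) / 255.0  # keep a trailing axis
    bg = bg_rgb.astype(np.float32) / 255.0
    out = fg * alpha + bg * (1.0 - alpha)
    return (out * 255.0).astype(np.uint8)

# A half-transparent white square over a black background:
fg = np.zeros((4, 4, 4), dtype=np.uint8)
fg[..., :3] = 255   # white RGB
fg[..., 3] = 128    # ~50% alpha
bg = np.zeros((4, 4, 3), dtype=np.uint8)
print(composite(fg, bg)[0, 0])  # -> roughly [128 128 128]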

Building Blocks

Megasoid x Blingmod @ Mutek 2008, photo: fairlyawesome


To start off, I figure I should let you all know where I'm at, where I hope to go, and what I'm working with for my live A/V P.A. set.


I've been performing fairly regularly for the last two years, focused more on the musical side than the visual side. Until last weekend (my test run at Super Aqua Club), I had never performed both visuals and music together, nor run both off the same laptop.


For my music set, I've always used Ableton Live. My set has essentially consisted of triggering audio loops from bounced stems of my songs, which I then work over with a variety of live chopping/filtering/processing techniques. This summer I plan to explore new processing techniques, start running live (software-based) synthesizers and drum machines, and experiment with live MIDI filtering and arpeggiation.


For my live video, I built a somewhat extensive patch from the ground up in Max/MSP/Jitter in 2007/2008. I performed with this alongside my friends at our Turbo Crunk parties and was even honored to play the Mutek Festival last year with Megasoid, opening for Modeselektor. In the final version of my patch, I was taking live MIDI and control feeds from Megasoid's Ableton set, as well as from a chopper/filtering patch that they had written and were running live off a Nord G2 Modular Synthesizer. All of this I controlled from a patch that I built on my Lemur.


I hope to build upon ideas that came about during these performances, utilizing more cross-modal connections between my musical and visual software in ways that effectively balance interest between both mediums. For now, I am stepping away from using or developing a standalone visual environment in Max/MSP/Jitter and have been exploring the possibilities of (mostly) fully developed software such as Garagecube's Modul8 and Vidvox's VDMX5. However, I still use Max as a routing/scaling/processing layer for controller information between applications and my Lemur.


To give you an idea of what I'm working with, here is the performance hardware and software I am currently using and plan to use this summer as I develop this project...


Hardware:

-Apple Macbook Pro

-Presonus Firebox

-Jazzmutant Lemur

-Faderfox LV2 (soon to be replaced by...)

-Akai APC 40 (next week!)

-G-Tech Mini FW800 Hard Drive


Software:

-Ableton Live 8

-Garagecube Modul8

-Vidvox VDMX5

-Cycling 74 Max/MSP/Jitter




Monday, May 18, 2009

Welcome to Blingmod A/V.

I've created this blog to document my progress as I work on building a new live performance set this summer for my Bearmod/Blingmod projects. In this undertaking, I am attempting to create a platform that combines live beat/song generation (using Ableton Live) with live visual/video generation (using tools to be determined).

This journey is already well under way, and the most useful resources I have found come from others who have attempted similar A/V journeys and have graciously posted tips, tricks, tutorials, patches, templates, etc. on the internet for free. With this blog, I hope to join this wonderful open source community by sharing the discoveries that I have been making and will continue to make along the way.

Ultimately, I hope this blog will serve to inspire and help others in creating and/or improving their own A/V performances.

Please feel free to comment on posts with questions, criticisms, recommendations, etc... All feedback is welcome!

Thanks!

-BM