I am happy to report that setting everything up is now a breeze. Looking back, everything started to work straight out of the box with Ableton Live 10.0.5, and the good news is it still works straight out of the box in Ableton Live 11+. Support is now fully integrated. From the corner of my eye I did see that there might be problems with the Komplete Kontrol S series and Ableton Live 11+, but I am not able to verify that. So, what does this support mean? It means you can immediately start working with your Komplete Kontrol A series keyboard by going to Preferences > MIDI > Control Surface, selecting Komplete Kontrol as the control surface, and choosing the corresponding DAW input and output.
This is just the start. If you downloaded and activated the Komplete Kontrol software from Native Instruments (through Native Access), you will find the Komplete Kontrol VST instrument among your Plug-in instruments. Drag it into a MIDI track and you will have instant Kontakt instrument browsing from your track. Now that takes some getting used to, I must admit. Please note the following: the A series keyboard display browses much more responsively than the Komplete Kontrol VST, so ignore the screen and focus on the tiny A series display when browsing. Click the Browser button on the A series keyboard to jump back to browsing at any point.
When browsing Kontakt instruments, nudge the browse button left or right to step deeper into, or back out of, the levels of the browsing process. At the top level you choose either Kontakt instruments, loops or one-shots. At the deepest level you choose your sounds. You will hear a sound auditioned as you browse. If you push the browse button down (don't nudge it), it will select the auditioned sound. This might take a while, so be patient. After that, remember that you can click the Browser button again and nudge left several times to get back to the top level. Keep your eye on the tiny display to see where you are browsing.
Once you are inside, the Plug-in MIDI button will light up and you will notice that the controls on your A series keyboard automatically control the instrument macros. Again, touch a knob to see on the tiny display which parameter or macro it controls, then tweak and turn to get the perfect sound. This is how the keyboard should have worked from the start, of course, but I'm happy to see how it has progressed. For all other plain MIDI control you can still use the old method of placing your instrument in a rack and MIDI-mapping the controls to your instrument.
This is my first adventure with deepfake technology, and this blog is intended to show you how to get started. In short, it's a technology with a very dark side: it makes it possible to produce photos and videos that show people's faces in scenes they've never appeared in, by swapping faces. It can be done very fast, and usually very unconvincingly, by some apps on your phone.
The full-blown, latest software can let politicians or your neighbor do and say crazy things very realistically, and this way it can corrupt your belief in what is true and what is fake. Very scary. But it also has a very creative side. Why can't you be a superhero in a movie? I experimented with this creative side.
A new song, for me, is a new story to tell. A second way to tell that story is with a video clip, and I like to tinker around with new ideas for video clips. Most musicians leave it at a pretty cover picture and dump it on YouTube, but I like to experiment with video. There is a new song in the making now, and I have already found beautiful footage with a girl and a boy. The first step I take is to make a pilot with the footage and ask people if they like the concept of the clip.
Then I bumped into someone very creative on Instagram, and when I showed the video it triggered some crazy new ideas. Why not make the story stronger with flashbacks? And then I thought: why not swap myself into those flashbacks? The idea to use deepfake technology was born. But how to get going with it?
My first investigations led to two different tools: DeepFaceLab and Faceswap. There are many more tools, but in essence they probably all work the same way: extraction tools to find faces in pictures, a machine learning engine like TensorFlow to train a model that swaps two faces, and converter tools to generate the final video. Machine learning may look like magic to you, but I already knew it from earlier explorations. Simply put, it is possible to mimic the pattern recognition (read: face and voice recognition) that we humans are so good at.
Machine learning as we have it now in TensorFlow requires at least somewhere in the range of 1,000 examples of the thing to recognize, along with the correct output for each. By feeding this into the machine learning engine, it can be trained to output a picture with the face replaced whenever it recognizes the original face. To make a reliable replacement, the original and replacement data have to be formatted and lined up so that automated replacement becomes possible. One aspect of the machine learning process is that it benefits a lot from GPU processing, i.e. a powerful video card in your PC. This is important, because current training mechanisms need around a million training cycles.
I chose Faceswap, because for DeepFaceLab it was harder to get all the runtimes. Faceswap has a simple setup tool and a nice graphical user interface. The technology is complex, but maybe I can help you get started. By the time you read this there are probably many other good tools, but the idea remains the same. The Faceswap setup first installs a Conda Python library tool. Then all the technology gets loaded and a nice UI can be launched. There is one more step you need to take: find out which GPU tooling you can use to accelerate machine learning. For an NVIDIA graphics card you will need to have CUDA installed.
Step 1: Extraction
The first step is actually getting suitable material to work with. The machine learning process needs lots of input and desired output in the form of images; around 1,000 is a good start. That could mean 40 seconds of video at 25 fps, but 10 minutes of video will work even better, of course. You can expect the best results if source and target match up as closely as possible, even down to lighting, beards, glasses, etc. If you know the target to do the face swap on, find source material that matches it as closely as possible.
Then it's extraction time. This means machine learning is already applied to find faces in the input and extract them as separate images. These images contain only the faces, straightened up and formatted so they are ready for the face swap training process. You need to extract faces from both the target and the source video. For every face image, the extraction process also records where the face was found and how to crop and rotate it to place it back. These records are stored in alignment files.
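To make the idea of an alignment file concrete, here is a conceptual sketch of the kind of record the extraction step keeps per face. The field names and file format here are made up for illustration; Faceswap's real alignment files are a serialized format with far more detail.

```python
import json

# Hypothetical alignment record: where a face was found in a frame,
# and how it was cropped/rotated, so the converter can paste the
# swapped face back into the exact same spot later.
alignments = {
    "frame_00001.png": [{
        "x": 412, "y": 188,           # top-left of the face box in the frame
        "width": 256, "height": 256,  # size of the extracted face crop
        "rotation_deg": -7.5,         # rotation applied to straighten the face
    }],
}

with open("alignments.json", "w") as f:
    json.dump(alignments, f, indent=2)

print(len(alignments))  # one frame recorded in this toy example
```

The converter step later reverses these transformations, which is why cleaning up bad extractions and regenerating the alignment files matters so much for the final quality.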
After extraction you need to single out only the faces you're interested in, in case there are multiple faces in the source or target. From that point you can go on to the next step, but the quality of the end result depends very much on the extraction process. Check the extracted images, and check them again. Weed out all images that the learning process should not use, then regenerate the associated alignment files. Faceswap has a separate tool for this.
Step 2: Training
By passing in the locations of the target (A) and source (B) images and alignment files, you are ready for the meat of the face swap process: the machine learning training. The default settings dictate that training involves 1,000,000 cycles of matching faces in the target images to be replaced by faces in the source images. As with all machine learning, the software hopes you have a powerful video card. In my case I have an NVIDIA card with CUDA, and this works by default; during training my GPU went from 35% to 70% usage. You can work without a video card, but I found it slows the process down by a factor of 7.
In my experiments I had material that took around 8 hours to train 100,000 cycles, so it would take 80 hours to train 1,000,000 cycles. Multiply that by 7 and you know it's a good idea to have a powerful video card in your PC. During training you can see previews of the swap process and indicators for the quality of the swaps. These indicators should show improvement, and the previews should reflect that. Note that the previews show the face swaps in both directions, so even at this point you can still switch source and target.
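The back-of-the-envelope math above can be written out, using the numbers from my own runs (your hardware will give different figures):

```python
# Rough training-time estimate based on my measurements:
# 100,000 iterations took about 8 hours on the GPU.
hours_per_100k = 8
target_iterations = 1_000_000

gpu_hours = hours_per_100k * target_iterations / 100_000
cpu_hours = gpu_hours * 7  # CPU-only training was roughly 7x slower for me

print(gpu_hours)  # 80.0 hours on the GPU
print(cpu_hours)  # 560.0 hours without one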
I saw indicators going up and then down again, so at some point I thought it was a good time to stop training. I quickly found out that the training results, the models, were absolutely useless: bad matches and bad quality. At that point I went back to fixing the extractions and rerunning the training. Put simply, if the previews show a fuzzy swap, the final result will also be fuzzy, so keeping track of the previews gives you a good idea of the quality of the final result. The nice thing about Faceswap is that it allows you to save an entire project, which makes it easier to go back and forth in the process.
Step 3: Converting
This is the fun part. The training result, the model, is used to swap the faces in the target video. Faceswap generates the output in the form of a folder with image sequences, so you will need a tool to convert this to a video. The built-in tool to convert images to video didn't work for me; I used the stop-motion functionality from Corel VideoStudio. If the end result disappoints, it's time to retrace your steps in extraction or training. Converting is not as CPU/GPU intensive as training, so you can stop the training at any point and try conversion out. When you start training again, it builds on the last saved state of the model. If the model is crap, delete it and start over.
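If you'd rather not use a video editor for this step, a tool like ffmpeg can also stitch an image sequence into a video. This is my own suggestion, not the route I took; the paths, filename pattern and frame rate below are hypothetical, so match them to your own project.

```python
import subprocess

# Hypothetical locations -- adjust to your own Faceswap output folder.
frames = "converted/frame_%05d.png"  # numbered image sequence
output = "swapped.mp4"
fps = 25  # use the frame rate of the original target video

cmd = [
    "ffmpeg", "-framerate", str(fps), "-i", frames,
    "-c:v", "libx264", "-pix_fmt", "yuv420p", output,
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually render the video
```

Keeping the frame rate identical to the source video matters, because the converted frames map one-to-one onto the original target frames.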
Here is a snip of the first fuzzy results. The final result is not ready yet; mind you, the song for the video clip is not ready yet either. I will share the results here when it is all done. I hope this is a start for you to try this technology out on your own videos! Please note that along the way there are many configuration options and alternative extraction and training models to choose from. Experimenting is time consuming, but worth it.
One more thing. Don’t use it to bend the truth. Use it artistically.
So this is what one of the interviewers said when I visited the local radio station here: "why not a cartoon video?" It was a passing remark while going over my video channel after the radio interview. It's something that this person, working with lots of creatives at the art academy in Den Haag, can easily say. But what if you're just this guy in the attic? How do you make a cartoon video? Not easy. This is how I got close to the result I was looking for with my video release for Perfect (Extended Remix).
A go-to place is of course Fiverr. Here you can find animation artists and have your cartoon video in no time. There are also animation sites that let you make your own animated video with stock figures and objects, and I tried one. The first results were promising, but you need a paid subscription to have maximum freedom. Even then, you'll find it's mostly targeted towards business animations and infographics; a fun video clip animation is still hard to make. If you want, you can try it: Animaker.
Eventually I stumbled upon this Video Cartoonizer. It's not free, but it seemed like it could do some pretty amazing stuff with "cartoonizing" existing video. You can see parts of the original video material here. It's quite funky and in many ways old-fashioned software. It takes agonizing days to process video recordings like this, but the end result was quite amazing. Model Sara was also pretty pleased with the result. So there you have it: my first "cartoon" video.
Here is a glance into my kitchen, where I will tell you my kitchen secret: the sauce. You will find it somewhere on almost every song I have released: the Molekular effects inside a Reaktor FX chain. This is an effect powerhouse that I use to bring life to otherwise repetitive or uninteresting sounds. It's well hidden somewhere in the infinite sound and effect library of Native Instruments. However, if you use Reaktor as part of your workflow, you might already know it. It's sound experimentation to the max.
It's hard to dive into the features of Molekular, because it's really overflowing with possibilities. Just a look at the interface can make your brain explode. Imagine that underneath that interface all kinds of wires are running to connect everything with anything. Reaktor users will be used to it, because Molekular is just a set of modules like any other. Please check out the videos explaining the Molekular effects chain on the Native Instruments site.
Let me make a start, though. It begins with putting a Reaktor FX plugin in your effects chain, then loading Molekular inside the FX plugin. In essence it starts on the bottom row, where you will see a chain of effects that you can start modulating. The chain connections are depicted in the top-right section: effects can be chained one after the other, in parallel, or in a mix of serial and parallel. Then in the top left and middle you can choose how to modulate all the effect parameters.
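The serial versus parallel routing is easier to grasp with a toy sketch. This is not Molekular code, just the wiring idea its routing section depicts, with two made-up stand-in "effects":

```python
# Toy stand-ins for real DSP effects, operating on a single sample value.
def delay(x):
    return x * 0.8           # pretend delay: attenuated copy

def distort(x):
    return min(x * 3.0, 1.0)  # pretend distortion: boost, then clip

def serial(x):
    # Serial routing: one effect feeds the next.
    return distort(delay(x))

def parallel(x):
    # Parallel routing: both effects process the dry signal,
    # and their outputs are mixed together.
    return (delay(x) + distort(x)) / 2

print(serial(0.5))    # 1.0  -- the delayed signal still clips
print(parallel(0.5))  # 0.7  -- an even mix of both branches
```

Molekular lets you combine both shapes in one patch, which is where the "wires under the interface" picture comes from.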
The effects are just plain awesome. Hard filters, delays, reverbs, pitch shifters. Everything you need to bring bland sounds to life. You can make a rhythmic track tonal, or vice versa. You can drown sounds in distorted delays or otherwise alienating effects, or bring subtle life to a sound.
On the left side there are LFOs, envelopes, a step sequencer and a complex form of logic modulation. The modulation methods overlap here and there and can be interconnected to multiply or randomize the modulation of the effect chain. In the middle sits the centerpiece, an X-Y modulator that can be set in motion by the logic, by the step sequencer, or by you.
The greatest power of this all is that if you replay your song you will have all modulations, no matter how complex, take place exactly the same way. The modulation can have complexity, but also repeatability in time. If you are a fan of totally random every time, this is always an option. For me the magic is the repeatability.
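One way to see why the modulation repeats exactly: think of a modulator like an LFO as a pure function of song time. Same transport position in, same value out. A minimal sketch (the function shape and parameters here are my own illustration, not Molekular's internals):

```python
import math

def lfo(song_time_s, freq_hz=0.5, depth=1.0):
    """A deterministic LFO: its value depends only on the song position."""
    return depth * math.sin(2 * math.pi * freq_hz * song_time_s)

# Rendering from the same start point replays the exact same curve,
# no matter how complex the modulation network on top of it gets.
print(lfo(3.25) == lfo(3.25))  # True: deterministic on every render
```

Random modulation breaks this property by design, which is why I only reach for it when I actually want a different result every time.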
It means I can just try some alchemy in effect chains and mess around with the modulation. If I find something that sounds cool, I can make it sound just as cool every time. Assuming that, like me, you start the render from the same point every time, the modulation of the effects will be the same. I find it inviting for experimentation, because it is rewarding when I find something that works.
There is only one problem. With my luck, now that I'm telling you about it, it will probably jinx everything and Molekular will be discontinued or stop functioning soon. That would really mean freezing a machine software-wise to keep it running Molekular. With this in mind I will just tell you about it, so you can do the same.
If you have seen my recent live streams, you will have noticed that I 'travel around' these days while streaming: I've started to use the green screen effect. With OBS Studio it's so dead simple that you can start using it with a few clicks in your OBS Studio scenes. Of course there are also some caveats I want to address. The main picture for this post shows what it can look like. It may not be super realistic, but it is eye catching.
So what do you need to get this going? A green screen is the first item. It does not actually have to be green; it can be blue or blue-green, but it should not match your skin color or something you wear. It should cover most of the background, so it will need to be at least 2 meters by 1.6 meters, which is a standard size you can find in shops. It should be smooth and solid: creases and folds can show up in the keyed backdrop, but some rippling is OK.
Then you need to set up OBS Studio. It's as simple as right-clicking your camera in the scene and selecting the Filters properties. In the dialog, add the Chroma Key filter and select the color of your green screen. Then adjust the Similarity slider, somewhere in the range of 100-250, to get a good picture. Everything within the color range becomes transparent (black if there is nothing behind it). Then add a backdrop image (or video!) somewhere below the camera in the scene list and you have your green screen effect.
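What the Chroma Key filter does per pixel can be sketched roughly like this: measure how close each pixel's color is to the key color and make close pixels transparent. OBS's real Similarity scale and color math differ; the distance measure and threshold below are only illustrative.

```python
def chroma_key(pixel, key=(0, 255, 0), threshold=120):
    """Toy chroma key: drop pixels whose RGB color is near the key color."""
    distance = sum((p - k) ** 2 for p, k in zip(pixel, key)) ** 0.5
    return "transparent" if distance < threshold else "keep"

print(chroma_key((30, 240, 40)))    # a near-green backdrop pixel is keyed out
print(chroma_key((200, 180, 160)))  # a skin-tone pixel is kept
```

This is also why the screen color must not match your skin or clothing: any pixel close enough to the key color disappears, whether it belongs to the backdrop or to you.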
The first caveat I bumped into was that I set it up during the daytime and it kind of worked, but I stream at night, and then you need light. In fact, it turned out that two photo studio lights came in handy. With at least two studio lights you also cancel out the shadows from folds and creases in the green screen. The green does bleed a little onto you as the subject, though, so you will be strangely highlighted as well. You can see this in my first Amsterdam subway picture; because of the uneven lighting in subways it does not really show. Not every picture is suitable as a backdrop: photos with people or animals don't work, because you expect them to move.
The second effect you see is that instruments with reflective surfaces also reflect the green screen, so the background shines through those surfaces. My take is that it's a minor distraction, so I accept some shine-through of the backdrop. It's also possible that some parts of your room don't fit behind the green screen, like doorways or cupboards. In that case you can crop the camera in the scene by dragging its sides with the Alt key (or Apple key) held down. The cropped camera borders will be replaced by the backdrop.
In a previous post I mentioned that I use OBS Studio for my live streaming, and a little bit about how. That post shows that I use an ASIO plugin for audio in OBS Studio, but why is it needed? In my live stream I want to recreate studio-quality sound, but with a live touch. After all, why listen to a live stream when you could just as well listen to the album or single in your favorite streaming app? Let's first see where the ASIO plugin comes into play.
For OBS Studio and the live streaming setup, I chose to use the PC on the studio recording side. It's directly connected to the Internet (cabled) and can easily handle streaming when it doesn't have to do studio work. I play the live stream on the set dedicated to playing live, and I use the live side's stereo PA audio output to connect it to the studio side for streaming. This means the live side of the setup is exactly as I would use it live.
It all starts with the stereo output of the Zoom L12 mixing desk, which normally connects to the PA. On the mixing desk there is vocal processing and some compression on all channels to make it sound good in live situations. To get this into the live stream as audio, I connect the stereo output to an input of the Yamaha mixing desk. This is then routed to a dedicated channel in the studio-side audio interface, a channel that is never used for studio work.
Of course your live setup may be simpler than mine, maybe only a guitar and a microphone. But the essential part is that you probably have some way to get these audio outputs to a (stereo) PA. If you don't have a mixing desk yourself and you usually plug into the desk at the venue, this is the time to consider your own live mixing desk for streaming: with the vocal effects and the effects you want on your instruments, and maybe even some compression to get more power out of the audio and make it sound more live.
So let's look at where the ASIO plugin comes into play. The ASIO plugin takes the input of the dedicated live channel from the Yamaha mixing desk, via the studio-side audio interface, and that becomes the audio of the stream. Because I have full control over the vocal effects on the live side, I can use a dry mic to address the stream chat and announce songs, then switch on delay and reverb when singing. Just like when I play live, without even needing a technician.
Playing a live stream is different from playing live, because it has a different dynamic. In a live stream it's OK to babble and chat for minutes on end; that is probably not a good idea live. When it comes to the audio, though, I find it helps to start out with a PA-ready output signal, similar to the audio you would send to the PA in a real live show. It also helps to have full hands-on control over your live audio mix, to avoid having to dive into hairy OBS controls while streaming. Lastly, for me it's important that streaming live is no different from playing live at a venue, in that you can break the mix, miss notes, mix up lyrics and feel the same nerves while playing.
Okay, like everybody else I started streaming too. I had a live show planned, but live shows will not be possible for at least another half year. Every evening my social timelines start buzzing with live streams, and all the big artists have started to stream live too. Is there no place for me, with my newly created and sometimes shaky solo live performance, to make a stand? After some discussions with friends I decided to make the jump.
But how to go about it? If you already have experience with live streaming, you can skip this entire article; it is here just for the record, so to speak. After some looking around I came to this setup:
OBS is surprisingly simple to set up, though it has its quirks. Sometimes it does not recognize the camera, but some fiddling with the settings does the trick. You define a scene by adding video and audio sources. Every time you switch from scene to scene it adds a nice crossfade to make the transition smooth. You can of course switch the crossfade feature off.
I only use one scene. The video clip source is there to promote any YouTube video clip; it plays in a corner and disappears when it has played out. The logo is just "b2fab" somewhere in a corner. The HD cam is the C920, and the ASIO source is routed from my live mixer to the audio interface on the PC. I set up a limiter at -6 dB as a filter on the ASIO audio to make sure I don't get distortion on audio peaks.
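To see what that -6 dB ceiling means in linear terms, here is the standard decibel-to-amplitude conversion (the numbers, not OBS's limiter implementation):

```python
def db_to_linear(db):
    """Convert a decibel value to a linear amplitude factor."""
    return 10 ** (db / 20)

# A -6 dB ceiling caps peaks at roughly half of digital full scale,
# leaving comfortable headroom above the live mix before clipping.
print(round(db_to_linear(-6), 3))  # roughly 0.501 of full scale
```

Capping at about half of full scale is a conservative choice, but a live mix with vocals can peak unpredictably, and clipping on a stream sounds much worse than a slightly quieter signal.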
I also had to choose my platform. From the start I wanted to stream live on Facebook and Instagram as well. Instagram, however, pretty much limits live streaming to phones. There is software to stream from a PC, but then you have to set it up again for every session and you need to switch off two-factor authentication. For me, that is one bridge too far for now.
I chose Restream.io as a single platform to stream to from OBS. It then lets you stream to multiple platforms and bundles all the chats from the different platforms into a single session. For Facebook pages, however, you need a paid subscription tier. For now I selected the free options: YouTube, Twitch and Periscope. YouTube because it is easy to access for my older audience, Twitch because it seemed quite fun and I also like gaming, and Periscope because it connects to Twitter.
If the live show takes shape I might step into streaming from my Facebook page. Another plan is to try the iRig Stream solution and start making separate live streams on Instagram, with high-quality audio from the live mixer. I will surely blog about it if I start working with it.
For now it all works. Restream.io lets me drop a widget on my site. It's a bit basic and only comes alive when I am live, so I have to add relevant information to it to make it interesting. If you want to drop in and join my live musings, check my YouTube, Twitch and Periscope channels or my site at around 21:00 CEST.
I'm back on track with my own small solo live set. The first experiment was a video stream that would run along with the show. But now there is a new twist: the coronavirus came, and there will be no live sets in the coming months. All public shows have been cancelled for about half a year, and my first live show has been pushed from June to November. The only alternative is live streaming.
Just before the lockdown to combat the spread of the coronavirus, I had bought a stage light: just one, to at least have a blue wash on stage and set a kind of moonlight mood. This was the Ayra ComPar 2, a simple LED stage light with an IR remote and plenty of flexibility to be more than just a blue stage wash.
But while staying at home and browsing through some online articles it dawned on me: you can simply control stage lights as part of your Ableton Live set. I use Ableton Live sets to run my stage show and, believe it or not, I use color coding for each song to quickly browse through all the songs without having to look up the names.
The colors match the moods of the song, so my simple idea was to use this color code to match the color of the wash on stage. A red wash for a deeply felt love song. A green wash for a song about nature. A purple wash for an up tempo hot song etc.
But why put all this effort into a stage light when there will be no stage to play on for months? Up to then I had been a bit wary of jumping straight to live streaming instead of playing gigs. All the bigger artists now stream live; every night on my socials there are at least a dozen artists performing live. I'm just starting out, so what can I bring to the table?
After discussing this with a close group of musicians and my music coach it became obvious: why not start streaming live? It'll be fun, even if nobody watches. I can invite friends and just have fun together. And since I had nothing else to do, I jumped in to make this stage light idea work. It would change color with the song. Not on stage, but in the attic, the attic with my home studio as my online stage.
One of the intriguing functions of the ComPar 2 is the ability to connect an XLR cable carrying a DMX signal to control it. After diving into it (in lockdown there was a lot of time to dive into anything), I found out that there are also DMX light controllers that support MIDI. From the same company I got the Ayra OSO 1612 DMX Scanmaster controller. Very friendly priced, I think.
The DMX light controller simply accepts MIDI note data and maps it to programmable scenes. The controller can be connected to a chain of lights, and a scene can set each light accordingly. You can have flashing lights in a scene, or movement from stage lights that can move. With 240 scenes you could probably build an interesting progression of lights for several songs, but I simply have a red, green, purple and blue scene to choose from for each song.
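The mapping idea is simple enough to sketch: each song's Ableton Live clip sends one MIDI note, and the controller recalls the scene programmed for that note. The note numbers and scene names below are made up for illustration; you program your own on the controller.

```python
# Hypothetical note-to-scene table, as you might program it on a
# MIDI-capable DMX controller. One note per song mood.
SCENES = {
    60: "red wash",     # deeply felt love song
    62: "green wash",   # song about nature
    64: "purple wash",  # up-tempo hot song
    65: "blue wash",    # moonlight mood
}

def on_midi_note(note):
    """Recall the scene for a note; unknown notes leave the lights dark."""
    return SCENES.get(note, "blackout")

print(on_midi_note(62))  # a nature song triggers the green wash
```

Since the note lives in the song's clip, the lights follow the set list automatically, with no extra operator needed.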
The controller I chose has a default setting where it blacks out all lights on startup, and that is not a bad thing at all. The only thing I must remember is to switch off the blackout when playing live. That is the only attention it needs; from there everything runs on rails. The live streaming shows let me test things out, and I'm now pretty happy with this setup.
Since I started using the Soundbrenner Pulse and its Metronome app, I've had serious problems connecting it to Ableton Live with Link. I read through the troubleshooting page forever. Added firewall rules everywhere. Checked the network traffic going from and to the laptop and the phone. Nothing. Almost nothing. The worst part was that, seemingly at random, the connection would suddenly work. Even more frustrating: I seemed to be the only one with these problems.
Then it became obvious to me: if no one else has these problems, it must be the network. Obviously the phone has to work over WiFi. My wireless network should be up to date and all should be fine, but it does run through these newfangled mesh repeaters. So my idea was: why not connect the laptop directly to the phone's mobile hotspot and cut out the router and mesh network?
Suddenly everything connected flawlessly. If you ever want to use Ableton Live's Link, make sure there is a straight connection between the devices. Any router or repeater can wreck the connection, or at least its reliability. Another problem finally solved.
This may be something I had overlooked for too long: Loopcloud. For years the talk of the sample library town, but I didn't look at it until I got a demo of the new Loopmasters Loopcloud 5.0 version at the Amsterdam Dance Event this year. I had also looked at other sample managers like Algonaut Atlas, but that one may be drums-only. Intriguing, though, because Atlas uses machine learning to recognize the types of samples. For me, up to now, a sample manager was simply a folder in Ableton Live to browse through. And I had always written Loopcloud off as simply a shop to buy samples with a subscription model.
How to work with the application
The Loopcloud application is a standalone application, but it integrates with your DAW through a Loopcloud plugin. You can only have it on one track in your DAW, and all samples that you browse play through that track. The idea is to start with a sample in the Loopcloud application; you can use random sorting to free your mind. Then edit, slice, dice, sequence, mash up and add effects if you wish, and drag the final result into your DAW as a sound file. Quite different from finding a sample and then editing it in the DAW. All in the tune and tempo of your DAW. It nicely prevents you from using the same preset sounds over and over. Clever!
It means, however, that you have to keep two applications open while working. For those of you with two monitors, maybe a no-brainer. But then again, it could be that you already have a nice workflow with your two monitors and now need to fit in yet another application. Anyway, there is an option to dock the application to the side of a window at about 20% of the width. Combined with scaling and other options, you might manage with one screen, although the application sometimes forgets how you docked and scaled it.
Your library manager
Now about the library management. The moment you add your own samples to the Loopcloud application, it starts scanning them all. It will try to find BPM and key information, and it will try to read other information from the name of the sample or loop. It will probably not correctly discover more complex information like the genre, loop versus one-shot, or the exact instrument. Everything it finds is marked down as tags, and you can start searching on things like key and BPM.
For this you need to click the button marked "Your Library". If you also want the detailed information of your scanned samples to be correct, you will have to start tagging yourself. It's quite advanced: you can tag whole folders and batches of files. For a more in-depth dive into tagging and searching, you should check out the tutorials.
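The kind of filename scanning a sample manager does can be sketched with a small regex. The pattern below is my guess at common sample naming conventions, not Loopcloud's actual parser:

```python
import re

def tags_from_filename(name):
    """Guess BPM and key tags from a sample filename like 'Deep Bass 120bpm Fm.wav'."""
    tags = {}
    bpm = re.search(r"(\d{2,3})\s*bpm", name, re.IGNORECASE)
    key = re.search(r"\b([A-G][#b]?m?)\b", name)  # e.g. C, F#, Fm, Bb
    if bpm:
        tags["bpm"] = int(bpm.group(1))
    if key:
        tags["key"] = key.group(1)
    return tags

print(tags_from_filename("Deep Bass 120bpm Fm.wav"))
```

This also shows why automatic detection goes wrong so easily: a filename without these conventions yields nothing, which is exactly when manual tagging becomes necessary.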
Then, while figuring out Loopcloud as a sample manager, the tutorial also pointed me to Loopcloud Drum: a separate plugin that is actually a full sample-based drum instrument. It uses its own Loopcloud drum kit format and opens up a separate section in the Loopcloud manager. A strange find in a sample library manager. As a separate instrument it has its own format, and it's actually more of a pattern beatmaker with its own sequencer. It comes with a preset list of drum kits, assembled from Loopcloud one-shot samples of course.
I didn't find any option to change the patterns in the beatmaker other than with a mouse. You would also expect an option to edit drum kits and build your own. You can edit the mix of a kit and save that as a "user" drum kit, but I didn't see any way to create a drum kit from your own set of one-shots. Maybe this will come in a future version, or it sits in a Loopcloud subscription tier that I didn't explore. I was kind of on the lookout for tools to start making beats other than with loops or Nerve, but this is not it yet.
And there is more? The tutorial also points to the Loopcloud Play plugin: yet another sample instrument, but this time melodic. As an instrument it's quite basic, maybe so basic that you fall back into the preset trap again. There are about 7 knobs to turn and that's it. Like the Drum instrument it has its own place in the library, and again there is no way to choose the samples. You can save knob tweaks as "User" instruments. I think it needs work, as this is no match for Native Instruments' Kontakt.
Loopcloud has a quite intricate subscription model, and not all features are available in all tiers, specifically multi-track use and sample editing. However, if you just want to use it as a sample library manager, you can even use the free tier. If you already own Loopmasters content, it will automatically appear in your library. Even though it could do with more advanced detection of the samples you load into the library, for me this was a great find, and it surely beats user folders in Ableton Live.