
I used to be a full time VFX artist working as compositor, 3D artist, producer and supervisor.
These days I am mostly home-bound due to health reasons, so my focus is on tending to my family as the resident cook and doing the occasional remote work for film and TV projects.
You can find my work history on IMDB or LinkedIn and I do have a rather old showreel on Vimeo. Apart from that, this is my online home. Feel free to get in touch via email or on Micro.blog.
FMX 2013 - Crowdfunding
Phil Tippett had a dream: “Mad God”, a film he wanted to make, but which for a variety of reasons was put on hold around the time of Jurassic Park.
The Problem
Lack of money. Simple as that. To get it you used to ask friends, family and people foolish enough to invest in your little project. That led to a lot of concessions being made: giving creative input rights away to investors, having investors’ kids in the movie, etc.
A Possible Solution - Crowdfunding
Mad God got funding via Kickstarter and Corey talked us through what he has learned.
- Make your objective clear
- If you cannot define your product, then people have no concept of what they are buying into.
- Make your pitch personal
- Don’t come off as aloof. If people sense that you don’t need/want their money, they’ll take it elsewhere.
- Set a realistic budget
- As in make a spreadsheet and plan things
- Limit your reward costs
- Make your project accessible to backers at all levels
- Let the project reflect you and your idea
- Don’t be afraid to fail
- Treat your campaign as a campaign
- Advertise via social networks like Facebook, Twitter, App.net, LinkedIn
- Get people that are influencers to push your project to their followers/readers
- Write frequent updates during the campaign
- Ride the wave of spikes and plateaus
- Treat your backers with respect
Subtitle: How To Raise Money For Any Startup, Video Game Or Creative Endeavor By Corey Rosen, Head Of Creative Marketing, Tippett Studio
FMX 2013 - OpenSubdiv
We started the session with a quick history of subdivision surfaces. Invented by Pixar and first used in the short Geri’s Game, they did away with the constraints of both polygonal modeling as well as Nurbs modeling.
What’s wrong with Nurbs?
Nurbs surfaces are based on control vertices or hull points through which a b-spline is calculated. This leads to smooth curvature and inherent UVs. Both are good. However, Nurbs modeling relies on adding a multitude of Nurbs patches together to form your final surface. The problem arises at those patch seams, where it can be, and often is, mathematically impossible to create a seamless surface, much less when the “patchwork” is animated.
So Polygons then?
Short answer: nope.
Long answer: While polygons have no problem with arbitrarily complex surfaces or seam cracks, they have their own set of problems. First, they don’t have inherent UVs, and unwrapping a complex mesh for texturing is no small feat. Second, to get smooth surfaces you need a very high number of polygons or have to play cheap tricks with normals, which tend to quickly fall apart under scrutiny.
Subdivs
First, the world in general and Pixar in particular only calls them Subdivs. Not subdivision surfaces.
They are also the answer to all the problems above: arbitrarily complex, while always maintaining a definably smooth, crack free surface. And with the addition of Ptex there is no need for UV unwrapping anymore.
In addition, Subdivs support localized levels of subdivision. What that means is that the whole pipeline can work with the coarse base mesh. The modeler can then go into specific sections that need more definition, locally subdivide that area and make modeling changes there. Those will be saved and applied at render time. Bill showed an example of that at work. In Brave, which was originally supposed to take place in winter, they had Merida’s horse run through a snowy plain. The plain itself had a resolution of about a vertex per square meter. Enough resolution to model snow drifts. However, for the horse’s path they locally increased the plain’s resolution to a vertex per square centimeter at render time to capture the fine detail of hooves disturbing the snowy surface.
At rendertime
The term “at rendertime” is misleading, because Pixar is now using a GPU implementation of the subdiv algorithm. The implications of that are far reaching.
At the simplest level “at rendertime” in the paragraph before means, the animator gets a live preview of those several hundred thousand faces in real time in the viewport (Maya’s Viewport 2.0 in this case, which has OpenSubdiv support built in already). Let me restate that, we saw a demo of a “low poly” mesh with about 3000 faces animated with bones that had the OpenSubdiv algorithm applied. What we saw on screen were about 3.8 million faces animated in real time. And since Subdivs have the added benefit of getting displacement at hardly any additional cost, those 4 million polygons were displaced as well. Very intriguing stuff.
This is not only interesting for VFX though. The GPU implementation also means that games that adopt this will get much more visually complex. And in fact, while Bill could not mention any names, he went out of his way to let us know that a major mobile company that produces very popular devices we all own, and with the power to dictate to the chip manufacturers what to put into their chips, will implement hardware OpenSubdiv support within the year. Or as Bill put it, you will likely have devices with hardware support built in by the next time you see me.
Good for Pixar. How do I get it?
That’s the nice part. You likely already have the technology available to you. For one, OpenSubdiv is open source and all the licensing for the technology is available for free as well. Also, if you use Maya, you have access to all this already. Maya has used the exact same algorithm as Pixar since Maya 5.0 and now also has the GPU implementation through Viewport 2.0.
So no excuses, get cranking!
Subtitle: Talk by Bill Polson, Director of Industry Strategy, Pixar
FMX 2013 - Camera Physics
This talk was very theory heavy, with lots of formulas and photos of curves that don’t summarize well. I still gave it a try here.
The session started with the history of camera tech, beginning with the first wooden boxes with a hand crank.
A hand cranked camera had severe restrictions in that it obviously could only film where a human operator could go, and there wasn’t even adjustable focus. Since then we have come a long way. Huge VistaVision camera rigs, car cranes with two seats at the top end of the crane (Titan crane) and Technocranes, the first camera cranes that didn’t need someone to look through the lens, allowing a much greater range of camera freedom. This was thanks to Jerry Lewis’s idea of a video feed.
After that came the motion control rigs, which not only allow repeatable motion to shoot many matching passes of a shot, but also allow the combination of live footage with camera matched 3D footage or miniature footage.
Setting The Scene
- it’s imperative to set up your film back accurately
- after that setting the focal length should give you the correct field of view
- you need to set up your nodal point on set correctly
- live action usually is not nodal, meaning there is a parallax while panning
- this must be matched in the 3D camera
Real Life Camera Motion In 3D
Cameras must adhere to physics, which means there is a limit to the acceleration of an object. Not speed. Acceleration. Meaning sharp changes in speed, up or down, and sharp directional changes.
The acceleration can be derived as the second order derivative of the position change (translation curve). Ideally, you want your 3D curves to accelerate gradually with no more than 9.82 m/s² or 1 g. Some motion control systems can handle more or less.
Of course, there is also a certain speed limit involved.
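As a toy illustration of that constraint (not from the talk; the frame rate and camera paths are made up), acceleration can be estimated from a sampled translation curve with second-order finite differences and checked against the 1 g limit:

```python
# Toy check: estimate the acceleration of a sampled camera translation
# curve via second-order finite differences and compare it against
# roughly 1 g. The paths and frame rate are invented for illustration.

G_LIMIT = 9.82  # m/s^2, roughly 1 g

def max_acceleration(positions, fps=24.0):
    """positions: camera positions along one axis in meters, one per frame."""
    dt = 1.0 / fps
    accelerations = []
    for i in range(1, len(positions) - 1):
        # central second difference approximates the second derivative
        a = (positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt ** 2
        accelerations.append(abs(a))
    return max(accelerations) if accelerations else 0.0

# gentle ramp at a constant 0.1 m/s^2 vs. a move that stops dead
gentle = [0.5 * 0.1 * (i / 24.0) ** 2 for i in range(48)]
harsh = [i / 24.0 for i in range(24)] + [23 / 24.0] * 24

print(max_acceleration(gentle) <= G_LIMIT)  # fine for motion control
print(max_acceleration(harsh) <= G_LIMIT)   # this curve needs smoothing
```

The hard stop in the second curve packs a change of 1 m/s into a single frame, which works out to roughly 24 m/s² and blows well past the limit.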
Problems Capturing Live Motion For Repeatability
A problem arises when you have a camera motion, usually SteadyCam shots, that you need to transform into a motion control shot. For example, you film your actors with a SteadyCam in a greenscreen setup and then you need to repeat that move with a motion control rig on a different set or a miniature.
The way to go about it is to match move (track) the shot, which will give you a rather noisy result, at least in motion control terms, which you won’t be able to program. You can smooth the result, which will lose you accuracy, but enables you to program the shot. The trick is to filter enough without causing misalignment.
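A minimal sketch of that trade-off (all numbers invented, with a plain moving average standing in for whatever filter the motion control software actually uses):

```python
# Sketch of the filtering trade-off: a moving average stands in for the
# real motion control filter. Smoothing reduces frame-to-frame noise
# (making the move programmable) but pulls the curve away from the
# tracked positions (misalignment). All numbers are invented.

def smooth(values, window=3):
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def roughness(values):
    """Largest second difference: a stand-in for 'too noisy to program'."""
    return max(abs(values[i + 1] - 2 * values[i] + values[i - 1])
               for i in range(1, len(values) - 1))

def max_error(raw, smoothed):
    """How far the filtered curve drifts from the tracked one."""
    return max(abs(a - b) for a, b in zip(raw, smoothed))

tracked = [i + ((-1) ** i) * 0.2 for i in range(50)]  # linear move + jitter
filtered = smooth(tracked)

print(roughness(filtered) < roughness(tracked))  # smoother...
print(max_error(tracked, filtered) > 0)          # ...but less accurate
```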
Subtitle: Talk by Anthony Jacques, VFX Camera Operator
FMX 2013 - Panel Discussion on the Future of the VFX Industry
Not too much to record for this one. It was mostly company heads trying to weasel their way out of loaded questions by the audience and discussion chair Eric Roth. Things like “Do you think a VFX union is a solution to the problems?” “…”
It was interesting to see the panel stammer around the issue. But mostly it was sad that they still don’t see that they are a part of the issue. Companies not standing up and showing some balls.
Example: Pixomondo’s Chris Vogt agreeing that tax incentives are part of the issue, while not disclosing that Pixomondo is pushing for tax credits here in the Stuttgart area. In fact, the papers are about to be signed within a month.
One of the last questions was good: “If you had 30 million, would you invest it in VFX today?” (After every panelist confirmed that they are feeling positive about the future of the industry.) Not one would use the money to open a VFX company. Very telling. :) Some suggested to invest it in a film fund.
Also, “why do you keep working on a fixed cost model? That’s insane.” A (summarized): “We are no businessmen and also passionate. So it’s mostly our fault, but it’s likely not going to change since our clients won’t go for a cost plus model.”
––––
Subtitle: Panel with Eric Roth (VES), Pierre Buffin (BUF Compagnie), Mark Driscoll (LOOK Effects), Christian Vogt (PIXOMONDO), Jean-Noël Portugal (jnko)
FMX 2013 - The "unfilmable" Life of Pi
Opening with a joke about the botched Oscar ceremony, this is promising to be a good yet sad talk. Rhythm & Hues went from 1000 artists to a small fraction of that recently.
Why was Life of Pi “unfilmable”
Three simple reasons, combined making for a perfect storm:
- water
- animals
- kids (the actor playing Pi, while technically not a kid, was in his first role and could not swim)
Research
Ang Lee did some hands on research of how a life boat or raft behaves in the ocean and how water and waves behave. Authenticity was paramount.
A pool with a wave machine was built in addition to an extensive set of 3D shaders mimicking the pool water. The water had to blend with the CG seamlessly.
Water Shading
- custom, physically plausible water shader
- a set of five different water noises to layer and mix the water as needed
- to avoid tiling a set of noise patterns was used to multiply the effects of the water noise parameters
- once water was dialed in, it was locked and 3D artists couldn’t change it and needed to adjust the scene to get the desired effect
- and of course, physical accuracy went out the window as soon as we reach the comp stage or when the director wants the water different than the physical simulation says it should look
Skies
About 140 different extremely high-res HDRI skies were shot, and artists could pick and choose crop-outs
Meerkat Island
- one giant single Banyan tree system with about 6 billion polys per frame
- up to 45,000 meerkats in one shot, done with Massive
- up to 10,000 visible in a single frame
- basically only Pi is real, the island is CG
Richard Parker, the tiger
- subsurface on the fur
- new muscle system
- not anthropomorphized at all
- based on a real tiger, “King”, from France, he appears in only 23 shots
muscles simulation → subcutaneous skin layer → epidermis layer that slides over the top → covered by fur
All that leads to the realistic wiggling, bouncing and sliding of skin that makes for a realistic animal.
10 million strands of hair lit with area lights, subsurface scattering
Subtitle: Talk by Chris Kenny, Compositing Supervisor, Rhythm & Hues
FMX 2013 - Le Big Shift in VFX
Topics discussed center around what the industry can do to improve interoperability and workflow to strengthen the business instead of running it into the ground.
Open Data Platforms
Rob Bredow took the lead by talking about the work he and SPI have done to create good open standards onto which companies can build to achieve something greater. Alembic, OpenColorIO and OpenEXR, to name a few.
Before, every company needed to reinvent the wheel in-house to set itself apart. Now they can work on a common standard, which helps with interoperability between companies, something that is required in today’s industry.
We have moved from “secret sauce” to common baseline, which also includes the game industry. A convergence of VFX and games in this respect seems inevitable to Cevat Yerli from Crytek.
The words “vector of cross-pollination for these industries” were uttered. That should tell you pretty much everything you need to know.[^1]
Cloud Based Solutions
Cloud based computing power is a topic everybody was interested in, even the big houses, who usually have several thousand render farm computers of their own. There are immense draws to this kind of workflow, from lower overheads, due to savings on machines, administration and licensing costs, to being able to ramp up render power quickly when you need it during a deadline crunch.
This interest is something that is shared equally between small studios like ours and the big boys. And only expected to expand in the coming years. Ludwig von Reiche talked a bit about cloud computing applications in development, running on Amazon Elastic Computing among others, which he expects to come out within the next year.
While there are already solutions that offer Amazon Cloud rendering, they are usually cobbled together requiring a small science degree to figure out. We are talking about more accessible solutions.
Rob Bredow argued that there are really two sides to cloud computing. One is the reduced cost due to less inventory and lower running costs. The other is adding a lot of render power on demand, say 1000 or more machines to render a shot in an afternoon. That might cost more, but it speeds up the creative cycle and might be worth it from that perspective.
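A back-of-the-envelope sketch of those two sides (all numbers invented): the machine-hours are the same either way, so the cost scales with the hourly rate while the wall-clock time scales with the machine count.

```python
# Back-of-the-envelope comparison of the two cloud rendering modes
# described above. Machine counts, frame times and hourly rates are
# all invented for illustration.

def render_job(frames, hours_per_frame, machines, rate_per_machine_hour):
    """Return (wall_clock_hours, total_cost) for a render job."""
    machine_hours = frames * hours_per_frame
    return machine_hours / machines, machine_hours * rate_per_machine_hour

# a modest steady farm vs. a 1000-machine on-demand burst
# (assuming a premium rate for on-demand capacity)
steady = render_job(1000, 1.0, 50, 0.50)
burst = render_job(1000, 1.0, 1000, 0.65)

print(steady)  # (20.0, 500.0): most of a day
print(burst)   # (1.0, 650.0): done in an afternoon, at a higher cost
```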
[^1]: That’s around where I got bored and lost track of the conversation a bit.
Subtitle: Panel Discussion with Marc Petit (Autodesk), Rob Bredow (SPI), David Morin (Autodesk), Don Parker (Shotgun), Ludwig von Reiche (NVIDIA ARC), Cevat Yerli (Crytek)
FMX 2013 - Cloud Atlas
Starting out with a short overview of RiseFX, his company, Florian dove right into the workflow for Cloud Atlas. RiseFX has garnered a reputation for using innovative approaches to set extensions and they didn’t disappoint on Cloud Atlas either.
Starting out with some stats:
- most expensive German movie to date at 100 million US dollars
- financed independently internationally
- lots of famous actors in multiple wildly different roles
- shot only in European locations standing in for totally different environments
- pre-production started July 2011 with the previz of 1973 San Francisco and the Luisa Rey car crash
- shooting started in August
Car Crash
The previz was very accurately planned and made it into the movie pretty much verbatim.
Filmed on a bridge in Scotland, which led to major cleanup work, as the bridge was supposed to lead to an island, not to mention be in San Francisco. A major challenge was that the scene takes place at night, with various camera crane moves showing kilometers of street environment. While you could in theory light that set, there would be huge amounts of lights, power and therefore budget involved.
So the guys went ahead and shot the scene on the backlot of Rise in Berlin in front of a bluescreen. For the bridge, they made a 3D scan of the environment that got them a textured and completely relightable model. Easy.
The crash itself was filmed on a gimbal with added CG trash elements floating around giving a zero gravity effect.
The car crashing into Halle Berry’s car was a 3D car for a simple reason. The director liked the movement of the car in a specific take, which was unfortunately unusable due to plate errors. So the car was modeled and textured based on all the different takes. Then the preferred take’s car was match moved and the 3D car placed into the shot.
Plane Crash
The plane crash was a pretty straightforward Houdini simulation. However, for this project RiseFX adopted a 100% Houdini approach, unlike most other companies, which simulate in Houdini and model, animate and render in another package. This approach saved them a lot of headaches, as everything could interact and be rendered in the same package.
Which means, for the plane explosion, that not only could the simulation influence the fluid simulation, but they could also light the geometry with the fluid simulation. Meaning the explosion lit everything physically correctly, including all the small debris.
Lots of environment re–projection
Having all the sets as Lidar 3D scans meant that they could very easily redress entire sets and just replace them with the 3D model. It also made relighting a lot easier.
––––
Subtitle: Talk by Florian Gellinger
FMX/fxphd Kickoff Meetup


Yesterday evening was the first of likely many fxphd meetups this week. About 15 people showed up and had a relaxed chat. It was a great time to catch up with old friends and meet new faces.

Movies were discussed, Schnitzels eaten, and (twitter: johnmontfx text: John Montgomery), who sadly couldn’t make it this year, was teased with pictures of delicious wheat beer like the cruel friends we are.

All in all it was great seeing everyone again and to have some light non-FMX, but of course still VFX chat.
Remote posting to Kirby via iOS now working
Next week is FMX, a great yearly convention about visual effects, games and virtual reality. I’m there every year reporting for Professional Production Magazine, and I thought this year I could up my game and write short blurbs on this blog about the sessions I attend. I asked around, and there was some interest from people who could not make it this year or who are interested in the topic.
So far so good. There was only one problem. I switched to using Kirby as the back end system of all my websites, including this blog. Kirby is a great lightweight system for web publishing. Only, because it is still so young, it doesn’t have a mobile client that allows you to create posts or pages easily. True, it comes with an admin panel, but that doesn’t work offline and it also doesn’t work on my iPhone. So I was looking for something that better suited my needs. And I have a first version working.
Drafts, Dropbox, Hazel, Rsync. Easy, really.
Since I only had a few hours here and there, it is still really rough and not as elegant as I’d like, but it works. I am now able to use Drafts to compose my posts on either my iPad or my iPhone, or start on one and finish on the other, since notes are always kept in sync. I then send the post to Dropbox via Drafts’ custom Dropbox actions, where it is picked up by Hazel, renamed to fit my Kirby naming convention, put into the right folder and uploaded via Rsync.
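As an illustration of the rename step (my actual Kirby naming convention and folders are simplified here to a hypothetical date-slug pattern):

```python
# Illustration only: a stand-in for Hazel's rename step, assuming a
# hypothetical Kirby convention of "YYYY-MM-DD-slug.md". The real
# naming rules and folder layout differ.
import re
from datetime import date

def kirby_filename(title, posted):
    # slugify: lowercase, collapse anything non-alphanumeric to hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return "{}-{}.md".format(posted.isoformat(), slug)

print(kirby_filename("Remote posting to Kirby via iOS now working",
                     date(2013, 4, 21)))
# 2013-04-21-remote-posting-to-kirby-via-ios-now-working.md
```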
Pictures work, too
So much for plain text posts. But pictures work, too. Though they are still very much a pain. iOS' default camera app records the orientation of the device as EXIF data, so portrait orientation pictures are not actually rotated, but the EXIF Orientation is set to 90° or -90°. Which is all nice and fine, except Safari, and a few other browsers, disregard the EXIF orientation, so images appear in landscape on my site after I upload them. That sucks. The only way around that is to either do anything to the image in Camera+ or another photo editor and save the result, which bakes the rotation into the image. Or to write a function in Hazel that bakes the orientation into the image after I upload it to Dropbox. I’m probably going to go with the latter, since I’m trying to save battery life and using Camera+ a lot sucks down battery life quite a bit.
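The baking step boils down to reading the EXIF Orientation tag and rotating the pixels to match. A sketch of just that mapping (only the common rotation-only values; mirrored orientations 2, 4, 5 and 7 would also need a flip, and a real image tool does the actual pixel work):

```python
# Sketch of "baking the rotation in": map the EXIF Orientation tag to
# the clockwise rotation the pixels need. Mirrored orientations
# (2, 4, 5, 7) are left out; a real image tool does the pixel work.

EXIF_ROTATION_CW = {
    1: 0,    # already upright
    3: 180,  # upside down
    6: 90,   # portrait, needs 90 degrees clockwise
    8: 270,  # portrait the other way
}

def bake_rotation(width, height, orientation):
    """Return (new_width, new_height, degrees_clockwise) after baking."""
    degrees = EXIF_ROTATION_CW.get(orientation, 0)
    if degrees in (90, 270):
        width, height = height, width
    return width, height, degrees

# a landscape-stored iPhone photo shot in portrait (Orientation 6)
print(bake_rotation(3264, 2448, 6))  # (2448, 3264, 90)
```

Once the rotation is baked into the pixels and the Orientation tag reset to 1, it no longer matters whether the browser honors EXIF or not.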
But I’m getting ahead of myself. To get my images up to Dropbox I use a great little app called CameraSync, which has Geofencing support, meaning I don’t have to do anything to upload the picture to a folder of my choosing on Dropbox. Also, and this is the killer feature, it resizes them before the upload, saving me time and bandwidth.
The biggest pain from here is that I still need to manually move the images into my blog posts folder and rename them from somename-timestamp.jpg to the naming I use for images in my blog. I have not found a smarter way to do it yet than to do it manually. As I said, the whole process is still rough around the edges.
Please update your RSS reader
Effective immediately, I switched over this site’s RSS to URI.LV to handle my feed subscriptions. With Feedburner going down in the near future and Kirby not having its own feed tracking, URI.LV seems like a pretty good replacement.
Also, its support rocks. I had a problem with the URL rewrite rules, and Maxime, the mind behind URI.LV, helped me out within minutes.
Cartoon Movie 2009 — Lyon, France

At the moment, I am in Lyon, France to write about Cartoon Movie 2009. At this conference, motion pictures and games are presented to investors and distributors. A wide range of animation styles and storylines is present, which makes the conference really interesting to watch. A lot of great and creative people come together to give each other feedback and to look for new ideas and opportunities. All in all it is an exciting melting pot of like-minded people.
I will be writing a two-page article about it for Professional Production magazine in Germany and probably will present an abridged version of it here when I am done with it. So keep watching this space for more info.
VFX Coordination for Pixomondo
I have been hired at [Pixomondo Images][1] in Ludwigsburg to work as VFX Coordinator for some of their recent projects. This is an exciting opportunity. The company is considerably bigger than most companies I have worked for so far, and I have to handle several smaller projects instead of one or two big ones. That is a new challenge and I am looking forward to mastering it in the coming weeks.
Being a part of Pixomondo makes me proud, as this is a very professional, yet cool company to work for. I have a chance to stay with them for a longer time if I don’t mess up the first one and a half trial months. Wish me luck.
Prisoners of the Sun put on hold

It is official now. The production of “Prisoners of the Sun” has been put on hold. All employees and freelancers have been laid off as of 30th April 2008.
According to rumors, it seems the legal situation of the movie was not researched thoroughly before going into post production.
It is a pity, because we were at a stage, where all the pipelines were set up and we just started our first two full CG shots. We were actually starting to have some good old VFX fun. But not anymore. Now we have 20 people hunting for jobs again.
It really makes me sad to give up a great little company. We had a killer team. Thanks to all of you for being so great.
P.S.: If anyone has an open position for a compositor or VFX producer let me know.
Securing Your Laptop at Work
I wanted to share this little trick with you for quite a while now, but of course I have been busy and other things got in the way. They always do.
I work with my MacBook Pro at home and at the office. That is of course nice, because it is my only base of operations and I have everything in one place.
But do you know the feeling that you are working with your laptop at work and don’t want anyone to snoop around on it, because it also is your private laptop and has all kinds of personal stuff on it? I know I do.
Luckily there is an easy fix for that. Enable the setting that pops up a password request after your machine wakes from sleep or when it disables the screen saver. But now you always have to enter the password at home, too. Kind of annoying.
There is an easy fix for that as well. The key is location aware software like [MarcoPolo][1]. MarcoPolo allows you to trigger certain actions depending on where it knows you are at the moment. Pretty cool stuff. That means it allows you to enable or disable the screen saver password at home or at work. Neat, isn’t it?
I made a little [screencast][2] walking you through the steps. All you need is MarcoPolo, which you can get for free from the [developer’s website][1].
If you don’t want to use MarcoPolo you can also use AppleScript to do the same thing with whatever software you prefer. In fact I used the scripts with MarcoPolo, because I overlooked the very convenient built-in action that already does that.
The AppleScripts are as follows:
#### Enable Screen Saver Password

    tell application "System Events"
        tell security preferences
            set properties to {require password to wake:true}
        end tell
    end tell

#### Disable Screen Saver Password

    tell application "System Events"
        tell security preferences
            set properties to {require password to wake:false}
        end tell
    end tell
Have a look at the screencast for more in-depth info.
I hope you enjoyed this little hint. Check back for more in the future.
Blog Posts Transferred

I finally found the time to transfer all posts from the old BabylonDreams blog over to their new home at this address. I only transferred posts I attached a certain value to, so not everything is mirrored 100%.
If you miss a post or something is broken, please send me a message via the contact form. Thank you and enjoy diving into all the new/old content.
How to Behave as New Guy - I Guess, I Wasn't the Only One
I just found an article on the BusinessWeek site entitled [“How Not to Be the Obnoxious Newcomer”][1]. It seems that author [Liz Ryan][2] has also stumbled over one too many of those annoying people that waltz in on their first day and step on everybody’s toes, just like I did a while ago when I wrote (link: blog/newbie_dont_act_like_a_pro text:“If you are a newbie, don’t act like a pro!”).
Liz thinks:
> There’s no doubt that every organization has a few best practices to share. As the new kid on the block, you can share what you’ve learned elsewhere and make a real contribution to your new employer’s operations. But if you lend that expertise in such a way that people roll their eyes and drift away when you enter a conversation, you’re not helping anyone—even worse, you’re setting yourself up to have negative credibility with your peers.
Solo - a Danish Documentary
Hello everyone, I have to make a completely selfish plug for a movie I worked on lately. It is called Solo and is about Jon, the first winner of the Danish Popstars show. I don’t speak a word of Danish — well, at least not enough to fully understand the movie — but the critics seem to love it and it is a big success for us so far.
My team and I worked on several things, some of which made it into the final movie and some of which ended up on the cutting room floor. We had some nice effects and compositing work in there, but the director decided that it is a documentary, not an effects movie. A good decision if you ask me, although it is a pity the effects never made it in front of an audience.
What is left is a few name removals from signs and some other invisible effects to increase the quality of the material like degraining or resizing and reframing, etc.
We are shipping the DVD, which has English subtitles (Yay! I can finally understand what I contributed to!!!), via the company website and through the normal DVD stores at the beginning of June. So just grab a copy and make the success even bigger, will ya?
How to Work Efficiently
A lot of times I hear people complaining how slow this or that program is. “Shake is so slow”, “Motion is total crap” or “why is this taking so long?”
Well, guess what, there is actually a solution to it. And the magic word is workflow. We are working on pretty heavy projects most of the time: HD material (even if the output will be PAL in the end), 2K material for film, or lots of layers or 3D layers. All that stuff is by definition slow. Especially if you work in a more complex setup than one or two layers. Add to that that we are working in a network based environment and you get a workflow slow as molasses.
There are a few things you can do to help pretty much every program work faster.
- Don’t work in full resolution
- Don’t turn on all the effects, just because it looks better
- Don’t work over the network
- THINK before you work
Don’t work in full resolution
Yes, I know, you just built the most sexy motion graphics ever and you feel like looking at it over and over again, while tears run down your cheeks. Don’t.
Your first responsibility is to work as fast as you can, because that allows for making it even better and being more flexible when the customer has last minute changes. It is of little importance to look at your masterpiece in 100% resolution and highest quality most of the time. Don’t use these settings until you really need them to judge the final output quality. For all other cases go ahead and turn down the display size to 50% and the display quality to a third or a proxy of a third. Some applications even offer the option to go one step lower in quality as soon as you move things around. Use that.
Another thing you should think about is the use of proxies. A proxy is a lower resolution version of your footage. So instead of working on HD material you work on PAL or half PAL material. After Effects and Shake both support the automatic use of proxies. That means they automatically switch between the low-res and high-res versions as needed. To see how that is done, have a look at the tutorial video, which describes it in more detail.
Don’t turn on all the effects, just because it looks better
Yes, that 3D lighting in After Effects is sexy. But if you are working on font sizes, it is absolutely irrelevant. So switch it off. Same goes for every other effect that slows you down but has no effect on the part you are working on at the moment (motion blur, other kinds of blurs, re-timing, heavy 3D, particles, you name it). It is much faster to switch off 5 effects, make a change and switch them on again than to try to make a change while waiting 5-10 seconds for the screen to update.
Don’t work over the network
You might have a pretty fast network, but when 10-20 people work with HD or 2K footage over the network, it slows down. Nothing can be done about that. Except to copy your source material to a local drive. That will be way faster to work with than waiting for your bits and bytes to trickle through the cable.
THINK before you work
Seems pretty obvious, right? Try to think before you start working.
Working locally is fast. But if it takes 2 hours to copy footage to make a 10 minute change, then it might not be the most efficient way of working.
Generating proxies is cool if you work on a shot for several hours or days. But generating proxies takes time as well. It might be faster to just open a file, do a quick change and render it out again.
Assess the situation and act accordingly. But don’t get lazy!! If you see a benefit, do it!
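That assessment amounts to a tiny break-even calculation. A toy version, with invented numbers:

```python
# Toy break-even check for the advice above: a one-off setup cost
# (generating proxies, copying footage locally) only pays off when the
# time it saves per change, times the expected number of changes,
# exceeds it. All numbers are invented.

def worth_the_setup(setup_minutes, minutes_saved_per_change, expected_changes):
    return minutes_saved_per_change * expected_changes > setup_minutes

print(worth_the_setup(30, 2, 5))    # quick one-off fix: skip the setup
print(worth_the_setup(30, 2, 100))  # a day of iteration: do it
```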
Twitter for Colloquy
I don’t know if you already know this service called [Twitter][1]. It can be described as an IM independent status message, so you can notify people of your current doings if you like.
[Coda Hale][2] created a [Quicksilver action to post conveniently to Twitter][3]. [Ted Leung][4] thought it would be nice to have [Growl notifications][5] in the mix and [Matt Matteson][6] thought it needed a bit more sparkle by [adding iChat support][7]. Well, now I add two more options to the mix.
First I created a modified version of Matteson’s script to work with [Colloquy][8] — my favourite IRC chat client (which I also use for ICQ, MSN, Jabber and IRC with the help of [BitlBee][9]). This worked out pretty straightforward and nice — and can be downloaded here (for setup instructions go to [Coda Hale’s page][3]). But then I thought it is stupid to have to change the status message of my chat client outside my chat client. So I sat down a little while longer and created a Colloquy script that posts to Twitter — including Growl notifications.
This script works like the normal `/me` command in IRC but also posts the text behind the `/me` command to Twitter. Users of Colloquy will know what it does, but for you non-IRC people: `/me` creates an _action_ message, so `/me is hungry` becomes “AlexK (my nickname) is hungry”. And on [Twitter][1] it says _AlexK (my username there) is hungry_. It makes perfect sense (to me at least).
###How to install
To install it, download this file, unpack it and save it as twitter.scpt into your Colloquy plugins folder (~/Library/Application Support/Colloquy/Plugins/twitter.scpt), then restart Colloquy (on some machines a system restart was necessary for unknown reasons).
Then you need to configure it. If you’re not already using [Twitterrific][10], open Keychain Access and add a new password with the following data:
- Keychain Item Name: http://twitter.com
- Account Name: Your email address
- Password: Your Twitter password
If you’re already using Twitterrific, this password will already be in your Keychain. So just sit tight.
Now you can use it with the commands `/tweet` or `/twitter`, like so: `/tweet is testing out this cool plugin`. It should send an action message to your current chatroom (Nickname is testing out this cool plugin) and also to your Twitter account (Username is testing out this cool plugin). After it succeeds you will get a Growl notification.
On first use you will be asked whether the script may access your Keychain; this is fine and you should allow it. The first run might also take a little longer, but from then on it should be pretty fast. At least it is over here.
###Caveats
One user reported it running very slowly (up to 15 seconds), but over here it runs pretty much in realtime. Of course it has to send data over the internet, so that might be a slowdown factor. I guess just try it out.
UPDATE: [Jesse Newland][11] traced the slowness to AppleScript needing a lot of time to access the Keychain in some cases. Maybe a bug on Apple’s side? Anyway, he suggests [using RubyGems to work around this][12]. Thanks for that workaround, but I am not too hot on installing an extra piece of software just for this. Still, for those of you whose copy of my Colloquy/Twitter script runs slowly, this might be a viable solution.
And as always with these home-brewed scripts, I cannot guarantee that it works and I cannot accept responsibility for any damage done by it. It works great over here, but I cannot guarantee the same for your setup. That is all I can say.
###Following are comments from the old blog
Blaine Says:
Nice. It’s very cool to see twitter inching its way into every nook and cranny of the desktop and not.
Any particular reason you didn’t use Bitlbee’s Jabber integration to help you out?
Alexander Kucera Says:
Well, two reasons Blaine.
- I hardly use Jabber these days
- When I use the Jabber route I am locked in to using Jabber
Let me explain the second one.
Let’s say I am in an IRC channel. I’d have to switch the room to a Jabber connection to send my message to the twitter user.
With my script I am independent of the room I am in. It just intercepts the command and sends it to Twitter. I don’t have to think about sending it to a specific user or anything else. I just type my command and it sends.
Very easy, very “out of my way”. I like to keep it simple, otherwise I end up not using it.
Rinoa Says:
Found you on digg. Have you considered having your script submitted on the Colloquy website?
Alexander Kucera Says:
Hi Rinoa,
I wanted it to be tested by a few more people before making it an “official” part of the Colloquy site. But yes, I am definitely thinking about submitting it.
Blaine Says:
That makes sense. I didn’t realize Bitlbee was presented as an independent room in IRC.
Keep up the good work, and let us know (help@twitter.com) if there’s anything we can do to improve twitter!
Naming Conventions - Part 02 - Real Life Application
Last time I talked about the basic ingredients of a working naming convention, which was inspired by a topic covered on [Lifehacker][1] and [43 Folders][2]: what to avoid — like never using the word “final” under any circumstances — and what to include — for example, version numbers.
This time I will show you how to use this newly gained knowledge and plug its elements together into a working template for a professional naming convention. Just read on to find out how it’s done.
###The Template
Actually it is very simple. Take what we spoke about last time and plug the elements together to get a working naming convention, like this (this might wrap a bit awkwardly; if so, just wait for the real-life example two lines further down, which reads more easily):
<project-name>_<file-content>_<version-number>_<changes to the version before>_<author>.<file-extension>
Easy, isn’t it? So in a more lifelike example we get:
ambul_keying-Shot012_03_fixedMaskIssue_AlexK.mov
This file is part of the project called “The Ambulance”; it contains the greenscreen keying and compositing for shot 012 and is in its third revision. The change from the last version is a fix in one of the masks (which probably was wrong in version 02), and the change was made by me.
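The template assembles mechanically from its parts, so it is easy to build programmatically. A minimal Python sketch; the function and parameter names are my own, not part of the original convention:

```python
def build_name(project, content, version, changes, author, ext):
    """Assemble a filename following the
    <project>_<content>_<version>_<changes>_<author>.<ext> template."""
    # Zero-pad the version so v02 sorts before v10 in a plain name sort.
    return f"{project}_{content}_{version:02d}_{changes}_{author}.{ext}"

print(build_name("ambul", "keying-Shot012", 3, "fixedMaskIssue", "AlexK", "mov"))
# ambul_keying-Shot012_03_fixedMaskIssue_AlexK.mov
```

Wiring something like this into a render or save script means nobody has to remember the field order by heart.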
###It sorts in groups…
As you can see this is a pretty human-readable filename. And just from looking at it we can see what is going on. And because of the way we built the filename it sorts correctly when we sort by filename in the file-browser. We will always see the shots for the project “ambul” together, followed by all versions of keying shots, then the individual shots in order and then the individual versions of a shot.
And if you ever accidentally save your file in the wrong folder (another project, or your local drive instead of the project’s drive), it becomes pretty easy to find again, because it includes the project name.
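A quick way to see the grouping in action is to throw a few such names at a plain lexical sort. The second project name “bakery” and the extra shots are hypothetical examples of mine:

```python
files = [
    "ambul_keying-Shot012_03_fixedMaskIssue_AlexK.mov",
    "ambul_keying-Shot002_01_firstPass_AlexK.mov",
    "ambul_keying-Shot012_01_firstPass_AlexK.mov",
    "bakery_grading-Shot001_02_warmerLook_AlexK.mov",
]

# A plain alphabetical sort groups by project, then shot, then version,
# because that is exactly the order of the fields in the name.
for name in sorted(files):
    print(name)
```

Note that this only works because shot and version numbers are zero-padded; `Shot012_10` would otherwise sort before `Shot012_2`.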
###…and is extensible
The nice thing about this order is that we can now add as much additional information to the filename as we deem necessary after the version number — like in this case the changes from the last version and the author’s name. It will always sort the right way, because the version number comes first in the filename.
And we also need not worry about including the date in the filename directly in most cases, because the operating system takes care of this automatically.
###There is not one correct way — the template adjusts to your taste
There are of course many different options to express the same information as above. We could also write any of the following:
- the ambulance_keyingShot012_version03_fixedMaskIssue_AlexanderKucera.mov
- amb_gskS012_03_fixMask_AK.mov
- ambul.keying_Shot-012.03.fixedMask.AlexK.mov
- ambul-keying shot 012-v03-fixed mask issue-AlexK.mov
All these versions contain the same info. Some are more verbose, others less. Some are easier to read, others less. I tend to stick with underscores to divide the individual chunks of information and dashes or a combination of small and capital letters to make the individual chunks more readable. This comes from my programming history. If other schemes are easier to read for you, take them instead. This part is completely up to your taste and doesn’t influence the effectiveness of this naming convention as long as you always use the same method.
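Whichever separator you pick, using it consistently has a side benefit: the name stays machine-parseable. A sketch assuming the underscore variant from the examples above (the function name is my own):

```python
def parse_name(filename):
    """Split an underscore-separated filename back into its template fields."""
    stem, ext = filename.rsplit(".", 1)
    project, content, version, changes, author = stem.split("_")
    return {"project": project, "content": content,
            "version": int(version), "changes": changes,
            "author": author, "ext": ext}

info = parse_name("ambul_keying-Shot012_03_fixedMaskIssue_AlexK.mov")
print(info["project"], info["version"])
# ambul 3
```

This is why the underscore should appear only between fields: a stray underscore inside a field (say, in the changes description) would break the split.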
###Do yourself a favor — stick to it
Stick to it once you have a working naming convention. And please don’t try to change it after every project. You will be thankful for sticking with it when that project resurfaces after a month or a year.
If you have questions or suggestions about my way of naming files please leave a comment. I’d love to read about your ways of tackling this issue.