Modelling a possible Gene Drive in Mosquitoes

A while ago I told a fauna-nerd about CRISPR/Cas9 and the potential it has - for genetic modification as well as for diagnostics and similar applications. I have previously posted articles about CRISPR on Ich ahne Zusammenhänge, a blog I co-author, but this time I got into a little argument with her about its efficiency, which led me down an interesting path that I want to write about in more detail. Before I start, a little disclaimer: I am by no means an expert on any of this, so please take everything below with a grain of salt. If you notice any errors or have suggestions, I am always happy about feedback!

First, some context: with traditional techniques it was already possible to create a single mosquito that is unable to carry the parasite that causes malaria. But since the parasite has no detrimental effect on the mosquito, there is no evolutionary pressure that would preferentially select this particular version of the gene (aka this allele) of our "mutant" mosquito. And because there are so many "wild type" mosquitoes, you would need to release enough mutant mosquitoes to outnumber them to have a decent chance of getting rid of the wild type trait - and it is unrealistic to breed that many.

The really interesting thing about CRISPR is that the tool that performs the genetic modification can itself be encoded as DNA. Thus, if you create a mosquito egg cell that carries the instructions to build CRISPR/Cas9 with the right targeting and replacement sequences, this cell will modify its own genome (both copies of the target chromosome), and all the cells that divide from it will carry the same gene-altering code. This also means that, if such a mutant mosquito were to mate with a wild type mosquito, the usual laws of Mendelian inheritance would no longer apply - instead of the usual 50:50 chance of getting the mutant allele or the wild type one, almost 100% of the offspring would get the mutant allele (in practice the mechanism seems to break down every now and then, and initial results suggest that the effectiveness may be around 95-99%). This concept is called a Gene Drive, and as far as I know CRISPR is the only general purpose technique that makes such a thing possible.

The argument that started this article was over the number of generations required for the mutant allele to become expressed in more than half of a local population if you only start with one mosquito. My guess was that this should happen within 20 generations, while she thought it would take significantly more.

I tried to google this but found no good estimates for a very low number of initial mosquitoes. The best I could find were estimates that started with 10% of the population.

So I started thinking about how to arrive at a ballpark estimate. I quickly realized that my single mosquito would very likely die before it ever got to mate. The German saying "Die sterben wie die Fliegen" (they die like flies) may have something to it after all. But if you increase the number to a still very low 200 initial mutant individuals, the likelihood of at least a few successful matings becomes reasonable.

I ended up modeling the whole idea in a spreadsheet. I know almost nothing about either modeling insect populations or the genetic particulars of actual CRISPR methods as they are applied, so all of this is very, very rough - but it still produced some interesting results. I wanted to see how the numbers work out for a really big "local" population, and there seems to be remarkably little data on the number of mosquitoes in a given area. So I came up with 4 billion mosquitoes, which I wildly guessed might be the number that live on a small island in the middle of the wet season in a mosquito-infested area (an island so that you can meaningfully talk about a single "population" that is not constantly mixing with mosquitoes from the surroundings). I have no idea if that number is anywhere close to reasonable.

My modeling has several dramatic weaknesses: for example, it assumes perfectly random mating without regard to geographic proximity, which is clearly unrealistic. It also uses numeric factors for successful matings and for the fraction of eggs that develop into successfully reproducing adults that were tweaked to keep the population stable, whereas real mosquito populations surely vary wildly in size depending on external factors.

I originally wanted to use Guesstimate to enter ranges of plausible values and have it estimate the final ranges via Monte Carlo simulation (drawing random numbers from given distributions for every ranged variable and calculating the spreadsheet formulas with those), but Guesstimate is still too cumbersome to use for repetitive calculations, so I had to do it in a vanilla Google Docs spreadsheet with single precise values for the individual factors.

The result for those particular numbers and my guessed parameters was that it would take about 26 generations for those 200 initial mutants to spread the malaria resistance allele until it is expressed in the majority of the population (and to pretty much totally replace the wild type allele two generations later).
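
For those who would rather read code than spreadsheet formulas, here is a deliberately stripped-down sketch in R of the core calculation. This is not my actual model - it ignores mating success, mortality and drift, assumes a 98% homing efficiency and only tracks the drive allele frequency - but it lands in the same ballpark:

```
# Stripped-down sketch, not the spreadsheet model: a deterministic recursion for
# the frequency p of the drive allele under random mating. Heterozygotes pass the
# drive on with probability (1 + c_eff) / 2, which gives p' = p + c_eff * p * (1 - p).
c_eff <- 0.98                  # assumed homing efficiency
p     <- 200 / 4e9             # 200 homozygous mutants in 4 billion -> initial allele frequency
gen   <- 0
while (1 - (1 - p)^2 < 0.5) {  # stop once the majority carries at least one drive allele
  p   <- p + c_eff * p * (1 - p)
  gen <- gen + 1
}
gen                            # about 24 generations under these simplified assumptions
```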

When I was done with this, I was pretty appalled at how hard it is to audit spreadsheets. No wonder Reinhart/Rogoff made a really embarrassing Excel error back in 2010... Since I have recently started to work with the programming languages Haskell and Elm, I wondered if it would be easier for a non-programmer to understand and verify source code in Elm than it would be to verify Excel formulas. I was also looking for something that can be written and run on the web without installing anything. So I wrote the same model again in Elm and published it as a gist on GitHub. To run it, copy the code and paste it into elm-lang.org/try. The results are slightly different because I made the Elm code a little more precise (it doesn't allow for fractions of a mosquito to mate ;) ).

There are a number of other routes to try. My requirements are that it has to work on the web without installing anything (so that others can easily play with different parameters) and that it should be easy to audit. In my humble opinion, this second requirement excludes vanilla JavaScript - a language that, among other things, allows you to redefine its "undefined" value is just too hard to audit.

PureScript would have been an interesting choice since at one point it had a self-hosted compiler compiled to JavaScript, but that was so much slower than the native compiler that they gave up on it. So, like Elm, they now offer a Try PureScript website with the compiler running on a hosted machine.

The next thing I actually want to try is Wolfram Alpha, which has a slightly nastier syntax but brings many high level functions that might allow me to write a version with Monte Carlo sampling from probability distributions for all parameters in an understandable way without writing too much code. I am also considering writing the Monte Carlo simulation in Elm, but since "Try Elm" doesn't let you use any additional libraries, I'd have to write a lot of supporting code to be able to perform the simulation.
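
Just to illustrate what I mean by Monte Carlo sampling (sketched in R here purely to show the idea, not in any of the tools mentioned above): wrap the stripped-down recursion from above in a function, draw the uncertain parameters from plausible ranges and look at the spread of the results.

```
# Sketch of the idea only: sample the homing efficiency from the 95-99% range,
# run the simplified recursion for each draw and summarise the spread of outcomes.
generations_to_majority <- function(homing, n_start, n_pop) {
  p <- n_start / n_pop
  gen <- 0
  while (1 - (1 - p)^2 < 0.5) {
    p <- p + homing * p * (1 - p)
    gen <- gen + 1
  }
  gen
}
runs <- replicate(1000, generations_to_majority(runif(1, 0.95, 0.99), 200, 4e9))
summary(runs)  # distribution of "generations until the majority carries the allele"
```

In a full version every parameter (initial mutants, population size, mating success and so on) would get its own distribution, which is exactly what I hope Wolfram Alpha will make convenient.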

I'll blog again about how well Wolfram Alpha performs if it turns out to be a viable alternative. If you have any comments on my methods or ideas for other good ways to simulate this, please let me know in the comments or on Twitter!

Self-documenting data manipulation with R Markdown

The company I worked for over the last few years provides a lot of data cleaning and data manipulation services, mostly with proprietary tools that another developer and I created. One of the things I introduced before I left was a bridge between the proprietary datasets that are used inside that company and the R project. My main motivation for this was to enable self-documenting workflows via R Markdown, and in this blog post I want to talk about the advantages of this approach.

R Markdown is a syntax definition and a set of R packages that make it very straightforward to write normal text documents that embed R code. When the text files are compiled, the R code is executed and the results are embedded into the compiled document. These results can be the textual output of R functions (like summary(), which prints a few important summary statistics of a data set) or even graphics.

As the name suggests, R Markdown uses the Markdown syntax for formatting text, so you would put text between double stars to make it bold, and so on. Markdown is pretty neat in that it is both easy to read as plain text and easily compiled to HTML to be viewed with actual formatting in a browser.

It's probably easier to understand with an example, so here is a simplified version of what this looks like:

This is a sample r-markdown script that plots Age vs Income as a 
Hexbin plot. This text here is the natural language part that can 
use markdown to format the text, e.g. to make things **bold**.

```{r Income vs Age - Hexbin}
# The backticks in the line above started an R code block. 
# This is a comment inside the R block. We now load the hexbin 
# library and plot the data2008 dataset (the code for loading 
# the dataset was omitted here)
library(hexbin)
bin <- hexbin(data2008[, 1], data2008[, 2], xbins = 50, xlab = "Alter", ylab = "Einkommen")
plot(bin)
```

And this is what the compiled html looks like (embedded here as a screenshot)
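
If you want to try this yourself, compiling such a file to HTML is a one-liner in R, assuming you have the rmarkdown package installed (the file name is made up for the example; knitr's knit2html() works in a similar way):

```
rmarkdown::render("example_report.Rmd", output_format = "html_document")
```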

The great thing about inlining R code in a markdown document in this way is that you can create a new workflow that is much more maintainable because the focus shifts to documenting the intention. Instead of focusing on writing R code to get a job done and then documenting it a little with some comments or as text in a separate document, the analyst starts the work by describing, in plain text, what it is she wants to do. She then embeds the code to do the transformation, and can even generate graphs that show the data before and after.

This idea of documenting changes by embedding graphs was my original trigger for writing the bridge code. I had implemented the weighting code in our proprietary tool, but the textual output describing the changes in the weights was a bit terse. It was clear that a graphical representation would be easier to understand quickly, but introducing a rich graphing library into our proprietary DSL would have been a major undertaking. By making it fast and easy to get our data sets into R and back out again, we quickly gained a way to create graphs, and it also enabled the self-documenting workflow described above.

Another big plus is that since all transformations are described in natural language as well as in code, auditing data manipulations becomes a lot easier and quicker. I can thus wholeheartedly recommend this workflow to everyone who works with data for a living.

Laws - the source code of society

Today the citizens of the EU will elect a new parliament, and this seemed like a good opportunity to write down some of my thoughts on lawmaking. As the title suggests, I think that law texts are very similar to source code. Of course, source code is a lot stricter insofar as it defines exactly what the computer will do. Laws, on the other hand, describe general rules for behaviour as well as the punishment for violations of those rules - ultimately though, both are expressed as text. Yet where programmers have developed sophisticated tools to work with source code, laws are still developed in bizarre workflows that necessitate a huge support staff. In this post I want to describe one set of tools used by programmers to work on texts; how I think they could be useful for lawmaking; and what our society would gain if our lawmakers adopted them.

When non-programmers write, they often realize that it would be beneficial to save old versions of their texts. This leads to document file names with numbers attached ("Important text 23.doc") and then the infamous "final", "final final", "really final" etcetera progression. Programmers instead rely on a set of tools known as Distributed Version Control Systems (DVCS). The most famous of these is probably Git, which is used in many open source efforts. What these tools do is manage the history of the text documents registered with them, and allow easy sharing and merging of changes.

In practice, after changing a couple of lines in one or more documents, these changes are recorded as one "changeset". These changesets can be displayed as a timeline, and one can go back to the state of the documents at any point in their history.

A sample of the timeline of several changes in a DVCS

This in itself is already clearly useful, but what really makes DVCS magnificent tools is the ability to manage not just a simple linear progression of changes but different "branches". This allows several people to make changes to the text, share their changesets and let the system automatically combine their changes into a new version.
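
To make this a little less abstract, a branch-and-merge round trip in Git looks roughly like this (the branch name, commit message and target branch are invented for the example):

    git checkout -b alternative-wording      # start a branch for my proposed wording
    # ... edit the law text ...
    git commit -a -m "Rephrase paragraph 3"  # record the change on that branch
    git checkout master                      # switch back to the main draft
    git merge alternative-wording            # combine the proposal with the main draft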

Because changes by many people creating many branches can become confusing, there are some great tools to visualize these changes, as well as complex workflows that allow others to review and authorize changes via their cryptographic signature.

So how would these tools be useful in creating law texts? The main benefit would be to clearly document which politician introduced which line into a law, and which edits they made. Others in the working group could create branches with their favoured wording, and these could then be combined into the final version that is voted on by parliament.

One very useful tool is the blame tool (sometimes also called praise or annotate), which displays a document in such a way that each line is tagged with the person who last changed it. I think that it could be quite revealing to see who changed what in our laws, a process that at present would be very time consuming.
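
To give a concrete example: if the draft of a law lived in a Git repository, the blame view would be a single command (the file name is of course made up):

    git blame paragraph-3-draft.txt

Every line of the output is prefixed with the changeset, author and date of the last change to that line.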

The website Abgeordnetenwatch already tracks the voting behaviour of the members of the German parliament, and it would be a great extension of this effort if the genesis of all law texts were plain for everyone to see as well. Bundesgit already puts the final German law texts into a Git repository - but because these are only the final texts, and Git is only used by a single person instead of the real authors, the real power of a DVCS can't be used. For things like the blame tool to work, all the small changes during the draft stage in the parliament's working groups would have to be made inside a DVCS by their respective authors.

I am sure that there are many more potential improvements to the process of lawmaking that would be possible if lawmakers used DVCS tools. But the main and immediate advantage would be an increase in transparency, which in the end is what democracy is all about. Laws are the source code of our societies. Let's make sure they are made with the best tools available.

Automatic edit detection with FFmpeg and import into Premiere via EDL

I like to study films in an NLE like Premiere. You can see the rhythm of the scenes much more clearly when you look at the clips in the timeline. Like this:

Unfortunately it used to be very tedious to go through a scene (let alone a whole film) and set all the cuts again by hand. Until now. Today I created a workflow to automate edit detection for use in an NLE. All you have to do is run two tools and you get an EDL that you can import into your NLE of choice, link the media and off you go. The whole process takes maybe 15 minutes for a feature length movie.

You need to touch the command line for two commands, but stay with me, it's really simple. The hard part, the actual scene detection in the movie file, is done by the wonderful FFmpeg project, more specifically its ffprobe tool. It takes a video file and creates a spreadsheet (CSV) file with the times of the edit points it detected.

The second part is creating an EDL from this spreadsheet file. I wrote a little tool for this today that you can download below. It is written in C# as a console application, is released as GPL code and is hosted on Bitbucket if you want to compile it yourself or modify it. It should work as is on Windows, and on OS X and Linux if you install Mono.

If you want to use it, here is a step by step guide.

  1. Download ffmpeg from their website. Either put the bin directory in your path or, if you don't know how to do that, put ffprobe.exe into the directory where your movie is.
  2. Open a command line and go to the folder where the movie file is (Start -> cmd.exe on Windows).
  3. Run this command, replacing MOVIEFILENAME with the name of your movie file (the file name shouldn't contain any spaces). Be sure to copy the command exactly as stated here:
    ffprobe -show_frames -of compact=p=0 -f lavfi "movie=MOVIEFILENAME,select=gt(scene\,.4)" > MOVIEFILENAME.csv

  4. This will take a while and output a bit of status information. The ".4" is the scene detection level between 0.0 and 1.0; lower numbers create more edits, higher numbers create fewer edits. 0.4 should be a good default.

  5. Download EDLGenerator.exe and run it, again from the command line, like so:
    EDLGenerator.exe MOVIENAME.csv FRAMERATE MOVIENAME MOVIENAME.edl
    The first argument is the csv file you generated earlier, FRAMERATE is the framerate of the movie (needed for dropframe timecode corrections where appropriate), the second MOVIENAME is the source filename that should be written into the EDL file (it might help some NLEs with linking) and the last argument is the name of the EDL file to generate.
  6. Import the EDL file into your NLE (in Premiere via File -> Import).
  7. Link the media (in Premiere CS6 you can select all clips in the bin and choose link to media; you only have to select the source file once even though Premiere creates one source item for each edit).
  8. Voilà, you are done!

I have only used it on a couple of movies so far so there may be some rough edges - if you run into a problem, drop me a line.

Slides to my "Colour in Movies" and "Digital Video Workflow" lectures

The (German-only) slides for my two recent lectures at filmArche Berlin are online now: one online presentation about Colour in Movies, and two PDFs about Digital Video Workflow and about Codecs and Backups.

On the topic of backups, I have written the two previous blog posts, which are of course a lot more accessible than just the slides.

Incremental backups made easy

In my last post I wrote at length about backups, but I omitted one thing: how to make incremental backups that use so-called hard links and that barely take more space than 1:1 backups (on both Windows and OS X). First though, let me explain what is so nice about this concept.

Backups with a history

If space were no concern, it would be nice to never throw backups away. We would simply include the date and time each backup was taken in the name of the backup target folder and keep all those backups. Then, if one day we discover that we need a file that was deleted two weeks ago, we would simply access the backup from 16 days ago and restore it. If, like me, you have several TB of important data and can barely afford 2 additional sets of hard drives (one to keep as a daily backup, one that is stored at another location and swapped regularly), then this seems impossible.

Incremental backups

If you look at your whole hard drive(s), you will notice that between two backups only a fraction of the data actually changes. This is what incremental backups use to their advantage: they only store the new and changed files and thus save a lot of space. However, now you have a full backup at one point in time, and every time you run the backup again you get a new folder structure (or, if you choose bad backup software, a single proprietary file) containing only the new and changed files. This is a bit cumbersome. Wouldn't it be great to have a full snapshot each time?

This is where a feature called hardlinks comes in handy. Hardlinks are a way for file systems to reference the same file several times while only storing it once. Both NTFS (the main Windows file system) and HFS+ (the main OS X file system) support hardlinks, but both operating systems hide this feature from the user interface.

What we gain from this approach

So taken together, these features enable incremental backups that look like full snapshots but only store the new and changed data. This way you only need a backup drive that is a bit bigger than your source (since you will want to have some additional space for the newly created and modified files) and you can keep a full history on it.

rsync and two GUIs for it

rsync is an open source application that is used to copy data. Since version 3 or so it supports creating snapshot copies using hardlinks. On OS X the tool backuplist+ allows you to easily create incremental backups by checking the "Incremental backups" check box and entering how many past snapshots to keep. On Windows, QtdSync allows you to do the same thing if you change the backup type from "synchronisation" to "incremental".
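
If you would rather skip the GUIs, the relevant rsync feature is the --link-dest option: you point it at the previous snapshot, and every file that has not changed is hardlinked instead of copied again. A sketch with made-up, date-named paths (as described above) might look like this:

    rsync -a --link-dest=/backups/2013-05-01 /data/ /backups/2013-05-02/

The new folder looks like a complete copy, but only the files that changed since the previous snapshot actually take up additional space.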

Backups

I always thought that one of the great things about digital technology is the ability to have backups - physical items can break, but with digital data there is no reason why you should ever lose it, because creating exact copies is easy. And yet few people I know have a convincing backup strategy. Since I will give a lecture at the filmArche film school next week on digital file workflows and backups, I thought this would be a good time to write down the most important points.

Your hard drives will fail

The question is not whether your hard drive will fail, but when. Not to have a backup of your important data is negligent and easily avoided. So I think it's well worth thinking a bit about it. If reading the title of this blog post made you feel a little guilty because you do not have a backup of your important data, read on. I promise to explain things in simple terms and walk you through some common backup scenarios for individuals or small groups of people.

Oh, and one little disclaimer: thinking about backups can be a lot more complex than what I present here. This is just the bare minimum any normal person should know about backups in this digital age :)

What is a backup?

Let's start with a simple thought - what is a backup? It's a complete, independent copy of your data that does not share a single point of failure with the original copy. What's a single point of failure? Anything that can go wrong that will destroy both copies at once. For example, if you have a backup on the same hard drive as the original file, then this hard drive is a single point of failure. If it dies, both your backup and the original data are gone.

This single point of failure thing is the key. I worked on a video shoot once where they shot with a camera that records to flash cards. The content of the flash cards was copied onto a hard drive and then to another backup hard drive before a flash card could be erased and reused. So far so good. But then, at the end of the first day, the guy who did the data wrangling packed both hard drives into the same backpack and put it with the rest of the equipment for someone else to transport to the next location. It didn't occur to him that if that backpack fell down or got lost, both copies would be affected - the backpack was a single point of failure.

What to back up?

Ideally you would just back up everything and be done with it. But some things are trickier to back up than others. Let's start with the obvious: all your "user" data should be backed up, i.e. your photos, the texts you write, the ("project") files of the applications you work with (spreadsheet software, video editing software, databases etc). This is the stuff that you really should not lose. I work as a photographer, so all the photos I take clearly fall into that category, as do all the business-related files like my accounting software's files.

The nice thing about this category of "user data" is that you usually work with it on a regular basis and thus know where it is. Ideally you do what I do and put all that stuff on its own hard drive(s), away from the main operating system hard drive. This makes it easy to identify the most important stuff to back up (this one hard drive).

The second class of data is stuff that the software you use saves but that you do not directly interact with - preferences, for example. I work with software like Adobe Lightroom or Microsoft Visual Studio, both of which are very complex pieces of software with a ton of user settings and user generated presets. If you were to lose these, it would probably not be the end of the world, but it would suck. The nice thing about this group of data is that it is usually rather small.

The third group of data is stuff like your operating system and the installed applications. While it would be nice if you could just copy all this stuff onto its own disk and, if the main hard drive fails, restore it to a new disk and be done with it, this usually doesn't work. Operating systems need a boot loader and need to have certain data at certain sectors on the disk, things like that. So you need disk imaging software to back up this class of data, which is why I do not bother to back it up. If the main system drive fails, it will take me a day or so to install the OS and the rest of the software again from the original DVDs, but that is OK. Your mileage may vary though - if you simply can't afford a day of downtime, it may make sense to create a scheme where you can back up this kind of data as well, or to have a second computer ready as a standby machine so you can switch quickly.

What kind of backup to do?

Most of the data we produce is not static - text files change, photos are edited, new files are created all the time. So a backup needs to be done regularly. And with this comes an important decision: do you need just one copy of your data, or is it important to be able to go back to the way things looked some time ago? If all you need is an up-to-date copy, then it may be enough to run a program that mirrors all the changes that happened onto your one backup. This is called a 1:1 backup. But if you need to keep a history of your data, things become more difficult.

When you need past versions of your data, one approach is to buy X disks and use them in turn. So if you have 7 backup drives and you create 1:1 backups to one of them every day, then you can go back in daily increments up to one week into the past. But this is expensive. So there is software that lets you do this in a clever way and only stores what changed since the last backup. Because, you know, usually only a few files change between two backups.

There are a lot of different ways to do these so-called incremental backups. The easiest on the Mac is Time Machine - on the PC it's a bit more difficult (Genie Timeline does something similar). For smaller amounts of data, online services like Dropbox usually provide some sort of history (although you have to trust that service provider to take good care of your data). But even Time Machine and Genie Timeline don't work well if you need to manage several disks, which is a common case today.

I have recently discovered that the powerful command line tool rsync now has the ability to create incremental backups with hard links, much like Time Machine does. I will write up my findings in another blog post.

A good solution for the real world

If all the data you care about is a few hundred MB and you usually have internet access, then a service like Dropbox does all a normal user needs. But nowadays even my grandparents have a few dozen GB of photos, and I have around 5 TB of important data that I need to keep safe. So I will now describe the setup I use, which I think is pretty safe and not overly complicated.

Step 1: data organization

It's important to know where your important data is. So I have a policy not to use the usual "My Documents" folder or the "Documents" folder on my OS hard drive, but instead to put all the important data on dedicated data disks - in my case two of them; let's call them A and B. All your important data should be on these disks. If the main OS drive dies, your main data should survive, even without the backups.

Step 2: get two more hard drives for each data hard drive

Why two, you may ask? Two reasons. First, while your backup drive is attached to your computer, your computer is a single point of failure. If you have a really nasty virus that wipes all your attached hard drives (or worse, encrypts them and extorts you with the password :) ) and you only have one backup, then that's it - you had no backup, because your two copies had a single point of failure. The second reason is that you should keep one of your two backup drives at another location. At a friend's place, your office, your lover's - it doesn't matter. Just pick a secure place, store it there and exchange the two disks every now and then (maybe every week or so). This way, even if you get robbed or your house burns down, the data will still exist somewhere. It will be a couple of days, maybe (if you're lazy) a few weeks old, but at least most of it will still exist. (Hint: if you do not encrypt your backup drives, you will want to store them with people you fully trust :) )

Step 3: Create your first backup

Because Time Machine and similar tools usually do not work with more than one hard drive, I recommend using simple 1:1 backups instead (or a clever rsync strategy which I will describe in another blog post soon). With today's huge disks, even a 1:1 backup that overwrites everything takes quite a long time, so I recommend using rsync on Linux or Mac or robocopy on Windows to create efficient 1:1 copies that only copy what has changed. This way I can back up 3 TB of data every night within 1-3 hours.

Rsync and robocopy come preinstalled with Mac OS X and Windows respectively, and both have GUI frontends available for those who do not enjoy working on the command line (e.g. arrsync and yarcgui). Rsync creates a 1:1 backup with the -a option, robocopy uses /MIR for the same result.
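
To make this concrete, here is roughly what such a mirroring job looks like on the command line, first for the Mac, then for Windows (the paths are placeholders for your own data and backup drives):

    rsync -a --delete /Volumes/DataA/ /Volumes/BackupA/
    robocopy D:\DataA F:\BackupA /MIR

The --delete option (rsync) and /MIR (robocopy) make the target an exact mirror, so files you delete from the data disk also disappear from the backup on the next run.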

With the GUI frontends it is pretty simple to set up one copy job per hard drive (just tell it where to copy the data from and to).

Once you have finished setting up rsync or robocopy to copy the data disks to the backup drives, let it run once and check that everything worked (this first run will take a few hours per TB, possibly longer if you use a slow USB 2 or FireWire connection). Subsequent runs should be much faster.

Step 4: Setup dropbox or similar for your most important, smallish files

The most important files we have are often pretty small - many are text files, and many change frequently. For this kind of data it is reasonable to use Dropbox, so I advise you to use it in addition to the setup described above for data that is rather small, changes a lot, and where earlier versions of a file may be useful in the future.

Step 5: Setup reminders to switch backup drives, test backups

It's important to switch your two sets of backup drives regularly and to check that the backup works, so set up reminders in your calendar to do so. The best way to check whether a backup works is to go to a different computer and try to open your files. If your files work with all their dependencies (e.g. linked media files in video editing software), you are safe. If not, it's time to improve your setup.

A note on the side: RAID is not a backup

One last thing: some people have RAID setups configured for their data and think that they don't need any further backup. This is wrong. RAID is a system that protects you against hard drive failure by using redundant drives, but of course the RAID system itself, the computer it is attached to and the apartment that computer is located in are all single points of failure. RAID systems are nice, but they are not a replacement for backups.

Final thoughts

Whew, quite a post. This stuff may seem a bit complicated, but it is the simplest setup I found that keeps me safe and makes reasonable compromises for my personal use case. I could go into a lot more detail, for example on why I prefer uncompressed backups, but I think this will do for now. If you found this post useful, if you have questions or if you spotted an error, please let me know in the comments below!

920 Milligray website finished

I finally finished work on the website for the film I am currently working on, 920 Milligray. The design was done by Marius Wawer; I built the HTML page and set up a Drupal CMS for the future (it currently only serves the FAQ page).

920 Milligray is going to be a drama centred on Katja, a young girl who barely knew the world before the catastrophe, and her older brother Uwe. The film is set in a post-apocalyptic Europe and is currently in development. By the way, we are actively looking for production companies to get on board, so if you know someone who might be interested, please tell them about the project!

One short side note on the workflow I currently use to update my websites: a lot of people who work with websites still use FTP to push new HTML pages or CMS themes to their servers. I seriously recommend doing away with that approach and instead using a distributed version control system like Mercurial or Git for this purpose. This way you have the full history locally as well as on the server(s), can maintain branches (e.g. for version 2.0) and switch once everything is ready. Not to mention the advantage of easily being able to share work between people and merge changes from different people with ease.

Git is the system used for the Linux kernel project, among others; it is very powerful but a bit convoluted in daily use, IMHO. I prefer Mercurial, which is also powerful but quite a bit simpler to use.

The way I like to work is to have a local Mercurial repository where I commit changes in small logical batches (command line: hg addremove and hg commit). Then I have a private repository at bitbucket.org where I have easy-to-use and secure HTTPS access (hg push locally to push to the remote repository). Finally, on my webserver (with shell access) I have the repository that contains the content that is visible to the public (hg fetch to get the up-to-date version from the Bitbucket repository). This way, whenever something goes wrong, I can just go back in the history and fix things locally. I use a similar system for all the software development I do, and increasingly for other things as well.
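
In day-to-day use that boils down to a handful of commands (the commit message is just an example):

    # after editing the site locally:
    hg addremove                      # record added and deleted files
    hg commit -m "Update FAQ page"    # commit locally, in a small logical batch
    hg push                           # push to the private repository on bitbucket.org
    hg fetch                          # on the webserver: pull and update to the new version

(hg fetch comes from Mercurial's fetch extension.)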

Our webserver is pretty heavily firewalled, but at some point I want to implement HTTPS access to the webserver repository directly so I can push without the detour via Bitbucket. With a simple post-push script it will then be possible to get an up-to-date version onto the webserver with a simple local hg push. Nice.

One last note: if you work on Windows, TortoiseHg is a very nice GUI for daily Mercurial use.

Colour grading

I'm giving a little lecture tomorrow at filmArche on colour grading and thought that I could share a few links on the topic here.

To give a feeling for the use of colour in movies, I really like the moviebarcodes. The artist who makes them wants to stay anonymous, so I can only link to his Tumblr site. The image below is from that site, where you can also order prints (I personally find them beautiful). Wired UK described the process of making these images very well: [the software the artist wrote takes] every single frame of the film (and its constituent colours), stretches them vertically and lines them up in chronological order to create an image that gives a visual overview of how the film would look if you saw it all in one go.


A Scanner Darkly as a Moviebarcode

Also very useful is prolost, the blog of Stu Maschwitz. He is a filmmaker with a VFX background and helped develop a number of colour grading plugins that can be used with several NLEs. Stu has a collection of colour grading tutorials for one of these plugins, Colorista II, on his site. I will show Magic Bullet Looks at my little lecture, as it incorporates most of the important tools for a more complex grade and is compatible with several NLEs but also usable as a standalone tool for grading stills.

Grading stills and finding reference stills from movies or other sources is a very important preproduction step in my opinion. I really like screenshotworld for browsing stills from a number of different movies.

As you may know, digital colour grading is a rather recent practice that took off only in the early 2000s, with O Brother, Where Art Thou? being credited as the first feature film to be fully digitally graded. Digital colour grading offers a whole array of more sophisticated and more targeted colour manipulations, which is sometimes used to very positive effect. However, as this blog post on the Teal and Orange plague nicely illustrates, it has also led to an overuse of this very powerful colour contrast.

Human colour perception is very sophisticated (although some animals have even weirder colour perception), and colour grading is therefore a very important and complicated subject. I'll write more on it some time soon, but for now I hope the links here make for a nice read.

New series of posts about photography - Artefacts of Light

I started writing a series of blog posts with the title "Artefacts of Light" at the Iconoclash Photography blog. The series will cover different phenomena in photography, and the first post is about Bokeh (the rendering of out-of-focus light sources, as seen above). Future posts will cover things like Film Grain, Back Light and Empty Space. I always appreciate feedback, so let me know what you think about it - thanks!
