This is a post-mortem development post recounting my personal work during the Gooseberry project, detailing which features developed during Gooseberry made it to blender master, which didn’t, and why.
Since there have been some concerns about transparency, I will be posting some information here:
I had been working as a paid member of the Gooseberry crew for a few months before preproduction started, specifically since March 2014, and I’m drawing this list from my visible work in the git repositories of the blender project since that time. Anyone can check the full list for themselves. For reference, I generated the relevant commits from the blender git repository using:
git log --author="Antony Riakiotakis" --all --format="%cd %h %s" --since=3-1-2014
Master landed goodies
With this out of the way, a summary of the actual goodies that made it to master follows:
The GSOC branch still needed a few improvements and fixes on the user experience side, and being able to work on it full time helped me concentrate and deliver a solid feature set.
The sequencer gained a few features and refinements during Gooseberry. One of the most important fixes solved performance issues with undo: undoing during an editing session with sound strips, while waveform display was on, would free the waveforms and hang until all of them reloaded, which could easily take 10 seconds per undo step. This was solved by loading waveforms in a thread and preserving the waveform data between undo steps.
Some features were also added:
A lot of time was spent getting better synchronization between audio and video, an issue that turned out to be caused by inherent limitations in the library we use for sound streaming (OpenAL). In the end Jorg Muller, the author of blender’s original sound system, proposed supporting native audio backends to get better sync quality, but this is still not implemented.
A lot of features were added here. The most important is viewport compositing, with SSAO and depth of field effects. While the first was a gift to sculptors, the second was used in Gooseberry to get a better feel for the depth of field of the scenes in the movie.
World background display using GLSL was also added, making it possible to visualize environment textures in real time.
Last but not least, towards the end of the project, some time was spent refactoring blender’s mesh display code to use a more efficient data transfer scheme, which resulted in lower overall memory use and faster drawing.
Animation tools also got a lot of attention. I worked closely with the animation team trying to improve blender.
A feature started by Julian Eisel allows the image editor and sequencer to display blender-related metadata while previewing an image.
Basically, fixes in the blender codebase to make sure that the full 64-bit address range is used in our image pipeline. This made it possible to properly support and display images whose memory requirements were bigger than 4GB.
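To illustrate the class of bug being fixed (a hedged sketch, not the actual blender code): a pixel offset computed in 32-bit arithmetic silently wraps around once a buffer grows past 4GB, so lookups land on the wrong pixel. Python integers don't overflow, so the 32-bit behaviour is simulated here with a mask:

```python
# A 32k x 32k float RGBA image: 32768 * 32768 * 4 channels * 4 bytes
# per channel = 16 GiB, far beyond what a 32-bit offset can address.
WIDTH = HEIGHT = 32768
CHANNELS, BYTES_PER_CHANNEL = 4, 4

def offset_64bit(x, y):
    # Correct byte offset of pixel (x, y), computed at full precision.
    return (y * WIDTH + x) * CHANNELS * BYTES_PER_CHANNEL

def offset_32bit(x, y):
    # What the same calculation effectively does when every intermediate
    # is a 32-bit integer: the result wraps modulo 2**32.
    return offset_64bit(x, y) & 0xFFFFFFFF

# A pixel in the upper part of the image sits more than 4 GiB into the
# buffer, so the 32-bit offset no longer matches the 64-bit one.
x, y = 0, 20000
```

The fix is simply to do such size and offset arithmetic in types that span the full 64-bit address range (size_t in C) throughout the image pipeline.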
If the blender animation player (blender -a) is used to open a movie file with sound, it’s now possible to hear the sound as well. An indicator has also been added to make scrubbing easier.
Non-master landed goodies
Where there is glory, there is also failure. This is a list of things that were attempted during Gooseberry but were not merged into master, for various reasons:
The GSOC viewport project was mostly concerned with replacing OpenGL calls with an intermediate API that would handle drawing commands and communication with a graphics API. Jason and I took some time to look at the project, but bringing the branch to a master-ready state was a big undertaking, and the needs of the production made it impossible for me to concentrate on it. Some improvements were ported to master, such as OpenGL debug contexts, less reliance on the GLU library and better OpenGL context creation. The new API was based too much around legacy OpenGL and had some extra overhead, which made some developers sceptical. During February/March, a new branch was started with Mike Erwin, aiming to support a PBR-based viewport. However, after a month or so we had few results: there was a better, lower-level API at that point, but still no functioning PBR prototype, and I was still tied up in production with little time to contribute. At that point the project was suspended. Some of the code and ideas from that branch (subsurf drawing optimization, indexed drawing for polygons) were merged and improved upon for the display optimizations in blender 2.76, but none of the new APIs were merged.
The main idea behind the wiggly widgets branch was a system of reusable widgets that could be hooked into 3d/2d views to manipulate various properties and operators. One of the deliverables was supposed to be a system like Pixar’s tool “Presto”, which allows animators to treat areas of a mesh as handles for bone manipulation. This meant depth-aware widgets that used meshes as handles. After about two months of fighting with blender’s handler, operator, undo and RNA systems and with OpenGL selection, the project was suspended. Significant code had already been added, allowing operators and properties to be manipulated, but we still did not have a system for animators to use, and there was no depth buffer interaction with the rest of the scene. The expected time budget for a prototype was two weeks, which was way beyond my wizard level. The branch is currently being continued by Julian Eisel.
The purpose of the animation curves tool was to provide a real-time preview of the path of a bone. We already have such a tool, but we wanted to improve it to update automatically as animators tweaked a bone during transform. While this was easy to do, it took too long to compute the new paths. Obviously the operation had to be threaded; however, due to the way blender’s dependency graph works, the positions of the bones for the new frames would be stored in the original bone data structures. This would create data race conditions, with the transform system and the threaded dependency graph overwriting the bone positions simultaneously. A solution would be to flush the data to copies of the bones during dependency graph evaluation, but this was not yet supported, so the feature was dropped.
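The flush-to-copies idea can be sketched as follows, with hypothetical names (Bone, evaluate_frame and so on; blender's actual depsgraph operates on C data): the background path computation evaluates each frame into private copies of the bones, so it never writes into the data the transform system is editing.

```python
import copy
import threading

class Bone:
    # Minimal stand-in for a bone: just a name and a location value.
    def __init__(self, name, loc):
        self.name = name
        self.loc = loc

def evaluate_frame(bones, frame):
    """Evaluate one frame into copies; the originals stay untouched."""
    eval_bones = [copy.copy(b) for b in bones]   # flush to copies
    for b in eval_bones:
        b.loc = b.loc + frame                    # stand-in for real evaluation
    return eval_bones

def compute_motion_path(bones, frames, out):
    # Runs in a background thread; only ever writes to its own copies.
    for f in frames:
        eval_bones = evaluate_frame(bones, f)
        out.append(eval_bones[0].loc)

bones = [Bone("hand", 1.0)]
path = []
worker = threading.Thread(target=compute_motion_path,
                          args=(bones, range(3), path))
worker.start()
bones[0].loc = 5.0   # the transform system edits the original concurrently
worker.join()
```

Because evaluation only mutates the copies, the original bone ends up holding exactly what the user set, with no writes racing against it; without the copies, both threads would be storing positions into the same structures.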
This is a feature implemented during the final days of Gooseberry. It was hacked together using regular image previews and didn’t support movie files. In the end it was left out of master in anticipation of the greater changes by Bastien, with the expectation of doing the feature in a more robust way.
Aaaand, that’s it! Till the next project!