This is a post-mortem development post recounting my personal work during the Gooseberry project, detailing which features developed during Gooseberry made it into Blender master, which didn't, and why.
Since there have been some concerns about transparency, I will be posting some information here:
I had been working as a paid member of the Gooseberry crew since a few months before preproduction started, specifically since March 2014, and I'm drawing this list from my visible work in the Blender project's git repositories since that time. Anyone can check the full list for themselves. For my reference, I generated the relevant commits from the blender git repository using:
git log --author="Antony Riakiotakis" --all --format="%cd %h %s" --since=3-1-2014
Master landed goodies
With this out of the way, a summary of the actual goodies that made it to master follows:
The GSoC branch still needed a few improvements and fixes on the user experience side, and being able to work on it full time helped me concentrate and deliver a solid feature set.
The pie menu project was started on the side at the beginning of 2014, but it was during June/July 2014 that development really picked up and the feature was finished.
The sequencer received a few features and refinements during Gooseberry. One of the most important fixes solved performance issues with undo: undoing while editing with sound strips and waveform display enabled would free the waveforms and hang Blender until all of them had reloaded, which could easily take 10 seconds per undo step. This was solved with threaded waveform loading, as well as by preserving the waveform data between undo steps.
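For the curious, the core of that fix looks roughly like the sketch below. All names here are hypothetical and Blender's real code uses its own job system rather than raw pthreads, but the two ideas are the same: the expensive decode runs on a worker thread so the UI never blocks, and the computed waveform is kept around instead of being freed on undo.

/* Minimal sketch of threaded waveform loading. All names are hypothetical;
 * Blender's actual implementation uses its own job system. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct Waveform { float *samples; size_t length; } Waveform;

typedef struct Sound {
  Waveform *waveform;     /* kept alive across undo steps instead of freed */
  bool loading;           /* true while a worker thread builds the data */
  pthread_mutex_t lock;
} Sound;

/* Assumed to exist: the expensive decode of the sound file. */
extern Waveform *waveform_build_from_file(Sound *sound);

static void *waveform_load_worker(void *arg)
{
  Sound *sound = arg;
  Waveform *wf = waveform_build_from_file(sound);  /* slow part, off the UI thread */
  pthread_mutex_lock(&sound->lock);
  sound->waveform = wf;
  sound->loading = false;
  pthread_mutex_unlock(&sound->lock);
  return NULL;
}

/* Called from drawing code: never blocks, starts the worker on first use,
 * and simply returns NULL until the waveform is ready. */
Waveform *sound_waveform_get(Sound *sound)
{
  pthread_mutex_lock(&sound->lock);
  if (sound->waveform == NULL && !sound->loading) {
    sound->loading = true;
    pthread_t thread;
    pthread_create(&thread, NULL, waveform_load_worker, sound);
    pthread_detach(&thread);
  }
  Waveform *wf = sound->waveform;
  pthread_mutex_unlock(&sound->lock);
  return wf;
}

The drawing code can then call sound_waveform_get() on every redraw and draw a placeholder while it returns NULL, so the editor stays responsive no matter how many strips are reloading.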
Some features were also added.
A lot of time was spent getting better synchronization between audio and video, an issue that turned out to be caused by inherent limitations in the library we use for sound streaming (OpenAL). In the end Jörg Müller, the author of Blender's original sound system, proposed supporting native audio backends to get better sync quality, but this is still not implemented.
A lot of features were added to the viewport. The most important is viewport compositing, with SSAO and depth of field effects. While the first was a gift to sculptors, the second was used in Gooseberry to get a better feel for the depth of field of the scenes in the movie.
World background display using GLSL was also added, making it possible to visualize environment textures in real time.
Last but not least, towards the end of the project, some time was spent refactoring Blender's mesh display code to use a more optimal data transfer scheme, which resulted in less overall memory use and faster drawing.
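To give an idea of what "a more optimal data transfer scheme" means in practice, below is a sketch of the general technique of indexed drawing with buffer objects. This is not the actual Blender code, just an illustration in the legacy OpenGL style that 2.7x viewports used: instead of uploading a copy of a vertex for every face that uses it, unique vertices are uploaded once and faces reference them by index.

/* Sketch of indexed drawing with buffer objects; not Blender code.
 * GLEW is used here for the OpenGL 1.5+ buffer-object entry points. */
#include <GL/glew.h>

void draw_mesh_indexed(GLuint vertex_buffer, GLuint index_buffer, int triangle_count)
{
  /* Unique vertex positions, uploaded to the GPU once. */
  glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
  glVertexPointer(3, GL_FLOAT, 0, NULL);
  glEnableClientState(GL_VERTEX_ARRAY);

  /* Three indices per triangle: shared vertices are referenced, not copied,
   * which shrinks the buffers and lets the GPU reuse transformed vertices. */
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
  glDrawElements(GL_TRIANGLES, triangle_count * 3, GL_UNSIGNED_INT, NULL);

  glDisableClientState(GL_VERTEX_ARRAY);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
}

For a typical mesh, where each vertex is shared by several faces, this cuts both the upload size and the per-frame vertex work considerably.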
Animation tools also got a lot of attention; I worked closely with the animation team, trying to improve Blender for them, and a number of features were added.
Metadata display, a feature started by Julian Eisel, allows the image editor and sequencer to display Blender-related metadata while previewing an image.
There were also fixes in the Blender codebase to make sure that the full 64-bit address range is used in our image pipeline. This made it possible to properly support and display images whose memory requirements were bigger than 4GB.
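To illustrate the class of bug with a generic example (not the actual Blender code): an offset computed in 32-bit int arithmetic silently overflows once the buffer grows past 2GB, even in a 64-bit build.

#include <stddef.h>

/* Buggy: (y * width + x) * channels is evaluated in 32-bit int arithmetic
 * and overflows for image buffers larger than 2GB, even on 64-bit builds. */
float *pixel_get_buggy(float *buffer, int x, int y, int width, int channels)
{
  return buffer + (y * width + x) * channels;
}

/* Fixed: promote to size_t before multiplying, so the offset can use the
 * full 64-bit address range. */
float *pixel_get_fixed(float *buffer, int x, int y, int width, int channels)
{
  return buffer + ((size_t)y * (size_t)width + (size_t)x) * (size_t)channels;
}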
If the Blender animation player (blender -a) is used to open a movie file with sound, it's now possible to hear the sound as well. An indicator has also been added to make scrubbing easier.
Non-master landed goodies
Where there is glory, there is also failure. This is a list of things that were attempted during Gooseberry but were not merged into master, for various reasons:
The GSoC viewport project was mostly concerned with replacing OpenGL calls with an intermediate API that would handle drawing commands and communication with the graphics API. Jason and I took some time to look at the project, but bringing the branch close to a master-ready state was a big undertaking, and the needs of the production made it impossible for me to concentrate on it. Some improvements were ported to master, such as OpenGL debug contexts, less reliance on the GLU library and better OpenGL context creation. The new API was based too much around legacy OpenGL and had some extra overhead, which made some developers sceptical. During February/March, a new branch was started with Mike Erwin, aiming to support a PBR-based viewport. However, after a month or so we had few results: there was a better, lower-level API by then, but still no functioning PBR prototype, and I was still tied up in production with little time to contribute. At that point the project was suspended. Some of the code and ideas from that branch (subsurf drawing optimization, indexed drawing for polygons) were merged and improved upon for the display optimizations in Blender 2.76, but none of the new APIs were merged.
The main idea behind the wiggly widgets branch was to make a system that allowed reusable widgets to be hooked into 3D/2D views, allowing manipulation of various properties and operators. One of the deliverables was supposed to be a system like Pixar's tool "Presto", which allows animators to treat areas of a mesh as handles for bone manipulation. This meant depth-aware widgets that used meshes as handles. After about two months of fighting with Blender's handler, operator, undo and RNA systems, and with OpenGL selection, the project was suspended. Significant code had already been added that allowed operators and properties to be manipulated, but we still did not have a system for animators to use, and there was no depth buffer interaction with the rest of the scene. The expected time budget for a prototype was two weeks, which turned out to be way beyond my wizard level. The branch is currently being continued by Julian Eisel.
The purpose of the animation curves tool was to provide a real-time preview of the path of a bone. We already have such a tool, but we wanted to improve it to update automatically as animators tweaked a bone during transform. While this was easy to do, computing the new paths took too long, so the operation obviously had to be threaded. However, due to the way Blender's dependency graph works, the positions of the bones for the new frames would be stored in the original bone data structures. This created data race conditions, with the transform system and the threaded dependency graph overwriting the bone positions simultaneously. A solution would be to flush the data to copies of the bones during dependency graph evaluation, but this was not yet supported, so the feature was dropped.
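To make the race concrete, here is a rough sketch with hypothetical names; the real structures and evaluation flow in Blender are far more involved. Both the transform tool and the threaded dependency graph write the same pose data, and the copy-on-write idea is to have the dependency graph evaluate into a private copy so the original is never touched from a second thread.

#include <stdlib.h>
#include <string.h>

typedef struct Bone {
  float pose_matrix[4][4];  /* written by the transform tool during a drag */
} Bone;

/* The race: while the user drags, the transform system writes
 * bone->pose_matrix, and a threaded depsgraph evaluating other frames for
 * the motion path writes the SAME memory. Two unsynchronized writers:
 * a classic data race. */

/* The copy-on-write idea: evaluation writes into a private copy, and the
 * motion path preview reads only from that copy, never from *orig. */
Bone *depsgraph_bone_copy_for_eval(const Bone *orig)
{
  Bone *copy = malloc(sizeof(*copy));
  memcpy(copy, orig, sizeof(*copy));  /* depsgraph writes here, not to *orig */
  return copy;
}

This is essentially the copy-on-write scheme Sergey hinted at (see the comments below): give the evaluation thread its own copy of the data.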
This feature was implemented during the final days of Gooseberry. It was hacked together using regular image previews and didn't support movie files. In the end it was left out of master in anticipation of bigger changes by Bastien, with the expectation of doing the feature in a more robust way.
Aaaand, that’s it! Till the next project!
Hi! Are you planning to finish the “failed” features, or are they dropped forever (especially the widgets and real-time paths stuff)?
Or maybe they’d be done in 2.8?
Btw, thanks for the HUGE work done during the Gooseberry project!
wiggly widgets is being continued by Julian, and I might be inclined to take another look at some point. For the viewport we have great aspirations, though there’s a chance I might not be able to follow up on it properly due to time constraints. Animation curves should be doable at some point, but I would need to study the new dependency graph and see how the limitation could be avoided. Sergey once hinted at a copy-on-write system that would definitely help there.
Thanks for the reply! Glad to hear that these features aren't dropped and could be developed in the future. Keep up the amazing work, hope you'll overcome all obstacles.
Certainly the interactive and editable “motion path” is the most anticipated feature for me and all Blender animators.
I truly hope that this feature is not dropped for good.
And, yes: thank you for your work!
Man, do you even get sleep? ;) I thoroughly enjoyed reading that write-up, and I appreciate even more now the amount of work that you developers do on Blender. Cheers, and have a virtual beer on me! :D
Way too much information for a creative guy like me, but one thing is for sure: great improvements here, made by you and others. I am proud that I made a donation to help Blender and the project do the right coding. Thanks for your writing, and thumbs up for the next Blender project!
Thanks for all this information, Antony; as a simple user it's always hard to get a real idea of the work done by developers, and the Gooseberry project helped me grasp how you work, especially with the weeklies and this summary.
Beyond supporting the Cloud and the development fund, I wanted to thank you personally, as well as all the other developers, for your work, even if I have to admit that I don't always understand everything you do ;-)
Cheers,
Bernard
One feature not mentioned here is the “use sequence” option for the VSE scene strip; that would have been a nice addition.
Indeed, but that was Campbell’s contribution, not mine.
I hope it will get merged at some point; it would help keep the video editing space clean without creating meta-strips.
Is there any way that the “use sequence” option could be made into an add-on or something in the future?
SSAO and depth of field effects… they work great!
I hope to see new lines of your code in Blender soon.
Thanks for your great work :-)
Thank you Antony Riakiotakis, your effort and zeal are much appreciated. And many thanks to Ton also, I love you all.
Thank you so much for all the amazing work!
Looking forward to integrating those features into my workflow.
A Big Thank You!
Have another virtual beer on me as well!
Or tea, if you have an irritable bowel ;)
Best regards, Aclariel.
I'm always using the new paint tools, and recently I used cavity masking to paint a cinnamon roll for a product shot – it was a snap, and avoided the mechanical masking usually arrived at with face selection masking. Really cool stuff, and brilliant work all around. I really appreciate Gooseberry and all that it has done so far to improve our favorite software.
Thanks for all the hard work, Antony! It’s much appreciated!
Many thanx for your contributions Antony – esp. the ones related to the VSE :)
Thanks for all the work you have done. Hope to see moar coming from you, Santa Claus :)
I'm interested in the “text strip type” mentioned in your contributions to the sequencer. Where can I read up on that?
Thank you.
Yes! I've found the new Text-type sequence strip in the Sequence Editor: Add>>Effect_Strip>>Text. (Version 2.76 is not yet published on the download page of the main Blender.org site – I couldn't wait to try it!)
Thank you for your work on including this strip type. Even though it seems to be limited to one font right now, I have hopes that there will be a way to link them (by drivers?) to display their content in Text objects within the 3D View. I envision the day that I can turn the Text strip's opacity down to zero and have a paired text object render out the same words. Maybe even with keyframes to swap out textual data over time.
I'm sorry if this sounds like a feature request. Your hard work just has me imagining great things. Again, thanks in bunches.
A big thanks!
Just curious: because you worked in a production environment, you had coding targets – targets that probably would have had different weights if no movie was being made.
Looking back on it, did this movie-making business put forward things that had to be addressed by improving Blender? In other words, without the movie, would those needs likely never have been addressed, and never ended up in Blender?
And might there be other things you'd have liked to add to the project, but for which there was less need with the goal of this movie production in mind?
Background on why I ask:
I think that developing with a real movie in mind is a great way to evolve Blender. I know some don't like that, but I believe each method (with or without a movie) results in different code improvements, and I think this movie made Blender more production-ready as a result of this path. I'm wondering what things you personally would have liked to add to Blender, if only there had been time to create them.
I agree. I really like the way Ton (and others) have set up the BI and the BF to produce a truly professional open source project.
Making open movies where the coders are in the same room as the artists is probably the ultimate form of “dog-fooding” Blender.
I think it leads to really high quality software, and rapid development because the developers hear the complaints and feature requests under a tight deadline.
The quality and development pace of blender is amazing in the world of open source. Keep up the good work!
Seriously, what you guys do, and the time frame you do it in, amazes me. You're like rock stars to me.
Well done.
I didn’t see a BlenderNation post on Cosmos Laundromat, but I just wanted to say that I thought the quality of the animation was simply astounding! A HUGE improvement compared to Big Buck Bunny and even Sintel. It can hold its own with anything Pixar and Disney have produced.
Nothing but humble appreciation to You and the great team you’re working with! Thank you so much for all these amazing features, optimizations, improvements – they make a huge difference!
Great job!
There should be an annual Kickstarter to accelerate these projects and further improve our beloved Blender.