[Soc-2010-dev] Status Report
Konrad Wilhelm Kleine
konrad at konradwilhelm.de
Sat Aug 14 00:53:32 CEST 2010
Hi to everyone,
I have been quite productive this week and was able to finish the
infrastructure for image layers and update its documentation:
(I will create a screenshot and a video later and add it to this page.)
Currently you can have an unlimited number of layers in each image. The
layers are displayed on top of each other, as in any other painting
package such as The GIMP or Photoshop.
UI-wise the layers are organized in a stack and can be moved up and
down. Each layer currently has an opacity slider, a visibility switch
and a name. Besides moving layers up and down, you can add and remove
layers and fill them with a constant color.
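The stack described above could be modelled roughly like this; note that ImageLayer, LayerStack, layer_push and layer_move_up are hypothetical names for illustration, not Blender's actual DNA structs or API:

```c
#include <stddef.h>

/* A sketch of a doubly linked layer stack: prev = below, next = above.
 * All names here are illustrative, not from the actual patch. */
typedef struct ImageLayer {
    struct ImageLayer *next, *prev;
    char name[64];
    float opacity;   /* 0.0 (transparent) .. 1.0 (opaque) */
    int visible;     /* visibility switch */
} ImageLayer;

typedef struct LayerStack {
    ImageLayer *first, *last;   /* bottom and top of the stack */
} LayerStack;

/* Push a new layer on top of the stack. */
static void layer_push(LayerStack *stack, ImageLayer *layer)
{
    layer->prev = stack->last;
    layer->next = NULL;
    if (stack->last) stack->last->next = layer;
    else stack->first = layer;
    stack->last = layer;
}

/* Swap a layer with the one above it ("move up" in the UI). */
static void layer_move_up(LayerStack *stack, ImageLayer *layer)
{
    ImageLayer *above = layer->next;
    if (!above) return;              /* already on top */
    layer->next = above->next;       /* relink the pair in place */
    above->prev = layer->prev;
    above->next = layer;
    layer->prev = above;
    if (above->prev) above->prev->next = above;
    else stack->first = above;
    if (layer->next) layer->next->prev = layer;
    else stack->last = layer;
}
```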
Displaying and drawing are two open issues currently.
Before my new implementation, images in the UV/Image Editor were drawn
directly into the framebuffer using glDrawPixels(). That meant that if I
drew multiple images at the same location, they would overwrite each
other. With glDrawPixels() there was no way to adjust an image's
opacity unless alpha was pre-multiplied into the image. There seems to
be a way using glPixelTransfer (or something like that), but it was slow
as hell. That's why I chose to draw the layered images using a texture
approach. The blending does not yet work as I would like, though.
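For illustration, the per-pixel blend that glDrawPixels() cannot provide is simple to state: the standard "over" operator scaled by a per-layer opacity factor, assuming straight (non-premultiplied) alpha. This is a sketch of the math, not code from the actual patch:

```c
/* Blend one source pixel over a destination pixel using straight
 * (non-premultiplied) alpha, scaled by a per-layer opacity factor.
 * Channels are floats in 0.0 .. 1.0 -- a sketch, not patch code. */
static void blend_pixel_over(float dst[4], const float src[4], float layer_opacity)
{
    float a = src[3] * layer_opacity;          /* effective source alpha */
    for (int i = 0; i < 3; i++)
        dst[i] = src[i] * a + dst[i] * (1.0f - a);
    dst[3] = a + dst[3] * (1.0f - a);          /* composite alpha */
}
```

With the texture approach, the same result can be had by letting the GPU evaluate this blend per fragment instead of doing it on the CPU.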
In my implementation each layer carries an image buffer. I thought it
would be great if I could temporarily replace the Image->ibufs.first
image buffer that one paints on during a stroke. This approach works
for the base layer at least; for all other layers nothing happens.
The idea is to paint only on the selected layer (layer->flag &
IMA_LAYER_CURRENT). In my definition, the base layer is the layer whose
image buffer points to the image's image buffer (ImBuf). Therefore, when
this layer is deleted, the image buffer is not freed. All other layers
and their image buffers are freed correctly.
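The ownership rule for the base layer could be expressed like this; the struct and function names are illustrative stand-ins, not the patch's actual API:

```c
#include <stdlib.h>

/* Minimal stand-ins for the structures involved -- illustrative only. */
typedef struct ImBuf { float *rect; } ImBuf;
typedef struct Image { ImBuf *ibuf; } Image;           /* the image's own ImBuf */
typedef struct ImageLayer { ImBuf *ibuf; } ImageLayer;

/* The base layer is the one whose buffer *is* the image's buffer,
 * so it must not be freed together with the layer. */
static int layer_owns_ibuf(const ImageLayer *layer, const Image *ima)
{
    return layer->ibuf != ima->ibuf;
}

/* Delete a layer: only non-base layers release their image buffer. */
static void layer_free(ImageLayer *layer, Image *ima)
{
    if (layer_owns_ibuf(layer, ima))
        free(layer->ibuf);
    layer->ibuf = NULL;
}
```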
So far, my implementation coexists with Blender's image system pretty
well. I think one could also easily integrate a layered image into the
composite input node.
I might be mistaken, but just as in The GIMP, you can only have layers
of the same type right now. This means that each layer is created
after the specifications of the original image and its image buffer.
For example, if you have a greyscale 10x10 pixel image, each layer will
be 10x10 pixels and greyscale. Compared to OpenEXR, where each layer or
"channel" can have its own dimensions, color channels and so forth, my
model is somewhat limiting. But in the end I think it can be of great
value for artists who want to paint multi-layered textures.
Once painting on any layer works smoothly (probably after GSoC), I
thought of bringing in some baking functionality that lets you create a
single-layered image that looks just like your blended multi-layered
image. This image can then be used in Blender just like any other image.
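The bake step could, in principle, just walk the stack bottom to top and composite every visible layer into one output buffer. A rough sketch under a straight (non-premultiplied) alpha assumption; all names here are hypothetical, not from the patch:

```c
/* Flatten a stack of RGBA float layers into a single buffer by
 * compositing bottom-to-top with the "over" operator -- a sketch of
 * the proposed baking step, not code from the patch. */
typedef struct BakeLayer {
    const float *rect;   /* width * height RGBA pixels */
    float opacity;
    int visible;
} BakeLayer;

static void bake_layers(float *out, const BakeLayer *layers, int nlayers,
                        int width, int height)
{
    int npixels = width * height;

    /* start fully transparent */
    for (int i = 0; i < npixels * 4; i++)
        out[i] = 0.0f;

    for (int l = 0; l < nlayers; l++) {   /* bottom to top */
        if (!layers[l].visible)
            continue;
        for (int p = 0; p < npixels; p++) {
            const float *src = &layers[l].rect[p * 4];
            float *dst = &out[p * 4];
            float a = src[3] * layers[l].opacity;
            for (int c = 0; c < 3; c++)
                dst[c] = src[c] * a + dst[c] * (1.0f - a);
            dst[3] = a + dst[3] * (1.0f - a);
        }
    }
}
```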
Another idea is to enable artists to use an image layer as a paint mask.
I also fixed a division-by-zero bug in my "Bump Map Preview" patch that
Brecht found. Before this feature can make it into trunk, I need to fix
another, slightly more complicated bug. I have reserved that one for
after the summer of code.
None of this is in SVN yet but the wiki page above contains quite a lot
of code from my implementation.
I think that I'm on schedule. My plans for the next week are to continue
working on documentation and some code, and to pass the final evaluation.
With the end of GSoC goes the end of my stay in Spain. I'm moving back
to Germany tomorrow.
Since this is probably the last chance before the final evaluation, I
want to take the time to thank the Blender Foundation, Google and the
Blender community for letting me work on this great project. You are
amazing! Thank you so much for the overall opportunity to participate,
the help, the inspiration and, last but not least, the criticism and
feedback. I look forward to continuing to contribute to Blender after
GSoC!