[Soc-2005-dev] One more week!

Chris Want cwant at ualberta.ca
Thu Aug 25 14:56:26 CEST 2005


We have one more week to go!

At this point I think pretty much everybody should
be trying to get demo builds out there for testing,
creating demo blend files, writing docs, etc. A good
place to post testing builds is here:

http://www.blender.org/forum/viewforum.php?f=18

You should also take a little time to reread your
original proposals and assess how close you are to
the goals outlined in those documents. I've
attached the proposals to this mail.

Anyway, keep at it! If you have problems, drop by
#blendercoders for Q & A, etc., or send questions to
either bf-committers or this mailing list.

Regards,
Chris

-------------- next part --------------
EXTENDING INVERSE KINEMATICS FOR BLENDER

Synopsis:

Inverse Kinematics (IK) is a character animation technique that, given a skeleton and the position of one or more end effectors, computes the joint angles needed to reach that position. For example, it is used to compute the relative angles of the bones in an arm so that the hand reaches a given position. It is much easier to define a single position for the end effector than to set every individual joint angle.

Blender's IK implementation is robust, but lacks functionality and performance. The goal of this project is to extend the IK module in Blender to model skeletons more accurately.

Benefits to the Blender Community:

Blender's animation tools have had little attention for a long time, even though animation was initially Blender's main purpose. With the upcoming 'animation recode', better IK tools will also be needed. These tools would make it easier for artists to create natural looking character animation. Also, within the context of an upcoming short movie project [1], produced entirely with open source software, enhanced IK tools would be very useful.

Deliverables:

The IK module in Blender will be extended to support:
- Translational joints
- Tree structured IK chains, with multiple end effectors
- Rotational limits for joints
- Joint pinning
- Parameters like joint stiffness and end effector influence

The main focus of this project will be on the technical implementation of these features. Integration in the user interface will be coded, but mainly for testing and as an example to improve later.

Functionality Details:

Blender's IK module now accepts a single linear chain of spherical joints, and one end effector at the end of this chain.

The proposed extensions will allow for a tree structured chain, so a complete body, instead of just a single arm or leg, can be incorporated into the inverse kinematics computation.

By adding translational joints, and limits on rotational joints, skeletons can be modeled more precisely. Rotational limits can be used, for example, to prevent a knee from bending forward.

Joint pinning is a powerful technique for posing characters. One example usage is to keep a character's feet on the ground, while reaching with his hand for a high target. Using joint pinning, natural poses can be created easily by dragging end effectors.

Lastly, parameters like joint stiffness make it possible to characterize joint motion. For example, when grabbing an object, a bone in the spine will likely rotate less than a bone in the shoulder.

Implementation Details:

Blender's IK implementation resides in the 'iksolver' module and is written in C++. It provides a good basis for further extensions. It has a C API, as most of Blender's other code is written in C. Most of the work would thus be done in the iksolver module, with modifications to the rest of Blender's code to support the new functionality.

The current IK implementation is based on numerical techniques, where the joint angles are computed with an iterative algorithm based on a Jacobian matrix. This is the common technique used in today's animation packages, as it is very general, known to produce natural looking animations, and allows for a wide range of functionality.

The new functionality will thus build further upon this. Secondary goals like pinned joints and rotational limits would be added by exploiting the redundancy of the system. Translational joints and multiple end effectors can be incorporated into the system by extending the Jacobian.
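To make the Jacobian approach concrete, here is a minimal sketch (in Python, purely illustrative; the actual iksolver module is C++) of the iterative scheme for a two-joint planar arm, using the simple Jacobian-transpose update rather than the damped least-squares variants a production solver would use:

```python
import math

def solve_ik_2link(l1, l2, target, iters=500, alpha=0.1):
    """Jacobian-transpose IK for a planar arm with two rotational
    joints: iteratively nudge the joint angles (t1, t2) so the end
    effector approaches `target`.  Illustrative only; a production
    solver would use damped least squares and handle singularities."""
    t1, t2 = 0.3, 0.3  # start away from the singular straight pose
    for _ in range(iters):
        # forward kinematics: current end-effector position
        x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
        ex, ey = target[0] - x, target[1] - y
        if ex * ex + ey * ey < 1e-10:
            break
        # Jacobian of (x, y) with respect to (t1, t2)
        j11 = -l1 * math.sin(t1) - l2 * math.sin(t1 + t2)
        j12 = -l2 * math.sin(t1 + t2)
        j21 = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        j22 = l2 * math.cos(t1 + t2)
        # gradient step: delta_theta = alpha * J^T * error
        t1 += alpha * (j11 * ex + j21 * ey)
        t2 += alpha * (j12 * ex + j22 * ey)
    return t1, t2
```

Translational joints and extra end effectors simply add columns and rows to the Jacobian, which is why the extensions proposed above fit naturally into this framework.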

The numerical system is very sensitive to singularities, and adding more features will not make the situation simpler. Since a balance will need to be found between robustness and performance, it's hard to make any claims about this, but I will try to keep the system as fast as possible.

Development Methodology:

Since the IK module is already in a working state, the goal will be to get some of the simpler functionality working soon. This will allow me to get comfortable with the current code, and become confident before tackling the harder parts.

New code would be posted regularly to an online CVS repository. This both motivates me to deliver working code often, and will allow for feedback from other developers and users.

Regular builds of the Blender CVS repositories are posted on and downloaded from the development forums [2]. This will allow for user feedback early on in the process. 

Considering a rewrite of the animation system will be happening at the same time, communication and cooperation with other developers will be important, in order to deliver the desired functionality and to ease integration.

Project Schedule:

The first week will be used to get to know the code and do some experimenting. After that I will start by adding some of the simpler functionality to gain confidence. Then I will tackle the larger parts, until two weeks before the deadline. These will be used for finishing the last issues, debugging, and optimizing the code.

Bio:

I am a 19 year old student in computer science, in the 2nd bachelor year at a university in Belgium. I have experience using and also developing open source software. My main interests related to coding are in computer graphics and animation.

I started coding 6 years ago. During that time I have worked on many small personal projects, ranging from an alternative DNS server and a few games to a website [3] and a website creation tool. As part of my studies I have created a regular expression parser, a game based on the four color problem, and a raytracer.

I have contributed to the FOSS world by writing some tools for the Verse network protocol [4] and working on Blender's UV editor [5][6].

My experience is mostly in C/C++ (3 years), Python (2 years) and Java (5 years). Besides that I have lesser experience in PHP, Javascript, Prolog, Scheme and SQL.

I have been using Blender for 4 years, but have very little experience with the animation code in Blender. I have no experience working on an inverse kinematics system, but have been studying the subject in my free time for the last few months.

The interesting mathematical twist, and the idea of the result being used by artists to create animations, would make this a fun and challenging project for me to passionately work on.

[1] http://orange.blender.org
[2] http://www.blender.org/modules.php?op=modload&name=phpBB2&file=viewforum&f=18
[3] http://www.pcwereld.be
[4] http://users.pandora.be/blendix/verse/
[5] http://www.blender3d.org/cms/UV-editor___Image_window.222.0.html
[6] http://www.blender3d.org/cms/UV_Unwrapping.363.0.html

-------------- next part --------------
Proposed upgrade of Blender's curves and surfaces architecture. 

In 2003 the code for Nurbana (a 3D NURBS* modeling tool) was donated to the Blender team with a view to integrating its functionality into Blender. For a number of reasons, e.g. a lack of developers with experience of 3D spline mathematics and a Blender/Nurbana C/C++ mismatch, this has not yet been achieved. I am proposing that this integration (and the not-insignificant changes which will need to be made to Blender's curves and surfaces architecture) be completed by myself over the course of this summer.

After discussion with Blender's creator Ton Roosendaal, it was decided that a sensible way to do this would be the creation of a generic curves/surfaces API which could be used from within Blender (and potentially by many other applications in need of this functionality). By separating out this functionality, the C++ code in Nurbana could be easily reused, and then accessed from within Blender via a C API interface.

As such the project could be viewed as several distinct (but interconnected)
sections:

1: Curves and Surfaces library
      Not dependent upon Blender
      Written in C++ (with as much use of Nurbana code as possible)
      Exposed via a C interface to Blender
      Data compatible with Blender

2: Conversion of current Blender curves/surfaces functionality to new API
      Blender spline maths -> calls to new API
      Ensure everything works as before

As a result of these changes to Blender's Architecture, it should be
straightforward to implement a number of other modifications:


-  New spline/surface types
      e.g. B-Spline, Cardinal

-  New tools/modifiers
      based on functionality from Nurbana codebase and/or new types

-  Modification/Improvement of drawing and selection options

While these latter suggestions do not require architectural changes to Blender, I imagine that they could nonetheless be time-consuming to implement (in the way that the little things inevitably are). Due to the time-limited nature of SoC I am therefore proposing them as 'time-dependent deliverables', which will be implemented, but perhaps not within the SoC timeframe.
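As a taste of the spline mathematics involved, here is a small illustrative sketch (in Python; the proposed library itself would be C++) of evaluating one segment of a uniform cubic B-spline, one of the new curve types mentioned above:

```python
def bspline_point(p0, p1, p2, p3, t):
    """Evaluate a uniform cubic B-spline segment at parameter t in
    [0, 1] from its four control points (each an (x, y) tuple),
    using the standard uniform cubic basis functions."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    # the four weights sum to 1 (partition of unity), so the curve
    # stays inside the convex hull of its control points
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))
```

A Cardinal spline differs only in the basis matrix, which is one reason a generic curves/surfaces API can support several spline types behind a single evaluation interface.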

I consider myself to be qualified to attempt this project as a result of the
work I carried out on my final year project this year at Trinity College Dublin:

Procedural 3D Modelling using Genetic Algorithms.
http://www.netsoc.tcd.ie/~eman/EmmanuelStoneFYP.pdf

This project contained a custom-written B-Spline subdivision engine. This means I am now very familiar with the mathematics needed to implement curves/surfaces in 3D, and equally with OpenGL. Because I wanted to work on my project on my laptop (which is running Mac OS X) and at university (Windows/Linux), I am also well versed in the importance of writing portable code.

I am a 22 year-old Irish final year student of Computer Science at Trinity College Dublin, Ireland. Over the last 4 years I have developed a keen interest in both Open Source Software and 3D graphics. During an internship at a local software company (Doolin Technologies) I developed an authentication plugin for web-based applications written in Perl and PHP. This project was coded almost entirely by myself and provides a single sign-on system across a wide range of Doolin's products. The code for this plugin was written under the GPL and should be released soon. I intend to specialise in computer graphics in either a professional or academic capacity after graduation.

I sat my last final exam yesterday (this is why this proposal is so late) and I expect to graduate in November with First class honours.

Note: I have been in contact with Blender Foundation with regard to this project, and I have been assigned a designated mentor.

* NURBS = Non-Uniform Rational B-Spline. In 3D, smooth organic-looking
forms can be quickly modelled using a small number of control points,
and in 2D the same maths can be used to model splines.


-------------- next part --------------
Integration of the Verse protocol in Blender
============================================

Introduction
============

Verse [4][5] is a new open source network protocol that allows multiple applications to act together as one large application by sharing data over a network. If one application makes a change to shared data, the change is distributed instantly to all the other interested clients. The functionality of the open source 3D software package Blender [1][2] could be greatly enhanced with support for this protocol, and this will create new opportunities for modeling, animation, rendering, and real-time simulation that even commercial packages don't offer. Implementation of the Verse protocol would improve workflow in graphical studios, create new possibilities in educational settings, and will generally enhance the spirit of cooperation that is so prevalent in the open source community. For example, imagine a teacher that can see and change the work of his students without leaving his chair (this is particularly useful when the student is half way around the world!). Verse also has the potential to connect users using different 3D and 2D applications, so for example one user might do the modeling on a project (in Blender), and another the texturing (in GIMP), with each getting updates of what the other is working on.

End user specification
======================
Through the Verse protocol, two or more Blender users will be able to cooperate on a project in real-time, over a network. They will be able to work in the same virtual scene, transforming objects and editing low-level geometry, with each client receiving instantaneous updates of the amended data. The focus of this proposal will be to enhance Blender's mesh objects with the Verse protocol, since they are widely used due to their rich functionality. This will be a great first step towards a more general integration of Verse in Blender, and one which should fit well within the time frame provided by the 'Summer of Code' grant. Users will be able to connect to a Verse server (possibly running on the same machine as one of the Blender clients) and tell the server what data they wish to share. Other Blender clients will also be able to connect to the server and subscribe to the shared data. A link is then formed that will allow each user to see and modify the same data in real-time.

Development specification
=========================
Blender is written mostly in C, but parts are written in C++ and the Python programming language. The Verse module for this project will be written in C. The first stage of the project will be to add a basic Verse module providing the underlying infrastructure (connecting to the Verse server, disconnecting, setting callback functions, etc.). Following this, other modules for mesh editing and object editing will be added. It will be possible to extend this integration in the future with other modules (material editing, texture editing, animation, etc.). I intend to continue some of this work beyond the completion date of the 'Summer of Code', and it is my hope that other parts will be extended by other developers.

Object editing can be reduced to matrix operations in a homogeneous coordinate system. The Verse data model and the Blender data model of object transformations are very similar, so I don't expect many problems with this stage of the integration.
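As a sketch of what "matrix operations in a homogeneous coordinate system" means in practice, here is a minimal illustrative example (Python, not actual Blender or Verse code) of applying a 4x4 homogeneous transform to a 3D point:

```python
def mat_mul_point(m, p):
    """Apply a 4x4 homogeneous transform (row-major nested lists)
    to a 3D point, dividing by the homogeneous coordinate w."""
    x, y, z = p
    v = (x, y, z, 1.0)
    out = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = out[3]
    return (out[0] / w, out[1] / w, out[2] / w)

def translation(tx, ty, tz):
    """Build a 4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]
```

Because both data models boil down to such matrices, synchronizing object transforms over the network is mostly a matter of sending and receiving the matrix (or its translation/rotation/scale components).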

The main focus will be on the editing of mesh objects, because they are widely used in modeling, both in Blender and in other 3D applications. This will create some very exciting possibilities!

Mesh objects can exist in several modes (object mode, edit mode, face select mode, UV face select, vertex paint and texture paint modes). Meshes are stored in dynamic data structures in edit mode, and these structures are well suited for transmission between Blender and the Verse server. When a mesh object is in some other mode, its geometry is stored in dynamically allocated arrays (as a memory optimization), and such objects cannot efficiently reflect changes coming from a Verse server. Because of this, a thin layer will be added between the Verse server and the Blender data structures. We can call this thin layer VerseMesh.

The next step would be to add a user-friendly GUI for cooperation with other Verse clients -- special care will be taken here, since users are not usually familiar with sharing 3D data in real-time. This will require some special visualisations in the 3D scene, and some changes to the 'outliner' (Blender's scene data browser) for management of shared data.

Strategies will also be developed to reduce and localize the changes in Blender required for this Verse integration. This will allow the Verse integration to be an optional component, at least until it matures and stabilizes to the point that it gains wide support among the Blender developer and user communities.

Project schedule
================
Work on this integration will start immediately. The project is expected to require roughly 2 months of full time work. Work will take place either in a CVS repository on the projects.blender.org website, or as a project on SourceForge. CVS code merges with the official Blender tree will be done weekly to ensure that this project's sources stay in sync with current Blender development. Feedback from the Blender user community will be very important in shaping the development of this project, as will the advice and mentorship of the Blender development community. It is hoped that this implementation of the Verse protocol in Blender will become an important tool in the creation of 'Orange', the Open Movie Project [3].

About me
========
Since March 2005 I have been a PhD student studying Computer Graphics at the Technical University of Liberec (Czech Republic) [7]. I am an adept Blender user, and I teach Blender at the Faculty of Architecture. I have been a Blender developer since summer 2003, and I have write access to the official Blender CVS repository. The theme of my Master's thesis was "Implicit surfaces in Blender", and I was awarded the Dean's prize for this work. The topic of my doctoral thesis is Distributed Network Graphical Applications.

I have been programming since 1998, and I have experience with C, C++, Python, Bash scripting, PHP, MySQL, and Pascal. I have experience with the Verse protocol. I contribute to the development of the program 'Key Status Monitor' [9], I have written a simple raytracer (as part of a school project), and I am the author of several web sites [7][8].

Links
=====
  [1] http://www.blender3d.org
  [2] http://www.blender.org
  [3] http://orange.blender.org

  [4] http://www.blender.org/modules/verse
  [5] http://www.quelsolaar.com/

  [6] http://jiri_hnidek.blogspot.com
  [7] http://www.kai.vslib.cz/~hnidek/
  [8] http://www.blender3d.cz/drupal
  [9] http://programmer-art.org/key-status

-------------- next part --------------
Project Title: "Rewriting of Boolean Set Operations Module for Blender"
-------------


Synopsis & Benefits to the Blender Community
--------------------------------------------

Blender currently has a boolean set operations module, but it does not
work reliably. My proposal consists of fully re-implementing this
module with new algorithms, as most Blender developers and users think
the current code cannot be reused.

Boolean set operations (union, intersection and difference) between
solid objects are a basic modelling tool, so a robust implementation
would be a major gain for all Blender users. The reason this task has
not been undertaken yet is probably its difficulty, due to the fact
that any complete implementation must deal with many extreme cases and
strange configurations.


Deliverables 
------------

- Complete mirrored Blender tree with a new implementation of boolean set
  operations.

- Report documenting how to integrate the modification to the head
  Blender tree.

- Short report documenting the algorithms used with references to
  papers or other resources.

- Tutorial of boolean operations to show the new features.


Project Details
---------------

My intent is to keep the current interface and range (cases where
boolean operations are applicable) but rewrite the whole
implementation.

Current implementation drawbacks include:

- Creation of false solids, with overlapping vertices and edges.

- Deficient and unnecessary triangulations.

- Many crashes.

The new algorithm must guarantee the creation of a regularized
solid without overlapping vertices and edges, and with triangles as
regular as possible.
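As an illustration of the kind of regularization pass involved, here is a small sketch (in Python, purely illustrative; the actual module would be written in C) that welds vertices closer than a tolerance and remaps face indices accordingly:

```python
def weld_vertices(verts, faces, eps=1e-6):
    """Merge vertices closer than eps (via a quantized grid hash)
    and remap face indices -- the kind of regularization needed to
    avoid the overlapping vertices the current boolean code creates.
    Illustrative only: grid hashing can miss near pairs straddling a
    cell boundary, and a real pass must also regularize edges."""
    remap = {}     # old vertex index -> new vertex index
    unique = []    # deduplicated vertex list
    buckets = {}   # quantized position -> new vertex index
    for i, (x, y, z) in enumerate(verts):
        key = (round(x / eps), round(y / eps), round(z / eps))
        if key in buckets:
            remap[i] = buckets[key]
        else:
            buckets[key] = len(unique)
            remap[i] = len(unique)
            unique.append((x, y, z))
    new_faces = [tuple(remap[i] for i in f) for f in faces]
    return unique, new_faces
```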

As a first approach I want to have a look at the GNU Triangulated
Surface Library (GTS, http://gts.sourceforge.net/index.html), which
has a full implementation of boolean set operations on triangular
meshes and is GPL. I will take into account the possibility of reusing
it or, at least, following it as a reference for the implementation.
This project also references bibliographical resources on boolean
operations such as [1].

As an alternative I can use the algorithm presented by Martti Mäntylä
in [2,3] that works on a BREP representation.

As I work in a university computer graphics department, I have
access to solid modelling experts who can guide me on these
issues. This project is very interesting for me because it will both
allow me to learn more about important topics in my research area and
introduce me to a large open-source project.

I also have a Blender contact, Alexander Ewering
(blender at instinctive.de), who is an expert Blender developer and very
interested in this project. He has offered to be my mentor and to
guide me in learning the Blender sources.

I will work in a mirrored Blender tree hosted at LaFarga
(http://lafarga.upc.edu), a SourceForge-like repository maintained by
my university. The whole project will be coded in C, following the
Blender "Coding Style Guide", and tested with gcc on Linux.



Project Schedule
----------------

June 14th - 23rd: Research phase.
June 24th - July 6th: Analysis of the project and Blender code.
July 7th - July 22nd: First programming steps.
July 23rd - 25th: Progress report writing.
July 26th: Progress report delivery.
July 26th - August 2nd: Holidays.
August 3rd - 24th: Full-time programming.
August 25th - 31st: Testing phase.
September 1st: Final version delivery.


Biography
---------

I have a degree in Computer Science and I am currently studying in a
computer graphics PhD programme, so I am skilled in 3D geometry. I
have written programs to create and manipulate triangle meshes and
boundary representations. I have programmed boolean operations for
boundary representations in a university context, and I have also
developed a renderer for these models using OpenGL and a ray-tracing
strategy. At present, I am involved in a video-game seminar and in
Geometric Constraint Solving research.

I have been a Blender user for two years, and I have also taught 3D
animation using Blender in a postgraduate course, preparing my own
teaching materials and examples. I am a better programmer than artist,
but if you would like to see my work I can publish some samples on my
web page.

This will be my first contact with the Blender code, and a good
chance to plunge into a great open-source project.


Bibliography
------------

[1] Krishnan, Narkhede, Manocha. "BOOLE: A System to Compute Boolean
Combinations of Sculptured Solids". Proceedings of ACM Symposium on
computational geometry. 1995.

[2] Martti Mäntylä. "Boolean Operations of 2-Manifolds through Vertex
Neighborhood Classification". ACM Transactions on Graphics, Vol. 5,
No. 1. 1986.

[3] Martti Mäntylä. "An Introduction to Solid Modeling". Computer
Science Press Inc., 1988.

-------------- next part --------------
Google "Summer of Code" Project: Fluid Animation with Blender
Nils Thuerey (nils at thuerey.de)


The goal of this project is to couple my existing
free surface fluid solver with Blender and publish
the software as an open source project.

This will be of interest to both Blender and
the open source community, as the animation of fluids
is very difficult without a corresponding solver.
To my knowledge there is currently no free surface
fluid solver available as open source.


Milestones:
Phase 1
 a) coupling of Blender and Fluid-Solver
 b) generation of a low resolution surface as animation 
    preview
 c) GUI controls for setting the simulation parameters
Phase 2
 a) clean up and publish source code
 b) documentation of the user interface
 c) write an introductory and a more advanced tutorial
 d) creation of a sourceforge project & web page


Details:
The first phase described above will include the
coupling of Blender and the fluid solver. Here the
geometry from Blender has to be exported to the solver
to generate a voxel grid for the simulation. Furthermore
the fluid and obstacle parameters have to be entered
in Blender. As the simulation can take minutes to hours
depending on the resolution, and may be performed on a
different machine than the one used for Blender (e.g. a
larger compute cluster), the simulation will only generate
the fluid surface mesh for each frame. The mesh will
then be rendered in Blender. For preview purposes
a fluid surface with a lower resolution will be
generated to allow quick estimation of the animation
properties.
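As an illustration of the voxelization step, here is a minimal sketch (in Python, purely illustrative; the real exporter must rasterize arbitrary triangle meshes from Blender) that marks the cells of a uniform grid covered by a spherical obstacle:

```python
def voxelize_sphere(n, center, radius):
    """Build an n*n*n voxel grid over the unit cube and flag cells
    whose centers lie inside a sphere -- a stand-in for the
    triangle-mesh voxelization the Blender-to-solver exporter needs.
    Returns nested lists with 0 = empty, 1 = obstacle."""
    grid = [[[0] * n for _ in range(n)] for _ in range(n)]
    h = 1.0 / n  # cell size
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # squared distance from the cell center to the sphere center
                cx = (i + 0.5) * h
                cy = (j + 0.5) * h
                cz = (k + 0.5) * h
                d2 = ((cx - center[0]) ** 2 + (cy - center[1]) ** 2
                      + (cz - center[2]) ** 2)
                if d2 <= radius * radius:
                    grid[i][j][k] = 1
    return grid
```

The simulation cost grows with the cube of the resolution, which is why a separate low-resolution preview grid is worthwhile.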

The second phase will include the clean up and publishing
of the source code, as well as documentation tasks. Here
the user interface will be described, and two tutorials
will demonstrate a simpler and a more complex fluid
animation setup. As a last step, the source code together
with documentation and tutorials will be published on
source forge.

Bio/About myself:
I am a computer science student at the University of Erlangen in Germany (http://www.uni-erlangen.de/). I received my diploma in June 2003 and am now a PhD student at the Institute for System Simulation (http://www10.informatik.uni-erlangen.de/~sinithue/).
I started working on computational fluid dynamics in 2002,
examples of results can be found here: http://www.ntoken.de/p_fluid.html.

I am part of the team that developed a new algorithm to treat free surface fluids with the Lattice-Boltzmann method (LBM); we have published several papers on this (http://www.ntoken.de/i_pubs.html). The solver is a state-of-the-art optimized free surface LBM code that I will continue to develop at least until my PhD is finished (around end 2006 / beginning 2007).

Examples of recent results are:
- SIGGRAPH 2005 poster on optimizations with adaptive grids:
http://www10.informatik.uni-erlangen.de/~sinithue/temp/sgpostervid2.avi
- Animation of real-time fluids with LBM:
http://www10.informatik.uni-erlangen.de/~sinithue/public/phd/nthuerey_050607_tr1rtlbm.avi
- Older animations can be found here:
http://www.ntoken.de/p_fluid.html

As my main subject was computer graphics, I am familiar with programming raytracers, GUIs and mesh algorithms. Apart from Blender, I have also used a variety of 3D programs such as 3D Studio Max, Maya and Houdini. Although I have only used it for a few months now, Blender is currently my program of choice, and I hope I can contribute to its success with this project.


Some References:

1) A single-phase free-surface Lattice-Boltzmann Method,
(Nils Thuerey), Institute for System-Simulation, master-thesis, 2003,
http://www10.informatik.uni-erlangen.de/~sinithue/public/nthuerey_030602_da.pdf

2) Performance Evaluation of Parallel Large-Scale Lattice Boltzmann Applications on Three Supercomputing Architectures, (Thomas Pohl, Frank Deserno, Ulrich Ruede, Peter Lammers, Gerhard Wellein, Thomas Zeiser), SC 2004,
http://www10.informatik.uni-erlangen.de/~pohlt/download/pohl_sc04_paper.pdf

3) Optimized Free Surface Fluids with the Lattice Boltzmann Method, (Nils Thuerey, Ulrich Ruede), SIGGRAPH 2005,
http://www10.informatik.uni-erlangen.de/~sinithue/public/nthuerey_050731_sgposter_poster.pdf

-------------- next part --------------
SKETCH-BASED MODELING INTERFACE FOR BLENDER

Proposing student name: Pablo Diaz-Gutierrez
Nationality: Spanish
School: University of California, Irvine
Email: pablo at uci.edu



* Synopsis

I propose developing a sketch-based input interface for Blender. It will provide a quick and dirty approach to the initial stages of 3D modeling, as opposed to the current, more rigid method. Once the project is completed, it will be easy to use a drawing pad or Tablet-PC system to draw a few defining lines of the desired model, and have the program automatically generate the surface that fits the drawing. Subsequently, a more experienced Blender user can refine the model as desired.



* Motivation and benefits to the Blender community

Blender is an excellent piece of software, comparable in power to the leading products in the 3D modeling market. Furthermore, its recent release as a GPL licensed program has accelerated the addition of new features, through the creation of a vibrant developer community. However, as with many complex tools, the learning curve new users must climb to obtain satisfactory results is considerably steep. Having a simpler, alternative modeling system might bring new users to the program, which would undoubtedly result in a higher quality product, as other open software projects have experienced.

Although the main focus of my project is newcomers, experienced Blender users will also benefit by viewing the sketching interface as a way to produce quick model prototypes, which can be refined later on. This way, the creative power of traditional drawing and the current flexibility of Blender can be combined to allow faster and simpler work.

Another scenario where this tool will be a valuable help is during the early stages of an animation movie. Storyboards are used by the artists to describe the desired look of each scene in the movie before rendering it. Unfortunately, there is always a mismatch between what is drawn in the storyboard and what the final product becomes, altering the original idea of the moviemaker. A way to reduce this distortion would be to rapidly produce a rough prototype of the scene, and let the traditional artist interact with it to further explain the desired result to the dedicated modelers.

Last, but not least, I am personally interested in sketch-based modeling from a research point of view. Automatically generating a 3D shape from a set of 2D sketches is a fundamentally underconstrained problem. A deep understanding of the creation process and of the expectations of the artist while drawing is required before attempting to provide an appropriate solution. I have worked on a similar problem for a research project, and in the process I developed an in-house tool to test the surface generation algorithms (a paper showing some results, but still in review, can be downloaded from http://www.ics.uci.edu/~pablo/files/sketch.pdf). My implementation of this project in Blender would utilize some of the algorithms described in that paper, and serve to experiment with further improvements.



* Project description

As indicated before, I intend to program modifications to the Blender source code that allow sketch-based modeling. Since this is a highly experimental field, I can only guarantee the most basic functionality such a system is expected to provide in order to create simple 3D shapes. Finely detailed models like the products of CAD tools are out of the scope of the tool. I will interact with the Blender team through submission of C patches to the experimental branch of the system, Tuhopuu, since this is the preferred way for new developers.

By the end of the summer, I plan to have a system capable of the following:
 - Register free-hand drawing on an arbitrary plane in the 3D space.
 - Register free-hand lines connecting two points. These two types of lines form what is called a network of curves.
 - Calculate the 3D path corresponding to the drawn 2D curves.
 - Determine which sets of drawn lines identify surface patches, or allow the user to explicitly indicate this.
 - Generate surface patches covering the network of curves.
 - Interactive modification of the surfaces by changing the generation parameters, such as normals at the vertices, material stiffness, etc.
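As a sketch of the first item, here is a minimal illustrative example (in Python, not actual Blender code) of mapping a 2D cursor position onto an arbitrary plane in 3D space by intersecting the viewing ray with that plane:

```python
def project_to_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a viewing ray (cast from the 2D cursor into the
    scene) with an arbitrary sketch plane.  Returns the 3D hit point,
    or None if the ray is parallel to the plane."""
    # component of the ray direction along the plane normal
    denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane: no single hit point
    # signed distance along the ray to the plane
    diff = [p - o for p, o in zip(plane_point, ray_origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
```

Each sampled point of a free-hand stroke would be projected this way, turning the 2D drawing into a 3D curve on the chosen plane.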

Additionally, there is a number of features I would like to work on, but I cannot be sure to finish on time, so I will indicate them as optional tasks:
 - Surface editing by cutting and stretching created patches.
 - Interactive editing of the generated 3D lines.
 - Pencil gesture recognition to permit a more comfortable use of the system, reducing the complexity of the interface.
 - Sketching of the articulation of models (skeleton systems).
 - Improvement of the existing surface painting module.



* Project schedule and deliverables

I am organizing the development of my project in four periods of two weeks each. The first will consist of system design and becoming familiar with the code base. The second and third will see the development of the core of the system and the creation of the first usable prototype. I am saving the last period for testing, debugging, integration with other parts of Blender, and experimenting with different surface generation algorithms and their parameters.

Specifically, the plan for submission of deliveries is as follows:
 - July 15th: System design document. First substantial, though still private, code modifications.
 - July 29th: Generation of a network of curves with correct normals at vertices and interpolation of normals along curves.
 - July 31st-August 4: Meet at SIGGRAPH'05 with the Blender team. Evaluate progress.
 - August 19th: Surface patch generation.
 - September 1st: Final submission. Debugged code committed. All possible extra features (see project description) completed.



* About the student

I am 26 years old, and a second year Ph.D. student at the University of California, Irvine. Before that, I completed my B.S. in Granada, Spain. I have industry experience in developing large pieces of graphics software: I worked for a full year at ESPELSA, in Madrid, writing GIS software just before joining graduate school. There I reinforced my C/C++ skills and, most importantly, learned how to work in a big team.

Regarding Blender itself, I have looked through the source code and read the developer manuals and the description of the system architecture. Before the start of the project I will be learning about it, changing small parts and building my modifications. I've done some simple modeling before, following tutorials and some chapters of "The Blender Book". Although I can't call myself a frequent Blender user, I'm very interested in learning the ropes of Blender modeling and scripting so I can do my scientific and schematic visualizations with it. I think there's no better way to do so than making a serious contribution to Blender itself.



Sincerely,
-Pablo Diaz-Gutierrez

-------------- next part --------------
Overview: A mechanism for recording animated tutorials of Blender.


Problem: the majority of Blender tutorials incorporate narrative text and a series of screenshots.  Text and screenshots can only explain so much for a program as graphics-intensive as Blender.  Something that plays back in an animated fashion can better explain the subtler nuances of the 3D modeller.


Implementation goal: at project completion, Blender gains the ability to generate one or more files usable as an animated tutorial for Blender learners.


Proposed implementations:

One style of tutorial recording is to record the layout of Blender and the input sequences given to Blender.  These together would go into a recording file.  The notion is akin to GUI macros or a Quake 3 demo.  In this form of tutorial recording, Blender itself would be the playback mechanism.

The second style of tutorial recording is to simply capture Blender's entire screen to a video file at regular intervals.  The notion here is that of a built-in VCR.  The playback mechanism can be any common video player.

Advantages of the first style over the second: smaller overall file size to be distributed; easier on computing resources (faster to record).
Disadvantages of the first compared to the second: may confuse users, both when recording and when viewing; recording files may end up depending on a particular version of Blender (due to feature set, etc.).

Advantages of the second style over the first: much more straightforward; independent of the Blender version; can be enhanced with common video-editing software.
Disadvantages of the second compared to the first: requires far greater computing resources (CPU, memory, disk); interaction may become choppy over time.

Mixed solution: both styles, or a compromise between them, could be implemented such that a first-style recording is used as an interim format until recording finishes; the system may then take its time afterward generating screen frames from the interim recording.
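To make the first-style recording concrete, here is a minimal sketch (in Python rather than Blender's C, with a purely hypothetical event vocabulary) of an interim macro-style log that can later be replayed inside the application or rendered to frames offline:

```python
import json
import time

class EventRecorder:
    """Record timestamped input events to an interim macro-style format.

    Sketch of the first recording style: the serialized log can be replayed
    in-application, or turned into video frames after recording finishes.
    The event names and fields here are illustrative, not Blender's real
    input model.
    """
    def __init__(self):
        self.events = []
        self.start = time.monotonic()

    def record(self, kind, **data):
        # Store the event with a timestamp relative to recording start.
        self.events.append({"t": round(time.monotonic() - self.start, 4),
                            "kind": kind, "data": data})

    def dump(self):
        return json.dumps(self.events)

    @staticmethod
    def replay(serialized, handler):
        """Feed recorded events back, in order, to a playback handler."""
        for ev in json.loads(serialized):
            handler(ev["kind"], ev["data"])
```

A playback handler could either drive the live UI (first style) or rasterize each state to a frame (the mixed solution).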


Personal Background:

I have kept up a casual following of Blender development since its initial source release.  In the initial source release I integrated autoconf/automake, which has since been thankfully replaced.  Since then, I have not made any substantial contribution to Blender development.

I am an undergraduate student of Computer Science and Engineering at UCLA with third-year standing.  I have studied a variety of programming languages, including C, Perl, Python, Scheme, some Lisp, PHP, JavaScript, and so on.  I have also worked on Quake 3 game modifications (game logic), an emulator for the assembly language course at UCLA (something called CUSP), and a PalmOS application called ZBoxZ.  In 2001 I was dismissed from UCLA for poor academic performance.  I am now working on readmission to the school, and I have done well academically this quarter.


-------------- next part --------------
Please delete any previous applications.

Project: Blender/FFMPEG

Bio
--------

I have just finished up my first year at Lane Community College in Eugene, Oregon. I've already gone as high as I can go in their CS program; I plan on transferring to the University of Oregon in Fall of 2006 to continue my studies. Before then, however, I will need to take discrete math, which I will do here at Lane. I'm a computer science major, so any practical coding I can do will be invaluable for enhancing my skills and preparing me for a career. While the primary language being taught these days is Java, I have also learned C and C++ in my spare time.

I've been using Blender since version 2.33(April 2004). I have worked
with the Blender sources a little, specifically pertaining to the sequencer
UI code. However, I did not make any changes worthy of mention.

I have written a couple of plugins for Blender's sequencer (its built-in video editor). The first, and greatest, is the fabled lightsaber plugin, found here:
http://www.elysiun.com/forum/viewtopic.php?t=29973
The second, not as great (although extremely useful), is the "framestamp" plugin, found here:
http://www.elysiun.com/forum/viewtopic.php?t=35668

Synopsis/Deliverables
--------
1. An improved video I/O subsystem
2. Documentation of this system
3. (time permitting) Audio multiplexing upon output

Proposal
--------

Blender's video capabilities are outdated and in need of a serious overhaul. I propose to add better video input and output via the extremely capable FFMPEG. FFMPEG provides two libraries, libavcodec and libavformat, which can handle both video and audio. I plan on coding improved video support in all areas: video textures, 3D view background images, and the oft-neglected sequencer. In addition, I plan on adding improved video output using FFMPEG.

Time permitting, I also propose integration of audio multiplexing for video output formats. This will also present an opportunity to clean up Blender's Movie API along the way, which should make Blender more enticing as a video effects platform, as well as give existing users more options for video work.
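One shape such a Movie API cleanup could take (a sketch only; these class and method names are my own, not Blender's or FFMPEG's) is a single backend interface that the sequencer, texture, and background-image code all consume, with libavcodec/libavformat hidden behind one concrete implementation:

```python
class VideoBackend:
    """Minimal interface a unified Movie API might expose.

    Hypothetical sketch: concrete backends (e.g. an FFMPEG-based one built
    on libavcodec/libavformat, or the legacy AVI reader) would implement
    this, so every video consumer in Blender shares one code path.
    """
    def open(self, path): raise NotImplementedError
    def read_frame(self, index): raise NotImplementedError
    def write_frame(self, pixels): raise NotImplementedError
    def close(self): raise NotImplementedError

class InMemoryBackend(VideoBackend):
    """Toy backend for testing: stores frames in a plain list."""
    def __init__(self):
        self.frames = []
        self.path = None
    def open(self, path):
        self.path = path
    def read_frame(self, index):
        return self.frames[index]
    def write_frame(self, pixels):
        self.frames.append(pixels)
    def close(self):
        pass
```

Audio multiplexing would then be an extension of the same interface rather than a special case bolted onto each caller.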

I've been working with libavcodec/libavformat (FFMPEG's libraries) in my spare time over the last couple of weeks, attempting to create a video editing application, so I have a small amount of experience with these libraries.

Documentation will be written at all stages, both on the Blender API and
libavcodec/libavformat, as ffmpeg's docs are extremely sparse. 

To complete the proposal, the project will have a startup period in which my mentor and I will determine a clear roadmap, with project goals refined along the way.

-------------- next part --------------
Project Title: Implementation of Open Dynamics Engine rigid bodies 
physics in Blender.

Synopsis
This project will involve the implementation of the open source physics 
SDK provided by www.ode.org to provide functionality allowing Blender 
users to create physics-based animations including collisions, forces, 
motors and joint constraints. This will improve the flexibility and 
sense of realism in animations produced through Blender by allowing 
them to apply a coherent set of physical rules, and will also 
potentially benefit the game engine by providing an interface to a 
real-time rigid body physics engine through the Blender system.

Deliverables
By the end of the project duration, the goal is to provide a working 
implementation of ODE-based constraint and integration functionality, 
including collision detection, joints/constraints, forces and motors. 
The first three items might be considered the essential criteria of 
the physics engine.

Discussion of the task
Note that all recommendations I make here are just that, 
recommendations, and are subject to change at the discretion of the 
mentoring organisation.

Collision detection 
This will be achieved by automatically generating, for every graphical 
body created in Blender with physics enabled, a geometric object (or 
geom) and an associated rigid body with user-configurable properties, 
including mass, whether or not physics is enabled for that object, 
initial velocity, auto-disabling options (perhaps through an advanced 
menu) and so forth. These will be of type sphere, box or cylinder 
where applicable, with a triangular mesh generated to represent more 
complicated objects. They can then be hierarchically divided into 
collision spaces, which optimise performance by culling checks of 
impossible collisions. These collision spaces may take one of three 
forms, based on the decision of the mentoring organisation:
1) Simple space (Performs no collision culling but is optimal for 
smaller numbers of geoms)
2) Multi-resolution hash table space (The most flexible method, 
resolution may be dynamically scaled depending on number of geoms to 
optimise speed, accuracy and stability of the simulation)
3) Quadtree space (A tree-based hierarchical division of the 
simulation world into collision spaces, unnecessary with smaller 
numbers of geoms but most efficient with large numbers)
My own recommendation in this situation would be the multi-resolution 
hash table spaces, as their scalable resolution will be most widely 
applicable to the unpredictable creations of the user.
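The hash-table idea can be sketched in a few lines (illustrative Python, not the ODE API): geoms are bucketed by grid cell, and only geoms sharing a cell become candidate pairs for the expensive narrow-phase test. A real implementation would hash each geom's bounding box over every cell it overlaps, and vary the cell size per resolution level.

```python
from collections import defaultdict
from itertools import combinations

def broad_phase_pairs(geoms, cell_size):
    """Hash-table broad phase: bucket geom centres into grid cells and
    report only the pairs sharing a cell as candidate collisions.
    `geoms` maps a name to an (x, y, z) centre. Sketch only.
    """
    grid = defaultdict(list)
    for name, (x, y, z) in geoms.items():
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(name)
    pairs = set()
    for members in grid.values():
        # Every pair within a cell survives culling.
        for a, b in combinations(sorted(members), 2):
            pairs.add((a, b))
    return pairs
```

The culling payoff is that distant geoms never reach the narrow phase at all.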

Joints/Constraints
The joints to be made available to the user would be fixed, hinge, ball
and socket, slider, hinge-2 and universal joint (The Contact constraint 
would not be available to the user, but handled entirely internally 
through the collisions system discussed above). These could be applied 
to physics-enabled bodies through a tool which would allow selection of 
two bodies, a joint type, anchor (pivot of the joint) and axes (as 
applicable to the joint), properties which would be attached to the 
joint and available for modification through the interface at a later 
point.
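The joint tool's output might be bundled into a small record before being handed to the corresponding ODE attach call. The following is a hypothetical sketch (the field names and type strings are mine) of the validation such an interface would need:

```python
# Joint types the interface would offer; the Contact constraint is
# deliberately absent, being handled internally by the collision system.
JOINT_TYPES = {"fixed", "hinge", "ball", "slider", "hinge2", "universal"}
AXIS_REQUIRED = {"hinge", "slider", "hinge2", "universal"}

def make_joint(body_a, body_b, joint_type, anchor, axis=None):
    """Validate and bundle a user's joint selection: two bodies, a joint
    type, an anchor (pivot), and axes where the joint type requires them.
    Illustrative record format, not the ODE API.
    """
    if joint_type not in JOINT_TYPES:
        raise ValueError("unknown joint type: %s" % joint_type)
    if joint_type in AXIS_REQUIRED and axis is None:
        raise ValueError("%s joint requires an axis" % joint_type)
    return {"bodies": (body_a, body_b), "type": joint_type,
            "anchor": anchor, "axis": axis}
```

Keeping the record around after creation is what lets the properties remain "available for modification through the interface at a later point."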

Forces
The addition of forces as an element of the Open Dynamics Engine 
integrally attached to the use of rigid bodies would require no 
additional work. However, it might be advisable to allow the user to 
manipulate the simulation through the application of force at runtime, 
by means of, e.g., the left mouse button to pull and the right to push. 
This would carry a relatively low priority in this project, and might 
be added as a feature should time allow.

Motors
Use of motors would also seem to be relatively low priority, but could 
be useful to the user interested in creating more sophisticated 
animations, involving mechanical systems or characters/creatures with 
their own motions. The addition of this could be done with minimal 
additional workload once the interface for the addition of joints is 
added, as a variation of this existing interface could be adapted to 
interact with the existing ODE motor APIs.

Step Algorithm
Application of the physical rules discussed here would occur through 
an algorithm repeating at a given interval. The ODE supplies two 
possible algorithms, which one would need to decide between based on 
preferences for speed, stability and accuracy. The most accurate and 
stable is the default step algorithm, however this will run slowly and 
may function with a higher level of accuracy than is required for 
purely graphical simulations. The 'Quickstep' algorithm is, as the name 
suggests, superior in speed, but inferior in accuracy and stability. 
However, the accuracy, stability and speed of the Quickstep algorithm 
can be varied by changing the step size: larger steps for higher 
speed, smaller steps for greater accuracy and stability. This 
provides greater flexibility than the default step algorithm would, 
especially if the step size could be dynamically scaled in inverse 
proportion to the number of Degrees of Freedom (DOF) constrained in 
the simulation.
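The speed/accuracy trade-off can be seen in a toy fixed-step integrator (plain Python, not ODE code): halving the step size roughly doubles the cost per simulated second but pulls the result closer to the analytic answer.

```python
def simulate_fall(height, duration, dt):
    """Integrate a single body falling under gravity with semi-implicit
    Euler at fixed step size `dt`. Smaller dt means more steps (slower)
    but a result closer to the analytic h - g*t^2/2.
    """
    g = -9.81
    y, vy, t = height, 0.0, 0.0
    while t < duration:
        vy += g * dt          # update velocity first (semi-implicit Euler)
        y += vy * dt
        t += dt
    return y
```

Running this with dt = 0.1 versus dt = 0.001 shows the coarse step drifting visibly from the exact answer while the fine step stays close, which is the same knob the Quickstep step-size choice controls.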

Graphical Feedback
In addition to allowing an interface for the creation and manipulation 
of simulated physical systems, the simulation would require feedback 
from the physics engine to alter the position of the graphical elements 
corresponding to the simulated rigid bodies. This should be 
straightforward to implement through the APIs provided with the ODE, 
whereby the position and orientation of each element can be passed to 
the graphical engine and used to keep graphics and physics 
synchronised.

ODE Modifications
This more or less concludes the higher-level overview of the tasks to 
be performed. In addition, there are several points on which it might 
be advisable to make minor modifications to the open source physics 
engine for reasons of memory efficiency and stability: streamlining 
redundant arguments to certain functions (notably the impulse-to-force 
conversion function); adding functionality to the rigid body 
destructors (allowing removal of attached constraints, which are 
currently put into 'limbo'); and adding validation to mutators which 
access variables known to function correctly only with certain 
values.

It would also be necessary to decide at an early point whether the 
blender implementation of the ODE is to be built to use single or 
double precision floating point numbers, considering the trade-off 
between speed and accuracy/stability.

Project Schedule
As the project development takes place between June 24th (although I 
can start development as soon as is necessary) and September 1st, or 
roughly 9 weeks, I propose dividing the development period into two 
major sections, the first of which will involve implementation of the 
collision system. This will be preceded by an initial period during 
which the specification will be worked out in greater detail, 
including input from the mentoring organisation on such matters as 
whether single or double precision is to be used, which method of 
solid representation, space representation and user interface is 
preferred, and the choice of step algorithm. All of these are 
outlined earlier in this application, and I can offer recommendations 
on each as required.

The collisions section will involve first the application of the ODE 
APIs to attach rigid bodies and geometric objects to the graphical 
solids created by Blender, followed by modification of the step 
algorithm to include feedback to the graphical display (which might 
run every step, or at intervals, depending on speed and memory 
considerations). After this, collision spaces will be implemented, at 
which point the collisions work may be considered complete. Testing 
will take place throughout the process; when sections finish ahead of 
schedule, the next section will begin immediately and the schedule 
will be adjusted accordingly.

The second section will involve implementation of the joints and 
constraints, roughly divided into development of the interface and 
application of the APIs. A period of ten days will be allocated at 
the end for tidying up loose ends and ensuring that all aspects of 
the project are completed to the satisfaction of the mentoring 
organisation.
Thus, the planned schedule is as follows (Dates provided are subject to 
change):

Week 1 (06/24-07/01): Finalising specification
Week 2 (07/02-07/08): Implementation of collision detection APIs 
		      within Blender
Week 3 (07/09-07/15): Continued Implementation of APIs
Week 4 (07/16-07/22): Modification of step algorithm for graphical 
		      feedback
Week 5 (07/23-07/29): Implementation of collision spaces
Week 6 (07/30-08/05): Continued Implementation of collision spaces
Week 7 (08/06-08/12): Implementation of constraint APIs
Week 8 (08/13-08/19): Creation of constraint interface

Remaining Period (08/20-09/01): Final touches, completion of loose 
ends, and so forth.
Within the larger periods (collisions and constraints), the milestones 
might be considered flexible, but the collisions should be finished by 
the 5th of August at the latest, and the constraints by the 1st of 
September. Motors could be added if time allows, following 
implementation of the higher priorities.


Biographical Details
I am currently studying Computer Science and Game Design at the 
University of Southern California, and in the course of my studies, as 
well as through personal interest, have worked with other 3d systems, 
including Maya 6, 3ds Max, Worldcraft, Hammer, and the product 
concerned, Blender. I am familiar with the Blender tools, and have 
examined the source code, although I have not as of yet made any 
significant modifications to it. I have worked with C and C++ 
extensively over the last year; among other projects, I programmed an 
extensive real-time graphical artificial life simulation with ten 
different species and terrain. In terms of qualifications towards the 
physics aspect, I have a thorough understanding of the Open Dynamics 
Engine, and received a perfect 800 score on my SAT II Physics, as well 
as a score of 100% in a British Edexcel A-Level Physics examination 
(equivalent to AP). I also took an Edexcel A-level advanced mathematics 
course on mechanical physics in which I received an A.

-------------- next part --------------
PyTexture Implementation in Blender – An Extensible Python-Based Texture Plugin System

The proposal is to implement a Python based texture descriptor for increased flexibility for artists and animators using Blender.

Blender's current texture plugin implementation is a dynamic C plugin system which is not only inflexible and time-consuming to code, but difficult for non-programmers to work with.  

PyTexture's implementation will render the current plugin system, along with its drawbacks, obsolete.  It will allow fast and easy use of Python to define new textures, and will be easily accessible to beginner and professional coders alike.  Furthermore, as Blender uses Python natively as its scripting language, no other software, such as a compiler, is required.  This also eliminates the time-consuming compile/test/debug cycle, as Python is an interpreted language and does not require compiling.

PyTextures will allow a level of control unlike any current texture implementation in Blender, both internal and plugin based.  This stems from the ability to use any Blender object data in the definition of the texture, which could include vertex location, surface normals, or any other geometric data.

The basic requirements to allow this functionality include:

1. A mechanism to link PyTextures to objects.
2. Relevant modifications to the UI to allow PyTexture linking.
3. Blender access to the Python interpreter throughout the render pipeline, to allow calling of the PyTexture at render time.
4. Python access to render data.
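A PyTexture could plausibly be as simple as a Python function the renderer calls per sample point. The signature below is hypothetical (the real API would be settled during the design phase), but it shows the flavour: a callable receives a texture-space coordinate and returns an intensity, and the pipeline evaluates it per sample.

```python
import math

def checker_texture(x, y, z, scale=1.0):
    """A PyTexture-style callback: given a texture-space coordinate,
    return an intensity in [0, 1]. Hypothetical signature, not the
    actual Blender Python API.
    """
    s = int(math.floor(x * scale) + math.floor(y * scale) + math.floor(z * scale))
    return 1.0 if s % 2 == 0 else 0.0

def evaluate(texture, coords, **params):
    """Sketch of how the render pipeline might invoke a PyTexture for a
    batch of sample points, forwarding user-set parameters."""
    return [texture(x, y, z, **params) for (x, y, z) in coords]
```

Because the callback is plain Python, it could equally read vertex locations, surface normals, or any other geometric data exposed to it, which is the source of the flexibility claimed above.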

I expect that the project will take between 6 and 8 weeks of research and development followed by two weeks of documentation, testing, and texture development for release.  Development will begin as soon as approval is received.

A basic timeline is as follows :

1. Research – collaborate with mentor and investigate the detailed inner workings of the Blender render pipeline.
2. Design – develop a detailed design based on the research undertaken.
3. Implement – implement the Blender internal systems to handle PyTextures.
4. Test and Debug – test all aspects of the new modules incorporated into the Blender Python API; fix any bugs.
5. Develop and implement a series of PyTextures to demonstrate their power and flexibility.
6. Documentation – complete the required user documentation.
7. Release.

This project will see a vast improvement in the quality and range of textures achievable with Blender.  As a user-friendly system, I expect that a greater number of people will utilise PyTextures, and thus create a repository of advanced texture scripts which could be distributed within Blend files.


