[Bf-committers] Is there a parallel pipeline in blender?
Ruan Beihong
ruanbeihong at gmail.com
Thu Aug 6 17:11:39 CEST 2009
Hi there,
I wonder if there is a parallel pipeline in Blender. I mean a
pipeline like the one in a CPU, which increases IPC (instructions per cycle).
I'm considering implementing that feature.
Let me make it more clear.
Assuming the following interfaces are provided:
enum ThreadedPipelineStatus
{
	EMPTY,
	RUNNING,
	FULL
};
struct ThreadedPipelineOutcome
{
	enum ThreadedPipelineStatus stat;
	void *rtv;
};
struct ThreadedPipeline
{
	ListBase *stage_list; //list of pipeline stages (tasks)
	unsigned buf_size;
	void **in_buf;        //two flexible array members in one struct
	unsigned in_buf_head; //are invalid C, so use plain pointers
	unsigned in_buf_tail;
	void **out_buf;
	unsigned out_buf_head;
	unsigned out_buf_tail;
};
struct ThreadedPipelineStage
{
	struct ThreadedPipelineStage *prev, *next;
	void *(*task)(void *);
};
/*
 * This function allocates the pipeline, initializes its buffers
 * and sets up @startup and @endup.
 * @startup should return a pointer to the data that enters
 * the in_buf of the pipeline, or NULL to drop the input.
 * @endup should return a pointer to the data that enters
 * the out_buf of the pipeline, or NULL to drop the input.
 */
struct ThreadedPipeline* BLI_init_threaded_pipeline
(unsigned buf_size,
void * (*startup)(void *),
void * (*endup)(void *));
struct ThreadedPipeline* BLI_init_threaded_pipeline_add_task
(struct ThreadedPipeline *pipeline, void * (*task)(void *));
struct ThreadedPipelineOutcome* BLI_run_threaded_pipeline
(struct ThreadedPipeline* pipeline, void* data);
/*
 * This function frees the pipeline and all the resources
 * it holds. It also ends all running tasks.
 */
void BLI_destroy_threaded_pipeline(struct ThreadedPipeline *pipeline);
NOW, client code would use these interfaces like this:
//the startup
void *foo_startup(void *input)
{
	some_data *data = input;
	if (data_is_ok(data)) //pseudocode: validate the input
	{
		some_data *output = copyof(data);
		return (void *)output;
	}
	return NULL;
}
//the endup
void *foo_endup(void *input)
{
	some_data *data = input;
	if (data_is_ok(data)) //pseudocode: validate the result
	{
		return (void *)data;
	}
	return NULL;
}
struct ThreadedPipeline *pipeline = BLI_init_threaded_pipeline(2, foo_startup, foo_endup);
pipeline = BLI_init_threaded_pipeline_add_task( pipeline, task1);
pipeline = BLI_init_threaded_pipeline_add_task( pipeline, task2);
pipeline = BLI_init_threaded_pipeline_add_task( pipeline, task3);
pipeline = BLI_init_threaded_pipeline_add_task( pipeline, task4);
struct ThreadedPipelineOutcome *outcome;
void *input;
int i = 0;
int loop = 1;
while (loop)
{
	if (i < len(data)) //pseudocode: feed inputs until exhausted
		input = data[i++];
	else
		input = NULL; //NULL tells the pipeline to drain
	outcome = BLI_run_threaded_pipeline(pipeline, input);
	switch (outcome->stat) {
	case FULL:
		//sleep for a while, then fall through to handle the output
	case RUNNING:
		//do something with outcome->rtv
		break;
	case EMPTY:
		loop = 0;
		break;
	}
}
BLI_destroy_threaded_pipeline(pipeline);
IN THIS WAY, each element of data[] is processed by task[1-4] in
sequence, one stage after another, but more than one element of data[]
is in flight at a time, so the elements are processed in parallel.
Any comments?
--
James Ruan