[Uni-verse] Long post about everything sound and more.
Eskil Steenberg
uni-verse@blender.org
Mon, 05 Apr 2004 19:40:06 +0200
Hi
Regarding the sound system: as you may know, I have been traveling a bit
and have had the time to think this thing over.
I sent out an API to the mailing list, and the latest version of verse
also has this API (although not fully implemented under the hood). Still,
I haven't gotten any response on an issue I considered "hot". There are,
however, some things I am beginning to doubt. First of all, the time
stamps: since most of verse doesn't have time stamps, having sound time
stamps will not sync the sound with the graphics. Next, using time
stamps requires you to buffer more of the sound, which introduces
latency. And then again, each sound command has its own length and each
channel has its own frequency. Shouldn't the different commands be merged
by frequency rather than by time stamp?
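To make the merge-by-frequency idea concrete, here is a minimal sketch in C. It is not verse code; it just illustrates the principle: two mono sources with different sample frequencies are combined into one output buffer by resampling each to the output rate (nearest-neighbour, for brevity), so the merge key is each channel's frequency rather than a per-command time stamp.

```c
#include <stddef.h>

/* Mix two mono sources with different sample rates into one output
 * buffer at out_rate. For each output sample we pick the source sample
 * at the matching point in time, so channels are aligned purely by
 * their frequencies; no time stamps are involved. */
void mix_by_frequency(const float *a, size_t a_len, unsigned a_rate,
                      const float *b, size_t b_len, unsigned b_rate,
                      float *out, size_t out_len, unsigned out_rate)
{
    size_t i;
    for(i = 0; i < out_len; i++)
    {
        size_t ia = i * a_rate / out_rate; /* nearest index in source a */
        size_t ib = i * b_rate / out_rate; /* nearest index in source b */
        float s = 0.0f;
        if(ia < a_len)
            s += a[ia];
        if(ib < b_len)
            s += b[ib];
        out[i] = s;
    }
}
```

A real mixer would interpolate between samples instead of picking the nearest one, but the point stands either way: once every channel carries its own frequency, the receiver can align them without any extra timing metadata.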
Then there is the storage of sound:
I think we need storage of sound, but not for the same reasons as
discussed before. I think we need sound "assets" so that two sound
editing apps like CoolEdit and WaveLab can work together on the same
sound clip, just like Maya and Max on a mesh. Again, this is a move to
accommodate the content creation aspect of verse rather than the VR
aspect. For live sound we keep the streaming sound. We would not add any
functionality to trigger the asset sounds. If one wants to play an asset
sound, one subscribes to it and then pipes it back to verse through the
streaming channels. Why? Because clients cannot be expected to download
large sets of clips with no idea whether they will ever be triggered. My
guess is that I will get a lot of flak over this, but I think it's the
right thing to do. It also avoids over-complicating sound players, which
would otherwise have to be able to play both clips and streams.
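The subscribe-and-pipe-back flow above can be sketched as a small piece of client-side state. Note that nothing here is the real verse API; the struct and function names are hypothetical, and stand in for "samples already received from a subscribed asset" and "the next chunk handed to a streaming channel".

```c
#include <stddef.h>

/* Hypothetical client-side playback state, NOT part of the verse API:
 * the clip is assumed to have arrived via an asset subscription, and
 * chunks of it are fed back out over an ordinary streaming channel. */
typedef struct {
    const float *clip;   /* samples received from the subscribed asset */
    size_t length;       /* clip length in samples */
    size_t cursor;       /* next sample to stream back out */
} AssetPlayer;

/* Copy the next chunk of the downloaded asset into a stream buffer,
 * as if sending it to the server on a streaming channel. Returns the
 * number of samples written; 0 means the clip has finished playing. */
size_t asset_stream_chunk(AssetPlayer *p, float *stream_buf, size_t buf_len)
{
    size_t i, n = p->length - p->cursor;
    if(n > buf_len)
        n = buf_len;
    for(i = 0; i < n; i++)
        stream_buf[i] = p->clip[p->cursor + i];
    p->cursor += n;
    return n;
}
```

The design consequence is exactly the one stated above: a sound player only ever consumes streams, and "triggering" an asset is just a client choosing to re-stream data it already subscribed to.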
None of this makes client development any simpler, quite the
opposite, but it enables verse to be used for more things.
Cheers
E