[Uni-verse] Long post about everything sound and more.

Lauri Savioja uni-verse@blender.org
Wed, 7 Apr 2004 08:40:59 +0300


Hi Eskil,
Hi All,

Your questions are very good. I hope our new draft for WP2.2 will
discuss these issues more. At the moment I have only some comments:

1) I'm sorry to say, but I still don't know where to find the new API,
so I cannot comment on this issue yet :-(.

2) Your idea of having storage for audio, enabling a "GIMP<->Blender"-
style interface for audio, is a clever one!

3) For room acoustic modeling neither the audio streaming nor the
audio server is essential. They could provide a nice add-on for
concurrent distributed design, but basically we can manage with local
audio files.

4) From the viewpoint of Verse, both of your questions are important. In
Uni-verse the game/digital media demonstrator should take advantage of
this functionality. In WP2.2 the audio server can be specified at a
general level, but the spec should be written in co-operation with the
definition of the game demonstrator, to ensure that all the
functionality the demo requires is incorporated in the audio spec.

5) Who is going to implement the audio server? Does it belong to WP6?
My point is that it is not part of WP7 :-).

Regards,
	Lauri

On Mon, 05 Apr 2004, Eskil Steenberg wrote:

> Hi
> 
> Regarding the sound system: as you may know, I have been traveling a bit 
> and have had the time to think this thing over.
> 
> I sent out an API to the mailing list, and the latest version of verse 
> also has this API (although not fully implemented under the hood), but 
> I still haven't got any response on an issue I considered "hot". There 
> are, however, some things I am beginning to doubt. First of all, the 
> time stamps: since most of verse doesn't have time stamps, having sound 
> time stamps will not sync the sound with the graphics. Next, using time 
> stamps requires more buffering of the sound, which introduces latency. 
> And then again, each sound command has its own length and each channel 
> has its own frequency. Shouldn't the different commands be merged by 
> frequency rather than by time stamp?
> 
> Then again comes the storage of sound:
> 
> I think we need storage of sound, but not for the same reasons as 
> discussed before. I think we need sound "assets" so that two sound 
> editing apps like CoolEdit and WaveLab can work together on the same 
> sound clip, just like Maya and Max on a mesh. Again, this is a move to 
> accommodate the content creation aspect of verse rather than the VR 
> aspect. For live sound we keep the streaming sound. We would not add any 
> functionality to trigger the asset sounds. If one wants to play an asset 
> sound, one subscribes to it and then pipes it back to verse through the 
> streaming channels. Why? Because clients cannot be expected to download 
> large sets of clips with no idea whether they will ever be triggered. My 
> guess is that I will get a lot of flak over that, but I think it's the 
> right thing to do. It also avoids over-complicating sound players, which 
> would otherwise have to be able to play both clips and streams.
> 
> None of this makes client development any simpler at all, quite the 
> opposite, but it enables verse to be used for more things.
> 
> Cheers
> 
> E
> 
> 
> _______________________________________________
> Uni-verse mailing list
> Uni-verse@blender.org
> http://www.blender.org/mailman/listinfo/uni-verse
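
Eskil's question above, about merging sound commands by frequency rather
than by time stamp, can be sketched roughly as follows. This is only an
illustration under assumed semantics, not part of any Verse API: each
channel's buffer (with its own sample rate) is resampled to a common
output rate by linear interpolation, then the buffers are summed. All
function names here are hypothetical.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample a mono buffer to dst_rate using linear interpolation."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # position in source samples
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a * (1.0 - frac) + b * frac)
    return out

def mix_channels(channels, out_rate):
    """Merge (rate, samples) channels into one buffer at out_rate.

    Channels are aligned by sample position after resampling, i.e. by
    frequency, with no per-command time stamps involved.
    """
    resampled = [resample_linear(s, r, out_rate) for r, s in channels]
    length = max(len(buf) for buf in resampled)
    mixed = [0.0] * length
    for buf in resampled:
        for i, v in enumerate(buf):
            mixed[i] += v
    return mixed
```

The point of the sketch is that once every channel is brought to one
output rate, mixing needs no buffering beyond the resampler itself,
which is the latency argument made above.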

-- 
Lauri Savioja
Telecommunications Software and Multimedia Laboratory
Helsinki University of Technology
PO Box 5400, FIN-02015 HUT, Finland
Tel. +358-9-451 3237     Fax +358-9-451 5014