AW: AW: [Uni-verse] Re: OSGVerse

Peter Lundén plu at tii.se
Thu Apr 6 15:01:21 CEST 2006


OK, I'm fine with that solution for the time being. Then I need to know
the name or tag of your avatar node.
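
For illustration, finding that node on the audio side could look like the
following minimal Python sketch. It works on plain dictionaries rather than
the real Verse API, and the node name 'uvas' plus the 'listener' tag in the
'uvas' tag group are simply taken from the naming agreed earlier in this
thread:

```python
# Hypothetical sketch: locate the UVAS listener among downloaded object
# nodes, first by node name, then by tag. Plain dictionaries stand in
# for real Verse object nodes; this is not the Verse API.

def find_listener_node(nodes):
    """Return the node named 'uvas', or one carrying the 'listener' tag
    in the 'uvas' tag group, or None if neither is present."""
    for node in nodes:
        if node.get("name") == "uvas":
            return node
    for node in nodes:
        if "listener" in node.get("tag_groups", {}).get("uvas", {}):
            return node
    return None

nodes = [
    {"name": "renderer-avatar", "tag_groups": {}},
    {"name": "uvas", "tag_groups": {"uvas": {"listener": True}}},
]
print(find_listener_node(nodes)["name"])  # → uvas
```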

Concerning the orientation:
Yes, that will solve the problem.
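
As a side note, the direction UVAS needs can be derived by rotating the
reference direction (0 0 -1) by the quaternion carried in
verse_o_transform_rot_real32. A minimal Python sketch, assuming an
(x, y, z, w) component order (the actual layout should be checked against
the Verse headers):

```python
# Sketch: derive the listener direction from a rotation quaternion.
# Assumes a unit quaternion in (x, y, z, w) order; the UVAS reference
# direction is (0, 0, -1) as stated earlier in this thread.

def rotate_by_quaternion(v, q):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    vx, vy, vz = v
    # t = 2 * cross(q.xyz, v)
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    # v' = v + w * t + cross(q.xyz, t)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

REFERENCE_DIRECTION = (0.0, 0.0, -1.0)

# Identity rotation leaves the reference direction unchanged.
print(rotate_by_quaternion(REFERENCE_DIRECTION, (0.0, 0.0, 0.0, 1.0)))
# A 180-degree turn about the y axis flips it to (0, 0, 1).
print(rotate_by_quaternion(REFERENCE_DIRECTION, (0.0, 1.0, 0.0, 0.0)))
```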

--PLu


 

Marcus Hoffmann wrote:

>Technically, it makes no difference to me whether I update my own object
>node and send the transformation command, or update your UVAS object node
>and send the transformation command. I only have to know which one you
>prefer... :)
>
>In my opinion it would be more flexible to update the renderer's object
>node and have UVAS scan the renderer's avatar transformation data,
>because then you can easily listen to more than one renderer in your
>system. You could then choose which renderer you want to follow with
>your audio calculation. You would not need a more complex tag definition
>that every renderer would have to understand, but could choose yourself
>whether you want to calculate the audio stream for this or that connected
>device. The renderer will save its camera position into the avatar object
>node continuously. This information will be sent through Verse - and with
>that to your UVAS. There you would only have to listen to the renderer's
>avatar node transformations.
>And for you it does not make a big difference to look at your own or any
>other object node internally, does it?
>
>But maybe there are better reasons, which I do not see right now, to do
>it the other way... ;)
>
>Concerning the orientation:
>If I update the rotation of the avatar node, this problem is solved,
>isn't it?
>And that is fortunately not the big challenge... I planned to integrate
>this functionality anyway.
>
>Best Regards
>Marcus
>
>Marcus Hoffmann
>Dipl.-Ing. for Media Technology
>Dept. Realtime Solutions for Simulation and Visual Analytics
>Fraunhofer Institut für Graphische Datenverarbeitung
>Phone: +49 (0)6151 155 - 639
>Mail: marcus.hoffmann at igd.fraunhofer.de
>
>
>>-----Original Message-----
>>From: Peter Lundén [mailto:plu at tii.se] 
>>Sent: Thursday, 6 April 2006 10:43
>>To: Marcus Hoffmann
>>Cc: consortium mailing list
>>Subject: Re: AW: [Uni-verse] Re: OSGVerse
>>
>>Hmm... I think we misunderstood each other. What I refer to
>>as orientation is identical to the information in
>>verse_o_transform_rot_real32.
>>
>>It is an interesting discussion about who is responsible for
>>moving the avatars. For me it is not a technical issue but a
>>conceptual one. There is more than one way to view the problem.
>>My perspective is that the acoustic simulation system is a
>>back-end system, not really visible (but audible :-) ) to the
>>user. There is then a front-end system, the navigator, which
>>is responsible for the navigation, and that system should
>>control the listener's position in the world. The user should
>>be able to manipulate everything from one client (or it
>>should at least be perceived as a single client). If the user
>>is forced to also be aware of UVAS, everything gets much more
>>complicated from the user's point of view.
>>
>>I think there is some consensus that when a client is some
>>kind of renderer, its avatar represents the camera/listener.
>>If this is agreed upon, then somebody other than the client
>>itself has to drive the avatar if it is a passive client.
>>What do the KTH people say about this?
>>
>>Then the question in our case is: who is controlling the avatar
>>of UVAS? I still think it is the navigator. But technically it
>>is simple to follow any Verse object node, given that its
>>position and rotation are updated often enough.
>>
>>--PLu
>>
>>Marcus Hoffmann wrote:
>>
>>>Hi Peter,
>>>
>>>It will be included by the middle of next week, at the latest by the
>>>end of next week. Sorry for the delay.
>>>We will - in the first version - upload the transformation of our
>>>avatar object node using the common Verse commands
>>>(verse_o_transform_pos_real32, verse_o_transform_rot_real32).
>>>Wouldn't it be more general if you got the transformations from that
>>>avatar node in your code (because you have knowledge of our object
>>>node when we connect)?
>>>Or don't you download the other object nodes? (I think this wouldn't
>>>cost too much performance on your side; object nodes generally are
>>>small Verse objects.)
>>>
>>>If this is not possible, we would have to identify your node on the
>>>renderer's side (via the tag) and would have to update its
>>>coordinates - did I get that right?
>>>
>>>For the orientation:
>>>OpenSG can give us the data needed to define the orientation. We
>>>would just have to find a way to transmit that to you.
>>>Following the two approaches I described above for submitting the
>>>transformation information, we could just establish another tag for
>>>that information in our object node, or, if this version is not
>>>possible, in your object node...
>>>Then you could get the orientation information any time you want,
>>>because it is up-to-date.
>>>
>>>Best Regards
>>>Marcus
>>>
>>>Marcus Hoffmann
>>>Dipl.-Ing. for Media Technology
>>>Dept. Realtime Solutions for Simulation and Visual Analytics
>>>Fraunhofer Institut für Graphische Datenverarbeitung
>>>Phone: +49 (0)6151 155 - 639
>>>Mail: marcus.hoffmann at igd.fraunhofer.de
>>>
>>>>-----Original Message-----
>>>>From: uni-verse-bounces at projects.blender.org
>>>>[mailto:uni-verse-bounces at projects.blender.org] On Behalf Of
>>>>Peter Lundén
>>>>Sent: Wednesday, 5 April 2006 15:40
>>>>To: Marcus Hoffmann
>>>>Cc: consortium mailing list
>>>>Subject: [Uni-verse] Re: OSGVerse
>>>>
>>>>Hi,
>>>>
>>>>What is the status of the OSGVerse renderer? Can it handle the
>>>>navigation now?
>>>>
>>>>As we talked about, it must also be able to attach the UVAS
>>>>listener object. As discussed in Stockholm, the UVAS listener will
>>>>be named 'uvas' and will have the tag 'listener' in the 'uvas' tag
>>>>group (in the future we have to find a better naming scheme that
>>>>works with more than one listener). You should find that object and
>>>>update its position and orientation according to the camera in the
>>>>visual renderer.
>>>>
>>>>Have you sorted out the problem of how to get the direction?
>>>>I don't think there is a reference direction specified for Verse,
>>>>but UVAS certainly has one; the reference direction vector is
>>>>(0 0 -1).
>>>>
>>>>We need to be able to test the whole demo system with all the
>>>>components before the review, and there are just a few weeks left.
>>>>
>>>>Best regards,
>>>>--PLu
>>>>
>>>_______________________________________________
>>>Uni-verse mailing list
>>>Uni-verse at projects.blender.org
>>>http://projects.blender.org/mailman/listinfo/uni-verse

-------------- next part --------------
A non-text attachment was scrubbed...
Name: plu.vcf
Type: text/x-vcard
Size: 306 bytes
Desc: not available
Url : http://projects.blender.org/mailman/private/uni-verse/attachments/20060406/6cffbe3b/plu-0001.vcf

