Hi there,
I have recently used a combination of MilkDrop (Winamp), TriDef 3D (makes MilkDrop stereoscopic), Kainy (streams from the PC to Android), my Galaxy S4 Mini, and the Durovis Dive (uses your smartphone as a VR screen) to stream MilkDrop in 3D to my VR headset. I think you could make that process much easier if you added 3D support to the Android version of your software. It would also be nice if you implemented a look-around feature (motion sensors enabled, so the visualization is all around you).
I think visualization in virtual reality might be quite successful in the future, with VR becoming more and more popular.
Greetings
Not a terrible idea. Do you have a developer's perspective on how this should be implemented?
I guess you "just" have to render another view/camera from a slightly different angle so that you have an image for each eye(That is only for the 3D Version,without Head Movement).
There are a few resources on how you can use head tracking on the Durovis Dive homepage (http://www.durovis.com/sdk.html), but I do not have any experience with that.
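A rough sketch of the look-around part; this is not the Dive SDK API, and the yaw/pitch/roll values are assumed to come from whatever motion-sensor API the platform provides:

```cpp
#include <GL/gl.h>

// Head orientation in degrees, assumed to be filled in each frame from
// the device's motion sensors (e.g. a rotation-vector sensor on Android).
struct HeadPose
{
    float yaw;
    float pitch;
    float roll;
};

// Apply the inverse head rotation to the modelview matrix so the scene
// appears fixed in space while the head turns; call this before the
// per-eye translation and before drawing.
void applyHeadTracking(const HeadPose& pose)
{
    glMatrixMode(GL_MODELVIEW);
    glRotatef(-pose.roll,  0.0f, 0.0f, 1.0f);
    glRotatef(-pose.pitch, 1.0f, 0.0f, 0.0f);
    glRotatef(-pose.yaw,   0.0f, 1.0f, 0.0f);
}
```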
My (3D) programming skills are also quite limited, so I might be misunderstanding what you mean by "developer's perspective".
I am also interested in 3D output.
Without any knowledge of the code, my guess is that most of the visualizations have a depth field (a 'z' coordinate in addition to the display coordinates 'x' and 'y'); it certainly looks like they do. Then I think it is 'just' a matter of calling the <OpenGL_3D_View> function instead of the usual <View> (not sure what the specific function names are) and setting a few additional parameters, e.g. view separation or something like it, probably assuming a fixed view.
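One way those extra parameters could enter, as a hedged sketch with standard OpenGL calls (the function name and parameters below are illustrative, not projectM's actual API): each eye gets an off-axis frustum shifted by half the eye separation towards a chosen convergence distance.

```cpp
#include <GL/gl.h>
#include <cmath>

// Off-axis (asymmetric) projection for one eye.
// eyeSign is -1 for the left eye and +1 for the right eye.
// eyeSeparation and convergence (distance to the zero-parallax plane)
// are the additional stereo parameters.
void setStereoProjection(double fovY, double aspect,
                         double zNear, double zFar,
                         double eyeSeparation, double convergence,
                         int eyeSign)
{
    const double pi = 3.14159265358979323846;
    const double top       = zNear * std::tan(fovY * pi / 360.0);
    const double halfWidth = top * aspect;

    // Shift the frustum horizontally so both eyes converge at 'convergence'.
    const double shift = 0.5 * eyeSeparation * zNear / convergence;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-halfWidth - eyeSign * shift,
               halfWidth - eyeSign * shift,
              -top, top, zNear, zFar);
}
```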
I imagine that VR should be just as 'straightforward', with much of the hard work being done by the backend developers.
If the visualization data is 3D, i.e. expressed as a function of three coordinates (x, y, z) (e.g. Color(x, y, z) for (x, y, z) := [1:mx, 1:my, 1:mz]), then a change in the calls to the OpenGL view functions should do the trick.
After a glance at the code, it seems a new Renderer library would be needed, based on 3D output instead of 2D textures, maybe 3D textures?
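If the data really were a volume like Color(x, y, z), a 3D texture is one plausible container for it; a minimal, hedged upload sketch (glTexImage3D needs OpenGL 1.2+ and, on some platforms, an extension loader such as GLEW; the sizes and format are assumptions):

```cpp
#include <GL/gl.h>
#include <vector>

// Upload an mx x my x mz RGBA volume as a 3D texture; the voxel data is
// assumed to be produced per frame by the visualization.
GLuint uploadVolume(int mx, int my, int mz,
                    const std::vector<unsigned char>& rgbaVoxels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);

    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, mx, my, mz, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgbaVoxels.data());
    return tex;
}
```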
Last edit: Charlson Kim 2014-06-19