Hello,
I put my 3D points to a TGLPoints instance and then display them. The user is able to rotate/translate the view. The problem is when I try to display a different set of 3D points, and if the area represented by the new points is smaller than the previous one, they look far away. What I want is to zoom to the new set of points automatically to fit them to screen. It would also be great if I did not have to change the rotation of the camera.
I tried the formula given here: https://www.opengl.org/discussion_boards/showthread.php/169865-Zoom-to-fit-screen. However, it did not work, perhaps because I do not have the bounding sphere's center.
How can I achieve what I want with GLScene?
Thanks in advance.
Last edit: hckr 2017-12-15
Hi, it's hard to answer without a bit of code, so please describe what you do. Do you compute the Max X/Y/Z from your cloud points? How do you set up the camera?
Hi,
I did not really compute the Max X/Y/Z from my cloud points, instead I relied on 'BoundingSphereRadius' function which I now realize is not what I want.
The camera's target is the GLPoints object.

Last edit: hckr 2017-12-15
Hello,
I have just gotten a feel for this nice GLScene (using it with Lazarus). I am also interested in using it with point clouds, and wonder if you ever found a nice solution for scaling the data to the screen?
I am especially wondering where to put any code for scaling the points
without slowing down the point processing; would
the BeforeRender event of the SceneViewer be the correct place?
I tried using NormalizeVector() and ScaleVector() for scaling, without success.
Below is a sample of my code:
Much obliged for any help,
Regards,
Sami
At the FormCreate event,
after the file containing the points (X, Y and Z as well as RGB) has been
read and loaded into a TStringList, I process each
line by reading the X coordinate into XX_, the Y coordinate into YY_,
the Z coordinate into ZZ, and the RGB values into RedText, Green_Text
and Blue_Text respectively,
as follows in a for loop:
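The code attachment itself is not reproduced here; a minimal sketch of such a loop, assuming space-separated values per row and using the variable names mentioned above (LoadPoints and GLPoints1 are hypothetical names), might look like:

```pascal
uses Classes, SysUtils;

procedure LoadPoints(PointLines: TStringList);
var
  i: Integer;
  Fields: TStringList;
  XX_, YY_, ZZ: Single;
  RedText, Green_Text, Blue_Text: string;
begin
  Fields := TStringList.Create;
  try
    Fields.Delimiter := ' ';
    for i := 0 to PointLines.Count - 1 do
    begin
      // Assumes each row holds "X Y Z R G B", space separated
      Fields.DelimitedText := PointLines[i];
      XX_ := StrToFloat(Fields[0]);
      YY_ := StrToFloat(Fields[1]);
      ZZ  := StrToFloat(Fields[2]);
      RedText    := Fields[3];
      Green_Text := Fields[4];
      Blue_Text  := Fields[5];
      // Feed the TGLPoints instance; colours scaled to 0..1
      GLPoints1.Positions.Add(XX_, YY_, ZZ);
      GLPoints1.Colors.Add(StrToInt(RedText) / 255,
                           StrToInt(Green_Text) / 255,
                           StrToInt(Blue_Text) / 255, 1.0);
    end;
  finally
    Fields.Free;
  end;
end;
```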
Last edit: Sami M. 2018-07-17
Hi Sami, can you attach a complete test case? Like I said, you must compute the Min/Max bounding box.
Quick and dirty code from scratch:
Be careful: each time you change GLPoints, you need to reset GLCamera1 and GLSceneViewer1.FieldOfView.
GLCamera2 = same as GLCamera1
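Jerome's attachment is not reproduced here, but judging from the code quoted later in the thread, the fit-to-screen computation is along these lines (vmin/vmax are the bounding-box corners gathered while loading; unit names vary between GLScene versions). Note that AdjustDistanceToTarget applies a ratio to the current distance, which is why the camera must be reset first:

```pascal
uses
  Math, GLVectorGeometry;  // older GLScene versions name this 'VectorGeometry'

procedure FitCameraToCloud(const vmin, vmax: TAffineVector);
var
  radius, fov, distance: Single;
begin
  // Bounding-sphere radius from the bounding-box diagonal
  radius := VectorDistance(vmin, vmax) * 0.5;
  // Half the viewer's field of view, in radians
  fov := DegToRad(GLSceneViewer1.FieldOfView) * 0.5;
  // Distance at which a sphere of that radius just fills the view
  distance := radius / Abs(Tan(fov));
  // Ratio-based call: assumes the camera was just reset (distance 1)
  GLCamera1.AdjustDistanceToTarget(distance * 0.5);  // 0.5 = zoom factor
  // Make sure the near/far planes enclose the whole cloud
  GLCamera1.DepthOfView := 2 * GLCamera1.DistanceToTarget + 2 * radius;
end;
```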
Last edit: Jerome.D (BeanzMaster) 2018-07-17
Hello Jerome,
Thanks a lot for your kind help. Your example cleared a lot as to where to place
the Camera adjustment code after the min and max are known. It was helpful to know
it's only done once at the end after processing all the points...
The problem I had so far was that the points were appearing way too far from the
screen center, and after modifying some parameters I seem to have lost
any visibility (I am not seeing any points).
I have made sure that vmin and vmax are working properly
and that the variables radius, fov and distance yield sensible values. I checked that the target object of the camera is indeed 'DummyCube', but I am not so sure about how to position the camera (Direction/Position). I am including the settings and would be very grateful if you could take a quick look and maybe comment on them.
Much obliged for your help,
Sami
Hi, you need to play with TGLPoints' Size and PointParameters properties.
See attached sample
Hello,
Many many thanks for all your help and pointers.
I changed the routine that finds the min/max a bit, testing separately for
x, y and z, like so:
instead of using:
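The attached routine is not shown here; a per-axis test of the kind described might look like this (MinX…MaxZ are hypothetical accumulators, seeded from the first point before the loop):

```pascal
// Inside the point-reading loop, after XX_, YY_ and ZZ are parsed:
if XX_ < MinX then MinX := XX_;
if XX_ > MaxX then MaxX := XX_;
if YY_ < MinY then MinY := YY_;
if YY_ > MaxY then MaxY := YY_;
if ZZ < MinZ then MinZ := ZZ;
if ZZ > MaxZ then MaxZ := ZZ;
```

with vmin/vmax built afterwards, e.g. vmin := AffineVectorMake(MinX, MinY, MinZ).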
I did manage to get some point cloud files to display OK, whereas for some, there's
just one dot on the screen; perhaps it's the scaling..
As the point cloud files can really get huge (2 million points/rows is considered small),
testing is rather slow :-)
Regards,
Sami
Hello Sami
Yes, you can also avoid testing Z. Or you can add one more dimension: for example, if you are in the XY view, just compute the XY min and max for each axis, and do the same for the XZ and YZ views. Globally, you can then compute which points hold the min and max on each axis depending on the view. (Sorry for my bad English.)
To increase performance, you could perhaps include default camera properties and the min/max of each axis for the cloud in your file. That way you would not need to compute them at load time, only when saving.
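A sketch of what such a self-describing file might look like (the '#HEADER' line format and names here are purely hypothetical; AffineVectorMake comes from GLScene's vector-geometry unit):

```pascal
{ Hypothetical first line, written at save time:
    #HEADER minx miny minz maxx maxy maxz
  Parse it back before streaming the points: }
var
  Fields: TStringList;
begin
  if Pos('#HEADER', PointLines[0]) = 1 then
  begin
    Fields := TStringList.Create;
    try
      Fields.Delimiter := ' ';
      Fields.DelimitedText := PointLines[0];
      vmin := AffineVectorMake(StrToFloat(Fields[1]), StrToFloat(Fields[2]),
                               StrToFloat(Fields[3]));
      vmax := AffineVectorMake(StrToFloat(Fields[4]), StrToFloat(Fields[5]),
                               StrToFloat(Fields[6]));
    finally
      Fields.Free;
    end;
  end;
end;
```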
Yes, it's due to the point size scaling. Just adjust (Distance * 0.5), where 0.5 is the zoom factor (0.65 will be good enough).
Hello ,
Thanks a lot, Jerome, for all your kind help and comments. I am trying out your hints and
things are progressing nicely; it was helpful to know that 0.5 is the zoom factor.
Defining the min and max properties earlier on sounds like a very good idea.
How to efficiently read huge text files seems to be
the main issue here (reading in millions of lines can take a few minutes), but that is beyond the scope of GLScene..
//and BTW, your English is just fine : )
Regards
Sami
Hello,
I have now been able to load several point cloud files; for
the very large ones I am reading only a part of the points (for instance every 50th point)
to get a speed increase.
I am now trying to make a simple interface for letting the user scale and zoom
the resulting image.
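The every-Nth-point subsampling mentioned above can be as simple as a modulo test (Step and AddPointFromLine are hypothetical names):

```pascal
const
  Step = 50;  // keep every 50th point; tune to taste
var
  i: Integer;
begin
  for i := 0 to PointLines.Count - 1 do
    if (i mod Step) = 0 then
      AddPointFromLine(PointLines[i]);  // hypothetical parsing helper
end;
```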
Scaling works nicely with the following code, which is called any time
the user changes the scale in the edit box 'AMain.Ed_Scale':
where get_scale_AMAINER is obtained from a user-set value, defined as follows:
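The two snippets referred to above are in attachments; a sketch of what the scaling handler might look like, assuming get_scale_AMAINER returns the user's factor and using the scene object's Scale property:

```pascal
procedure TAMain.Ed_ScaleChange(Sender: TObject);  // hypothetical handler name
var
  s: Single;
begin
  s := get_scale_AMAINER;                // factor entered in Ed_Scale
  GLPoints1.Scale.SetVector(s, s, s);    // uniform scaling of the cloud
end;
```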
However, for zooming, I cannot get similar results...
Any change by the user to the zoom factor (say from 0.5 to 0.65)
makes the image nearly invisible. This is strange, as both 0.5 and 0.65
work the first time the image is displayed, but any subsequent change in the zoom
renders the image very, very small... As I understand it, in the following code, 'zoomer'
is the only parameter which needs to be set to affect the zoom percentage?
~~~
var
  radius, fov, distance: Single;
begin
  with AMain do
  begin
    radius := VectorDistance(vmin, vmax) * 0.5;
    fov := DegToRad(SceneViewer.FieldOfView) * 0.5;
    distance := radius / Abs(Tan(fov));
    GLCamera.AdjustDistanceToTarget(distance * zoomer);
    GLCamera.DepthOfView := 2 * GLCamera.DistanceToTarget + 2 * radius;
  end;
end;
~~~
where zoomer can be modified by the user so that an OnChangeEvent calls the above
code.
Would be very grateful for any hints here.
Regards,
Sami
Hi Sami, I think the problem comes from "GLCamera1.AdjustDistanceToTarget". This is the help:
Adjusts distance from camera to target by applying a ratio.
If TargetObject is nil, nothing happens. This method helps in quickly implementing camera controls. Only the camera's position is changed.
So the behaviour is normal: before doing the computation you must reset the camera. But for what you are doing, the best way is to use the mouse to set up the camera. See the attached sample.
LMB and LMB+Shift : Adjust Distance by 1.25%
RMB : Adjust Focal Length
Wheel : Adjust Distance
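Since AdjustDistanceToTarget applies a ratio, calling it repeatedly with the same value compounds the zoom. One way to set an absolute distance with it (a sketch, not an official GLScene helper) is to divide the desired distance by the current one:

```pascal
procedure SetAbsoluteCameraDistance(Desired: Single);  // hypothetical helper
begin
  // desired / current, applied as a ratio, lands exactly on 'desired',
  // so repeated calls with the same value no longer compound
  if GLCamera1.DistanceToTarget > 0 then
    GLCamera1.AdjustDistanceToTarget(Desired / GLCamera1.DistanceToTarget);
end;
```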
Just a question: why do you use Currency instead of Double or Single?
Last edit: Jerome.D (BeanzMaster) 2018-08-03
Thanks so much for your help Jerome, really appreciate it.
Tried your code and indeed it looks much better now.
Though with a large point cloud, when rotating the mouse wheel inwards (towards yourself), the image first grows but then seems to 'collapse' on itself so that it eventually disappears altogether... probably something to do with the parameters...
Yes, you're right, Currency should be Single/Double; I was using it only as a reminder that the value is provided by the user with at most 2-decimal accuracy..
It probably comes from a range overflow, or is caused by the use of the power function. If you can provide me with a simple sample, I can test and see.
I think you can set the number of decimals in Lazarus with the TFloatSpinEdit component; it has a DecimalPlaces property for that.
Last edit: Jerome.D (BeanzMaster) 2018-08-03
Hello Jerome,
Wonder if you found time to test the sample application I posted last week, would be very grateful for your comments.
Regards,
Sami
Hello,
Thanks for your kind help, Jerome. I am attaching the full application and some sample data; the sample data I was referring to is huge (even when compressed), and I could not reproduce the problem every time.
However, something incorrect seems to occur with this simpler file (dog.txt; each row contains X, Y, Z and RGB colors). After launching the application, the file dog.txt is selected, then the button 'Analyze' is pressed and the image should appear. After rotating the dog a bit (using the left mouse button) and keeping the head in front, then zooming in using the wheel button, it appears that the tail end of the dog is not in proportion with its front part...
Regards,
Sami
Hello
Can you make a Delphi version of the program for me?
Thank you!
sxbug@163.com
Hello
At first, thank you for your program. I have now converted it from Lazarus to Delphi. If I build it as a 32-bit program, it is OK. But built as a 64-bit program, it does not run and displays the error message "Floating point division by zero". Why?
I am using Win10 64-bit, GLScene 1.6. Thanks.
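A 32- vs 64-bit difference like this is often down to floating-point exception masking (x87 control word on Win32 vs SSE on Win64). As a diagnostic workaround only, the masks can be aligned via the Math unit; the real fix is to find the zero divisor (for example Tan(fov) when FieldOfView is 0):

```pascal
uses Math;

begin
  // Mask arithmetic FP exceptions so a division by zero yields Inf/NaN
  // instead of raising, mirroring the behaviour seen in the 32-bit build.
  // Diagnostic use only: the underlying zero divisor should still be fixed.
  SetExceptionMask(exAllArithmeticExceptions);
end.
```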
Hello,
Sorry for the late reply; I am not yet running GLScene with Delphi, so unfortunately I can't reproduce the error. But I would imagine you can track the error down by checking at what point it occurs... Were you able to zoom in on the dog?
Hi, I have a lot of work at the moment. I'll take a look as soon as possible.
OK sure, no hurry, thanks :-)
Hello
Thanks first of all. I have built this program for 64-bit; see the attachment. You can run the EXE file to see the error. If you run this program, the dog is not displayed.
What can I do? Thanks.
Hi, can you put your sample code as an attachment, please?
I have put it in my old post. https://sourceforge.net/p/glscene/discussion/93606/thread/a5922617/8927/attachment/test_points.rar