3D Functions

function | args | returns | description
---------|------|---------|------------
[new_3dmodel] | | contvid | Creates a video object with an empty 3D container.
[add_3dmesh] | contvid, resource, nmaps | | Allocates a mesh slot in contvid and schedules resource to be loaded into that slot, consuming nmaps slots of the frameset.
[camtag_model] | vid | | Converts the target VID to act as a camera.
[mesh_shader] | vid, shader_id, slot | | Associates a shader with the specified mesh slot.
[scale_3dvertices] | contvid | | Rescales the vertices of all associated meshes so that the bounding volume of the model fits the -1..1 range on the X, Y and Z axes.
[video_3dorder] | orderconst | | Defines when in the rendering process the 3D pipeline should be processed.
[orient3d_model] | contvid, roll, pitch, yaw | | Sets the base orientation of the model.
[scale3d_model] | contvid, xs, ys, zs, time | | Scales the model along each axis, animated over time ticks.
[move3d_model] | vid, x, y, z, time | | Moves the model to the specified position, animated over time ticks.
[rotate3d_model] | vid, roll, pitch, yaw, time, absconst | | Rotates the model, animated over time ticks; absconst selects absolute or relative rotation.

About:

The 3D feature set is still in its infancy and is treated as an edge case or subset of the regular 2D parts of the engine, with a few necessary support features. It is structured around a VID container, obtained from [new_3dmodel], that then gets populated with a series of meshes.

The only supported mesh format is CTM, "compressed triangle mesh", which is a fast, storage-efficient, free and open format; more on that in [3D Models]. To get texturing going, you define a frameset for the original video container and then populate the slots in the frameset with references to the textures you want to use.
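As a rough sketch of what that looks like in practice (the mesh and texture filenames here are placeholders, and the model container is assumed to come from [new_3dmodel] as in the example further down):

::lua

model = new_3dmodel();
add_3dmesh(model, "hull.ctm", 1);  -- this mesh consumes one frameset slot
add_3dmesh(model, "glass.ctm", 1); -- and this one the next

-- reserve two frameset slots, then fill them with loaded textures
image_framesetsize(model, 2);
set_image_as_frame(model, load_image("textures/hull.png"), 0, FRAMESET_DETACH);
set_image_as_frame(model, load_image("textures/glass.png"), 1, FRAMESET_DETACH);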

There is a generic shader active for the entire pipeline that does nothing more than apply the perspective / camera transforms and texturing; this default cannot be overridden globally. You can, however, attach a shader to the container VID, which then applies to the entire model, and other shaders can be attached to the individual meshes of a model through the [mesh_shader] function.
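A minimal sketch of both approaches (the fragment shader source is just a flat-color placeholder, and passing nil to keep the default vertex stage is an assumption; see [build_shader]):

::lua

local flat_red = build_shader(nil, [[
void main(){
	gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
]], "flat_red");

-- apply to the whole model ...
image_shader(model, flat_red);

-- ... or only to the second mesh slot (index 1)
mesh_shader(model, flat_red, 1);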

In order for the models to be visible, however, you need a camera. This is achieved by creating a storage video object using the [fill_surface] function. This approach also allows for a clean and easy way of defining shadow casters, separate pipelines for reflections, glows and other effects that require corresponding framebuffer-object management, through the [define_rendertarget] feature.
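As a rough sketch of the rendertarget part (the RENDERTARGET_ constants and the exact argument order are assumptions here; check the [define_rendertarget] page for the authoritative form):

::lua

-- render the model into an offscreen 320x200 buffer that can then be
-- composited, post-processed or used as a texture like any other VID
local offscreen = fill_surface(320, 200, 0, 0, 0, 320, 200);
define_rendertarget(offscreen, {model}, RENDERTARGET_NODETACH, RENDERTARGET_NOSCALE);
show_image(offscreen);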

In order to convert this container object into a camera, simply use the [camtag_model] function, which converts the rendering of the video object from the regular 2D pipeline into that of the separate 3D pipeline.

Quick Example:

::lua

-- create a small storage object and turn it into the camera
camera = fill_surface(4, 4, 0, 0, 0);
camtag_model(camera);

-- build the model from three meshes, each consuming one frameset slot
model = new_3dmodel();
add_3dmesh(model, "model_a.ctm", 1);
add_3dmesh(model, "model_b.ctm", 1);
add_3dmesh(model, "model_c.ctm", 1);
scale_3dvertices(model);

-- one single-colored texture per mesh; FRAMESET_DETACH hands ownership to the model
image_framesetsize(model, 3);
set_image_as_frame(model, fill_surface(4, 4, 255, 0, 0), 0, FRAMESET_DETACH);
set_image_as_frame(model, fill_surface(4, 4, 0, 255, 0), 1, FRAMESET_DETACH);
set_image_as_frame(model, fill_surface(4, 4, 0, 0, 255), 2, FRAMESET_DETACH);

-- place the model in front of the camera and make it visible
move3d_model(model, -1.0, 0.0, -4.0);
show_image(model);

In the example above, we create a container and convert it into a camera.
Thereafter, a model container is created and three meshes are associated with it. When the meshes have been loaded successfully, their coordinates are rescaled to fit a known bounding volume. We then allocate space for three textures, create a red, a green and a blue one, associate these with the meshes of the model, and hand resource management over to the model, meaning that a call to delete_image(model); would delete the textures along with it.
Lastly, we reposition the model in front of the camera and show it.
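Once the model is visible, the timed variants of the transform functions can be used to animate it. A brief sketch (ROTATE_RELATIVE is an assumed value for the absconst argument; see [rotate3d_model]):

::lua

-- spin the model around its Z axis, push it further back and double its
-- size, all animated over 100 logic ticks
rotate3d_model(model, 0.0, 0.0, 360.0, 100, ROTATE_RELATIVE);
move3d_model(model, -1.0, 0.0, -8.0, 100);
scale3d_model(model, 2.0, 2.0, 2.0, 100);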


Related

Wiki: 3D Models
Wiki: fill_surface
