Tripo AI
Tripo is an AI-powered 3D workspace that enables users to generate production-ready 3D models from text, images, or sketches in seconds. The platform simplifies the entire 3D creation process by combining model generation, segmentation, texturing, rigging, and animation into one seamless workflow. With text-to-3D and image-to-3D capabilities, Tripo produces clean geometry and solid topology suitable for real-time engines and professional tools. Intelligent segmentation allows creators to split complex models into structured, editable parts with precision and control. AI texturing applies high-resolution, PBR-ready materials instantly, with Magic Brush enabling detailed local refinements. Automatic rigging and animation transform static meshes into animated assets without manual setup. Overall, Tripo dramatically reduces production time while making advanced 3D creation accessible to creators of all skill levels.
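As a rough illustration of how such a text-to-3D workflow might be driven programmatically, here is a minimal REST-style sketch. The endpoint, payload fields, response shape, and `TRIPO_API_KEY` environment variable are assumptions for illustration only, not Tripo's documented API; consult the platform's own docs for the real contract.

```python
import os
import time
import requests

# Hypothetical REST sketch of a text-to-3D generation task.
# Base URL, fields, and response keys are assumed for illustration.
API_BASE = "https://api.tripo3d.ai/v2/openapi"  # assumed base URL
headers = {"Authorization": f"Bearer {os.environ['TRIPO_API_KEY']}"}

# Submit a text-to-3D task.
resp = requests.post(
    f"{API_BASE}/task",
    headers=headers,
    json={"type": "text_to_model", "prompt": "a weathered bronze dragon statue"},
    timeout=30,
)
resp.raise_for_status()
task_id = resp.json()["data"]["task_id"]  # assumed response field

# Poll until the task finishes, then print the model download URL.
while True:
    status = requests.get(f"{API_BASE}/task/{task_id}", headers=headers, timeout=30).json()
    if status["data"]["status"] in ("success", "failed"):
        break
    time.sleep(2)
print(status["data"].get("output", {}).get("model"))  # assumed output field
```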
Fast3D
Fast3D is a fast, AI-powered 3D model generator that turns text prompts or single- and multi-view images into professional-grade mesh assets in under ten seconds, with no modeling experience required. It offers customizable texture synthesis, mesh density, and style presets; combines high-fidelity PBR material generation with seamless tiling and intelligent style transfer; and delivers precise geometric accuracy for realistic structures in both text-to-3D and image-to-3D workflows. Outputs fit any pipeline, with export to GLB/glTF, FBX, OBJ/MTL, and STL formats, and the intuitive web interface requires no login or setup. Whether for gaming, 3D printing, AR/VR, metaverse content, product design, or rapid prototyping, Fast3D lets creators explore diverse ideas through batch uploads, random inspiration galleries, and adjustable quality tiers, turning concepts into 3D assets in seconds rather than days.
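Since much of the value of such a generator lies in pipeline interchange, here is a short sketch of how an exported asset could be inspected and converted between the listed formats using the open-source trimesh library. The filenames are placeholders; the API calls are trimesh's real ones.

```python
import trimesh

# Load a generated asset (GLB is a common interchange default).
mesh = trimesh.load("fast3d_output.glb", force="mesh")

# Basic sanity checks before handing the mesh to an engine or slicer.
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
print(f"watertight: {mesh.is_watertight}")  # required for 3D printing

# Convert to other formats in the same pipeline.
mesh.export("fast3d_output.obj")  # OBJ (an MTL may be written for textured meshes)
mesh.export("fast3d_output.stl")  # STL for 3D printing (geometry only)
```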
Seed3D
Seed3D 1.0 is a foundation-model pipeline that takes a single input image and generates a simulation-ready 3D asset, including closed manifold geometry, UV-mapped textures, and physically based rendering (PBR) material maps, designed for immediate integration into physics engines and embodied-AI simulators. It uses a hybrid architecture that combines a 3D variational autoencoder for latent geometry encoding with a diffusion-transformer stack that generates detailed 3D shapes, followed by multi-view texture synthesis, PBR material estimation, and UV texture completion. The geometry branch produces watertight meshes with fine structural detail (e.g., thin protrusions, holes, text), while the texture/material branch yields multi-view-consistent albedo, metallic, and roughness maps at high resolution, enabling realistic appearance under varied lighting. Assets generated by Seed3D 1.0 require minimal cleanup or manual tuning.
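Because "simulation-ready" hinges on closed, consistently wound geometry, a quick validation pass like the following sketch is a reasonable gate before loading such an asset into a physics engine. It uses the open-source trimesh library; the filename is a placeholder, not a Seed3D artifact.

```python
import trimesh

# Placeholder filename; in practice this would be a Seed3D export.
mesh = trimesh.load("seed3d_asset.glb", force="mesh")

# Physics engines generally assume a closed, consistently wound manifold;
# these checks catch failure modes that break collision and mass properties.
assert mesh.is_watertight, "mesh has boundary edges / holes"
assert mesh.is_winding_consistent, "face windings disagree"

# Watertight meshes admit well-defined volume and inertia, which
# embodied-AI simulators need for rigid-body dynamics.
print(f"volume: {mesh.volume:.6f}")
print(f"center of mass: {mesh.center_mass}")
```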
GET3D
As several industries move toward modeling massive 3D virtual worlds, the need for content-creation tools that scale in the quantity, quality, and diversity of 3D content is becoming evident. GET3D aims to train performant 3D generative models that synthesize textured meshes which can be consumed directly by 3D rendering engines, and are thus immediately usable in downstream applications. The model generates a 3D SDF and a texture field from two latent codes: DMTet extracts a 3D surface mesh from the SDF, and the texture field is queried at surface points to obtain colors. Training uses adversarial losses defined on 2D images: a rasterization-based differentiable renderer produces RGB images and silhouettes, and two 2D discriminators, one on RGB images and one on silhouettes, classify whether the inputs are real or fake. The whole model is end-to-end trainable.
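To make the two-branch design concrete, here is a minimal PyTorch sketch of the generator's structure: two latent codes feed an SDF field and a texture field, both implemented as plain coordinate MLPs for illustration. The actual GET3D networks use StyleGAN-style mapping and tri-plane features, and the DMTet extraction and differentiable rasterization steps are noted only in comments, so this is a simplified sketch, not the paper's implementation.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    """Plain coordinate MLP; stand-in for GET3D's conditioned networks."""
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class GET3DStyleGenerator(nn.Module):
    """Illustrative two-branch generator: a geometry latent drives an SDF
    field and a texture latent drives an RGB field, mirroring GET3D's
    two-latent-code design in heavily simplified form."""

    def __init__(self, z_dim=128):
        super().__init__()
        # Each field takes a 3D point concatenated with its latent code.
        self.sdf_field = mlp(3 + z_dim, 1)   # signed distance per point
        self.tex_field = mlp(3 + z_dim, 3)   # RGB per surface point

    def sdf(self, pts, z_geo):
        z = z_geo.expand(pts.shape[0], -1)
        return self.sdf_field(torch.cat([pts, z], dim=-1))

    def color(self, pts, z_tex):
        z = z_tex.expand(pts.shape[0], -1)
        return torch.sigmoid(self.tex_field(torch.cat([pts, z], dim=-1)))

gen = GET3DStyleGenerator()
z_geo, z_tex = torch.randn(1, 128), torch.randn(1, 128)
grid_pts = torch.rand(4096, 3) * 2 - 1       # sample points in [-1, 1]^3

sdf_values = gen.sdf(grid_pts, z_geo)        # DMTet would extract a mesh here
# After extraction, the texture field is queried only at surface points,
# and a differentiable rasterizer renders RGB + silhouette images that
# two 2D discriminators score during adversarial training.
surface_colors = gen.color(grid_pts, z_tex)
print(sdf_values.shape, surface_colors.shape)
```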