Sloyd
Sloyd is on a mission to build the ultimate 3D creation platform, enabling creators to make 3D assets quickly and easily. Our web app allows rapid editing of 3D assets through AI prompting or simple sliders and toggles, with hundreds of templates available for customizing 3D assets. Our SDK generates huge worlds in real time, at runtime, enabling storage savings of up to 99%, in-game creation tools, procedural worlds, and live-ops asset changes. By combining parametric models with AI, we ensure that assets are always game-ready: UV maps, LODs, and optimized meshes are generated instantly, while still allowing prompting for 3D models with immediate results.
Learn more
Shap-E
Shap-E is the official code and model release for generating 3D objects conditioned on text or images. You can sample a 3D model conditioned on a text prompt or on a synthetic view image; for the best results with image conditioning, remove the background from the input image. You can also load a 3D model or a trimesh, create a batch of multiview renders and a point cloud, encode them into a latent, and render it back. For this to work, install Blender version 3.3.1 or higher.
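The text-to-3D path can be sketched as follows, based on the repository's text-to-3D sample notebook. The module paths and model names (`text300M`, `transmitter`) come from the Shap-E repo itself, but treat this as a sketch, assuming the `shap-e` package is installed from source and a GPU is available for reasonable sampling speed.

```python
import torch

from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the transmitter (latent -> 3D decoder) and the text-conditional model.
xm = load_model('transmitter', device=device)
model = load_model('text300M', device=device)
diffusion = diffusion_from_config(load_config('diffusion'))

# Sample latents conditioned on a text prompt.
latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=['a birthday cupcake']),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode each latent into a mesh and save it as a PLY file.
for i, latent in enumerate(latents):
    with open(f'mesh_{i}.ply', 'wb') as f:
        decode_latent_mesh(xm, latent).tri_mesh().write_ply(f)
```

For image conditioning, the repo's companion notebook swaps `text300M` for `image300M` and passes images instead of texts in `model_kwargs`.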
Learn more
OpenDream
Create AI art in seconds. Produce stunning AI images in minutes using OpenDream's customizable templates. Choose from a wide selection of friendly, easy-to-use templates, select from hundreds of styles to shape your next creation, and easily change the perspective, colors, lighting, and much more.
Fast: Generate AI pictures in a matter of seconds, with lightning-fast page loads.
Easy: Do you believe you need a great deal of talent to make art? No! Use our templates and you won't have any issues. All you have to do is type in a subject!
Many Unique Ideas: We are more than just an AI art generator; we are a source of inspiration for your own original creations. Give us a single prompt and we will generate up to 8 different ideas at once.
OpenDream's mission is to make art creation accessible to everyone, regardless of artistic ability.
Learn more
GET3D
We generate a 3D SDF and a texture field from two latent codes. We use DMTet to extract a 3D surface mesh from the SDF and query the texture field at surface points to obtain colors. Training uses adversarial losses defined on 2D images: a rasterization-based differentiable renderer produces RGB images and silhouettes, and two 2D discriminators, one on RGB images and one on silhouettes, classify whether the inputs are real or fake. The whole model is end-to-end trainable. As several industries move toward modeling massive 3D virtual worlds, the need for content-creation tools that scale in the quantity, quality, and diversity of 3D content is becoming evident. Our goal is to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, and are thus immediately usable in downstream applications.
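As a rough illustration of this two-branch design, here is a toy PyTorch sketch (a hypothetical stand-in, not GET3D's actual implementation): one generator maps a geometry code and a texture code to per-point SDF values and colors, and two small discriminators score RGB renders and silhouettes separately.

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Maps a geometry latent and a texture latent to per-point SDF values
    and RGB colors (a toy stand-in for GET3D's two-branch generator)."""
    def __init__(self, z_dim=64, hidden=128):
        super().__init__()
        self.sdf_net = nn.Sequential(
            nn.Linear(z_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.tex_net = nn.Sequential(
            nn.Linear(z_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, z_geo, z_tex, points):
        # points: (B, N, 3) 3D query locations; latents are broadcast per point.
        B, N, _ = points.shape
        g_in = torch.cat([z_geo.unsqueeze(1).expand(B, N, -1), points], dim=-1)
        t_in = torch.cat([z_tex.unsqueeze(1).expand(B, N, -1), points], dim=-1)
        sdf = self.sdf_net(g_in)                 # (B, N, 1) signed distances
        rgb = torch.sigmoid(self.tex_net(t_in))  # (B, N, 3) colors in [0, 1]
        return sdf, rgb

class ToyDiscriminator(nn.Module):
    """A small conv discriminator; GET3D uses two, one for RGB images
    and one for silhouettes."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 1))  # assumes 32x32 input images

    def forward(self, images):
        return self.net(images)  # one real/fake logit per image

gen = ToyGenerator()
d_rgb, d_mask = ToyDiscriminator(3), ToyDiscriminator(1)

z_geo, z_tex = torch.randn(2, 64), torch.randn(2, 64)
points = torch.rand(2, 256, 3)
sdf, rgb = gen(z_geo, z_tex, points)

# In GET3D, DMTet extracts a mesh from the SDF and a differentiable
# rasterizer renders it; here we fake 32x32 renders just to exercise
# the two discriminators.
fake_rgb, fake_mask = torch.rand(2, 3, 32, 32), torch.rand(2, 1, 32, 32)
logits_rgb, logits_mask = d_rgb(fake_rgb), d_mask(fake_mask)
```

In the real model, the discriminator logits feed standard adversarial losses, and gradients flow through the differentiable renderer back to both latent branches, which is what makes the pipeline end-to-end trainable.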
Learn more