Audience
Developers and game makers who need to generate 3D objects conditioned on text or images
About Shap-E
This is the official code and model release for Shap-E, which generates 3D objects conditioned on text or images. You can sample a 3D model conditioned on a text prompt or on a synthetic view image; for the best results, remove the background from the input image. You can also load 3D models or a trimesh, create a batch of multiview renders and a point cloud, encode them into a latent, and render it back. For this to work, install Blender version 3.3.1 or higher.
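For reference, here is a minimal sketch of text-conditioned sampling based on the example notebooks in the Shap-E repository; the model names ('transmitter', 'text300M') and sampler settings are taken from those examples and may differ between releases:

import torch
from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import create_pan_cameras, decode_latent_images

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the latent decoder ('transmitter') and the text-conditional model.
xm = load_model('transmitter', device=device)
model = load_model('text300M', device=device)
diffusion = diffusion_from_config(load_config('diffusion'))

# Sample latents conditioned on a text prompt.
batch_size = 4
latents = sample_latents(
    batch_size=batch_size,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=['a shark'] * batch_size),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Render each sampled latent back to images from a ring of cameras.
cameras = create_pan_cameras(64, device)
for latent in latents:
    images = decode_latent_images(xm, latent, cameras, rendering_mode='nerf')

For image-conditioned sampling, the same notebooks load the 'image300M' model instead and pass the background-removed image through model_kwargs rather than a text prompt.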
Popular Alternatives
Stable 3D
For graphic designers, digital artists, and game developers, 3D content creation can be among the most complex and time-consuming tasks, often taking hours, sometimes days, to create a moderately complex 3D object. Stability AI is pleased to introduce a private preview of Stable 3D, an automatic process for generating concept-quality textured 3D objects that eliminates much of that complexity and allows a non-expert to generate a draft-quality 3D model in minutes by selecting an image or illustration or writing a text prompt. Objects created with Stable 3D are delivered in the standard ".obj" file format and can be further edited and improved in 3D tools like Blender and Maya, or imported into a game engine such as Unreal Engine 5 or Unity.
Learn more
Text2Mesh
Text2Mesh produces color and geometric details over a variety of source meshes, driven by a target text prompt. Our stylization results coherently blend unique and ostensibly unrelated combinations of text, capturing both global semantics and part-aware attributes. Our framework, Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details which conform to a target text prompt. We consider a disentangled representation of a 3D object: a fixed mesh input (content) coupled with a learned neural network, which we term the neural style field network. To modify style, we obtain a similarity score between a text prompt (describing style) and a stylized mesh by harnessing the representational power of CLIP. Text2Mesh requires neither a pre-trained generative model nor a specialized 3D mesh dataset, can handle low-quality meshes (non-manifold, with boundaries, etc.) of arbitrary genus, and does not require UV parameterization.
Learn more
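The core of Text2Mesh's objective is the CLIP similarity between renders of the stylized mesh and the target prompt. Here is a rough, illustrative sketch (not the authors' code; the rendered views are stand-ins supplied by the caller, and the real method wraps this score in the neural style field training loop) of how that scoring step could look with PyTorch and OpenAI's CLIP package:

import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def clip_similarity(rendered_views, prompt):
    # rendered_views: (N, 3, 224, 224) renders of the stylized mesh from
    # several viewpoints, normalized the way CLIP's preprocessing expects.
    text = clip.tokenize([prompt]).to(device)
    image_emb = model.encode_image(rendered_views.to(device))
    text_emb = model.encode_text(text)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    return (image_emb @ text_emb.T).mean()  # score maximized during stylization

# Toy usage with random tensors standing in for rendered views:
views = torch.rand(4, 3, 224, 224)
score = clip_similarity(views, "a colorful crochet candle")

In the actual pipeline, the views would come from a differentiable renderer so that the gradient of this score can update the neural style field's color and displacement predictions.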
GET3D
We generate a 3D SDF and a texture field via two latent codes. We utilize DMTet to extract a 3D surface mesh from the SDF and query the texture field at surface points to get colors. We train with adversarial losses defined on 2D images: a rasterization-based differentiable renderer produces RGB images and silhouettes, and two 2D discriminators, one on RGB images and one on silhouettes, classify whether the inputs are real or fake. The whole model is end-to-end trainable. As several industries move toward modeling massive 3D virtual worlds, the need for content creation tools that can scale in the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, making them immediately usable in downstream applications.
Learn more
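A highly simplified structural sketch of GET3D's two-branch generator is shown below. It is illustrative only, not NVIDIA's implementation; DMTet mesh extraction, the differentiable rasterizer, and the two 2D discriminators are noted only in comments, and all names are hypothetical:

import torch
import torch.nn as nn

class TwoBranchGenerator(nn.Module):
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        # Geometry branch: (geometry latent, xyz) -> signed distance value.
        self.sdf_net = nn.Sequential(
            nn.Linear(z_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Texture branch: (texture latent, surface xyz) -> RGB color.
        self.tex_net = nn.Sequential(
            nn.Linear(z_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def sdf(self, z_geo, points):
        z = z_geo.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.sdf_net(torch.cat([z, points], dim=-1))

    def texture(self, z_tex, surface_points):
        z = z_tex.unsqueeze(1).expand(-1, surface_points.shape[1], -1)
        return self.tex_net(torch.cat([z, surface_points], dim=-1))

# Toy usage: query the SDF on sample points and colors at (pretend) surface points.
gen = TwoBranchGenerator()
z_geo, z_tex = torch.randn(2, 128), torch.randn(2, 128)
points = torch.rand(2, 1024, 3) * 2 - 1
sdf_values = gen.sdf(z_geo, points)    # (2, 1024, 1)
colors = gen.texture(z_tex, points)    # (2, 1024, 3)
# In GET3D, DMTet would extract a mesh from the SDF, a differentiable
# rasterizer would render RGB images and silhouettes, and two 2D
# discriminators (one per output) would provide the adversarial losses.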
Poly
Poly is an AI-enabled texture creation tool that lets you quickly generate customized, 8K HD, seamlessly tileable textures with up to 32-bit PBR maps from a simple prompt (text and/or image) in seconds. It's perfect for use in 3D applications such as 3D modeling, character design, architecture visualization, game development, AR/VR world-building, and much more. We're thrilled to share the result of our team's research work with the community and hope you will find it useful and fun. Type in a prompt, select a texture material type, and watch as Poly creates a fully formed 32-bit EXR texture for you. You can use this to play around with Poly's AI, seeing what it is capable of and experimenting with prompting strategies. The dock at the bottom of the screen lets you switch views. You can view your past prompts, view a model in 3D, or view any of the six available physically based rendering (PBR) maps.
Learn more
Pricing
Starting Price:
Free
Free Version:
Free Version available.
Integrations
Company Information
OpenAI
United States
github.com/openai/shap-e
Product Details
Platforms Supported
SaaS
Training
Documentation
Support
Online