Shap-E
Generate 3D objects conditioned on text or images
...The model is built with a two-stage architecture: first an encoder that maps existing 3D assets into parameterizations of implicit functions, and then a conditional diffusion model trained on those parameterizations to generate new assets. Because it works at the level of implicit functions, Shap-E can render its outputs as both textured meshes and NeRF-style neural radiance fields. The repository contains sample notebooks (e.g. sample_text_to_3d.ipynb, sample_image_to_3d.ipynb) so users can try out text → 3D or image → 3D generation. ...
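The sketch below is a minimal text-to-3D example adapted from the sample_text_to_3d.ipynb notebook, not a definitive recipe: the helper names (load_model, sample_latents, decode_latent_mesh), the model names ("transmitter", "text300M"), and the sampler arguments follow that notebook and may differ across versions of the repository; the prompt and output filename are illustrative.

```python
import torch

from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# "transmitter" decodes latents into implicit-function parameters;
# "text300M" is the text-conditional diffusion model.
xm = load_model("transmitter", device=device)
model = load_model("text300M", device=device)
diffusion = diffusion_from_config(load_config("diffusion"))

# Sample latent codes conditioned on a text prompt
# (argument values taken from the sample notebook).
latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=["a red chair"]),  # hypothetical prompt
    progress=True,
    clamp_to_64=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode the first latent into a textured mesh and save it as a PLY file.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open("chair.ply", "wb") as f:
    mesh.write_ply(f)
```

Image → 3D follows the same pattern in the sample_image_to_3d.ipynb notebook, with the image-conditional model used in place of the text-conditional one.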