DragGAN vs. GET3D


About DragGAN

Synthesizing visual content that meets users' needs often requires flexible and precise control over the pose, shape, expression, and layout of the generated objects. Existing approaches gain controllability of generative adversarial networks (GANs) via manually annotated training data or a prior 3D model, and often lack flexibility, precision, and generality. In this work, we study a powerful but much less explored way of controlling GANs: "dragging" any points of the image so that they precisely reach target points in a user-interactive manner, as shown in Fig. 1. To achieve this, we propose DragGAN, which consists of two main components: 1) feature-based motion supervision that drives the handle point toward the target position, and 2) a new point-tracking approach that leverages discriminative GAN features to keep localizing the position of the handle points. With DragGAN, anyone can deform an image with precise control over where pixels go.
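The point-tracking component can be illustrated with a minimal sketch: after each motion-supervision step deforms the image, the handle point is re-localized by a nearest-neighbor search in feature space around its previous position. The NumPy function below is a toy stand-in, not DragGAN's implementation; the function name, array shapes, search radius, and L1 distance are illustrative assumptions, and the actual method tracks in StyleGAN2 feature maps rather than random arrays.

```python
import numpy as np

def track_point(feat_map, ref_feature, prev_pos, radius=3):
    """Nearest-neighbor point tracking in feature space (a sketch).

    feat_map: (H, W, C) feature map of the current (deformed) image.
    ref_feature: (C,) feature vector of the handle point in the initial image.
    prev_pos: (row, col) estimate of the handle point from the previous step.
    Searches a (2*radius+1)^2 patch around prev_pos and returns the position
    whose feature is closest to ref_feature in L1 distance.
    """
    H, W, _ = feat_map.shape
    r0, c0 = prev_pos
    best_pos, best_dist = prev_pos, np.inf
    for r in range(max(0, r0 - radius), min(H, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(W, c0 + radius + 1)):
            d = np.abs(feat_map[r, c] - ref_feature).sum()
            if d < best_dist:
                best_dist, best_pos = d, (r, c)
    return best_pos

# Toy example: a distinctive feature planted at (5, 7) is recovered
# even when the search starts from a stale estimate (4, 6).
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 16, 8))
ref = feats[5, 7].copy()
print(track_point(feats, ref, (4, 6)))  # → (5, 7)
```

Restricting the search to a small patch around the previous estimate is what keeps tracking cheap and stable under the small per-step deformations that motion supervision produces.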

About GET3D

As several industries move toward modeling massive 3D virtual worlds, the need for content-creation tools that can scale in the quantity, quality, and diversity of 3D content is becoming evident. In this work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines and are thus immediately usable in downstream applications. We generate a 3D signed distance field (SDF) and a texture field from two latent codes. We use DMTet to extract a 3D surface mesh from the SDF and query the texture field at surface points to obtain colors. We train with adversarial losses defined on 2D images: a rasterization-based differentiable renderer produces RGB images and silhouettes, and two 2D discriminators, one on RGB images and one on silhouettes, classify whether the inputs are real or fake. The whole model is end-to-end trainable.
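The geometry side of this pipeline can be sketched in miniature: sample an SDF on a grid, find sign changes between adjacent samples to bracket the surface, refine each crossing, and query the texture field at the resulting surface points. The snippet below uses an analytic sphere SDF and a toy position-based texture field as stand-ins for the learned networks, and simple per-edge bisection in place of DMTet's differentiable marching tetrahedra; every name and shape here is an illustrative assumption, not GET3D's implementation.

```python
import numpy as np

def sdf(p):
    # Analytic SDF of a unit sphere; stands in for the learned SDF network.
    return np.linalg.norm(p, axis=-1) - 1.0

def texture_field(p):
    # Toy texture field mapping position to an RGB color in [0, 1];
    # stands in for the learned texture network.
    return 0.5 * (p + 1.0)

# Sample the SDF on a regular grid; a sign change between two adjacent
# samples along x brackets a surface crossing on that grid edge.
n = 32
xs = np.linspace(-1.5, 1.5, n)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)  # (n, n, n, 3)
vals = sdf(grid)                                                  # (n, n, n)

surface_pts = []
for i in range(n - 1):
    mask = np.sign(vals[i]) != np.sign(vals[i + 1])
    if not mask.any():
        continue
    # Refine each bracketed crossing by bisection along the x edge.
    lo, hi = grid[i][mask], grid[i + 1][mask]
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        same = np.sign(sdf(mid)) == np.sign(sdf(lo))
        lo = np.where(same[:, None], mid, lo)
        hi = np.where(same[:, None], hi, mid)
    surface_pts.append(0.5 * (lo + hi))

surface = np.concatenate(surface_pts)   # points on the extracted surface
colors = texture_field(surface)         # texture queried at surface points
print(surface.shape[0] > 0, float(np.abs(sdf(surface)).max()) < 1e-6)  # → True True
```

In the full model this surface extraction is differentiable, which is what lets the 2D adversarial losses on rendered RGB images and silhouettes train the SDF and texture networks end to end.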

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (DragGAN)

Users who want to manipulate images using AI

Audience (GET3D)

Anyone seeking a generative model of high-quality 3D textured shapes learned from images

Support

Phone Support
24/7 Live Support
Online

API

Offers API

Pricing

DragGAN: Free
GET3D: No information available.

Reviews/Ratings

Neither DragGAN nor GET3D has been reviewed yet. Be the first to provide a review.

Training

Documentation
Webinars
Live Online
In Person

Company Information

DragGAN
Founded: 2023
vcai.mpi-inf.mpg.de/projects/DragGAN/

Company Information

NVIDIA
United States
nv-tlabs.github.io/GET3D/

Alternatives

Seed3D (ByteDance)

Integrations

No information available.