OpenAI has released a new tool called Shap·E that can generate 3D implicit functions from text. This is a significant step forward at the intersection of natural language processing (NLP) and computer vision. The tool uses a neural network to learn a mapping from text descriptions to 3D shapes, and it can then generate new shapes from new text inputs. The researchers behind Shap·E hope it will be used to create more realistic virtual environments and to improve the accuracy of computer vision systems. The tool is still in the early stages of development, but it has already shown promising results. Here is the research paper.
Shap·E is a 3D modeling AI that succeeds OpenAI's earlier Point·E. Unlike Point·E, which generates point clouds, Shap·E directly generates the parameters of implicit functions, which can be rendered both as textured meshes and as neural radiance fields (NeRFs). This key difference is what allows Shap·E to produce more detailed and accurate 3D models.
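To make "parameters of implicit functions" concrete, here is a minimal, purely illustrative PyTorch sketch (this is not Shap·E's actual architecture): an implicit function represents an object as a small network that maps a 3D coordinate to local density and color. Shap·E's generator outputs the weights of such a function instead of a fixed set of points.

```python
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """Toy implicit function: (x, y, z) -> (density, R, G, B)."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        out = self.net(xyz)
        density = torch.relu(out[..., :1])    # densities are non-negative
        color = torch.sigmoid(out[..., 1:])   # RGB constrained to [0, 1]
        return torch.cat([density, color], dim=-1)

# The object is now "the weights of this network": query it at any 3D
# coordinate, and a renderer can either march rays through it (NeRF-style)
# or extract a textured mesh from it.
field = ImplicitField()
samples = field(torch.rand(1024, 3))  # evaluate 1024 random 3D points
print(samples.shape)                  # torch.Size([1024, 4])
```

Because the representation is continuous rather than a discrete cloud of points, the same generated object can be queried at any resolution.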
In short, Shap·E can generate 3D artifacts directly from a prompt, much the same way image AIs like Midjourney generate images from text. If you've read our earlier articles, you are already familiar with how powerful those tools are. Here's a sample of a Shiba Inu we generated for this article.
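For the curious, producing a sample like the Shiba Inu above takes only a few lines of Python with the open-source shap-e package. The snippet below is adapted from the text-to-3D example notebook in the openai/shap-e repository at the time of writing; model names such as 'text300M' and 'transmitter' come from that repository and may change in future releases.

```python
import torch

from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Download the pretrained models: the text-conditional generator and the
# "transmitter", which decodes latents into implicit-function parameters.
xm = load_model("transmitter", device=device)
model = load_model("text300M", device=device)
diffusion = diffusion_from_config(load_config("diffusion"))

# Sample a latent representation conditioned on the text prompt.
latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,  # classifier-free guidance strength
    model_kwargs=dict(texts=["a shiba inu"]),
    progress=True,
    clip_denoised=True,
    use_fp16=(device.type == "cuda"),  # half precision only on GPU
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)
```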
The success of Shap·E lies in its use of NeRFs, which enable the generation of photorealistic renderings of 3D scenes. Shap·E combines NeRFs with diffusion models to efficiently generate detailed 3D models that capture the shape and texture of objects more accurately than Point·E. Shap·E is also much faster, taking only 13 seconds to generate a sample on a single NVIDIA V100 GPU, while Point·E took one to two minutes on the same hardware.
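Once the latents are sampled, the same example notebook shows how to render them as NeRF-style turntable images or export a textured mesh. The sketch below assumes the `xm`, `latents`, and `device` variables from the previous snippet and uses helper functions from the shap-e repository (again, subject to change).

```python
from shap_e.util.notebooks import (
    create_pan_cameras,
    decode_latent_images,
    decode_latent_mesh,
)

# Render a turntable of the sample with the NeRF renderer and save it
# as an animated GIF.
size = 64  # render resolution; higher values take longer
cameras = create_pan_cameras(size, device)
images = decode_latent_images(xm, latents[0], cameras, rendering_mode="nerf")
images[0].save(
    "shiba_inu.gif",
    save_all=True,
    append_images=images[1:],
    duration=100,
    loop=0,
)

# Alternatively, decode the same latent into a textured triangle mesh
# that can be imported into standard 3D tools.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open("shiba_inu.ply", "wb") as f:
    mesh.write_ply(f)
```

The fact that one latent can be decoded either way, as a NeRF render or as a mesh, is exactly the flexibility the implicit-function representation buys over Point·E's point clouds.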
Despite the computational intensity of the generation process and the AI model’s limitations in dealing with complex objects, Shap·E represents a significant improvement over Point·E and shows great promise for future development. Its ability to generate more accurate and detailed 3D models opens up new possibilities for applications in the domains of augmented reality and virtual reality.