Meta Platforms’ Research Division has introduced a new generative AI system, Meta 3D Gen, which creates high-quality 3D objects from text descriptions. According to the developers, the system surpasses existing models in both the quality of the generated assets and the speed of generation.
Key Features of 3D Gen
“This system can generate 3D objects with high-resolution textures,” Meta stated in a post on the Threads social network. The company also highlighted that the neural network significantly outperforms similar algorithms in the quality of generated objects and is 3 to 10 times faster at generation. According to reports, Meta 3D Gen can create a 3D object with textures from a simple text description in less than a minute. Functionally, the new algorithm resembles existing generative tools such as Midjourney and Adobe Firefly, which produce 2D images from text. One key difference is that 3D Gen creates models that support physically based rendering (PBR), which means the generated assets can be used in applications for modeling and rendering real-world objects, notes NIXSOLUTIONS.
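In practice, PBR support means the generated textures carry standard material parameters (such as base color, metallic, and roughness) that renderers use to light objects realistically. The sketch below illustrates the widely used metallic-roughness convention; it is a generic illustration, not Meta's implementation, and the names here are our own.

```python
from dataclasses import dataclass

@dataclass
class PBRMaterial:
    # Standard metallic-roughness parameters used by PBR renderers.
    base_color: tuple   # linear RGB albedo, each channel in [0, 1]
    metallic: float     # 0.0 = dielectric (plastic, wood), 1.0 = metal
    roughness: float    # 0.0 = mirror-smooth, 1.0 = fully diffuse

def specular_f0(material: PBRMaterial) -> tuple:
    """Base reflectivity (F0) per channel: dielectrics reflect about 4%
    of light head-on, while metals reflect their tinted base color."""
    dielectric_f0 = 0.04
    return tuple(
        dielectric_f0 * (1.0 - material.metallic) + c * material.metallic
        for c in material.base_color
    )

gold_like = PBRMaterial(base_color=(1.0, 0.77, 0.34), metallic=1.0, roughness=0.3)
plastic = PBRMaterial(base_color=(0.8, 0.1, 0.1), metallic=0.0, roughness=0.6)

print(specular_f0(gold_like))  # metals: F0 equals the base color
print(specular_f0(plastic))    # dielectrics: F0 is 0.04 on every channel
```

Because the lighting response is derived from physical parameters like these rather than baked into the image, a PBR-ready model can be relit consistently in any scene.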
Advanced Methodology and Artist Feedback
“Meta 3D Gen is a two-stage method that combines two components: one for converting text into 3D, and the other for converting text into textures,” explains the algorithm’s description. According to the developers, this approach enables “higher quality 3D generation for creating immersive content.” 3D Gen combines two foundation models, Meta AssetGen and TextureGen. Based on feedback from professional 3D artists, Meta asserts that its new technology is preferred over competing tools that also generate 3D objects from text descriptions.
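The two-stage design described above can be sketched as a simple pipeline: the first stage turns the prompt into geometry, and the second stage paints textures onto that geometry. The function names and data shapes below are hypothetical stand-ins, since Meta has not published a public API for 3D Gen.

```python
# Illustrative two-stage text-to-3D pipeline, matching the structure the
# description outlines. Both stages are stubs standing in for real models.

def text_to_mesh(prompt: str) -> dict:
    # Stage 1 (AssetGen-style): text -> untextured 3D asset.
    return {"prompt": prompt, "vertices": [], "faces": []}

def text_to_texture(mesh: dict, prompt: str) -> dict:
    # Stage 2 (TextureGen-style): text + geometry -> PBR texture maps.
    mesh["textures"] = {"base_color": None, "metallic_roughness": None}
    return mesh

def generate_3d_asset(prompt: str) -> dict:
    # Chain the stages: geometry first, then texturing conditioned on it.
    mesh = text_to_mesh(prompt)
    return text_to_texture(mesh, prompt)

asset = generate_3d_asset("a weathered bronze statue of a fox")
print(sorted(asset["textures"]))  # ['base_color', 'metallic_roughness']
```

Separating geometry from texturing lets each stage be refined independently, which is one plausible reason a staged design can improve both quality and speed.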
Meta Platforms continues to invest in 3D generative algorithms, and we’ll keep you updated on the latest developments.