A significant development has emerged in the world of AI-driven 3D content creation. A new tool, enabling 3D inpainting directly on meshes, is now accessible through a Google Colab notebook featuring a user-friendly Gradio interface. This marks a notable step forward, as publicly available tools specifically for 3D inpainting have been virtually non-existent until now.
This innovative tool builds on two existing technologies, Hi3DGen and Trellis. On top of them, the developer has implemented a novel inpainting capability, making it possible to modify specific regions of a 3D model while intelligently preserving the surrounding context. This opens up exciting possibilities for artists, designers, and researchers working with 3D assets.
The project is currently shared via Colab, inviting the community to explore, utilize, and potentially expand upon its capabilities.
What Exactly is 3D Inpainting?
Many are familiar with 2D image inpainting, where AI algorithms intelligently fill in missing or selected parts of a picture. Think of removing an object from a photo or restoring damaged areas – the AI analyzes the surrounding pixels to generate plausible content for the gap.
3D inpainting applies this concept to three-dimensional models. Instead of just working with pixels on a flat plane, it operates on the volumetric data (like voxels) or mesh structure of a 3D object. This allows users to select a specific region of a 3D model and have the AI regenerate or alter that area, aiming for a seamless blend with the existing geometry and texture. It’s a powerful technique for editing, repairing, or creatively modifying 3D assets.
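To make the idea concrete, here is a minimal, purely illustrative Python sketch of the masking logic, with random values standing in for what a real generative model would produce:

```python
import numpy as np

# Conceptual illustration only: a 64^3 occupancy grid standing in for a mesh,
# a boolean mask marking the region to inpaint, and a blend of original and
# regenerated content. A real tool uses learned generative models, not noise.
GRID = 64
voxels = np.zeros((GRID, GRID, GRID), dtype=np.float32)
voxels[16:48, 16:48, 16:48] = 1.0  # a solid cube as the "original model"

mask = np.zeros_like(voxels, dtype=bool)
mask[30:48, 30:48, 30:48] = True   # the region selected for inpainting

# A generative model would produce plausible geometry here; random values
# are a placeholder so the example runs end to end.
generated = np.random.rand(*voxels.shape).astype(np.float32)

# Keep the original content outside the mask, use generated content inside it.
result = np.where(mask, generated, voxels)
print(result.shape, result[0, 0, 0], result[32, 32, 32])
```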
A New Tool Built on Hi3DGen and Trellis
This particular 3D inpainting implementation leverages the capabilities of Hi3DGen and Trellis, likely utilizing their underlying generative models trained on 3D data. The developer’s contribution lies in adapting these frameworks to specifically handle the inpainting task – identifying a target region and using the generative AI to fill it based on the unmasked parts of the model and potentially a conditioning image.
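The exact implementation details haven't been published, but mask-based inpainting with diffusion-style generative models usually follows a well-known pattern (popularized in 2D by methods like RePaint): at each denoising step, the unmasked region is reset to a re-noised copy of the original content, so only the masked region is truly generated. A hedged sketch of that generic loop, not the tool's verbatim code:

```python
import numpy as np

def masked_inpaint(latents, mask, denoise_step, add_noise, steps=25):
    """Generic masked-denoising loop (RePaint-style sketch).

    At every step the unmasked region is reset to a re-noised copy of the
    original latents, so only the masked region is actually generated.
    """
    x = np.random.randn(*latents.shape)          # start from pure noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t)                   # model predicts a cleaner x
        known = add_noise(latents, t)            # original content at noise level t
        x = np.where(mask, x, known)             # keep the original outside the mask
    return x

# Dummy callables so the sketch runs; a real model replaces both.
demo = masked_inpaint(
    latents=np.zeros((8, 8, 8)),
    mask=np.zeros((8, 8, 8), dtype=bool),
    denoise_step=lambda x, t: x * 0.9,
    add_noise=lambda z, t: z + 0.01 * t * np.random.randn(*z.shape),
)
print(demo.shape)
```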
The result is a workflow accessible to users without needing complex local setups, thanks to Google Colab and the intuitive Gradio web interface. This lowers the barrier to entry for experimenting with this cutting-edge 3D inpainting technology.
How Does This 3D Inpainting Tool Work?
Using the tool involves several steps within the Colab environment. It requires some patience and attention, as parts of the process can be computationally intensive.
Setup and Preparation
The first stage involves running preparatory code cells within the Colab notebook. Be aware that each of these cells can take approximately 10 minutes to complete. It’s crucial to monitor their execution, as they might occasionally encounter issues or crash, requiring a restart of that specific cell. Successful completion of all prep cells is necessary before proceeding.
Inputting Your Data
Once the environment is ready, you need to provide two key inputs:
- Your Mesh: Upload your 3D model in the .ply file format.
- Conditioning Image: Provide an image that guides the inpainting process. The tool works best when this image is related to your model, such as a modified screenshot or a render. This helps the AI understand the desired style and context, reducing the chance of generating disconnected or out-of-place geometry. (A brief loading sketch follows this list.)
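For reference, loading both inputs in Python is straightforward. This sketch assumes the trimesh and Pillow libraries are available in the Colab environment; the file names are placeholders for your own assets:

```python
import trimesh
from PIL import Image

# Placeholder file names; substitute your own mesh and conditioning image.
mesh = trimesh.load("model.ply")                        # the .ply mesh to be edited
image = Image.open("conditioning.png").convert("RGB")   # guiding render/screenshot

print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
print(f"conditioning image size: {image.size}")
```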
Using the Gradio Interface
After uploading, the Gradio interface allows you to interactively prepare the inpainting task. You can:
- Position and scale your 3D model within the viewer.
- Define the specific region you want to inpaint by adjusting a selection widget (likely a bounding box or similar).

This visual interaction makes it easier to precisely target the area for modification.
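Under the hood, a bounding-box selection most likely reduces to a boolean mask over the voxel grid. The conversion below is a hypothetical sketch; the widget's actual output format isn't documented:

```python
import numpy as np

# Hypothetical: convert a bounding box in normalized coordinates into a
# boolean mask on the 64x64x64 voxel grid the model operates on.
GRID = 64

def box_to_mask(box_min, box_max, grid=GRID):
    """box_min/box_max: (x, y, z) corners in [0, 1] normalized model space."""
    mask = np.zeros((grid, grid, grid), dtype=bool)
    lo = (np.asarray(box_min) * grid).astype(int)
    hi = (np.asarray(box_max) * grid).astype(int)
    mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
    return mask

mask = box_to_mask((0.4, 0.4, 0.4), (0.9, 0.9, 0.9))
print(mask.sum(), "voxels selected")
```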
Key Parameters Explained
Compared to the original Trellis interface, this tool introduces specific parameters crucial for 3D inpainting:
- Shape Guidance: This parameter controls how strongly the generated inpainting adheres to the original shape of the underlying model within the masked region. It influences the blending between the original and generated parts. Early tests suggest setting this to a high value (e.g., 0.5 to 0.8) works well for smooth transitions that respect the base geometry.
- Low Interval (related to Shape Guidance): Using a low interval (e.g., less than 0.2) alongside high Shape Guidance seems beneficial for achieving smooth, shape-following results. Experimentation is key, as these are preliminary findings.
- Blur Kernel Size: This parameter blurs the boundary of the inpainting mask. A larger value creates a softer, more gradual transition between the original mesh and the inpainted section. Keep in mind the model operates on a 64x64x64 voxel grid, so even a small kernel size like 3 can represent a significant blur effect relative to the model’s resolution.
Other parameters likely function similarly to those in the base Trellis or Hi3DGen models.
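To see how these parameters might interact, here is a speculative sketch that assumes the common approach of blurring a binary mask into soft blend weights. The parameter names mirror the UI, but the internal math is an educated guess, not the tool's verified source:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def soften_mask(mask, blur_kernel_size=3):
    # On a 64^3 grid, even size=3 produces a noticeably soft boundary.
    return uniform_filter(mask.astype(np.float32), size=blur_kernel_size)

def blend(original, generated, soft_mask, shape_guidance=0.7):
    # Guess at the blend: high shape guidance pulls the generated region back
    # toward the original geometry; the softened mask feathers the seam.
    guided = shape_guidance * original + (1.0 - shape_guidance) * generated
    return soft_mask * guided + (1.0 - soft_mask) * original

grid = np.random.rand(64, 64, 64)
hard_mask = np.zeros((64, 64, 64), dtype=bool)
hard_mask[20:44, 20:44, 20:44] = True
out = blend(grid, np.random.rand(64, 64, 64), soften_mask(hard_mask))
print(out.shape)
```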
Why is This Significant? The Potential of 3D Inpainting
The availability of an accessible 3D inpainting tool opens up numerous possibilities:
- Repairing Models: Easily fix holes, artifacts, or unwanted geometry in 3D scans or existing models.
- Creative Modification: Selectively change parts of a model – add features, alter textures in specific areas, or reimagine sections of an object.
- Iterative Design: Quickly test variations on specific parts of a design without remodeling the entire object.
- Asset Customization: Modify existing 3D assets for games or simulations by inpainting changes directly onto the mesh.
It represents a more targeted approach to AI-driven 3D editing compared to generating entirely new models from scratch.
Community Focus and Future Potential
The developer explicitly encourages community involvement to further develop this tool. Several exciting avenues exist:
- Integration with Other Tools: A script exists to encode the model into latents suitable for Trellis. This hints at the potential for integrating this 3D inpainting functionality into popular node-based AI workflows like ComfyUI or even directly into 3D software like Blender via add-ons. (A short voxelization sketch follows this list.)
- 3D-to-3D Transformation: The framework can potentially be used for more complex 3D-to-3D tasks, where the inpainting is heavily guided by the original mesh structure but allows for significant stylistic or geometric changes within the selected region.
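As a starting point for such integrations, voxelizing a mesh onto roughly the 64x64x64 grid that Trellis-style models work with is easy with trimesh. The encoding step itself is left as a hypothetical placeholder, since the script's actual interface isn't documented here:

```python
import trimesh

# Speculative sketch: "model.ply" is a placeholder file name, and
# `encode_to_latents` below is a hypothetical stand-in for the
# developer's encoding script.
mesh = trimesh.load("model.ply")
pitch = mesh.extents.max() / 64.0         # voxel size so the longest axis spans ~64 cells
occupancy = mesh.voxelized(pitch).matrix  # boolean occupancy grid

# latents = encode_to_latents(occupancy)  # hypothetical entry point
print(occupancy.shape)
```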
This open approach could lead to rapid improvements and broader adoption.
Getting Started with the 3D Inpainting Tool
Ready to try it yourself? You can access the tool via the Google Colab notebook shared by the developer.
Remember the basic workflow:
- Open the Colab notebook.
- Carefully run each preparation cell, monitoring for completion (allow ~10 mins per cell).
- Upload your .ply mesh file and a relevant conditioning image.
- Use the Gradio app to position your model and define the inpainting region.
- Adjust parameters like Shape Guidance and Blur Kernel Size for desired results.
- Run the inpainting process.
The Future of AI-Powered 3D Editing
This new Colab implementation represents an exciting and practical step towards more sophisticated AI-assisted 3D workflows. While still experimental, the introduction of accessible 3D inpainting provides a powerful new capability for anyone working with 3D models. As the community engages with and builds upon this tool, we can expect further refinements and integrations that will continue to reshape the landscape of 3D content creation and modification. The ability to seamlessly edit and repair 3D meshes using AI is no longer just theoretical; it’s now a practical reality available for exploration.