Recent advancements in radiance fields have opened new avenues for creating high-quality 3D assets and scenes. Style transfer can enhance these 3D assets with diverse artistic styles, transforming creative expression. However, existing techniques are often slow or unable to localize style transfer to specific objects.
We introduce StyleSplat, a lightweight method for stylizing 3D objects in scenes represented by 3D Gaussians from reference style images. Our approach first learns a photorealistic representation of the scene using 3D Gaussian splatting while jointly segmenting individual 3D objects. We then use a nearest-neighbor feature matching loss to finetune the Gaussians of the selected objects, aligning their spherical harmonic coefficients with the style image to ensure consistency and visual appeal. StyleSplat allows for quick, customizable style transfer and localized stylization of multiple objects within a scene, each with a different style. We demonstrate its effectiveness across various 3D scenes and styles, showcasing enhanced control and customization in 3D creation.
Our approach: We first use SAM + DEVA to generate view-consistent 2D object masks across the input views. We then use these multi-view images and masks to learn the geometry and color of the 3D Gaussians while simultaneously learning a per-Gaussian feature vector. A linear classifier decodes these feature vectors into object labels, which are used to collect the Gaussians belonging to the user-specified objects. Finally, the SH coefficients of the selected Gaussians are finetuned with the nearest-neighbor feature matching (NNFM) loss to align their appearance with the style image.
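As a rough illustration of this last step, the sketch below shows one way an NNFM loss could be computed in PyTorch: every feature of the rendered view is matched to its cosine-nearest feature from the style image, and the average cosine distance to those matches is minimized. The feature extractor (e.g., a pretrained VGG encoder), the tensor shapes, and all names here are assumptions for illustration, not the released implementation; in StyleSplat the resulting gradients update only the SH coefficients of the selected Gaussians.

import torch
import torch.nn.functional as F

def nnfm_loss(render_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Nearest-neighbor feature matching (NNFM) loss sketch.

    render_feats: (N, C) features of the rendered view (e.g., VGG activations
                  at pixels belonging to the selected object).
    style_feats:  (M, C) features of the reference style image.
    """
    render_feats = F.normalize(render_feats, dim=-1)
    style_feats = F.normalize(style_feats, dim=-1)
    # Cosine similarity between every rendered feature and every style feature.
    sim = render_feats @ style_feats.T          # (N, M)
    nearest = sim.max(dim=1).values             # best style match per rendered feature
    return (1.0 - nearest).mean()               # average cosine distance to matches

# Toy usage with random tensors standing in for encoder features.
r = torch.randn(1024, 256)   # rendered-view features (N x C)
s = torch.randn(4096, 256)   # style-image features (M x C)
print(nnfm_loss(r, s))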
Our method also extends to stylizing multiple objects within a scene: we select two distinct objects and apply a different style image to each. The figure above illustrates the results for these scenes, showcasing both single-object and multi-object applications of our style transfer method in scenes containing many objects.
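A minimal, runnable sketch of how per-object styles might be dispatched once the linear classifier has assigned a label to every Gaussian; the labels, object ids, and style paths below are toy placeholders for illustration, not outputs of the released code.

import torch

# Toy per-Gaussian labels as they might come from the linear classifier
# (5 Gaussians, object ids 0-2). In practice there are millions of Gaussians.
gaussian_labels = torch.tensor([0, 2, 1, 2, 1])

# Map each user-chosen object id to a reference style image (paths illustrative).
object_styles = {1: "style_a.jpg", 2: "style_b.jpg"}

for object_id, style_path in object_styles.items():
    # Gather the Gaussians belonging to this object; only their SH coefficients
    # would then be finetuned with the NNFM loss from the sketch above.
    selected = (gaussian_labels == object_id).nonzero(as_tuple=True)[0]
    print(f"object {object_id}: stylize Gaussians {selected.tolist()} with {style_path}")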
@misc{jain2024stylesplat3dobjectstyle,
      title={StyleSplat: 3D Object Style Transfer with Gaussian Splatting},
      author={Sahil Jain and Avik Kuthiala and Prabhdeep Singh Sethi and Prakanshul Saxena},
      year={2024},
      eprint={2407.09473},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.09473},
}