SPLICE: Part-Level 3D Shape Editing from Local Semantic Extraction to Global Neural Mixing

CSSE, Shenzhen University
Pacific Graphics 2025
SPLICE Overview

Part-level editing results produced by SPLICE: Our method supports a wide range of intuitive editing operations, including sequential edits, copy, move, delete, rotate, scale, and mix, without requiring manual post-adjustment. The edited shapes remain structurally coherent and visually plausible. Additionally, SPLICE exhibits strong robustness under multi-step editing, consistently maintaining high reconstruction quality throughout the editing process.

Abstract

Neural implicit representations of 3D shapes have shown great potential in 3D shape editing due to their ability to model high-level semantics and continuous geometric representations. However, existing methods often suffer from limited editability, lack of part-level control, and unnatural results when modifying or rearranging shape parts. In this work, we present SPLICE, a novel part-level neural implicit representation of 3D shapes that enables intuitive, structure-aware, and high-fidelity shape editing. By encoding each shape part independently and positioning them using parameterized Gaussian ellipsoids, SPLICE effectively isolates part-specific features while discarding global context that may hinder flexible manipulation. A global attention-based decoder is then employed to integrate parts coherently, further enhanced by an attention-guiding filtering mechanism that prevents information leakage across symmetric or adjacent components. Through this architecture, SPLICE supports various part-level editing operations, including translation, rotation, scaling, deletion, duplication, and cross-shape part mixing. These operations enable users to flexibly explore design variations while preserving semantic consistency and maintaining structural plausibility. Extensive experiments demonstrate that SPLICE outperforms existing approaches both qualitatively and quantitatively across a diverse set of shape-editing tasks. Code will be released upon the acceptance of this paper.
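To make the part-level operations above concrete, here is a minimal sketch of what such edits could operate on, assuming each part is represented by a frozen geometry latent plus a Gaussian-ellipsoid proxy (center, orientation, per-axis extent). The dataclass fields, function names, and NumPy representation are our own illustrative assumptions, not the paper's data structures.

import numpy as np
from dataclasses import dataclass, replace

@dataclass
class PartProxy:
    mu: np.ndarray   # (3,)  ellipsoid center
    R:  np.ndarray   # (3,3) ellipsoid orientation
    s:  np.ndarray   # (3,)  per-axis extent
    z:  np.ndarray   # (d,)  part geometry latent (left untouched by edits)

# Rigid/affine edits act only on the proxy parameters, never on z.
def move(p, delta):     return replace(p, mu=p.mu + delta)
def rotate(p, R_edit):  return replace(p, R=R_edit @ p.R)   # rotate about the part's own center
def scale(p, factors):  return replace(p, s=p.s * factors)

# A shape is a set of proxies; delete, duplicate, and cross-shape mixing
# are ordinary list operations on that set.
chair = [PartProxy(np.zeros(3), np.eye(3), np.ones(3), np.random.randn(512))]
chair.append(move(chair[0], delta=np.array([0.3, 0.0, 0.0])))   # duplicate + offset
chair[0] = scale(chair[0], factors=np.array([1.0, 1.5, 1.0]))   # axial scale
del chair[1]                                                     # delete a part

After an edit, the modified proxy set is handed back to the decoder, which re-integrates the parts into a coherent implicit shape.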

Overview of SPLICE Pipeline

Overview of SPLICE pipeline
Figure 1: Overview of our SPLICE pipeline. Given a 3D shape decomposed into parts, we first apply Part Feature Extraction using a shared convolutional encoder \( f_{\mathrm{enc}} \) to obtain per-part geometry latent codes \( \{\mathbf{z}_i\} \) and Gaussian proxies \( \{\mathbf{g}_i\} \). In User Editing and Diffusion-based Refinement, these proxies can be directly modified by user operations (e.g., move, scale, mix) or adjusted by a latent diffusion model \( f_{\mathrm{adj}} \) to restore global coherence. The resulting updated proxies \( \{(\mathbf{z}'_i, \mathbf{g}'_i)\} \) are then processed in Pose Feature Mixing, where each \( \mathbf{g}'_i \) is encoded by a SIREN-based pose encoder \( \phi \), and combined with \( \mathbf{z}'_i \) via a multilayer perceptron \( f_{\mathrm{MLP}} \) to obtain the final part embedding \( \mathbf{h}_i \). Finally, in Attention-based Shape Decoding, sampled query points attend to \( \{\mathbf{h}_i\} \) through a cross-attention transformer \( f_{\mathrm{dec}} \), followed by occupancy decoding to reconstruct the final shape via marching cubes.
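The following is a minimal PyTorch-style sketch of the decoding path in Figure 1 (pose encoding \( \phi \), mixing \( f_{\mathrm{MLP}} \), and cross-attention decoding \( f_{\mathrm{dec}} \)). All module widths, the pose-parameter dimension, and the use of nn.MultiheadAttention are assumptions; the attention-guiding filter is only hinted at through the optional attention mask.

import torch
import torch.nn as nn

class Sine(nn.Module):
    """SIREN-style periodic activation."""
    def forward(self, x):
        return torch.sin(30.0 * x)

class PartEmbedder(nn.Module):
    """phi + f_MLP: fuse a part's geometry code z with its Gaussian proxy g."""
    def __init__(self, d_latent=256, d_pose=16, d_model=256):
        super().__init__()
        self.pose_enc = nn.Sequential(nn.Linear(d_pose, d_model), Sine(),
                                      nn.Linear(d_model, d_model), Sine())
        self.mix = nn.Sequential(nn.Linear(d_latent + d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, z, g):
        # z: (B, P, d_latent) part codes, g: (B, P, d_pose) proxy parameters
        return self.mix(torch.cat([z, self.pose_enc(g)], dim=-1))  # h: (B, P, d_model)

class OccupancyDecoder(nn.Module):
    """f_dec: query points cross-attend to the part embeddings, then predict occupancy."""
    def __init__(self, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.query_proj = nn.Linear(3, d_model)
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True) for _ in range(n_layers))
        self.occ_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                      nn.Linear(d_model, 1))

    def forward(self, xyz, h, attn_mask=None):
        # xyz: (B, Q, 3) query points; attn_mask can restrict which parts a query
        # sees, which is where an attention-guiding filter would plug in.
        q = self.query_proj(xyz)
        for attn in self.layers:
            out, _ = attn(q, h, h, attn_mask=attn_mask)
            q = q + out
        return self.occ_head(q).squeeze(-1)  # (B, Q) occupancy logits

# Usage: feed the edited (z', g') pairs, sample query points, threshold the
# logits, and run marching cubes to extract the edited mesh.
z, g = torch.randn(1, 8, 256), torch.randn(1, 8, 16)
xyz = torch.rand(1, 4096, 3) * 2.0 - 1.0
occ = OccupancyDecoder()(xyz, PartEmbedder()(z, g))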

Comparison of Copy Editing Results

Comparison of copy editing results
Figure 2: Comparison of the copy editing results between our method and SPAGHETTI.

Comparison of Delete Editing Results

Comparison of delete editing results
Figure 3: Comparison of delete editing results between our method and SPAGHETTI. The gray parts are deleted during editing.

Comparison of Move Editing Results

Comparison of move editing results
Figure 4: Comparison of move editing results among our method, SPAGHETTI, and DualSDF. Our method ensures both editing precision and the preservation of shape semantics. For example, in the second row, when the two chair legs are moved closer together, our method accurately completes the move while introducing only minor deformations to maintain a natural and plausible result.

Comparison of Rotation Editing Results

Comparison of rotation editing results
Figure 5: Comparison of rotation editing results between our method and SPAGHETTI. The purple parts are rotated during editing. By incorporating a diffusion model for optimization, our method achieves the desired rotation magnitude without causing unnecessary collateral rotations or distortions to the rest of the shape.
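The exact form of the refinement model \( f_{\mathrm{adj}} \) is not spelled out here, so the sketch below is only one plausible reading of how a latent diffusion model could "restore global coherence" after an edit: an inpainting-style reverse loop that resamples the un-edited Gaussian proxies while pinning the user-edited ones. The denoiser signature, the DDIM-style update, and all variable names are assumptions, not the paper's algorithm.

import torch

@torch.no_grad()
def refine_proxies(g_edit, fixed, denoiser, alphas_cumprod):
    """Resample the un-edited proxies while pinning the user-edited ones.
    g_edit: (P, d) proxy parameters after the user edit
    fixed : (P,)  bool mask, True for proxies the user has pinned
    denoiser(g_t, t): assumed noise-prediction network over the proxy set
    """
    T = alphas_cumprod.shape[0]
    g_t = torch.randn_like(g_edit)                              # start from noise
    for t in reversed(range(T)):
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        eps = denoiser(g_t, torch.full((1,), t))
        g0 = (g_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()      # predicted clean proxies
        g_t = a_prev.sqrt() * g0 + (1.0 - a_prev).sqrt() * eps  # deterministic DDIM step
        # overwrite the pinned proxies with a correspondingly-noised copy of g_edit
        g_known = a_prev.sqrt() * g_edit + (1.0 - a_prev).sqrt() * torch.randn_like(g_edit)
        g_t = torch.where(fixed[:, None], g_known, g_t)
    return torch.where(fixed[:, None], g_edit, g_t)             # exact edited values at t = 0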

Comparison of Scaling Editing Results

Comparison of scaling editing results
Figure 6: Comparison of scaling editing results among our method, SPAGHETTI, and DualSDF. The yellow parts are scaled during editing. Similar to rotation, our method leverages a diffusion model to adjust the shape encoding, enabling effective and precise axial scaling.

Comparison of Part Mixing Results

Comparison of part mixing results
Figure 7: Comparison of part mixing results between our method and SPAGHETTI. Since our method extracts features for each part independently, mixing parts from different shapes is handled as seamlessly as mixing parts from the same shape. This enables our approach to produce high-quality and coherent results.
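Because parts are encoded independently, cross-shape mixing amounts to assembling a new set of per-part entries before decoding. The snippet below is a purely illustrative sketch: the variable names, part counts, and selected indices are hypothetical.

import numpy as np

# Hypothetical per-part entries (z_i, g_i) as produced by the part encoder.
shape_a_parts = [(np.random.randn(512), np.random.randn(16)) for _ in range(6)]  # e.g. a chair
shape_b_parts = [(np.random.randn(512), np.random.randn(16)) for _ in range(6)]  # e.g. a sofa

# Cross-shape mixing is just building a new part set before decoding.
mixed = [shape_a_parts[0]] + [shape_b_parts[i] for i in (2, 3, 4, 5)]
# The decoder sees only this set of (z, g) pairs, so parts from different
# shapes are handled exactly like parts from the same shape.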

Editing Comparisons on Airplane Category

Editing comparisons on the airplane category
Figure 8: Editing comparisons on the airplane category using our method, DualSDF, and SPAGHETTI. Different colors indicate different types of editing operations.

BibTeX Citation


@inproceedings{Zhou2025SPLICE,
  author    = {Zhou, Jin and Yang, Hongliang and Xu, Pengfei and Huang, Hui},
  title     = {SPLICE: Part-Level 3D Shape Editing from Local Semantic Extraction to Global Neural Mixing},
  booktitle = {Eurographics / Pacific Graphics 2025 Conference Proceedings},
  year      = {2025},
  publisher = {The Eurographics Association},
  note      = {Full paper, open-access},
  url       = {https://diglib.eg.org/handle/10.2312/pg20251288},
}