MILo: Mesh-In-the-Loop
Gaussian Splatting for Detailed
and Efficient Surface Reconstruction

SIGGRAPH Asia 2025 - Journal Track
ACM Transactions on Graphics

Antoine Guédon1*, Diego Gomez1*, Nissim Maruani2, Bingchen Gong1, George Drettakis2, Maks Ovsjanikov1

1LIX, Ecole Polytechnique, IP Paris, France

2Inria & Université Côte d'Azur, France

*Both authors contributed equally to the paper.

teaser.png


We propose Mesh-in-the-Loop Gaussian Splatting (MILo), a novel differentiable mesh extraction framework that operates during the optimization of 3D Gaussian Splatting representations.

At every training iteration, we differentiably extract a mesh—including both vertex locations and connectivity—only from Gaussian parameters. This enables gradient flow from the mesh to Gaussians, allowing us to promote bidirectional consistency between volumetric (Gaussians) and surface (extracted mesh) representations.

This approach guides Gaussians toward configurations better suited for surface reconstruction, yielding higher-quality meshes with significantly fewer vertices and making the reconstructions more practical for downstream applications such as physics simulation and animation.

MILo can be plugged into any Gaussian Splatting representation, opening the door to new applications of Gaussian Splatting that require differentiable surface processing during training.

Updates


09-2025

Code release.

08-2025

MILo has been accepted to SIGGRAPH Asia 2025 - Journal Track (TOG)!

06-2025

Preprint release on arXiv.


Differentiable Gaussians-to-Mesh Extraction



pipeline.png

MILo provides a differentiable pipeline for extracting a mesh only from Gaussian parameters, including both vertex locations and connectivity. At each training iteration, MILo proceeds as follows:

  1. Each Gaussian spawns a set of pivots. These pivots are likely to lie near the surface of the scene, and their coordinates are differentiable with respect to the Gaussian parameters.
  2. Delaunay triangulation is applied to the pivots. The triangulation can be cached to avoid recomputing it at every iteration.
  3. SDF values are assigned to the pivots.
  4. Differentiable Marching Tetrahedra is used to extract a mesh from the pivots and SDF values.

MILo enables gradient flow from the extracted mesh to Gaussians. Supervision on this mesh (through differentiable depth and normal renderings) allows us to impose a soft prior on the Gaussians which results in better reconstructed surfaces. Conceptually, Gaussians can be seen as proxies for parameterizing a surface mesh extracted at every iteration.
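As a minimal sketch of this supervision (with hypothetical tensor names and illustrative loss weights, not the exact formulation from the paper), the bidirectional consistency can be expressed as a simple depth and normal agreement between the Gaussian renders and the mesh renders:

import torch
import torch.nn.functional as F

def consistency_loss(gs_depth, gs_normals, mesh_depth, mesh_normals,
                     depth_weight=1.0, normal_weight=0.1):
    # gs_depth / mesh_depth: (H, W) depth maps rendered from the Gaussians and the mesh.
    # gs_normals / mesh_normals: (3, H, W) normal maps for the same view.
    depth_term = (gs_depth - mesh_depth).abs().mean()
    normal_term = (1.0 - F.cosine_similarity(gs_normals, mesh_normals, dim=0)).mean()
    return depth_weight * depth_term + normal_weight * normal_term

Since the extracted mesh is itself a differentiable function of the Gaussian parameters, minimizing such a loss updates the Gaussians directly.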


Higher-Quality Meshes with Fewer Vertices


stump2.png

(a) Stump (320 MB)

garden0.png

(b) Garden (301 MB)

barn1.png

(c) Barn (313 MB)


MILo produces significantly lighter meshes, weighing between 60 MB and 400 MB depending on the scene and configuration. Compared to previous approaches such as Gaussian Opacity Fields or RaDe-GS, MILo improves reconstruction performance while generating an order of magnitude fewer mesh vertices.

Part of this efficiency is achieved by selecting only a subset of Gaussians to spawn pivots, specifically targeting those most likely to be located on the surface. To this end, we repurpose the importance sampling strategy introduced in Mini-Splatting2.

Despite their lightweight nature, our meshes capture the complete scene, including all background elements. This makes our method significantly more scalable than many concurrent approaches that rely on TSDF applied to regular grids—a technique that scales poorly and is typically limited to foreground-only reconstruction.


Depth-Order Regularization


(a) Colored mesh

(b) Mesh without depth-order regularization

(c) Mesh with depth-order regularization


MILo scales to full scenes and can extract meshes that include all background elements. However, without any additional learned prior, the background can contain inaccurate geometry and messy structures.

In our codebase, we provide an optional depth-order regularization loss that drastically improves the quality of the background. This loss relies on DepthAnythingV2 and enforces the ordering of depth values in the rendered depth maps to match the ordering in the predicted depth maps: if one pixel is predicted to be closer than another, it should also be closer in the rendered depth map.

This loss does not require the predicted depth maps to be multi-view consistent. Please note that this loss is not used in the paper.
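As a rough sketch of this idea (a hypothetical pairwise ranking formulation; the exact loss in the codebase may differ), one can sample random pixel pairs and penalize rendered depth pairs whose ordering contradicts the ordering in the predicted depth map:

import torch

def depth_order_loss(rendered_depth, predicted_depth, n_pairs=4096, margin=0.0):
    # rendered_depth, predicted_depth: (H, W) tensors for the same view.
    # Note: if the monocular network outputs inverse depth (as DepthAnythingV2 does),
    # the predicted ordering should be flipped before applying this loss.
    h, w = rendered_depth.shape
    device = rendered_depth.device
    idx_a = torch.randint(0, h * w, (n_pairs,), device=device)
    idx_b = torch.randint(0, h * w, (n_pairs,), device=device)
    rend_a, rend_b = rendered_depth.flatten()[idx_a], rendered_depth.flatten()[idx_b]
    pred_a, pred_b = predicted_depth.flatten()[idx_a], predicted_depth.flatten()[idx_b]
    # +1 if pixel a is predicted farther than pixel b, -1 otherwise.
    order = torch.sign(pred_a - pred_b)
    # Penalize pairs whose rendered ordering disagrees with the predicted ordering.
    return torch.relu(margin - order * (rend_a - rend_b)).mean()

Because only the relative ordering is used, the predicted depth maps do not need to be metric or multi-view consistent.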


Animation


(a) Initial scenes

(b) Editing and animating in Blender

(c) Resulting animated Gaussians

While MILo provides a differentiable solution for extracting meshes from 3DGS representations, it also implicitly encourages Gaussians to align with the surface of the mesh. As a result, any modification made to the mesh can be easily propagated to the Gaussians, making the reconstructed mesh an excellent proxy for editing and animating the 3DGS representation.

Similarly to the previous works SuGaR and Gaussian Frosting, we provide a Blender addon that allows combining, editing, and animating 3DGS representations simply by manipulating meshes reconstructed with MILo in Blender. No code is needed to use the addon.


Integrating MILo into Your Own Project


MILo can be easily integrated into your existing Gaussian Splatting pipeline. In milo.functional, we provide straightforward functions to leverage our differentiable Gaussians-to-Mesh pipeline in your own 3DGS projects.

These functions only require Gaussian parameters as inputs (means, scales, rotations, opacities) and can extract a mesh from these parameters in a differentiable manner, allowing you to perform differentiable operations on the surface mesh and backpropagate gradients directly to the Gaussians.

An example is provided below. Please refer to our codebase for more details.

import numpy as np
import torch

from functional import (
    sample_gaussians_on_surface,
    extract_gaussian_pivots,
    compute_initial_sdf_values,
    compute_delaunay_triangulation,
    extract_mesh,
    frustum_cull_mesh,
)

# Load or initialize a 3DGS-like model and training cameras
gaussians = ...
train_cameras = ...

# Define a simple wrapper for your Gaussian Splatting rendering function, 
# following this template. It will be used only for initializing SDF values.
# The wrapper should accept just a camera as input, and return a dictionary 
# with "render" and "depth" keys. Here is an example:
from gaussian_renderer.radegs import render_radegs
pipe = ...
background = torch.tensor([0., 0., 0.], device="cuda")
def render_func(view):
    render_pkg = render_radegs(
        viewpoint_camera=view, 
        pc=gaussians, 
        pipe=pipe, 
        bg_color=background, 
        kernel_size=0.0, 
        scaling_modifier=1.0, 
        require_coord=False, 
        require_depth=True
    )
    return {
        "render": render_pkg["render"],
        "depth": render_pkg["median_depth"],
    }

# Only the parameters of the Gaussians are needed for extracting the mesh.
means = gaussians.get_xyz
scales = gaussians.get_scaling
rotations = gaussians.get_rotation
opacities = gaussians.get_opacity

# Sample Gaussians on the surface.
# Should be performed only once, or just once in a while.
# In this example, we sample at most 600_000 Gaussians.
surface_gaussians_idx = sample_gaussians_on_surface(
    views=train_cameras,
    means=means,
    scales=scales,
    rotations=rotations,
    opacities=opacities,
    n_max_samples=600_000,
    scene_type='indoor',
)

# Compute initial SDF values for pivots. Should be performed only once.
# In the paper, we propose to learn optimal SDF values by maximizing the 
# consistency between volumetric renderings and surface mesh renderings.
initial_pivots_sdf = compute_initial_sdf_values(
    views=train_cameras,
    render_func=render_func,
    means=means,
    scales=scales,
    rotations=rotations,
    gaussian_idx=surface_gaussians_idx,
)

# Compute Delaunay Triangulation.
# Can be performed once in a while.
delaunay_tets = compute_delaunay_triangulation(
    means=means,
    scales=scales,
    rotations=rotations,
    gaussian_idx=surface_gaussians_idx,
)

# Differentiably extract a mesh from Gaussian parameters, including initial 
# or updated SDF values for the Gaussian pivots.
# This function is differentiable with respect to the parameters of the Gaussians, 
# as well as the SDF values. Can be performed at every training iteration.
mesh = extract_mesh(
    delaunay_tets=delaunay_tets,
    pivots_sdf=initial_pivots_sdf,
    means=means,
    scales=scales,
    rotations=rotations,
    gaussian_idx=surface_gaussians_idx,
)

# You can now apply any differentiable operation on the extracted mesh, 
# and backpropagate gradients back to the Gaussians!
# In the paper, we propose to use differentiable mesh rendering.
from scene.mesh import MeshRasterizer, MeshRenderer
renderer = MeshRenderer(MeshRasterizer(cameras=train_cameras))

# We cull the mesh based on the view frustum for more efficiency
i_view = np.random.randint(0, len(train_cameras))
mesh_render_pkg = renderer(
    frustum_cull_mesh(mesh, train_cameras[i_view]), 
    cam_idx=i_view, 
    return_depth=True, return_normals=True
)
mesh_depth = mesh_render_pkg["depth"]
mesh_normals = mesh_render_pkg["normals"]
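
# As an illustrative next step (a sketch, not the exact losses used in MILo),
# the mesh renders can be compared to the corresponding Gaussian renders for
# the same view, and the result backpropagated through the mesh to the Gaussians.
# We assume here that both depth maps share the same resolution.
gs_depth = render_func(train_cameras[i_view])["depth"]
depth_consistency = (mesh_depth - gs_depth).abs().mean()
depth_consistency.backward()  # Gradients flow back to the Gaussian parameters.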
      



Click on any scene below to open the interactive 3D viewer with both mesh and Gaussian Splatting representations. For a smooth 60 FPS experience, please make sure WebGL uses your graphics card and not your CPU.

Due to resource constraints, this gallery only displays results obtained with the low-resolution, base version of MILo: as a result, the Gaussian Splatting representations contain far fewer Gaussians than usual, and the meshes have far fewer vertices than those of concurrent approaches.

Still, these lighter meshes reach higher quality on T&T and DTU benchmarks than concurrent approaches. The mesh sizes reported below include vertex colors.

Please note that these meshes were not optimized with depth-order regularization; as a result, the background can contain inaccurate geometry and messy structures.

Stump
Mesh size: 349 MB
Nb Gaussians: 582K

Ignatius
Mesh size: 256 MB
Nb Gaussians: 417K

Red Robot (Shine Grey scene)
Mesh size: 150 MB
Nb Gaussians: 261K

Garden
Mesh size: 354 MB
Nb Gaussians: 684K

Buzz
Mesh size: 68 MB
Nb Gaussians: 120K

Bicycle
Mesh size: 298 MB
Nb Gaussians: 563K

Knight
Mesh size: 58 MB
Nb Gaussians: 103K

Blue Robot (Rising Freedom scene)
Mesh size: 61 MB
Nb Gaussians: 105K

Kitchen
Mesh size: 217 MB
Nb Gaussians: 409K

More Scenes

Coming Soon

 

BibTex


If you find this work useful for your research, please cite:

      @article{guedon2025milo,
        author       = {Gu{\'e}don, Antoine and Gomez, Diego and Maruani, Nissim and Gong, Bingchen and Drettakis, George and Ovsjanikov, Maks},
        title        = {MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction},
        journal      = {ACM Transactions on Graphics},
        number       = {},
        volume       = {},
        month        = {},
        year         = {2025},
        url          = {https://anttwo.github.io/milo/}
      }
    


Acknowledgements



This work was granted access to the HPC resources of IDRIS under the allocation 2024-AD011013387R2 made by GENCI.

We also thank the authors of many awesome related projects for their inspiring works and open-source implementations.


Further information


If you like this project, check out our previous works related to 3D reconstruction, radiance fields, and Gaussian splatting.

© You are welcome to copy the code of this webpage; please attribute the source with a link back to this page and remove the analytics.