MeshFeat: Multi-Resolution Features for Neural Fields on Meshes

*equal contribution
1Technical University of Munich
2Munich Center for Machine Learning

MeshFeat learns Neural Fields directly on a mesh.

Abstract

Parametric feature grid encodings have gained significant attention as an encoding approach for neural fields, since they allow for much smaller MLPs and thus considerably faster inference.

In this work, we propose MeshFeat, a parametric feature encoding tailored to meshes, for which we adapt the idea of multi-resolution feature grids from Euclidean space. We start from the structure provided by the given vertex topology and use a mesh simplification algorithm to construct a multi-resolution feature representation directly on the mesh.
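The multi-resolution construction can be illustrated with a minimal NumPy sketch. The vertex counts, level mappings, and feature dimensions below are toy values, not from the paper; we only assume that mesh simplification yields, per level, a map from each full-resolution vertex to a vertex of the simplified mesh, and that per-level features are summed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-resolution feature representation on a mesh (illustrative values).
# Each level stores a feature table on a simplified mesh plus a mapping from
# full-resolution vertices to the simplified mesh's vertices.
feat_dim = 4
levels = [
    {"n": 8, "map": np.arange(8)},                      # finest level: identity
    {"n": 4, "map": np.array([0, 0, 1, 1, 2, 2, 3, 3])},  # coarser level
    {"n": 2, "map": np.array([0, 0, 0, 0, 1, 1, 1, 1])},  # coarsest level
]
tables = [rng.normal(size=(lv["n"], feat_dim)) for lv in levels]

def vertex_feature(v):
    """Sum the feature contributions from all resolution levels for vertex v."""
    return sum(tab[lv["map"][v]] for lv, tab in zip(levels, tables))

def query(bary, tri):
    """Interpolate the summed multi-resolution features at a surface point,
    given barycentric coordinates `bary` inside triangle `tri`."""
    feats = np.stack([vertex_feature(v) for v in tri])  # (3, feat_dim)
    return bary @ feats                                 # (feat_dim,)

f = query(np.array([0.2, 0.3, 0.5]), [0, 1, 2])
```

Because the features live on vertices, a surface query reduces to one table lookup per level per triangle corner, followed by barycentric interpolation.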

The approach enables the use of small MLPs for neural fields on meshes, and we show a significant speed-up over previous representations while maintaining comparable reconstruction quality for texture reconstruction and BRDF representation. Given its intrinsic coupling to the vertices, the method is particularly well-suited for representations on deforming meshes, making it a good fit for object animation.
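The role of the small MLP can be sketched as follows: interpolate per-vertex features at the query point, then decode them with a tiny network. This is a hypothetical NumPy sketch with made-up sizes (the actual architecture and feature dimensions are described in the paper), shown for an RGB texture output:

```python
import numpy as np

rng = np.random.default_rng(1)

feat_dim, hidden, out_dim = 4, 16, 3      # toy sizes; out_dim=3 for RGB
vertex_feats = rng.normal(size=(10, feat_dim))  # per-vertex features (toy)

# Tiny MLP decoder: once the features carry the signal, one hidden layer
# suffices, which is what makes inference fast.
W1 = rng.normal(size=(feat_dim, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, out_dim)); b2 = np.zeros(out_dim)

def decode(f):
    return np.maximum(f @ W1 + b1, 0.0) @ W2 + b2  # ReLU MLP

def query_color(bary, tri):
    """Barycentric feature interpolation followed by the small MLP decoder."""
    f = bary @ vertex_feats[tri]  # (feat_dim,)
    return decode(f)              # (out_dim,)

color = query_color(np.array([0.2, 0.3, 0.5]), np.array([0, 1, 2]))
```

Shrinking the decoder this far is only possible because the multi-resolution feature tables, not the MLP, carry most of the signal.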

Video

Experiments

Inference speed-up

MeshFeat achieves an inference speed-up of more than an order of magnitude over the baseline methods. Moreover, a non-neural approach is only about twice as fast as MeshFeat, highlighting the method's efficiency.


Texture Reconstruction

MeshFeat enables high-quality reconstructions, matching state-of-the-art methods in visual fidelity.

MeshFeat on deforming meshes

MeshFeat supports mesh deformations natively, making it a good fit for object animation.
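The reason deformation comes for free is that features are bound to vertex indices, not vertex positions. A minimal sketch of this property, with toy shapes and a hypothetical deformation (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

vertices = rng.normal(size=(6, 3))       # rest-pose vertex positions
vertex_feats = rng.normal(size=(6, 4))   # learned per-vertex features

# An animated pose moves the vertices, but the feature table is untouched:
deformed = vertices + 0.1 * rng.normal(size=vertices.shape)

def surface_feature(bary, tri):
    """Feature at a surface point, identified by barycentric coordinates and
    vertex indices. Note: vertex positions never enter the lookup."""
    return bary @ vertex_feats[tri]

# The same (bary, tri) query returns the same feature on the rest pose and on
# the deformed pose, so the learned appearance travels with the surface.
f_rest = surface_feature(np.array([0.5, 0.25, 0.25]), np.array([0, 1, 2]))
```

In contrast, encodings defined over ambient Euclidean space would see different coordinates after deformation and would need to be re-evaluated or re-trained.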

Reference Mesh (Face)


Reference Mesh (Elephant)


Deformed Meshes (Face)


Animated deformations (Elephant)


BibTeX


      @inproceedings{mahajan2024meshFeat,
        author = {Mihir Mahajan and Florian Hofherr and Daniel Cremers},
        title = {MeshFeat: Multi-Resolution Features for Neural Fields on Meshes},
        booktitle = {European Conference on Computer Vision (ECCV)},
        year = {2024},
        eprint = {2407.13592},
        eprinttype = {arXiv},
      }