Supplementary Material for LeMoRe
- DOI: 10.60864/e7pm-e443
- Submitted by: Mian Muhammad N...
- Last updated: 5 February 2025 - 3:58pm
- Document Type: Supplementary Material
Lightweight semantic segmentation is essential for many downstream vision tasks. Unfortunately, existing methods often struggle to balance efficiency and performance due to the complexity of feature modeling: they are constrained by rigid architectures and implicit representation learning, and are often characterized by parameter-heavy designs and a reliance on computationally intensive Vision Transformer-based frameworks. In this work, we introduce an efficient paradigm that synergizes explicit and implicit modeling to balance computational efficiency with representational fidelity. Our method combines well-defined Cartesian directions with explicitly modeled views and implicitly inferred intermediate representations, capturing global dependencies efficiently through a nested attention mechanism. Extensive experiments on challenging datasets, including ADE20K, Cityscapes, Pascal Context, and COCO-Stuff, demonstrate that LeMoRe strikes an effective balance between performance and efficiency.
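The abstract does not spell out the nested attention mechanism, but the general idea it gestures at — compressing the feature map into a small set of explicitly modeled view tokens, letting those tokens attend to one another, and then having every pixel token query the refined views — can be sketched as follows. This is a hypothetical NumPy illustration of that two-level pattern, not LeMoRe's actual implementation; the pooling into views, the number of views, and all function names here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def nested_attention(features, num_views=4):
    """Illustrative two-level (nested) attention sketch.

    features: (n, d) array of flattened pixel tokens.
    num_views: hypothetical count of compact "view" tokens.
    """
    n, d = features.shape
    # Inner level: pool tokens into a few compact view tokens
    # (a stand-in for the paper's explicitly modeled views).
    chunks = np.array_split(np.arange(n), num_views)
    views = np.stack([features[i].mean(axis=0) for i in chunks])  # (num_views, d)
    # Views exchange information globally among themselves.
    views = attention(views, views, views)
    # Outer level: every pixel token queries the refined views,
    # capturing global dependencies at O(n * num_views) cost
    # instead of the O(n^2) of full self-attention.
    return attention(features, views, views)

feats = np.random.default_rng(0).normal(size=(64, 8))
out = nested_attention(feats)
print(out.shape)  # (64, 8)
```

Because the pixel tokens only ever attend to a fixed, small number of view tokens, this pattern keeps the cost linear in the number of pixels, which is the kind of efficiency/fidelity trade-off the abstract describes.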