NEF: Neural Error Fields for Follow-up Training with Fewer Rays
A Neural Radiance Field (NeRF) represents a scene by learning its view-dependent properties from a given set of images through neural network training. When the initial image set is insufficient, a follow-up photographing session and additional training may be required to improve the final view synthesis. For this purpose, we introduce a new variant of NeRF training analysis, termed the Neural Error Field (NEF). NEF visualizes and identifies view-dependent errors so that the number of ray samples used in the follow-up training can be reduced.
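The abstract gives no implementation details; the snippet below is only a minimal sketch of the general idea of error-guided ray subsampling, where rays flagged as high-error by a per-ray estimate are drawn more often for follow-up training. The function `sample_rays_by_error` and its inputs are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def sample_rays_by_error(ray_errors, num_samples, temperature=1.0, rng=None):
    """Draw a subset of rays for follow-up training, favoring rays with high
    predicted error. `ray_errors` (shape (N,)) stands in for the per-ray
    estimates a neural error field might provide."""
    rng = rng or np.random.default_rng()
    weights = np.exp(ray_errors / temperature)   # softmax-style weighting
    probs = weights / weights.sum()
    return rng.choice(len(ray_errors), size=num_samples, replace=False, p=probs)

# Toy usage: keep 20% of the rays, biased toward high-error regions.
errors = np.random.rand(10_000)                  # stand-in for NEF predictions
selected = sample_rays_by_error(errors, num_samples=2_000)
```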
Supplementary Materials for Iterative Self-Improvement of Vision Language Models for Image Scoring and Self-Explanation
Supplementary Materials for Dynamic Multi-Object Sketch Animation via Scene Segmentation and Iterative Text-to-Video Optimization
The supplementary material contains information on the generated strokes for each sketch, the filenames of the sketch animation GIFs, the prompts used for the sketches, and illustrations of the separation process for individual objects. Additionally, the GIFs.zip file inside Supplements.zip contains the generated GIF animations for inspection.
Supplementary Materials for ICIP 2025
ESCT3D
Recent advancements in text-driven 3D content generation highlight several challenges. Surveys show that users often provide simple text inputs while expecting high-quality results. Generating optimal 3D content from minimal prompts is difficult due to the strong dependency of text-to-3D models on input quality. Moreover, the generation process exhibits high variability, often requiring many attempts to meet user expectations, which reduces efficiency. To address this, we propose leveraging GPT-4V for self-optimization, enhancing generation efficiency and enabling satisfactory results in a single attempt.
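As a rough illustration of such a self-optimization loop (not the authors' pipeline), the sketch below shows a vision-language model iteratively scoring rendered views of a candidate and refining the prompt until the result is judged satisfactory; `generate_3d` and `vlm_feedback` are hypothetical placeholders for the text-to-3D backend and the GPT-4V call.

```python
def self_optimizing_generation(user_prompt, generate_3d, vlm_feedback,
                               max_rounds=3, target_score=0.8):
    """Hypothetical self-optimization loop: a vision-language model scores
    rendered views of each candidate and suggests a richer prompt, so a simple
    user prompt can still converge to a satisfactory asset in few attempts.
    `generate_3d` and `vlm_feedback` are placeholders, not a real API."""
    prompt = user_prompt
    best_asset, best_score = None, float("-inf")
    for _ in range(max_rounds):
        asset, rendered_views = generate_3d(prompt)
        score, refined_prompt = vlm_feedback(user_prompt, rendered_views)
        if score > best_score:
            best_asset, best_score = asset, score
        if score >= target_score:          # good enough on this attempt
            break
        prompt = refined_prompt             # try again with the enriched prompt
    return best_asset, best_score
```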
practice
Practice for uploading
Relational Representation Distillation Supplementary Material
RESSCAL3D++: Joint Acquisition and Semantic Segmentation of 3D Point Clouds
3D scene understanding is crucial for facilitating seamless interaction between digital devices and the physical world. Real-time capturing and processing of the 3D scene are essential for achieving this seamless integration. While existing approaches typically separate acquisition and processing for each frame, the advent of resolution-scalable 3D sensors offers an opportunity to overcome this paradigm and fully leverage the otherwise wasted acquisition time to initiate processing.
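The following sketch only illustrates the pipelining idea of overlapping acquisition and processing with a resolution-scalable sensor: while the next resolution level is being captured, the previously acquired points are already being segmented. `acquire_next_resolution` and `segment_increment` are hypothetical placeholders, not the RESSCAL3D++ implementation.

```python
import queue
import threading

def acquire_and_segment(acquire_next_resolution, segment_increment, num_levels):
    """Hypothetical pipeline: acquisition runs in a background thread and hands
    each resolution increment to the segmentation stage, so acquisition time is
    not wasted waiting. Both callables are placeholders."""
    buffer = queue.Queue()

    def producer():
        for level in range(num_levels):
            buffer.put((level, acquire_next_resolution(level)))
        buffer.put(None)                    # sentinel: acquisition finished

    threading.Thread(target=producer, daemon=True).start()

    labels = []
    while (item := buffer.get()) is not None:
        level, points = item
        labels.append(segment_increment(level, points))  # overlaps next capture
    return labels
```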
Correlation-Aware Joint Pruning-Quantization Using Graph Neural Networks
Deep learning in image classification has achieved remarkable success but at the cost of high resource demands. Model compression through automatic joint pruning-quantization addresses this issue, yet most existing techniques overlook a critical aspect: layer correlations. These correlations are essential as they expose redundant computations across layers, and leveraging them facilitates efficient design space exploration. This study employs Graph Neural Networks (GNNs) to learn these inter-layer relationships, thereby optimizing the pruning-quantization strategy for the targeted model.
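A minimal sketch of the general idea, assuming layers are nodes of a graph whose features describe per-layer statistics; a single message-passing step mixes neighbor information before a head predicts a pruning ratio and a bit-width per layer. The architecture, features, and output ranges below are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class LayerGraphPolicy(nn.Module):
    """Toy GNN policy: nodes = layers (features such as normalized #params,
    FLOPs, depth), edges = connected layers. One mean-aggregation step, then a
    head emits [pruning ratio, bit-width] per layer (illustrative only)."""
    def __init__(self, in_dim=3, hidden=32):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 2)

    def forward(self, node_feats, adj):
        h = torch.relu(self.encode(node_feats))            # (L, hidden)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(h + adj @ self.message(h) / deg)    # neighbor mean
        out = self.head(h)
        ratio = torch.sigmoid(out[:, 0])                   # pruning ratio in (0, 1)
        bits = 2 + 6 * torch.sigmoid(out[:, 1])            # roughly 2..8 bits
        return ratio, bits

# Toy usage on a 4-layer chain.
feats = torch.rand(4, 3)
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                    [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
ratios, bitwidths = LayerGraphPolicy()(feats, adj)
```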
Adversarial Robustness for Deep Metric Learning
Deep Metric Learning (DML) based on Convolutional Neural Networks (CNNs) is vulnerable to adversarial attacks. Adversarial training, where adversarial samples are generated at each iteration, is one of the prominent defense techniques for robust DML. However, adversarial training increases computational complexity and causes a trade-off between robustness and generalization. This study proposes a lightweight, robust DML framework that learns a non-linear projection to map the embeddings of a CNN into an adversarially robust space.
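As a rough sketch of the general idea (not the authors' exact method), a small non-linear projection head can be trained on top of frozen CNN embeddings with a triplet loss on clean samples plus a term that pulls adversarially perturbed embeddings back toward their clean projections; the dimensions, loss weighting, and helper names below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RobustProjection(nn.Module):
    """Small non-linear projection head applied to (frozen) CNN embeddings;
    only this head is trained, which keeps the cost far below full adversarial
    training of the backbone. Dimensions are illustrative."""
    def __init__(self, in_dim=512, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def triplet_on_clean_and_adv(proj, anchor, positive, negative, anchor_adv,
                             margin=0.2, robust_weight=1.0):
    """Hypothetical objective: the usual triplet loss on clean embeddings plus
    a term mapping the adversarially perturbed anchor near its clean projection."""
    a, p, n = proj(anchor), proj(positive), proj(negative)
    clean = F.triplet_margin_loss(a, p, n, margin=margin)
    robust = (proj(anchor_adv) - a.detach()).pow(2).sum(dim=-1).mean()
    return clean + robust_weight * robust
```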