
INVESTIGATING ROBUSTNESS OF UNSUPERVISED STYLEGAN IMAGE RESTORATION
Recently, generative priors have shown significant improvement for unsupervised image restoration. This study explores the incorporation of multiple loss functions that capture various perceptual and structural aspects of image quality. Our proposed method improves robustness across multiple tasks, including denoising, upsampling, inpainting, and deartifacting, by utilizing a comprehensive loss function based on Learned Perceptual Image Patch Similarity (LPIPS), Multi-Scale Structural Similarity Index Measure (MS-SSIM), consistency, feature, and gradient losses.
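The abstract names the individual loss terms but not how they are combined. The sketch below shows one plausible composition in PyTorch, assuming the publicly available `lpips` and `pytorch_msssim` packages; the weights are illustrative, and the consistency and feature terms are omitted because the abstract does not define them.

```python
# Minimal sketch (not the authors' implementation) of a composite restoration loss
# combining LPIPS, MS-SSIM, and an image-gradient term; weights are illustrative.
import lpips                        # pip install lpips
from pytorch_msssim import ms_ssim  # pip install pytorch-msssim

lpips_fn = lpips.LPIPS(net="vgg")   # LPIPS perceptual distance, expects inputs in [-1, 1]

def gradient_loss(pred, target):
    """L1 distance between horizontal/vertical finite-difference image gradients."""
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    return (dx_p - dx_t).abs().mean() + (dy_p - dy_t).abs().mean()

def composite_loss(pred, target, weights=(1.0, 1.0, 1.0)):
    """pred, target: (B, 3, H, W) tensors in [0, 1]; weights are illustrative only."""
    l_lpips = lpips_fn(pred * 2 - 1, target * 2 - 1).mean()   # rescale to [-1, 1] for LPIPS
    l_msssim = 1.0 - ms_ssim(pred, target, data_range=1.0)    # MS-SSIM is a similarity, so use 1 - value
    l_grad = gradient_loss(pred, target)
    return weights[0] * l_lpips + weights[1] * l_msssim + weights[2] * l_grad
```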


This document contains the supplementary material for ICIP 2024 paper #2494, "An End-to-End Class-Aware and Attention-Guided Model for Object State Classification".

Supplementary Materials
Supplementary materials for the paper on REBIS.

Place-NeRFs
We present Place-NeRFs, a scalable approach to large-scale 3D scene reconstruction that subdivides scenes into non-overlapping regions, each handled by an off-the-shelf NeRF model, striking a balance between reconstruction quality and efficient use of computational resources. By leveraging rough single-view depth estimation and visibility graphs, Place-NeRFs effectively groups spatially correlated photospheres, enabling independent volumetric reconstructions. This approach significantly reduces processing time and enhances scalability during NeRF model training.
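The abstract does not give the exact grouping procedure, so the following is only a minimal sketch of the idea it describes: connect photospheres in a visibility graph when their rough depth-derived visible ranges overlap, then take connected components as the regions that independent NeRF models reconstruct. The `Photosphere` fields and the co-visibility heuristic are assumptions for illustration.

```python
# Hypothetical sketch of grouping photospheres via a visibility graph; not the
# paper's algorithm. Edges connect captures whose estimated visible ranges overlap,
# and union-find extracts the non-overlapping groups.
from dataclasses import dataclass
from itertools import combinations
import numpy as np

@dataclass
class Photosphere:
    position: np.ndarray   # camera center, shape (3,)
    max_depth: float       # rough visible radius from single-view depth estimation

def covisible(a: Photosphere, b: Photosphere) -> bool:
    # Heuristic: overlapping visibility spheres suggest shared scene content.
    return float(np.linalg.norm(a.position - b.position)) < a.max_depth + b.max_depth

def group_regions(spheres: list) -> list:
    """Return lists of photosphere indices, one list per connected component."""
    parent = list(range(len(spheres)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(len(spheres)), 2):
        if covisible(spheres[i], spheres[j]):
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(spheres)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```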

Supplementary Material for Effective relationship between characteristics of training data and learning progress on knowledge distillation
In image recognition, knowledge distillation is a valuable approach for training a compact model with high accuracy by using the outputs of a highly accurate large model as training labels. Studies of knowledge distillation have shown the usefulness of high-entropy training data generated by image-mixing data augmentation techniques. Other strategies, such as curriculum learning, have also been proposed to improve model generalization by controlling the difficulty of the training data over the course of learning.
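For reference, the standard distillation objective the abstract builds on blends a hard-label cross-entropy term with a KL term toward the teacher's temperature-softened outputs. The sketch below is a generic PyTorch version; the temperature and weighting are illustrative, not values from the paper.

```python
# Generic Hinton-style knowledge-distillation loss; T and alpha are illustrative.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend cross-entropy on hard labels with KL divergence to the teacher's
    temperature-softened distribution. Shapes: logits (B, C), labels (B,)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients after temperature softening
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```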

SUPPLEMENT FOR BIDIRECTIONAL FLOW FIELDS FOR SPARSE INPUT NOVEL VIEW SYNTHESIS OF DYNAMIC SCENES
Supplemental material


NEF: Neural Error Fields for Follow-up Training with Fewer Rays
A Neural Radiance Field (NeRF) can represent a scene by capturing view-dependent properties from a specific set of images through neural network training. When the initial image set is insufficient, a follow-up photographing session and additional training may be needed to improve the final view synthesis. For this purpose, we introduce a new variant of NeRF training analysis, termed the Neural Error Field (NEF). NEF visualizes and identifies view-dependent errors to reduce the number of ray samples used in the follow-up training.
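One way a per-pixel error estimate could reduce follow-up ray counts is importance sampling: draw rays with probability proportional to predicted error so that well-reconstructed pixels receive few samples. The sketch below illustrates this under assumptions; the error map and ray budget are hypothetical inputs, not the paper's exact procedure.

```python
# Hypothetical error-weighted ray selection for follow-up training; a sketch of the
# general idea, not the NEF paper's procedure.
import numpy as np

def sample_rays_by_error(error_map: np.ndarray, num_rays: int, rng=None) -> np.ndarray:
    """error_map: (H, W) non-negative predicted errors; returns (num_rays, 2)
    integer pixel coordinates (row, col) sampled proportionally to error."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = error_map.shape
    probs = error_map.reshape(-1).astype(np.float64) + 1e-8   # avoid all-zero probabilities
    probs /= probs.sum()
    idx = rng.choice(h * w, size=num_rays, replace=False, p=probs)
    return np.stack(np.unravel_index(idx, (h, w)), axis=1)
```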

Supplementary Materials for Iterative Self-Improvement of Vision Language Models for Image Scoring and Self-Explanation