Accurate segmentation of humans from live video is an important problem in developing immersive video experiences. We propose to extract human segmentation information from color and depth cues in a video using multiple modeling techniques. Prior information from human skeleton data is fused with the depth and color models to obtain the final segmentation within a graph-cut framework. The proposed method runs in real time on live video using a single CPU and is shown to quantitatively outperform methods that directly fuse color and depth data.
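To illustrate the fusion step, below is a minimal sketch of combining per-pixel foreground probabilities from color, depth, and skeleton cues into the unary (data) terms of a graph-cut energy. The probability maps, fusion weights, and function names are all hypothetical illustrations, not the paper's actual models; a full implementation would add a pairwise smoothness term and solve the resulting energy with a max-flow/min-cut algorithm.

```python
import numpy as np

def fuse_foreground_probabilities(p_color, p_depth, p_skeleton,
                                  w_color=0.4, w_depth=0.4, w_skeleton=0.2):
    """Fuse per-pixel foreground probabilities from the three cues.

    Inputs are hypothetical HxW arrays in [0, 1]; the weights are
    illustrative, not taken from the paper.
    """
    p = w_color * p_color + w_depth * p_depth + w_skeleton * p_skeleton
    return p / (w_color + w_depth + w_skeleton)

def unary_costs(p_fg, eps=1e-6):
    """Negative log-likelihood unary terms of a graph-cut energy:
    the cost of labeling each pixel foreground vs. background."""
    p = np.clip(p_fg, eps, 1.0 - eps)
    return -np.log(p), -np.log(1.0 - p)  # (foreground cost, background cost)

# Toy 2x2 probability maps standing in for the learned color/depth models
# and the skeleton prior.
p_color = np.array([[0.9, 0.2], [0.8, 0.1]])
p_depth = np.array([[0.8, 0.3], [0.9, 0.2]])
p_skel  = np.array([[1.0, 0.0], [1.0, 0.0]])

p_fg = fuse_foreground_probabilities(p_color, p_depth, p_skel)
cost_fg, cost_bg = unary_costs(p_fg)

# Without the pairwise smoothness term, minimizing the energy reduces to
# a per-pixel comparison of the two unary costs.
labels = (cost_fg < cost_bg).astype(np.uint8)
print(labels)
# → [[1 0]
#    [1 0]]
```

In the full graph-cut formulation, the pairwise term penalizes label changes between neighboring pixels with similar color/depth, which is what yields spatially coherent silhouettes rather than this independent per-pixel decision.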
