Adaptive Viewport for 360-Degree Video Players

Current encoding and transmission of 360-degree video transfers content for every viewing direction in each frame, even though only a small portion of it is visible to the viewer at any point in time. This results in significant wasted bandwidth, a compromise in the size and quality of the video, or both.
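To make the waste concrete, here is a back-of-the-envelope sketch (the 100-by-90-degree field of view is an assumed typical headset value, not a figure from this project): the viewport covers only about a fifth of the sphere, so roughly 80% of every transmitted frame is never displayed.

    import math

    def visible_fraction(h_fov_deg, v_fov_deg):
        """Fraction of the full sphere (4*pi steradians) covered by a
        viewport spanning h_fov_deg of longitude and v_fov_deg of
        latitude, centred on the equator."""
        h = math.radians(h_fov_deg)
        v = math.radians(v_fov_deg)
        # Solid angle of a lat-long box:
        #   delta_lon * (sin(lat_max) - sin(lat_min))
        solid_angle = h * 2.0 * math.sin(v / 2.0)
        return solid_angle / (4.0 * math.pi)

    # An assumed 100 x 90 degree headset FOV sees under 20% of the sphere.
    print(f"{visible_fraction(100, 90):.1%}")  # -> 19.6%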

Our research aims to identify patterns in viewer behavior and to develop heuristics that reduce the amount of information sent per frame, exploiting the limited visibility at the viewer's end.
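One way such a heuristic could be realized, sketched below purely for illustration rather than as the method this project has adopted, is tile-based delivery: partition the equirectangular frame into a grid and transmit only the tiles that overlap the predicted viewport. The function name, the 8x4 grid, and the field-of-view defaults are all assumptions.

    def tiles_to_send(yaw_deg, pitch_deg, h_fov_deg=100, v_fov_deg=90,
                      cols=8, rows=4):
        """Return (row, col) indices of the equirectangular tiles that
        overlap a viewport centred at (yaw_deg, pitch_deg)."""
        sent = set()
        tile_w, tile_h = 360 / cols, 180 / rows
        lon_min, lon_max = yaw_deg - h_fov_deg / 2, yaw_deg + h_fov_deg / 2
        lat_min = max(-90, pitch_deg - v_fov_deg / 2)
        lat_max = min(90, pitch_deg + v_fov_deg / 2)
        for r in range(rows):
            # Row r spans latitudes [90 - (r+1)*tile_h, 90 - r*tile_h].
            t_lat_min, t_lat_max = 90 - (r + 1) * tile_h, 90 - r * tile_h
            if t_lat_max <= lat_min or t_lat_min >= lat_max:
                continue
            for c in range(cols):
                t_lon_min = -180 + c * tile_w
                # Test overlap at three longitude shifts to handle the
                # wrap-around at the +/-180 degree seam.
                for shift in (-360, 0, 360):
                    if (t_lon_min + shift < lon_max and
                            t_lon_min + tile_w + shift > lon_min):
                        sent.add((r, c))
                        break
        return sorted(sent)

    # A forward-facing viewport needs only 8 of the 32 tiles here.
    print(tiles_to_send(yaw_deg=0, pitch_deg=0))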

Current steps in this direction include the construction of a testbed for studying viewer behavior in a 360-degree environment, comprising both a browser-based platform and a VR headset. The traces collected from these tests will be curated into a dataset of time-series information, which may be correlated with content-based features of the video to predict the region that will serve as the viewport in the next frame.
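As a hint of what such a predictor might look like, the sketch below gives a naive baseline that linearly extrapolates head orientation from the two most recent trace samples. The project's actual model, which would also fold in content-based features, is not described here; the function and trace format are hypothetical.

    def predict_next_viewport(trace, next_t):
        """Naive baseline: linearly extrapolate (yaw, pitch) from the
        two most recent samples of a head-orientation trace.

        trace  -- list of (timestamp_s, yaw_deg, pitch_deg) samples
        next_t -- presentation time of the frame being predicted
        """
        (t0, yaw0, pitch0), (t1, yaw1, pitch1) = trace[-2], trace[-1]
        dt = t1 - t0
        # Unwrap yaw so a -179 -> +179 step counts as a -2 degree move.
        dyaw = (yaw1 - yaw0 + 180) % 360 - 180
        yaw = (yaw1 + dyaw / dt * (next_t - t1) + 180) % 360 - 180
        pitch = pitch1 + (pitch1 - pitch0) / dt * (next_t - t1)
        return yaw, max(-90, min(90, pitch))

    # Viewer panning right at ~30 deg/s; predict one frame (33 ms) ahead.
    trace = [(0.000, 10.0, 0.0), (0.033, 11.0, 0.5)]
    print(predict_next_viewport(trace, 0.066))  # -> (12.0, 1.0)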

The testbed is currently accessible only within the NYU network and will soon be available on the internet.
