Abstract

We present a novel method for rendering and compositing video in augmented reality. We focus on calculating the physically correct depth of field caused by a lens with a finite-sized aperture. To simulate light transport correctly, ray tracing is used and, in a single pass, combined with differential rendering to compose the final augmented video. The image is fully rendered on the GPU, so an augmented video can be produced at interactive frame rates in high quality. Our method runs on the fly; no video post-processing is needed. In addition, we evaluated the user experience with our rendering system under the hypothesis that a depth of field effect in augmented reality increases the perceived realism of the composited video. Results from a study with 30 users show that 90% perceive videos with depth of field as considerably more realistic.
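To make the two ingredients named above concrete, the sketch below shows a standard thin-lens camera-ray sampler (the usual way to ray-trace depth of field from a finite aperture) and a Debevec-style differential-rendering composite. This is a minimal illustrative sketch, not the authors' GPU implementation: the function names, parameters, and the camera-space convention (camera at the origin, looking down +z) are assumptions for this example.

import numpy as np

def thin_lens_ray(pixel_dir, focal_distance, aperture_radius, rng):
    """Sample one depth-of-field camera ray with a thin-lens model.
    Assumed convention: camera at the origin looking down +z;
    pixel_dir is the unit pinhole direction through the pixel."""
    # Point on the focal plane that the pinhole ray would hit; every
    # lens sample is aimed at it, so geometry at focal_distance stays sharp
    # while nearer and farther geometry is blurred.
    focus = (focal_distance / pixel_dir[2]) * pixel_dir
    # Uniformly sample the lens (aperture) disk in the z = 0 plane.
    r = aperture_radius * np.sqrt(rng.random())
    phi = 2.0 * np.pi * rng.random()
    origin = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])
    direction = focus - origin
    return origin, direction / np.linalg.norm(direction)

def differential_composite(video_pixel, L_with, L_without):
    """Differential rendering: add to the captured video pixel the change
    the virtual objects cause in a rendering of the modeled local scene
    (with vs. without the objects), so shadows and reflections carry over."""
    return video_pixel + (L_with - L_without)

# Hypothetical usage: one lens-sampled ray through a pixel.
rng = np.random.default_rng(0)
d = np.array([0.1, 0.0, 1.0])
origin, direction = thin_lens_ray(d / np.linalg.norm(d),
                                  focal_distance=2.0,
                                  aperture_radius=0.05, rng=rng)

In a single-pass setup of the kind the abstract describes, one would presumably evaluate both radiance estimates (with and without virtual objects) along the same lens-sampled rays, so the camera image and the differential term receive a consistent depth-of-field blur.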

Reference

Kán, P., & Kaufmann, H. (2012). Physically-Based Depth of Field in Augmented Reality. In C. Andujar & E. Puppo (Eds.), Eurographics 2012 - Short Papers (pp. 89–92). Eurographics Association. https://doi.org/10.2312/conf/EG2012/short/089-092