Abstract
While true consumer AR/MR devices are probably still years away, devices like the HoloLens 2 already have compelling industrial applications today. In this talk we will review what today's devices can do and then present ongoing research expanding those capabilities. We will discuss how egocentric activity recognition can enable devices to better assist users in learning and performing tasks. We will also see how combining edge devices with cloud compute can provide much more powerful solutions. We will briefly look at remote rendering as an option to remove constraints on 3D model complexity. Next, we will focus on spatial computing. While mixed reality devices typically build their own 3D map of the environment on device, many high-value scenarios require the ability to reliably share and persist spatially localized information with respect to a common coordinate system. We will see how distributed cloud mapping and localization can enable these types of scenarios. We will present results involving not only HMDs but also robots and 3D reality capture devices. Our goal is to enable seamless collaboration between on-site and remote people, as well as autonomous robots, through mixed reality.