JPL Technical Report Server

Augmented Reality Data Generation for Training Deep Learning Neural Network

dc.contributor.author Payumo, Kevin
dc.contributor.author Huyen, Alexander
dc.contributor.author Seguin, Landan
dc.contributor.author Lu, Thomas
dc.contributor.author Chow, Edward
dc.contributor.author Torres, Gil
dc.date.accessioned 2020-04-28T15:30:59Z
dc.date.available 2020-04-28T15:30:59Z
dc.date.issued 2018-04-17
dc.identifier.citation SPIE Defense + Commercial Sensing 2018, Orlando, Florida, April 17 - 19, 2018 en_US
dc.identifier.clearanceno 18-1655
dc.description.abstract One of the major challenges in deep learning is obtaining sufficiently large labeled training datasets, which can be expensive and time-consuming to collect. A unique approach is to train Deep Neural Network (DNN) segmentation models with a minimal number of initial labeled training samples. The procedure involves creating synthetic data and using image registration to calculate affine transformations to apply to the synthetic data. The method takes a small dataset and generates a high-quality augmented reality synthetic dataset with strong variance while maintaining consistency with real cases. Results illustrate segmentation improvements in various target features and increased average target confidence. en_US
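The abstract's core procedure — estimating affine transformations via image registration and applying them to synthetic data for augmentation — can be sketched in NumPy. This is a minimal illustration, not the authors' implementation; the function names and the assumption of known keypoint correspondences are ours:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 2) arrays of matched keypoints, N >= 3.
    In a registration pipeline these would come from a feature
    matcher run between a reference image and a real scene.
    """
    n = src_pts.shape[0]
    A = np.zeros((2 * n, 6))
    b = dst_pts.reshape(-1)          # interleaved [x1, y1, x2, y2, ...]
    A[0::2, 0:2] = src_pts           # x' = a11*x + a12*y + a13
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src_pts           # y' = a21*x + a22*y + a23
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix M to (N, 2) points, e.g. the
    corners or mask coordinates of a synthetic overlay."""
    return pts @ M[:, :2].T + M[:, 2]
```

The same estimated matrix can be passed to an image-warping routine (e.g. `cv2.warpAffine` in OpenCV) to place synthetic targets into real backgrounds consistently with the observed geometry.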
dc.description.sponsorship NASA/JPL en_US
dc.language.iso en_US en_US
dc.publisher Pasadena, CA: Jet Propulsion Laboratory, National Aeronautics and Space Administration, 2018 en_US
dc.subject synthetic data en_US
dc.subject IR image processing en_US
dc.subject computer vision en_US
dc.subject deep learning en_US
dc.subject neural network en_US
dc.subject image transformation en_US
dc.subject image registration en_US
dc.subject augmented reality en_US
dc.title Augmented Reality Data Generation for Training Deep Learning Neural Network en_US
dc.type Preprint en_US
