I will use Google's TensorFlow platform to adapt a deep convolutional neural network model called Inception v3. The main goal of this adaptation is to retrain the last layer of Inception with categories more relevant to my project, so that the model can recognize everyday objects when given an image feed. To make this more user-friendly, I will build an iOS application using Core ML, Apple's machine-learning framework, which will let me integrate the image-recognition model from TensorFlow into the application. To show regular users the capabilities of this model, I will draw on cutting-edge technologies available today: the application will allow users to tag objects in physical space using ARKit, Apple's augmented-reality platform.
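Retraining only the last layer amounts to training a small softmax classifier on the fixed "bottleneck" features that the frozen network produces for each image. The sketch below illustrates that idea in plain NumPy, with fabricated low-dimensional feature vectors standing in for real Inception v3 activations; the dimensions, data, and category count are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

# Hypothetical bottleneck features: in the real project these would be the
# activations Inception v3 emits just before its final layer. Here we fabricate
# two well-separated clusters of small random vectors, one per made-up category.
rng = np.random.default_rng(0)
n_per_class, dim, n_classes = 50, 8, 2
feats = np.vstack([rng.normal(loc=2.0 * c, size=(n_per_class, dim))
                   for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), n_per_class)
Y = np.eye(n_classes)[labels]          # one-hot targets

# Train only the new final layer (weights W, bias b) with softmax regression;
# everything "below" it in the network stays frozen.
W = np.zeros((dim, n_classes))
b = np.zeros(n_classes)
lr = 0.1
for _ in range(200):
    logits = feats @ W + b
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    grad = probs - Y                                  # cross-entropy gradient
    W -= lr * feats.T @ grad / len(feats)
    b -= lr * grad.mean(axis=0)

preds = (feats @ W + b).argmax(axis=1)
accuracy = (preds == labels).mean()
```

Because the heavy convolutional layers are reused unchanged, this last-layer training is cheap enough to run on modest hardware with a modest amount of labeled data, which is what makes the retraining approach practical for a project-specific category set.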
I intend this application to be more than just "fun." Its main focus will be integrating the VoiceOver capabilities that iPhones already offer system-wide. Many visually impaired individuals depend on VoiceOver, a technology Apple has refined over the years together with its haptic feedback system. Once the application detects that VoiceOver is enabled on the phone, it should adapt its interface so that visually impaired individuals can use it easily. I want the application to speak its predictions aloud, so that users who cannot see can hear what is in front of them. The application will work offline, without any dependency on cloud computing or servers, making it available whenever and wherever it is needed.
Okyay, Mert; Li, Dahui. Seventh Sense: A Neural Network Based Real-Time Image Recognition Application. University Honors Capstone Project Paper and Poster, University of Minnesota Duluth, 2018. Mert Okyay authored the paper and poster; Dahui Li authored the poster. Retrieved from the University of Minnesota Digital Conservancy,