Author: Kachelmeier, Rosalie
Date available: 2023-02-03
Date issued: 2022-10
URI: https://hdl.handle.net/11299/252316
Description: University of Minnesota M.S. thesis. October 2022. Major: Computer Science. Advisor: Richard Maclin. 1 computer file (PDF); v, 42 pages.
Abstract: Since at least antiquity, humans have been categorizing art based on various attributes. With the invention of the internet, the amount of art available and the number of people searching for it have grown significantly. One way to keep up with this growth is to use computers to automatically suggest categories for paintings. Building on past research that applied transfer learning to this task, as well as research showing that artistic movement affects gaze data, we worked to combine transfer learning with gaze data to improve automatic painting classification. To do this, we first trained a model on a large object recognition dataset with synthesized saliency data. We then repurposed it to classify paintings by 19th-century artistic movement and trained it further on a dataset of 150 paintings with saliency data collected from 21 people. Training on this dataset was split into two stages. In the first, the final layer of the model was trained with the rest of the model frozen. Next, the entire model was fine-tuned on the data using a much lower learning rate. Fifteen trials were run with different random seeds to reduce any effect that randomness might have. Overall, the model achieved an accuracy of 0.569 with a standard deviation of 0.0228. Comparatively, a similar existing method had an accuracy of 0.523 with a standard deviation of 0.0156. This difference is statistically significant (p = 0.0479), suggesting that, given enough training time, a more complex model that utilizes saliency data can outperform a simpler model without saliency data at classifying paintings.
Language: en
Keywords: Art Classification; Deep Learning; Saliency; Transfer Learning
Title: Improving Automatic Painting Classification Using Saliency Data
Type: Thesis or Dissertation
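
The two-stage schedule described in the abstract (train only the new final layer with the backbone frozen, then fine-tune the whole network at a much lower learning rate) could look roughly like the PyTorch sketch below. The ResNet-50 backbone, optimizer, learning rates, class count, and the painting_loader data loader are illustrative assumptions, not details taken from the thesis; in particular, the thesis's backbone was pretrained on an object recognition dataset with synthesized saliency data, whereas this sketch simply reuses stock torchvision ImageNet weights as a stand-in.

import torch
import torch.nn as nn
from torchvision import models

NUM_MOVEMENTS = 5  # assumed number of 19th-century movement classes

# Stand-in backbone; the thesis used a saliency-aware pretrained model instead.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_MOVEMENTS)

def run_epochs(model, loader, optimizer, epochs, loss_fn=nn.CrossEntropyLoss()):
    """Basic supervised training loop over the painting dataset."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

# Stage 1: freeze everything except the new final layer and train only that layer.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
head_optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# run_epochs(model, painting_loader, head_optimizer, epochs=20)

# Stage 2: unfreeze the whole network and fine-tune at a much lower learning rate.
for p in model.parameters():
    p.requires_grad = True
full_optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
# run_epochs(model, painting_loader, full_optimizer, epochs=20)

Repeating this procedure with different random seeds, as the abstract describes for its fifteen trials, only requires re-seeding (e.g. torch.manual_seed) and re-running both stages before averaging the resulting accuracies.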