Browsing by Subject "Saliency"
Now showing 1 - 2 of 2
Item: Global self-similarity and saliency measures based on sparse representations for classification of objects and spatio-temporal sequences (2012-12)
Somasundaram, Guruprasad

Extracting the truly salient regions in images is critical for many computer vision applications. Salient regions are considered the most informative regions of an image. Traditionally, salient regions have been treated as local phenomena, standing out as local extrema relative to their immediate neighbors. We introduce a novel global saliency metric based on sparse representation, in which the regions most dissimilar to the entire image are deemed salient. We examine our definition of saliency from the theoretical standpoints of sparse representation and minimum description length. Encouraged by the efficacy of our method in modeling foreground objects, we propose two classification methods for recognizing objects in images. First, we introduce two novel global self-similarity descriptors for object representation that can be used directly in any classification framework. Next, we use our salient feature detection approach with conventional region descriptors in a bag-of-features framework. Experimentally, we show that our feature detection method enhances the bag-of-features framework. Finally, we extend our salient bag-of-features approach to the spatio-temporal domain for use with three-dimensional dense descriptors. We apply this method successfully to video sequences involving human actions, obtaining state-of-the-art recognition rates on three distinct datasets involving sports and movie actions.

Item: Improving Automatic Painting Classification Using Saliency Data (2022-10)
Kachelmeier, Rosalie

Since at least antiquity, humans have been categorizing art based on various attributes. With the invention of the internet, the amount of art available and the number of people searching for art have grown significantly.
One way to keep up with this growth is to use computers to automatically suggest categories for paintings. Building on past research that applied transfer learning to this task, as well as research showing that artistic movement affects gaze data, we combined transfer learning with gaze data to improve automatic painting classification. We first trained a model on a large object recognition dataset with synthesized saliency data. We then repurposed it to classify paintings by 19th-century artistic movement and trained it further on a dataset of 150 paintings with saliency data collected from 21 people. This training was split into two stages: first, the final layer of the model was trained on the dataset with the rest of the model frozen; next, the entire model was fine-tuned on the data with a much lower learning rate. Fifteen trials were run with different random seeds to reduce the effect of randomness. Overall, the model achieved an accuracy of 0.569 with a standard deviation of 0.0228, while a similar existing method achieved 0.523 with a standard deviation of 0.0156. This difference is statistically significant (p = 0.0479), suggesting that, given enough training time, a more complex model that uses saliency data can outperform a simpler model without saliency data at classifying paintings.
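The two-stage training schedule in the second abstract (train only the final layer with the rest frozen, then fine-tune everything at a much lower learning rate) can be sketched with a toy model. This is an illustrative reconstruction, not the thesis code: a tiny two-layer linear model on synthetic data stands in for the pretrained network, and all names, shapes, and learning rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (shapes and values are illustrative, not from the thesis).
X = rng.normal(size=(200, 8))                      # input features
w_true = rng.normal(size=(8, 1))
y = X @ w_true + 0.1 * rng.normal(size=(200, 1))   # regression targets

W1, _ = np.linalg.qr(rng.normal(size=(8, 4)))      # "pretrained backbone" layer
W2 = 0.1 * rng.normal(size=(4, 1))                 # freshly attached final layer

def loss(W1, W2):
    """Mean squared error of the two-layer linear model."""
    return float(np.mean((X @ W1 @ W2 - y) ** 2))

def step(W1, W2, lr, backbone_frozen):
    """One gradient-descent step; the backbone is only updated when unfrozen."""
    h = X @ W1
    err = h @ W2 - y
    grad_W2 = h.T @ err / len(X)
    grad_W1 = X.T @ err @ W2.T / len(X)
    W2 = W2 - lr * grad_W2
    if not backbone_frozen:                        # stage 2 unfreezes the backbone
        W1 = W1 - lr * grad_W1
    return W1, W2

loss_init = loss(W1, W2)

# Stage 1: train only the final layer, backbone frozen.
for _ in range(200):
    W1, W2 = step(W1, W2, lr=0.1, backbone_frozen=True)
loss_stage1 = loss(W1, W2)

# Stage 2: fine-tune the whole model with a much lower learning rate.
for _ in range(200):
    W1, W2 = step(W1, W2, lr=0.001, backbone_frozen=False)
loss_stage2 = loss(W1, W2)
```

The frozen first stage fits the new head without disturbing the pretrained weights; the low-learning-rate second stage can then recover error the frozen backbone could not express, which is the rationale the abstract describes.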
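The first abstract's global saliency idea (a region is salient when it is hard to reconstruct sparsely from the rest of the image) can likewise be sketched. This is a minimal illustration under assumed details, not the thesis implementation: patches are plain vectors, the dictionary for each patch is the leave-one-out set of all other patches, sparse coding is a small orthogonal matching pursuit, and saliency is the reconstruction residual.

```python
import numpy as np

def omp_residual(D, x, k):
    """Greedy orthogonal matching pursuit: norm of the residual after
    approximating x with at most k columns (atoms) of dictionary D."""
    residual = x.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef             # re-fit on selected atoms
    return float(np.linalg.norm(residual))

def global_saliency(patches, k=3):
    """Saliency score per patch: error of sparsely reconstructing it from
    the remaining patches. Globally dissimilar patches score high."""
    n = len(patches)
    scores = []
    for i in range(n):
        D = np.stack([patches[j] for j in range(n) if j != i], axis=1)
        D = D / (np.linalg.norm(D, axis=0) + 1e-12)  # unit-norm atoms
        scores.append(omp_residual(D, patches[i], k))
    return np.array(scores)

# Toy demo: 20 near-identical "background" patches plus one dissimilar patch.
rng = np.random.default_rng(0)
base = rng.normal(size=16)
patches = [base + 0.05 * rng.normal(size=16) for _ in range(20)]
patches.append(rng.normal(size=16))                  # the globally dissimilar patch
scores = global_saliency(patches)
```

Background patches reconstruct each other almost perfectly, so their residuals stay small; the dissimilar patch cannot be explained by a few atoms drawn from the background and receives the largest score, matching the abstract's definition of global saliency.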