Author: Karanikolas, Vasileios Georgios
Date available: 2021-10-13
Date issued: 2021-08
URI: https://hdl.handle.net/11299/225035
Description: University of Minnesota Ph.D. dissertation. August 2021. Major: Electrical/Computer Engineering. Advisor: Georgios B. Giannakis. 1 computer file (PDF); vii, 89 pages.
Abstract: Network science has successfully permeated a wide gamut of disciplines. A key first step in applying network science concepts to a complex system of interest is obtaining a graph representation of that system which accurately captures the dependencies among its constituent nodes. This is a challenging task, especially when accounting for nodal interactions that are, in general, nonlinear or time-varying. A key domain where such interactions arise, and one of particular interest to this thesis, is the identification of brain network topologies from functional magnetic resonance imaging (fMRI) time series. To handle complex dependencies, the present thesis introduces novel nonlinear counterparts of linear association measures such as partial correlation and Granger causality, along with a nonlinear graph change detection scheme to unveil temporal variations. Choosing the type of nonlinearity to model is instrumental, as it dictates the form of nodal dependencies under consideration. To address this challenge in a data-driven fashion, the thesis introduces multi-kernel learning approaches to graph topology identification. The merits of the proposed framework are illustrated on real and synthetic resting-state fMRI data.
Key to the novel approaches for capturing complex dependencies are methods for learning nonlinear functions. In this context, the second part of this thesis deals with Gaussian process (GP) based online learning. Two highly desirable yet conflicting properties are identified, namely adaptability to a wide range of operational environments and scalability. The successful joint pursuit of these properties remains elusive in the GP literature. To this end, a novel framework leveraging random feature kernel approximants and ensemble learning techniques is introduced. Algorithms for supervised learning are developed, and associated performance analyses are provided. Finally, a novel unsupervised learning scheme extending the ubiquitous GP latent variable model is developed, capitalizing on the same key concepts. Tests on benchmark datasets highlight the benefits of the proposed approaches over state-of-the-art methods.
Language: en
Title: Learning nonlinear functions and graph topologies
Type: Thesis or Dissertation
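Note: the abstract refers to random feature kernel approximants as the route to scalable GP learning. The snippet below is a minimal illustrative sketch, not the thesis's algorithm, of the standard random-Fourier-feature approximation of an RBF-kernel GP regressor: the kernel is approximated by an inner product of D-dimensional random feature maps, so GP regression reduces to Bayesian linear regression in the feature space. All parameter names and values (D, lengthscale, noise_var) are illustrative assumptions, not taken from the dissertation.

import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, D=200, lengthscale=1.0, rng=rng):
    """Map inputs X (n x d) to D random Fourier features approximating an RBF kernel."""
    n, d = X.shape
    W = rng.normal(scale=1.0 / lengthscale, size=(d, D))  # spectral frequencies
    b = rng.uniform(0.0, 2 * np.pi, size=D)               # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b), (W, b)

def fit_blr(Z, y, noise_var=0.1):
    """Bayesian linear regression over feature weights with a unit Gaussian prior."""
    D = Z.shape[1]
    A = Z.T @ Z / noise_var + np.eye(D)          # posterior precision
    mean = np.linalg.solve(A, Z.T @ y / noise_var)  # posterior mean of weights
    return mean, A

# Toy usage: noisy samples of a nonlinear function.
X = np.linspace(-3, 3, 100)[:, None]
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=100)
Z, (W, b) = random_fourier_features(X)
w_mean, _ = fit_blr(Z, y)
print("train RMSE:", np.sqrt(np.mean((Z @ w_mean - y) ** 2)))

Because the feature dimension D is fixed, each new observation can be absorbed with a rank-one update of the posterior, which is what makes this family of approximations attractive for the online, scalable setting the abstract describes.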