Browsing by Subject "Sheaf theory"
Item: Sheaf Representation Learning (2023-12), Gebhart, Thomas

The observed behaviors of a wide variety of processes are implicitly supported by a collection of spatial or relational constraints placed upon regions of the domain which generate the data under study. Often modeled discretely as a network-structured or combinatorial object, the topology dictating the pattern with which these domain elements interact influences the global character of these systems by restricting the space of possible data observations and biasing behavior to abide by the local topological constraints inherited by the global configuration. In short, the topology of a domain constrains the structure of its associated data. To properly and efficiently infer from data the parameters of functions which describe these processes, machine learning models must be imbued with structural inductive biases which align the function’s representations with the constraints imposed by the domain topology. Despite this crucial relationship between domain topology and observed data, a general theoretical framework for specifying and reasoning about these constraints within machine learning is largely lacking.

This thesis provides a sheaf-theoretic framework for specifying and reasoning about model representational constraints imposed by the topology of the domain. Central to this framework is the observation that associating data to a topological space, a common modeling motif in modern machine learning, is an inherently functorial operation, inviting the reinterpretation of a fundamental machine learning paradigm in a category-theoretic language. Through this more general language, we gain the capacity to reason about model classes which satisfy the compositional properties imposed by the functorial association of data to topology. By specifying this functor to be a sheaf, the representational constraints implied by a cover on the domain may be operationalized by requiring functionally-consistent data assignments along intersecting regions of the domain. When the representation target of this functor is vector-valued, as is typically the case in machine learning, this sheaf-theoretic approach admits a linear-algebraic implementation.

While this sheaf-theoretic approach to topologically-constrained representation learning inherits significant modeling power from its abstract formulation, we show that this approach integrates with and properly generalizes a number of existing graph embedding and signal processing algorithms by specifying simplistic assumptions regarding the complexity of the representational constraints imposed by the domain. In the graph signal processing setting, we introduce sheaf neural networks, a generalization of graph neural networks which provides better modeling expressivity, especially when node interactions imply functional interactions among topologically-connected signals. This sheaf-theoretic generalization of graph neural networks also clarifies, in a categorical language, the value of more complex relational inductive biases in terms of expressivity and alignment with domain-implied constraints.

We also introduce sheaf embedding: an approach to encoding relational data in a new representation which permits the specification of complex typing schematics within the domain interactions while retaining global consistency objectives.
We find this abstraction particularly apt within the knowledge graph reasoning domain, where these knowledge sheaves generalize a number of widely-applied knowledge graph embedding methods and, through their spectral properties, admit linear-algebraic extensions for both logical and inductive query reasoning.
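
The abstract notes that when the sheaf takes values in vector spaces, the framework admits a linear-algebraic implementation. The sketch below (not taken from the thesis; the graph, stalk dimensions, and restriction maps are illustrative assumptions) shows the standard cellular-sheaf construction this refers to: restriction maps assembled into a coboundary matrix, the sheaf Laplacian built from it, and a quadratic form measuring how far a node-wise data assignment is from being a globally consistent section.

```python
# A minimal sketch of the linear-algebraic view: a cellular sheaf on a small
# graph, its coboundary map, and the sheaf Laplacian whose kernel is the space
# of global sections. All names and dimensions are illustrative assumptions.
import numpy as np

# Graph: two edges over three nodes, 0--1 and 1--2, with 2-dimensional stalks.
edges = [(0, 1), (1, 2)]
d = 2                      # stalk dimension on every node and edge
n_nodes, n_edges = 3, len(edges)

# Restriction maps F_{v <= e}: node stalk -> edge stalk, one per (edge, endpoint).
rng = np.random.default_rng(0)
restriction = {(e, v): rng.standard_normal((d, d))
               for e in range(n_edges) for v in edges[e]}

# Coboundary delta: (n_edges*d) x (n_nodes*d); row block e compares the two endpoints of e.
delta = np.zeros((n_edges * d, n_nodes * d))
for e, (u, v) in enumerate(edges):
    delta[e*d:(e+1)*d, u*d:(u+1)*d] = restriction[(e, u)]
    delta[e*d:(e+1)*d, v*d:(v+1)*d] = -restriction[(e, v)]

# Sheaf Laplacian L_F = delta^T delta.
L = delta.T @ delta

# An assignment x of data to node stalks is globally consistent iff every edge
# constraint F_{u<=e} x_u = F_{v<=e} x_v holds, i.e. delta @ x = 0, i.e. x^T L x = 0.
x = rng.standard_normal(n_nodes * d)
print("disagreement energy x^T L x =", float(x @ L @ x))
```

Minimizing this disagreement energy over assignments is one concrete reading of the "functionally-consistent data assignments along intersecting regions of the domain" described above.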
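The sheaf neural networks mentioned in the abstract propagate stalk-valued node features through a sheaf Laplacian rather than an ordinary graph Laplacian. The following self-contained snippet is a simplified, hedged version of such a layer; the layer form, normalization, and dimensions are assumptions for illustration and may differ from the architecture developed in the thesis.

```python
# A simplified sheaf-convolution layer: H -> relu((I - L_F) H W), where H holds
# stalk-valued node features and L_F is a sheaf Laplacian. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def sheaf_conv_layer(H, L_F, W):
    """One simplified layer: diffuse features along restriction maps, then mix channels."""
    return np.maximum((H - L_F @ H) @ W, 0.0)

# Toy sheaf Laplacian built from a random coboundary map (any delta^T delta will do here).
n_nodes, d, channels_in, channels_out = 3, 2, 4, 8
delta = rng.standard_normal((2 * d, n_nodes * d))     # 2 edges, d-dimensional stalks
L_F = delta.T @ delta
L_F = L_F / np.linalg.norm(L_F, 2)                    # crude normalization to keep diffusion stable

H = rng.standard_normal((n_nodes * d, channels_in))   # features stacked stalk-by-stalk
W = rng.standard_normal((channels_in, channels_out)) * 0.1
print(sheaf_conv_layer(H, L_F, W).shape)              # (n_nodes*d, channels_out)
```

When every restriction map is the identity and the stalks are one-dimensional, L_F reduces to the ordinary graph Laplacian and the layer collapses to a standard graph-convolution-style propagation, which is the sense in which the construction generalizes graph neural networks.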
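For the knowledge graph setting, one way to read the sheaf embedding idea is that each relation carries a pair of restriction maps, and a triple is plausible when the head and tail embeddings agree after restriction into the relation's edge stalk. The sketch below is an assumption-laden illustration of that reading (entity names, map shapes, and the scoring form are hypothetical, not taken from the thesis); translation- and projection-style knowledge graph embedding scores arise from particular choices of these maps.

```python
# A hedged sketch of sheaf-style knowledge graph scoring: a triple (head, relation,
# tail) is scored by the disagreement of the two entity embeddings after applying
# the relation's restriction maps. All names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
d_entity, d_edge = 4, 3

# Entity embeddings (stalks over entities) and per-relation restriction maps.
entities = {name: rng.standard_normal(d_entity)
            for name in ["minneapolis", "minnesota", "usa"]}
relations = {
    "located_in": (rng.standard_normal((d_edge, d_entity)),   # map applied to the head
                   rng.standard_normal((d_edge, d_entity))),  # map applied to the tail
}

def inconsistency(head, relation, tail):
    """Distance between head and tail images in the relation's edge stalk."""
    R_head, R_tail = relations[relation]
    return float(np.linalg.norm(R_head @ entities[head] - R_tail @ entities[tail]))

# Training would drive this distance toward zero for observed triples, which is one
# concrete form of the "global consistency objectives" the abstract refers to.
print(inconsistency("minneapolis", "located_in", "minnesota"))
```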