Browsing by Subject "uniqueness"
Item: Attractors and Attracting Neighborhoods for Multiflows (2019-05), Negaard-Paper, Shannon

We already know a great deal about dynamical systems with uniqueness in forward time. Indeed, flows, semiflows, and maps (both invertible and not) have been studied at length. A view that has proven particularly fruitful is topological: consider invariant sets (attractors, repellers, periodic orbits, etc.) as topological objects, with the connecting sets between them forming gradient-like flows. In systems with uniqueness in forward time, an attractor in one system is related to nearby attractors in a family of other, "close enough" systems. One way of seeing that connection is through the Conley decomposition (and the Conley index) [2], [13]. This approach requires focusing on isolated invariant sets, that is, invariant sets with isolating neighborhoods. If an invariant set I has an isolating neighborhood N, we say that I is the invariant set associated to N and that N is an isolating neighborhood associated to I. When the invariant set in question is an attractor or a repeller, the isolating neighborhood is called an attracting neighborhood or a repelling neighborhood, respectively. A more specialized case may be called an attractor block or a repeller block. This approach was extended to discrete time systems which lack uniqueness in forward time, using relations, in [7] and [11]. Relations do not rely on uniqueness in forward time, and the graph of any map is a relation; thus relations generalize maps. Some of this is reviewed in the next few sections. In addition, I expanded on work done in [7] to show that in compact metric spaces, attractors for closed relations continue (see Section 6.1). On the continuous time side, more work needs to be done. This paper is a step toward a more systematic approach for continuous time systems which lack uniqueness in forward time; the work applies to Filippov systems [4] and to control theory [12].
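The idea that a relation generalizes a map, allowing a point to have several successors, can be illustrated with a minimal sketch on a finite state set. The function names and the example relation below are illustrative only and do not come from the dissertation.

```python
# A relation on a finite set X, stored as a set of pairs (x, y).
# The graph of a map f is the special relation {(x, f(x))}; a general
# relation may send one point to several successors, modeling dynamics
# without uniqueness in forward time. (Illustrative names throughout.)

def image(relation, subset):
    """Forward image of `subset`: all y with (x, y) in the relation, x in subset."""
    return {y for (x, y) in relation if x in subset}

def iterate_image(relation, subset, steps):
    """Iterate the forward image `steps` times (a crude stand-in for forward orbits)."""
    current = set(subset)
    for _ in range(steps):
        current = image(relation, current)
    return current

# Map-like transitions, except state 2 has two successors: non-uniqueness.
R = {(0, 1), (1, 2), (2, 2), (2, 0)}
```

Here `image(R, {2})` returns both successors of state 2, which a map could never do.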
In the following pages, we establish a tool (multiflows) for discussing the continuous time case and develop a framework for understanding attractors (and therefore stability) in these systems. A crucial part of this work was establishing attractor / attracting neighborhood pairs, which happens in Section 5.5.

Item: Constrained Matrix and Tensor Factorization: Theory, Algorithms, and Applications (2016-08), Huang, Kejun

This dissertation studies constrained matrix and tensor factorization problems which appear in the context of estimating factor analysis models used in signal processing and machine learning. Factor analysis dates back to the celebrated principal component analysis (PCA), which provides the optimal low-rank approximation to matrix data. PCA is a key dimensionality reduction technique; however, it cannot be used to estimate the underlying generative matrix factors, which are generally non-orthogonal. On the other hand, it has been observed that imposing simple constraints on the latent factors, for example non-negativity (which in a way opposes orthogonality), can make these factors essentially unique, although theoretical understanding of this phenomenon was very limited prior to this dissertation. Moving from two-way to higher-way tensor data, the so-called canonical polyadic decomposition (CPD) model can provide essentially unique factors under mild conditions, but it is also widely accepted that if correct priors are imposed on the latent factors, the estimation performance can be greatly enhanced.
One of the main contributions of this dissertation is the development of a simple sufficiently scattered condition that turns out to underpin a great variety of factor analysis models -- including but not limited to non-negative matrix factorization (NMF) and other models relying on volume minimization, which are nowadays pervasive in signal processing, machine learning, and geoscience and remote sensing -- where it helps resolve a popular conjecture in multispectral imaging known as Craig's belief: that sources can be blindly identified via ``minimum volume transformation'' under certain conditions. In these and other applications, the sufficiently scattered condition provides the best identifiability results known to date, and it has significant potential for further applications. Besides theoretical results on the essential uniqueness of factor analysis models under non-negativity constraints, a general algorithmic framework is proposed, which seamlessly incorporates many common types of constraints imposed on the latent factors. The new framework is a hybrid between alternating optimization (AO) and the alternating direction method of multipliers (ADMM): each matrix factor is updated in turn, using ADMM, hence the name AO-ADMM. This combination can naturally accommodate a great variety of constraints on the factor matrices, and almost all common loss measures for the fitting. Computation caching and warm-start strategies ensure that each update is evaluated efficiently, while the outer AO framework exploits recent developments in block coordinate descent (BCD)-type methods, which help ensure that every limit point is a stationary point, as well as faster and more robust convergence in practice. Extensive simulations and experiments with real data showcase the effectiveness and broad applicability of the proposed framework.
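The AO-ADMM pattern described above can be sketched for the simplest case, NMF with a least-squares loss. This is a minimal illustration under assumed settings (penalty `rho`, iteration counts, random initialization), not the dissertation's implementation; note how the Cholesky factorization is computed once per factor update and reused across inner ADMM iterations (the "computation caching"), and how each inner solve warm-starts from the previous factor estimate.

```python
import numpy as np

def admm_nnls(W, X, H0, rho=1.0, iters=30):
    """Inner ADMM sketch for min_{H >= 0} ||X - W H||_F^2, warm-started at H0."""
    r = W.shape[1]
    L = np.linalg.cholesky(W.T @ W + rho * np.eye(r))  # cached, reused below
    WtX = W.T @ X
    H_tilde, U = np.maximum(H0, 0), np.zeros_like(H0)  # split variable + scaled dual
    for _ in range(iters):
        rhs = WtX + rho * (H_tilde - U)
        H = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic subproblem
        H_tilde = np.maximum(H + U, 0)                     # nonnegativity (prox step)
        U += H - H_tilde                                   # dual ascent
    return H_tilde

def ao_admm_nmf(X, r, outer=50, seed=0):
    """Outer AO loop: update each factor in turn with the inner ADMM solver."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(outer):
        H = admm_nnls(W, X, H)          # update H with W fixed
        W = admm_nnls(H.T, X.T, W.T).T  # symmetric update for W
    return W, H
```

Swapping the nonnegativity projection for another proximal operator is what lets the same skeleton absorb other constraints and losses.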
In addition to establishing essential uniqueness and providing effective computational algorithms for various matrix and tensor factorization models, we further study how well these models estimate the correct latent factors in noise. Towards this end, pertinent Cramér-Rao bounds (CRB) for constrained matrix and tensor factorization are derived. This turns out to be a nontrivial task, mainly due to the presence of constraints, trivial ambiguities, and computational challenges. In particular, the Fisher information matrix (FIM) is always singular for the models considered (even without constraints) -- but modern CRB analysis shows that taking the pseudo-inverse still provides a valid (albeit potentially loose) bound. For big data analytics, however, the challenge is how to efficiently compute this pseudo-inverse. Towards this end, we identify the null space of the FIM for several special cases, leading to very efficient algorithms for computing the CRB. Equipped with these results, we test-drive various algorithms and benchmark them against the CRB. Interestingly, our algorithms can approach this (optimistic) CRB under certain conditions.
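Why the FIM is singular, and why the pseudo-inverse still yields a bound, can be seen on a toy bilinear model with a scaling ambiguity; the model below is purely illustrative and is not the dissertation's derivation.

```python
import numpy as np

# Toy model: y = a*b + noise, noise ~ N(0, sigma2), parameters (a, b).
# The scaling ambiguity (a, b) -> (c*a, b/c) makes the 2x2 FIM rank-1,
# mirroring the singular FIMs of factorization models; the Moore-Penrose
# pseudo-inverse still gives a valid CRB. (Illustrative model only.)

def fim_bilinear(a, b, sigma2):
    g = np.array([b, a])           # gradient of a*b with respect to (a, b)
    return np.outer(g, g) / sigma2 # rank-1, hence singular

F = fim_bilinear(2.0, 3.0, 0.5)
crb = np.linalg.pinv(F)            # pseudo-inverse CRB despite singular FIM
```

For large factorization models, forming `pinv(F)` directly is infeasible, which is where exploiting the known null space of the FIM pays off.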