Browsing by Subject "Probability"
Now showing 1 - 5 of 5
Item: Classification of formal methods use, type, sophistication, and subdiscipline in the journal Philosophical Studies, 1999, 2005, 2007, 2009, 2015, 2017, 2019 (2021-11-08)
Fletcher, Samuel C; Knobe, Joshua; Wheeler, Gregory; Woodcock, Brian A

This data set contains bibliographic entries for articles published in the journal Philosophical Studies in the years 1999, 2005, 2007, 2009, 2015, 2017, and 2019, with classifications of which articles used formal methods. Those that did were further classified by the formal methods they used, the level of sophistication of those methods, and the subdiscipline(s) of philosophy to which they belong. The purpose of the data collection was to explore trends in the use of formal methods over the period indicated. The value of the data set for this purpose lies in its potential to be representative of analytic Anglophone philosophy during that period. The data is released now because the study for which it was collected has concluded.

Item: Concentration of empirical distribution functions for dependent data under analytic hypotheses (2013-05)
Kim, Ji Hee

The concentration property of empirical distribution functions is studied under the Lévy distance for dependent data whose joint distribution satisfies analytic conditions expressed via Poincaré-type and logarithmic Sobolev inequalities. The concentration results are then applied to two general schemes. In the first scheme, the data are obtained as coordinates of a point randomly selected within given convex bodies (and, more generally, when the sample obeys a log-concave distribution).
In the second scheme, the data represent eigenvalues of symmetric random matrices whose entries satisfy the indicated analytic conditions.

Item: Convex Measures and Associated Geometric and Functional Inequalities (2015-07)
Melbourne, James

Convex measures are a class of measures satisfying a variant of the classical Brunn-Minkowski inequality. Background on the associated functional and geometric inequalities is given, and the elementary theory of such measures is explored. A generalization of the Lovász and Simonovits localization technique is developed, and some applications to large deviations are explained. In a more geometric direction, a modified Brunn-Minkowski inequality is explored on some discrete spaces. The significance of such a notion lies in its potential to serve as a definition of a lower Ricci curvature bound in non-smooth spaces.

Item: Melodies in space: neural processing of musical features (2013-03)
Dumas, Roger Edward

With modern digital technology, it is now possible to capture, store, and describe the brain's response to musical stimuli with some degree of confidence. Increasing financial and material resources are being made available to music-brain researchers and, as a result, the number of music perception and cognition publications is growing exponentially (Levitin & Tirovolas, 2011). Many neuro-musicologists have access to at least one of the two most popular brain-imaging technologies, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). EEG measures the electric brain signal passing through soft tissues, where it becomes 'smeared' and difficult to separate from signals measured elsewhere on the scalp. fMRI measures the blood-oxygen-level-dependent (BOLD) signal, but this signal develops too slowly (2-5 seconds) to accurately capture the brain's swift processing of individual melodic notes. In contrast, magnetoencephalography (MEG) affords high temporal resolution (1 ms) and high fidelity (i.e.,
the clean, direct measurement of undistorted electromagnetic fluctuations in neural populations) and is therefore the most suitable method for matching the brain's dynamic, interacting sub-networks to the processing of melodies played at normal tempos. To explore my idea that this evolving process is both observable and quantifiable, I have performed a series of MEG experiments involving human subjects listening to melodic stimuli. This dissertation details my examination of the brain's response to melodic pitch, contour, interval distance, and next-note probability.

Item: Relay Aided Networks and Distributed Computing: Bounds and Algorithms for Cooperative Decentralized Systems (2024-05)
Jain, Sarthak

In this work, we consider decentralized cooperative systems for the tasks of communication and distributed computing. In these systems, a central node uses a set of n cooperating decentralized nodes to perform an otherwise difficult or complex task. We primarily focus on two such systems: (i) a Gaussian half-duplex n-relay network for communicating data from a source to a destination, and (ii) an adversarial distributed computing system for the computation of matrix-vector products. This work proposes bounds and efficient algorithms that reduce the computational complexity of these systems. Our first major contributions lie in the area of wireless relay networks. The Shannon capacity of Gaussian relay networks is unknown; an approximate capacity has therefore been proposed in the literature, which characterizes the Shannon capacity to within a constant additive gap. However, for half-duplex relay networks, computing the approximate capacity and developing relaying schemes that achieve it are problems of exponential complexity.
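To make the exponential blow-up concrete: each of the n half-duplex relays is either receiving or transmitting at any time, so the network has 2^n possible states. A minimal sketch of this count (my own illustration, not code from the dissertation):

```python
from itertools import product

def half_duplex_states(n):
    """Enumerate all listen/transmit configurations of n half-duplex relays.

    Each state is a tuple with a 0 (receive mode) or 1 (transmit mode)
    per relay, so there are 2**n states in total.
    """
    return list(product((0, 1), repeat=n))

# The state space doubles with every added relay.
for n in (2, 4, 8):
    print(n, len(half_duplex_states(n)))  # 4, 16, 256
```

Any scheme that optimizes over all states therefore scales exponentially in n, which motivates the simplification approaches discussed next.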
Because of this, wireless network simplification approaches have been suggested in the literature, where, instead of operating the entire network in all of its exponentially many states, the operation is simplified by considering either (i) a subset of states or (ii) a subset of relays. In this work, we provide significant advances in both of these simplification approaches.

First, for the scenario of using a subset of states, we consider a single-source single-destination Gaussian half-duplex n-relay network with arbitrary topology, where the source communicates with the destination through a direct link and with the help of n half-duplex relays. For these networks, we characterize sufficient conditions under which operating the network in the n+1 energy-efficient states (out of the 2^n possible states) suffices to achieve the approximate capacity. Specifically, these n+1 energy-efficient states are those in which at most one relay is in transmit mode while the rest are in receive mode. Under these sufficient conditions, an efficient relaying scheme is proposed, and closed-form expressions for the schedule and the approximate capacity are provided.

Next, for the scenario of using a subset of relays, we consider the Gaussian half-duplex n-relay network with the diamond topology, where a source communicates with a destination by hopping information through one layer of n non-communicating half-duplex relays. The main focus is the following question: what is the contribution of a single relay to the approximate capacity of the entire network? We answer this question by providing a tight fundamental bound on the ratio of the approximate capacity of the highest-performing single relay to the approximate capacity of the entire network, for any number n. Surprisingly, it is shown that this ratio guarantee is f = 1/(2 + 2\cos(2\pi/(n+2))), a decreasing sinusoidal function of n.
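The stated guarantee can be checked numerically. A short sketch (my own illustration, taking the formula exactly as given in the abstract):

```python
import math

def single_relay_ratio(n):
    """Ratio f = 1 / (2 + 2*cos(2*pi/(n + 2))) of the best single relay's
    approximate capacity to that of the full n-relay diamond network."""
    return 1.0 / (2.0 + 2.0 * math.cos(2.0 * math.pi / (n + 2)))

# Sanity checks: with n = 1 the lone relay is the whole network (f = 1),
# with n = 2 the best relay guarantees half the approximate capacity,
# and f decreases monotonically toward 1/4 as n grows.
print(round(single_relay_ratio(1), 6))  # 1.0
print(round(single_relay_ratio(2), 6))  # 0.5
```

The limit 1/4 follows because cos(2\pi/(n+2)) tends to 1 as n grows, so the denominator tends to 4.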
The second decentralized cooperative system that we consider is a framework for distributed matrix-vector products, where the server distributes the task of matrix-vector product computation among n worker nodes, out of which L are compromised (but non-colluding) and may return incorrect results. Specifically, it is assumed that the compromised workers are unreliable: at any given time, each compromised worker returns an incorrect or a correct result with probabilities \alpha and 1-\alpha, respectively. Thus, the tests are noisy. This work proposes three probabilistic group testing schemes to identify the unreliable/compromised workers: (i) a noise-level-independent non-adaptive scheme, (ii) a noise-level-dependent non-adaptive scheme, and (iii) a noise-level-dependent 2-stage adaptive scheme. Moreover, using the proposed group testing methods, sparse parity-check codes are constructed and used in the considered distributed computing framework for efficient encoding, decoding, and identification of unreliable workers. Our scheme outperforms existing works with respect to overall computational complexity.

In the aforementioned distributed computing setup, we proposed probabilistic group testing techniques for the efficient identification of unreliable workers. Related group testing techniques can also be utilized in another application, the sparsity-constrained community-based group testing problem, where the goal is to identify infected families in a population. In particular, the population consists of F families, each with H members. A number k_f of the F families are infected, and a family is said to be infected if k_h of its H members are infected. Furthermore, the sparsity constraint allows at most \rho_T individuals to be grouped in each test.
For this sparsity-constrained community model, we propose a probabilistic group testing scheme that identifies the infected population with a vanishing probability of error, and we provide an upper bound on the number of tests. Our scheme outperforms existing results in many interesting regimes of (k_f, k_h, F, H). Moreover, our scheme can also be applied to the classical dilution model, where it outperforms existing noise-level-independent schemes in the literature.
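The unreliability model underlying these noisy tests (a compromised worker answers incorrectly with probability \alpha, independently each time it is queried) can be sketched as a toy simulation; the function and names below are my own illustration, not the dissertation's schemes:

```python
import random

def simulate_tests(is_compromised, alpha, num_tests, rng):
    """Return the fraction of tests each worker fails.

    A compromised worker fails each test independently with probability
    alpha; an honest worker never fails. This mirrors why the tests are
    noisy: a single passed test does not clear a compromised worker,
    so repeated (group) testing is needed to identify them.
    """
    failures = [0] * len(is_compromised)
    for _ in range(num_tests):
        for w, bad in enumerate(is_compromised):
            if bad and rng.random() < alpha:
                failures[w] += 1
    return [f / num_tests for f in failures]

rng = random.Random(0)
# Workers 1 and 3 are compromised, with noise level alpha = 0.3.
rates = simulate_tests([False, True, False, True], alpha=0.3,
                       num_tests=2000, rng=rng)
flagged = [w for w, r in enumerate(rates) if r > 0.1]
print(flagged)  # the compromised workers: [1, 3]
```

This brute-force per-worker testing is only a baseline for intuition; the point of the group testing schemes described above is to identify the L unreliable workers with far fewer tests.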