Browsing by Subject "Bayesian inference"
Now showing 1 - 4 of 4
Item: An Approach to Nonparametric Bayesian Analysis for High Dimensional Longitudinal Data Sets (2016-06). Shang, Kan.
The goal of this thesis is to develop a more powerful and flexible nonparametric method, building on an existing approach, for the analysis of longitudinal data arising from high-throughput biological assays such as next-generation sequence analysis, proteomics, and metabolomics. The existing method compared two groups by testing for differences in the times to upcrossings and downcrossings at all possible levels, using standard nonparametric methods for event times subject to censoring. The main problem with nonparametric approaches is their lack of power relative to parametric alternatives, so methods that redress this shortcoming would greatly enhance researchers' ability to analyze data sets with potential impact on human health. In this thesis, we therefore first develop a Bayesian counterpart to rank-based tests using the Dirichlet process mixture (DPM) prior. We then expand this approach by tying sets of distinct level-crossing problems together via a hierarchical model to obtain a more powerful test. While focusing on the first passage time is useful, such an approach ignores data beyond the first passage time. We therefore also explore the analysis of recurrent event data from a Bayesian semi-parametric perspective and examine under what conditions considering recurrent events leads to a more powerful procedure. There are no universally agreed-upon methods for nonparametric longitudinal analysis, especially in a high-dimensional context, so this thesis research could help fill that gap in the field.

Item: Data-driven Channel Learning for Next-generation Communication Systems (2019-10). Lee, Donghoon.
The turn of the decade has marked the "global society" as an information society, in which the creation, distribution, integration, and manipulation of information have significant political, economic, technological, academic, and cultural implications. Its main drivers are digital information and communication technologies, which have produced a "data deluge" as the number of smart, Internet-capable devices grows rapidly. Unfortunately, establishing the information infrastructure to collect these data becomes more challenging as the communication networks serving those devices become larger, denser, and more heterogeneous in order to meet users' quality-of-service (QoS) requirements. Furthermore, the scarcity of spectral resources, driven by increasing demand from mobile devices, urges the development of new wireless communication methodologies that may face unprecedented constraints on both hardware and software. At the same time, recent advances in machine learning enable statistical inference with efficiency and scalability on par with the volume and dimensionality of the data. These considerations justify the pressing need for machine learning tools that are amenable to new hardware and software constraints and can scale with network size, to facilitate the advanced operation of next-generation communication systems. The present thesis is centered on analytical and algorithmic foundations enabling statistical inference of critical information under practical hardware/software constraints for the design and operation of wireless communication networks. The vision is to establish a unified and comprehensive framework, based on state-of-the-art data-driven learning and Bayesian inference tools, for learning channel-state information accurately yet efficiently and with modest resource demands. The central goal is to demonstrate theoretically, algorithmically, and experimentally how insights from data-driven learning can lead to solutions that markedly advance state-of-the-art performance in inferring channel-state information. To this end, the thesis investigates two main research thrusts: i) channel-gain cartography leveraging low rank and sparsity; and ii) Bayesian approaches to channel-gain cartography for spatially heterogeneous environments. These research thrusts introduce novel algorithms aimed at the challenges of next-generation communication networks, and the potential of the proposed algorithms is showcased through rigorous theoretical results and extensive numerical tests.
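The abstract above names channel-gain cartography leveraging low rank and sparsity as a research thrust without implementation detail. As a rough, hypothetical sketch of the generic low-rank-plus-sparse idea, and not the thesis's actual algorithm, the following Python code splits a matrix of channel-gain measurements into a low-rank part and a sparse part by alternating singular-value and soft thresholding; the matrix G, the thresholds tau_nuc and tau_l1, and the iteration count are all illustrative choices.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_plus_sparse(G, tau_nuc=1.0, tau_l1=0.1, n_iter=100):
    """Alternating minimization for G ~ L + S, with L low rank and S sparse."""
    L = np.zeros_like(G)
    S = np.zeros_like(G)
    for _ in range(n_iter):
        L = svt(G - S, tau_nuc)   # update the low-rank component
        S = soft(G - L, tau_l1)   # update the sparse component
    return L, S

# Toy "gain map": a rank-one shadowing pattern plus a few strong localized outliers.
rng = np.random.default_rng(0)
G = np.outer(rng.normal(size=30), rng.normal(size=20))
G[rng.integers(0, 30, size=5), rng.integers(0, 20, size=5)] += 10.0
L, S = low_rank_plus_sparse(G)
```

In this toy run, L recovers most of the rank-one structure while S absorbs the localized outliers; the cartography algorithms developed in the thesis are considerably more involved.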
Item: On-The-Fly Parameter Estimation Based on Item Response Theory in Item-based Adaptive Learning Systems (2020-11). Jiang, Shengyu.
An online learning system has the capacity to offer customized content that caters to each individual learner's needs, and it has seen growing interest from industry and academia alike in recent years. Noting the similarity between online learning and the more established adaptive testing procedures, research has focused on applying adaptive testing techniques to the learning environment. Yet due to the inherent differences between learning and testing, major challenges hinder the development of adaptive learning systems. To tackle these challenges, a new online learning system is proposed that features a Bayesian algorithm for computing item and person parameters on the fly. The new algorithm is validated in two separate simulation studies, and the results show that the system, while cost-effective to build and easy to implement, can also achieve adequate adaptivity and measurement precision for the individual learner.
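As a minimal, hypothetical illustration of Bayesian "on-the-fly" estimation in an item response theory setting, the sketch below updates a single learner's ability posterior on a grid after each response under a two-parameter logistic (2PL) model. It treats item parameters as known, which is a simplification of the thesis's joint item-and-person estimation; the item bank, responses, prior, and grid are illustrative assumptions.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL IRT model: probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Discretize the ability scale and start from a standard-normal prior.
theta_grid = np.linspace(-4.0, 4.0, 161)
posterior = np.exp(-0.5 * theta_grid**2)
posterior /= posterior.sum()

# Illustrative item bank (discrimination a, difficulty b) and observed responses.
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]
responses = [1, 1, 0]

for (a, b), y in zip(items, responses):
    like = p_correct(theta_grid, a, b) if y == 1 else 1.0 - p_correct(theta_grid, a, b)
    posterior *= like              # Bayes update after each response, "on the fly"
    posterior /= posterior.sum()

theta_hat = float(np.sum(theta_grid * posterior))  # posterior-mean ability estimate
```

An adaptive system could then use the updated posterior to choose the next item, for example the one with the greatest information near theta_hat.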
Item: STATISTICAL METHODS FOR ARM-BASED BAYESIAN NETWORK META-ANALYSIS (2020-06). Wang, Zhenxun.
Network meta-analysis (NMA) is a recently developed tool for combining and contrasting direct and indirect evidence in systematic reviews of multiple treatments. Compared to traditional pairwise meta-analysis, it can improve statistical efficiency and reduce certain biases. Unlike the contrast-based NMA approach, which focuses on estimating relative effects such as odds ratios, the arm-based (AB) NMA approach can estimate absolute effects (such as overall treatment-specific event rates), which are arguably more useful in medicine and public health, as well as relative effects. In AB-NMA, treatment-specific variances are needed to estimate treatment-specific overall effects, while accurate estimation of correlation coefficients is critical for borrowing information across treatments. However, partly due to lack of information, estimates of correlation coefficients and variances can be biased and unstable if conjugate priors (e.g., the inverse-Wishart (IW) distribution) are used for the covariance matrix. To address the first challenge of accurately estimating correlation coefficients, several separation strategies (i.e., separate priors on variances and correlations) can be considered. To study the IW prior's impact on AB-NMA and to compare it with separation strategies, we conducted simulation studies under different missing-treatment mechanisms. A separation strategy with appropriate priors for the correlation matrix (e.g., equal correlations) performs better than the IW prior and is thus recommended as the default vague prior in the AB approach. To address the second challenge of variance estimation, we can either assume that the variances of different treatments share a common distribution with unknown hyper-parameters (variance shrinkage) or borrow information from single-arm studies (variance extrapolation). We conducted simulation studies to evaluate the performance of the proposed approaches and to illustrate the importance of variance shrinkage and variance extrapolation when the number of clinical studies involving each treatment is relatively small. These results are further illustrated by multiple case studies.
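For readers unfamiliar with the separation strategy discussed above, the following sketch contrasts two ways of drawing a prior covariance matrix for treatment-specific random effects: a conjugate inverse-Wishart draw versus a separation-strategy draw with independent priors on the standard deviations and an equal-correlation matrix. The dimension, the half-normal scale, and the uniform range for the common correlation are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.stats import halfnorm, invwishart, uniform

K = 3  # number of treatments (illustrative)
rng = np.random.default_rng(1)

# (a) Conjugate prior: a single inverse-Wishart draw for the full covariance matrix.
Sigma_iw = invwishart.rvs(df=K + 1, scale=np.eye(K), random_state=rng)

# (b) Separation strategy: separate priors on standard deviations and correlations.
sd = halfnorm.rvs(scale=1.0, size=K, random_state=rng)  # treatment-specific SDs
# Equal-correlation structure; rho must lie in (-1/(K-1), 1) for positive definiteness.
rho = uniform.rvs(loc=-1.0 / (K - 1), scale=1.0 + 1.0 / (K - 1), random_state=rng)
R = np.full((K, K), rho)
np.fill_diagonal(R, 1.0)
Sigma_sep = np.diag(sd) @ R @ np.diag(sd)
```

Placing the priors separately in this way lets the variances and the common correlation be specified and shrunk independently, a flexibility that the single degrees-of-freedom parameter of the IW prior does not offer.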