### Browsing by Subject "Monte Carlo"

Now showing 1 - 17 of 17

#### Bayesian approach to Phase II statistical process control for time series (2013-04)
Zhou, Tianyang

In statistical process control (SPC) problems, traditional approaches require in-control values of the parameters. However, this requirement is not realistic. New methods based on the change-point model have been developed to avoid it. The existing change-point methods are restricted to independent, identically distributed observations, ignoring the numerous settings in which process readings are serially correlated. Furthermore, these frequentist methods are unable to make use of imperfect prior information on the parameters. In my research, I propose a Bayesian approach to online SPC based on the change-point model in an ARMA process. This approach accommodates serially correlated data and also provides a coherent way of incorporating prior information on the parameters.

#### A Computational Evaluation Of Neutron Capture Efficiency In Plastic Scintillators (2016-10-05)
Schmitz, Ryan; Poehlmann, David-Michael; Rogers, Hannah; Barker, D'Ann; Cushman, Priscilla

A Monte Carlo study using GEANT4 was performed on the neutron capture efficiency achieved by Gd-loaded plastic scintillators. A "deposition efficiency" parameter was defined as the percentage of incident neutrons that were captured in the Gd-loaded scintillator and whose emitted gammas deposited energy above a certain threshold in a larger layer of plastic scintillator. Deposition efficiency curves were collected for varying thresholds and Gd concentrations, and the results are discussed here.

#### DNA confined in nanochannels and nanoslits (2014-05)
Tree, Douglas

It has become increasingly apparent in recent years that next-generation sequencing (NGS) has a blind spot for large-scale genomic variation, which is crucial for understanding the genotype-phenotype relationship.
Genomic mapping methods attempt to overcome the weaknesses of NGS by providing a coarse-grained map of the distances between restriction sites to aid in sequence assembly. From such methods, one hopes to realize fast and inexpensive de novo sequencing of human and plant genomes. One of the most promising methods for genomic mapping involves placing DNA inside a device only a few dozen nanometers wide called a nanochannel. A nanochannel stretches the DNA so that the distance between fluorescently labeled restriction sites can be measured en route to obtaining an accurate genome map. Unfortunately for those who wish to design devices, the physics of how DNA stretches when confined in a nanochannel is still an active area of research. Indeed, despite decades-old theories from polymer physics regarding weakly and strongly stretched polymers, seminal experiments in the mid-2000s went unexplained until very recently. With the goal of creating a realistic engineering model of DNA in nanochannels, this dissertation addresses a number of important outstanding research topics in this area. We first discuss the physics of dilute solutions of DNA in free solution, which show distinctive behavior due to the stiff nature of the polymer. We then turn our attention to the equilibrium regimes of confined DNA and explore the effects of stiff chains and weak excluded volume on the confinement free energy and polymer extension. We also examine dynamic properties such as the diffusion coefficient and the characteristic relaxation time. Finally, we discuss a sister problem related to DNA confined in nanoslits, which shares much of the same physics as DNA confined in channels. Having done this, we find ourselves with a well-parameterized wormlike chain model that is remarkably accurate in describing the behavior of DNA in confinement.
As such, it appears that researchers may proceed with the rational design of nanochannel mapping devices using this model.

#### Magnetic properties and potential applications of Fe16N2 (2021-02)
Hang, Xudong

Fe16N2 is a magnetic material with giant saturation magnetization that has potential applications in the hard drive and permanent magnet industries. In this thesis, the fundamental magnetic properties of Fe16N2 are studied experimentally on thin-film samples, and the potential application of Fe16N2 as a rare-earth-free permanent magnet is investigated theoretically. For the experimental part, the sputtering growth of Fe16N2 thin films on nonmagnetic seed layers is reported first, which provides the foundation for determining the magnetic structure of Fe16N2. The magnetic structure of high-magnetization Fe16N2, solved using polarized neutron diffraction, is reported for the first time. The magnetic structure also helps explain the origin of the giant magnetization observed in Fe16N2. Using polarized neutron reflectometry, transmission electron microscopy, and vibrating sample magnetometry, we clarified the origin of the interface-enhanced magnetization and the perpendicularly magnetized components in Fe16N2 thin films. For the theoretical part, Monte Carlo methods were applied to explore the possibilities of antiferromagnet-ferromagnet exchange-coupled composite magnets, using Fe16N2 as the ferromagnet because of its large saturation magnetization and reasonably high magnetic anisotropy constant. A new Monte Carlo-sampling-based algorithm for comparative analysis of coercivity is proposed.
It is confirmed that, with a proper choice of antiferromagnetic material and an optimized microstructure, large coercivity can be achieved in antiferromagnet-Fe16N2 composite magnets and that the maximum energy product can be enhanced by up to 10% compared to pure iron nitride magnets.

#### Markov Chains Meet Molecular Motors (2022-11)
Shrivastava, Rachit

Transportation of important cargoes inside the cells of living organisms is critical for several cellular functions. Most cargo transportation inside cells is accomplished by teams of nanometer-sized proteins called molecular motors. These proteins walk on filamentous tracks inside the cells while carrying a common cargo from its source to its destination. Malfunctions in this process can lead to several life-threatening maladies, ranging from neurodegenerative diseases to cancers. Thus, a fundamental understanding of how molecular motors coordinate the transport of a common cargo is of immense scientific importance. Due to the small sizes and stochastic nature of these motor proteins, experiments often lack the spatial and temporal resolution to investigate the intracellular cargo transport process in molecular detail. Mathematical modeling can not only add to the information obtained experimentally but also guide future experiment design, thereby helping us understand the genesis of diseases and aiding the discovery of cures. In this thesis, we have utilized the theory of Markov chains to mathematically model the intracellular cargo transport process by teams of molecular motors. Backed by the mathematical theory, we developed a numerical framework that improves upon existing methodologies and enables us to compute the biologically important statistics of cargo transport by teams of motor proteins in a more realistic setting on a regular desktop PC.
In contrast, previous methodologies regularly employed supercomputing clusters to obtain these statistics. Thus, our methods are more accessible to biophysicists who study molecular motors but lack access to appropriate computational infrastructure. Further, we develop toy models to mathematically analyze the cargo transport process by two molecular motors, replicating the well-known "tug of war" and "coordinated movement" scenarios in multi-motor cargo transport. Our results show that cargo transport velocities can display non-trivial characteristics and phase transitions depending on certain experimental parameters, which is an unexpected phenomenon. The methods developed here are not limited to modeling cargo transport by molecular motors; they also serve as a stepping stone for modeling a more general class of processes in which a group of stochastic agents accomplishes a common goal.

#### Misspecification of the covariance matrix in the linear mixed model: a Monte Carlo simulation (2013-02)
LeBeau, Brandon C.

The linear mixed model has become a popular method for analyzing longitudinal and cross-sectional data due to its ability to overcome many of the limitations of classical methods such as repeated measures analysis of variance or multivariate analysis of variance. Although the linear mixed model allows for flexible modeling of clustered data, the simulation research literature is not nearly as extensive as that for classical methods. The current study adds to this literature on the statistical properties of the linear mixed model under longitudinal data conditions. Historically, when using the linear mixed model to analyze longitudinal data, researchers have allowed the random effects to solely account for the dependency due to repeated measurements.
This dependency arises from repeated measurements on the same individual, with measurements taken closer in time more correlated than measurements taken further apart. If measurements are taken close in time (i.e., every hour, daily, weekly, etc.), the random effects alone may not adequately account for the dependency due to repeated measurements. In this case serial correlation may be present and need to be modeled. Previous simulation work exploring the effects of misspecification of serial correlation has shown that the fixed effects tend to be unbiased; however, evidence of bias shows up in the variance of the random components of the model. In addition, some evidence of bias was found in the standard errors of the fixed effects. These simulation studies were done with all other model conditions being "perfect," including normally distributed random effects and larger sample sizes. The current simulation study aims to generalize to a wider variety of data conditions. It used a factorial design with four manipulated simulation conditions: covariance structure, random effect distribution, number of subjects, and number of measurement occasions. Relative bias of the fixed and random components was explored descriptively and inferentially. In addition, the type I error rate was examined for any impact the simulation conditions had on the robustness of hypothesis testing. A second, smaller study explicitly misspecified the random slope for time to see if serial correlation could overcome the misspecification of that random effect. Results for the larger simulation study found no bias in the fixed effects. There was, however, evidence of bias in the random components of the model. The fitted and generated serial correlation structures, as well as their interaction, explained significant variation in the bias of the random components.
The largest amounts of bias were found when the fitted structure was underspecified as independent. Type I error rates for the five fixed effects were just over 0.05, with many around 0.06. Many of the simulation conditions explained significant variation in the empirical type I error rates. Study two again found no bias in the fixed effects. Just as in study one, bias was found in the random components, with the fitted and generated serial correlation structures, as well as their interaction, explaining significant variation in the relative bias statistics. Of most concern were the severely inflated type I error rates for the fixed effects associated with the slope terms. The type I error rate was on average twice what would be expected and ranged as high as 0.25. The fitted serial correlation structure and the interaction between the fitted and generated structures explained significant variation in these terms. More specifically, when the serial correlation is underspecified as independent in conjunction with a missing random effect for time, the type I error rate can become severely inflated. Serial correlation does not appear to bias the fixed effects; therefore, if point estimates are all that is desired, serial correlation need not be modeled. However, if estimates of the random components or inference are of concern, care needs to be taken to at least include serial correlation in the model when it is found in the data. In addition, if serial correlation is present and the model is misspecified without the random effect for time, serious distortions of the empirical type I error rate occur. This would lead to rejecting many more true null hypotheses, making conclusions extremely uncertain.

#### Modeling DNA electrophoresis in confined geometries (2010-08)
Laachi, Nabil

Size-based DNA separation is at the heart of numerous biological applications.
While gel electrophoresis remains widely utilized to fractionate DNA according to size, the method has several shortcomings. Recent advances in micro- and nano-fabrication techniques have engendered several microfabricated devices aimed at addressing some of the limitations of gel electrophoresis for DNA separations. In this thesis, we employ a combination of analytical and computational methods to characterize the electrophoretic motion of DNA molecules in microfabricated, confining geometries. In particular, we consider three situations: (i) the migration of short DNA in nanofilters (a succession of narrow slits connecting deep wells) under a high electric field; (ii) the metastable unhooking of a long DNA chain wrapped around a cylindrical post; and (iii) the dynamics of long DNA chains in an array of spherical cavities connected by nanopores. We provide insights into the physical mechanisms underlying the transport of DNA in such geometries. Useful guidelines for the optimal design of new separation devices result from the fundamental understanding gained by the approach we propose.

#### Molecular Simulation of Adsorption in Zeolites (2014-08)
Bai, Peng

Zeolites are a class of crystalline nanoporous materials that are widely used as catalysts, sorbents, and ion-exchangers. Zeolites have revolutionized the petroleum industry and fueled the 20th-century automobile culture by enabling numerous highly efficient transformations and separations in oil refineries. They are also poised to play an important role in many processes of biomass conversion. One of the fundamental principles in the field of zeolites involves understanding and tuning the selectivity for different guest molecules that results from the wide variety of pore architectures. The primary goal of my dissertation research is to gain such understanding via computer simulations and eventually to reach the level of predictive modeling.
The dissertation starts with a brief introduction to the applications of zeolites and the computer modeling techniques useful for the study of zeolitic systems. Chapter 2 then describes an effort to improve simulation efficiency, which is essential for many challenging adsorption systems. Chapter 3 studies a model system to demonstrate the applicability and capability of the method used for the majority of this work, configurational-bias Monte Carlo simulations in the Gibbs ensemble (CBMC-GE). After these methodological developments, Chapters 4 and 5 report a systematic parametrization of a new transferable force field for all-silica zeolites, TraPPE-zeo, and a subsequent, relatively ad hoc extension to cation-exchanged aluminosilicates. The CBMC-GE method and the TraPPE-zeo force field are then combined to investigate some complex adsorption systems, such as linear and branched C6-C9 alkanes in a hierarchical microporous/mesoporous material (Chapter 6) and the multi-component adsorption of aqueous alcohol solutions (Chapter 7) and glucose solutions (Chapter 8). Finally, Chapter 9 describes an endeavor to screen a large number of zeolites with the purpose of finding better materials for two energy-related applications, ethanol/water separation and hydrocarbon iso-dewaxing.

#### Molecular-level Insights into reversed-phase liquid chromatographic systems via Monte Carlo simulation (2009-08)
Rafferty, Jake Leland

Separations are of utmost importance in the field of chemistry, and reversed-phase liquid chromatography (RPLC) is among the most popular techniques for this purpose. Despite this popularity, and decades of research effort, a fundamental understanding of RPLC at the molecular level is lacking. To gain this detailed understanding, molecular simulations using advanced Monte Carlo algorithms and accurate force fields are applied to examine structure and retention in various realistic model RPLC systems.
The simulations afford quantitative agreement with experimental retention data and offer many new insights into stationary-phase structure and the molecular mechanism of solute retention in RPLC.

#### Monte Carlo Likelihood Approximation for Generalized Linear Mixed Models (2016-01)
Knudson, Christina

Frequentist likelihood-based inference for generalized linear mixed models is often difficult to perform. Because the likelihood cannot depend on unobserved data (such as random effects), the likelihood for a generalized linear mixed model is an integral that is often high-dimensional and intractable. The method of Monte Carlo likelihood approximation (MCLA) approximates the entire likelihood function using random effects simulated from an importance sampling distribution. The resulting Monte Carlo likelihood approximation can be used for any frequentist likelihood-based inference. Due to the challenge of finding an importance sampling distribution that works well in practice, very little publicly available MCLA software existed prior to 2015. I present an importance sampling distribution to be used in implementing MCLA for generalized linear mixed models; establish its theoretical validity; implement it in the R package glmm; and demonstrate how to use the package to perform maximum likelihood estimation, test hypotheses, and calculate confidence intervals.

#### Monte Carlo Studies of Microheterogeneous Fluids and Solvation Environments (2018-09)
Xue, Bai

Microheterogeneous systems play an essential role in many aspects of chemistry. For example, understanding bubble or droplet nucleation processes enables us to investigate atmospheric chemistry such as rain and cloud formation. Interfacial effects often involve enrichment and thus can be exploited in adsorption and separation, and decreasing interfacial tension is the basis of surfactant applications.
Studies of heterogeneous solvation environments may lead us to understand the most important processes in life, such as protein folding and membrane formation. Because of their fundamental importance, many experimental and theoretical methods have been developed to understand their mechanisms. However, experiments are very difficult to conduct under extreme conditions, for example the high-temperature, high-pressure conditions in oil reservoirs and the complicated environment of the atmosphere. On the other hand, most theoretical methods still rely on empiricism and thus have difficulty providing physical insight. Hence, molecular-level simulations provide a promising alternative approach to studying complex heterogeneous systems. Monte Carlo and molecular dynamics simulations have been employed in this thesis to study many important applications, including bubble nucleation, water/alkane phase equilibria, the micro-solvation environment of a chromophore, and the interfacial tension of water/oil. The results show that these simulations can yield accurate predictions of macroscopic properties while revealing molecular-level structure, demonstrating that molecular simulations are indeed powerful tools for studying complex heterogeneous systems.

#### Output Analysis Of Monte Carlo Methods With Applications To Networks And Functional Approximation (2020-02)
Nilakanta, Haema

The overall objective of the Monte Carlo method is to use data simulated in a computer to learn about complex systems. This is a highly flexible approach and can be applied in a variety of settings. For instance, Monte Carlo methods are used to estimate network properties or to approximate functions. Although the use of these methods in such cases is common, little to no work exists on assessing the reliability of the estimation procedure.
Thus, the contribution of this work lies in further developing methods to better address the reliability of Monte Carlo estimation, particularly with respect to estimating network properties and approximating functions. In network analysis, many networks can only be studied via sampling methods due to the scale or complexity of the network, access limitations, or hard-to-reach populations of interest. In such cases, the application of random walk-based Markov chain Monte Carlo (MCMC) methods to estimate multiple network features is common. However, the reliability of these estimates has been largely ignored. We consider and further develop multivariate MCMC output analysis methods in the context of network sampling to directly address the reliability of the multivariate estimation. This approach yields principled, computationally efficient, and broadly applicable methods for assessing the Monte Carlo estimation procedure. We also study the reliability of Monte Carlo estimation in approximating functions using importance sampling. Although we focus on approximating difficult-to-compute density and log-likelihood functions, we develop a general framework for constructing simultaneous confidence bands that could be applied in other contexts. In addition, we propose a correction to improve the reliability of the log-likelihood function estimation using the Monte Carlo likelihood approximation approach.

#### Phonon and Thermal Dynamics of Kitaev Quantum Spin Liquids (2022-01)
Feng, Kexin

A quantum spin liquid (QSL) is a novel state of magnetic matter in which magnetic order is absent down to very low temperature. This novel state has massive quantum entanglement and can host emergent quasiparticle excitations, which carry fractionalized quantum numbers and display anyonic statistics. In this thesis, I focus on the phonon and thermal dynamics of the Kitaev QSL and explore detectable signatures of the QSL phase.
More specifically, I propose new observables, including the sound attenuation coefficient, the Hall viscosity, and Fano effects in optical-phonon Raman spectroscopy, which are shown to encode information about the fractionalized excitations, namely Z2 gauge fluxes and itinerant fermions. The key technique for dealing with spin-phonon couplings relies largely on symmetry considerations, which involve group and representation theory. The main numerical techniques for simulating flux thermodynamics are Markov chain Monte Carlo and stratified Monte Carlo; the latter is a new, efficient algorithm that I designed specifically for the Kitaev model, based on my phenomenological study of the Z2 flux thermodynamics.

#### Planned Missingness: A Sheep in Wolf's Clothing (2021-06)
Zhang, Charlene

There has been an extensive body of methodological literature supporting the effectiveness of planned missingness (PM) designs for reducing survey length. However, in industrial/organizational (I/O) psychology, PM is still rarely applied. Instead, when there is a need to reduce survey length, the standard practice is either to reduce the number of constructs measured or to use short forms rather than full measures. The former is obviously not ideal. The latter requires prioritizing the measurement of some items over others and can quickly become time- and labor-intensive, as not all measures have established short forms. This dissertation presents three studies that compare the relatively unused methodology of PM against the common practice of using short forms. First, the two approaches are compared in three archival datasets, finding that PM consistently yields more accurate correlational estimates than short forms.
Second, a Monte Carlo simulation is conducted to explore how this comparison may be affected by data characteristics, including the number of constructs, construct intercorrelations, sample size, amount of missingness, and different types of short forms. Averaged across all simulated conditions, short forms produce slightly more accurate estimates than PM when empirically developed short forms are readily available for use. When part of the sample must first be used to develop the short forms, the two approaches perform equivalently. When the selection of items for short forms strays from being purely empirical, PM outperforms short forms. Lastly, a qualitative survey exploring social science researchers' knowledge of PM finds that most are not familiar with PM or have an inaccurate understanding of the concept, despite working with surveys frequently. A number of research contexts are identified for which PM may not be suitable. Overall, the findings of this dissertation demonstrate that PM designs are effective in producing accurate estimates. This effectiveness, along with its convenience, makes PM a valuable survey design tool. It is apparent that the road to popularizing this technique within the I/O field will require much education in its understanding and application, and this dissertation serves as a first step in doing so.

#### Sour Gas Sweetening and Ethane/Ethylene Separation (2018-05)
Shah, Mansi S

Chemical separations are responsible for nearly half of US industrial energy consumption. The next generation of separation processes will rely on smart materials to greatly relieve this energy expense. This thesis research focuses on two very energy-intensive, large-scale industrial separations: sour gas sweetening and ethane/ethylene separation. Traditionally, gas sweetening has been achieved through amine-based absorption processes that selectively remove H2S and CO2 from CH4.
Ethane/ethylene is an even harder mixture to separate, since the two molecules have very similar sizes, shapes, and self-interaction strengths. Despite the mixture's low relative volatility (1.2-3.0), cryogenic distillation is the most commonly used technique for this separation. Compared to absorption and cryogenic distillation, adsorption allows for better control of performance through the choice of adsorbent. Crystalline materials such as zeolites, which have precisely defined pore structures, exhibit excellent molecular sieving properties. Performance is closely linked to structure; identifying top zeolites from the large pool of available structures (~300) is thus crucial for improving the separation. In this thesis research, molecular modeling is used to identify optimal materials for these two separations. Since the accuracy of predictive molecular simulations is governed by the underlying molecular models, the first objective of this thesis research was to develop improved molecular models for H2S, ethane, and ethylene. A wide variety of properties, such as vapor-liquid and solid-vapor equilibria, critical and triple points, vapor pressures, mixture properties, relative permittivities, liquid structure, and diffusion coefficients, were studied using molecular simulations to parameterize transferable molecular models for these molecules. These models are designed to strike a good balance between accuracy of prediction and efficiency of simulation. For some of the zeolites for which experimental data existed in the literature, purely predictive adsorption isotherms agreed quantitatively with the available experiments. A computational screening was then performed for over 300 zeolite structures using tailored molecular simulation protocols and high-performance supercomputers. Optimal zeolites for each of the two applications were identified for a wide range of temperatures, pressures, and mixture compositions.
Finally, a brief literature survey of the zeolites that have been synthesized in their all-silica form is presented, and syntheses for two of the important target framework types are discussed.

#### Three-dimensional dosimetry around small distributed high-Z materials (2016-05)
Warmington, Leighton

Patients are increasingly undergoing radiotherapy procedures in which small metal objects are implanted in the body for target localization in IGRT or for targeted therapies. Previous interface dosimetry studies focused on high-Z materials irradiated by low-energy beams, where the dose enhancement is large. In the majority of cases, they used one- or two-dimensional detectors. Therapeutic beams, however, are mostly 6 MV and higher, with significantly less dose enhancement. Over the last decade, significant improvements in polymer gel dosimetry have been made, allowing for improved 3D dose measurements. The purpose of this study was to better understand the dose around distributed high-Z materials irradiated by high-energy photon beams and to investigate the feasibility of 3D dose measurements. A Monte Carlo code was used to determine the effect of various foil configurations. The dosimetric effects of foil thickness, separation, energy, and other factors were investigated. Software tools were also developed to process the data. These results were used to help identify suitable experimental setups. The dose around two foils was compared to the sum of the doses of two single foils. The dose around a single foil was also compared to the dose around a fiducial marker. We then looked at how distributing the thickness of the high-Z foil over a wider area affected the dose and how that compared to the dose around a single foil. Finally, we looked at the effect of pair production and how it affected the distribution of dose in select configurations. Several polymer gel dosimeters (PGDs) were evaluated and two were selected for further study.
Various formulations were investigated and procedures developed to meet the needs of the project. Materials compatibility studies were performed to ensure that there were no reactions between the PGD and the inserted materials within the time frame of the studies. PGDs were manufactured, and thin lead foils with the configurations determined earlier were inserted into the polymer gel. The PGDs were irradiated with 18 MV photons, and the dose was quantified using MRI with a multiple spin echo technique for the measurement of the spin-spin relaxation rate (R2). The measured dose data were compared to theoretical data obtained from the Monte Carlo simulations. The dose profiles around the foils from the PGDs were in agreement with dose values from simulation. This project demonstrated that it is feasible to use polymer gel dosimetry to measure the fine dosimetric structure around a small metallic object. We also determined that material, foil thickness, separation, and photon energy had the largest effects on the dose between the foils in a two-foil configuration. When the foils were close, we found that the dose around the two foils was larger than, but not significantly different from, the combined dose of two single foils with the same separation. We also found that the dose upstream and downstream of a distributed foil is less than the upstream and downstream dose around a single foil of equivalent thickness.

#### Toward Simulation of Complex Reactive Systems: Development and Application of Enhanced Sampling Methods (2018-03)
Fetisov, Evgenii

Predictive modeling of fluid phase and sorption equilibria for reacting systems presents one of the grand challenges in the field of molecular simulation.
Difficulties in the study of such systems arise from the need (i) to accurately model both the strong, short-ranged interactions leading to the formation of chemical bonds and the weak interactions representing the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. This thesis showcases some of my efforts in developing and applying advanced simulation methods to a variety of important systems. Chapters 2 and 3 describe how a novel Monte Carlo method (reactive first-principles Monte Carlo, or RxFPMC) can be used to overcome some limitations of existing methods for simulating reactive systems. Chapter 4 shows how advanced sampling techniques in combination with sophisticated interatomic potentials can be used to elucidate nucleation pathways. Chapters 5 and 6 demonstrate how first-principles simulations can be leveraged to understand the liquid structure of novel complex solvents as well as reactive processes in such solvents. Finally, the last chapter discusses the use of smart sampling algorithms to study the chemisorption of mixed ligands on nanoparticles.
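A recurring theme across these abstracts is Monte Carlo estimation paired with an assessment of the estimate's reliability, which is the central topic of the output-analysis thesis above. As a minimal, hypothetical Python sketch of that general idea (not taken from any of the works listed, and with all names invented for illustration), the snippet below forms a plain Monte Carlo estimate of pi and gauges its uncertainty with a batch-means standard error:

```python
import math
import random
import statistics

def quarter_circle_draws(n, seed=0):
    """Indicator draws whose mean estimates pi: 4 * P(uniform point
    in the unit square lands inside the quarter circle)."""
    rng = random.Random(seed)
    return [4.0 if rng.random() ** 2 + rng.random() ** 2 <= 1.0 else 0.0
            for _ in range(n)]

def batch_means_se(draws, n_batches=20):
    """Batch-means standard error: split the output stream into
    batches and use the spread of the batch averages to judge the
    reliability of the overall Monte Carlo estimate."""
    b = len(draws) // n_batches
    batch_avgs = [statistics.fmean(draws[i * b:(i + 1) * b])
                  for i in range(n_batches)]
    return statistics.stdev(batch_avgs) / math.sqrt(n_batches)

draws = quarter_circle_draws(100_000)
estimate = statistics.fmean(draws)   # Monte Carlo point estimate of pi
se = batch_means_se(draws)           # reliability of that estimate
print(f"pi ~ {estimate:.4f} +/- {2 * se:.4f}")
```

Batch means is one of the simplest output-analysis diagnostics; the same pattern (simulate, average, then quantify the Monte Carlo error from the output itself) extends to the correlated MCMC and importance-sampling settings several of these theses address.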