Browsing by Subject "Simulation"
Now showing 1 - 20 of 62
Item: Accessibility, Network Structure, and Consumers' Destination Choice: A GIS Analysis of GPS Travel Data and the CLUSTER Simulation Module for Retail Location Choice (Intelligent Transportation Systems Institute, Center for Transportation Studies, University of Minnesota, 2012-10). Huang, Arthur; Levinson, David.
Anecdotal and empirical evidence has shown strong associations between the built environment and individuals' travel decisions. To date, data about individuals' travel behavior and the nature of the retail environment have not been linked at a fine-grained level for verifying such relationships. GPS and GIS have revolutionized how we measure and monitor land use and individual travel behavior. Compared with traditional travel survey methods, GPS technologies provide more accurate and detailed information about individuals' trips. Based on GPS travel data from the Twin Cities, we analyze the impact of individuals' interactions with road network structure and of destination accessibility on destination choice for home-based non-work retail trips. The results reveal that higher accessibility and greater diversity of services make a destination more attractive. Further, accessibility and diversity of establishments within a walking zone are often highly correlated. A destination reached via a more circuitous or discontinuous route loses appeal. In addition, we build an agent-based simulation tool to study retail location choice on a supply chain network consisting of suppliers, retailers, and consumers. The simulation software illustrates that clustering of retailers can emerge from balancing distance to suppliers against distance to consumers. We further applied this tool in the Transportation Geography and Networks course (CE 5180) at the University of Minnesota; student feedback indicates that it is a useful active-learning tool for transportation and urban planning education. The software also has the potential to be extended into an integrated regional transportation-land use forecasting model.
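
The clustering mechanism described above can be illustrated with a minimal agent-based sketch. Everything below is hypothetical (retailer count, cost weights, a single supplier fixed at x = 0); it is not the CLUSTER module itself, only a toy version of the supplier-consumer trade-off it simulates.

```python
import random

# Toy agent-based sketch (not the CLUSTER module): retailers on a line
# trade off distance to a single supplier at x = 0 against mean distance
# to the consumers they capture. All counts and weights are illustrative.
random.seed(1)
consumers = [i / 100 for i in range(101)]          # consumers spread on [0, 1]
retailers = [random.random() for _ in range(5)]    # random initial locations
W_SUPPLY, W_DEMAND = 0.5, 1.0                      # assumed cost weights

def cost(x, others):
    # Consumers patronize their nearest retailer; demand cost is the mean
    # distance to the consumers this retailer captures at location x.
    mine = [c for c in consumers
            if all(abs(c - x) <= abs(c - o) for o in others)]
    demand = sum(abs(c - x) for c in mine) / len(mine) if mine else 1.0
    return W_SUPPLY * x + W_DEMAND * demand

for sweep in range(20):                            # best-response dynamics
    for i in range(len(retailers)):
        others = retailers[:i] + retailers[i + 1:]
        grid = [g / 50 for g in range(51)]
        retailers[i] = min(grid, key=lambda x: cost(x, others))

# Near-coincident final locations indicate emergent clustering.
print(sorted(round(r, 2) for r in retailers))
```
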
Item: Aircraft Simulation Baseline 2014 v1 (2014-07-10). Taylor, Brian.

Item: Analysis of the bump problem in BSIM3 using NOR gate circuit and implementation of techniques in order to overcome them (2014-12). Sankaralingam, Subramaniam.
In this paper we analyze the bump problem in BSIM3 using a "killer" NOR gate circuit; we call the circuit a "killer" gate because it kills our simulation results. The problem is witnessed in all models employing the quasi-static approximation. The quasi-static and non-quasi-static approximations are explained in detail to give better insight into the bump problem. Finally, some techniques for overcoming the problem are proposed and analyzed with the help of waveforms.

Item: Biopolymer Simulations: From Next-Generation Genomics to Consumer Products (2018-04). Li, Xiaolan.
Biopolymers have many unique properties which play an essential and pervasive role in everyday life, making them attractive for engineering applications. Understanding how the particular properties of biopolymers give rise to important applications in technology remains a long-standing challenge. Although biopolymers can have different chemistries, they share some common physical properties: high molecular weights, stiff backbones, and complex internal structures. Computer simulation therefore plays an important role, since it provides a generic model that, by changing its parameters, permits studying a wide variety of biopolymers. Specifically, we focus on two such biopolymers, DNA and methylcellulose, and this thesis studies their universal properties using novel molecular simulation techniques. DNA attracts particularly strong interest not only because of its fascinating double-helix structure but also because DNA carries biological information. Genomic mapping is emerging as a new technology for detecting large-scale genomic structural variations, yet the conformation and properties of linearized DNA are only beginning to be understood. With a Monte Carlo chain-growth method known as the pruned-enriched Rosenbluth method, we explore the force-extension relationship of stretched DNA. In this scenario, external forces and confinement are two fundamental and complementary aspects. We begin by stretching a single DNA in free solution, which allows restrictions imposed by forces to be separated from those imposed by walls. This work shows that the thickness of DNA plays an important role in the force-extension behavior; the key outcome is a new expression that approximates the force-extension behavior to within about 5% relative error over the full range of forces. We then analyze slit-confined DNA stretched by an external force. This work predicts a new regime in the force-extension behavior featuring the combined effect of appreciable DNA volume and appreciable wall effects. We anticipate that such a complete description of the force-extension of DNA will prove useful for the design of new genomic mapping technologies. The dissertation also involves another biopolymer, methylcellulose, which has an extremely wide range of commercial uses. Methylcellulose is a thermoresponsive polymer that undergoes a morphological transition at elevated temperature, forming fibrils of uniform diameter; however, the mechanisms behind the solution-gel transition are poorly understood. Following the computational studies by Huang et al. [1], we apply Langevin dynamics simulations to a coarse-grained model that produces collapsed ring-like structures in dilute solution with a radius close to that of the fibrils observed in experiments. We show that competition between the dihedral potential and self-attraction causes these collapsed states to undergo rapid conformational changes, which help the chain avoid kinetic traps by permitting transitions between collapsed states. We expect that our computational findings will not only deepen the understanding of semiflexible polymer physics but also inspire novel engineering applications relying on the properties of biopolymers.
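
For context on the force-extension work described above, the standard Marko-Siggia worm-like-chain interpolation can be evaluated in a few lines. This is the classical expression that work refines (it ignores chain thickness), not the thesis's new expression; the parameter values are typical literature numbers.

```python
import math

# Standard Marko-Siggia interpolation for a worm-like chain. This is NOT the
# thesis's new thickness-aware expression, only the classical baseline.
KT = 4.11          # thermal energy at room temperature, pN*nm
LP = 53.0          # typical DNA persistence length, nm (literature value)

def marko_siggia_force(rel_ext):
    """Force (pN) needed to hold a WLC at fractional extension rel_ext = x/L."""
    return (KT / LP) * (1.0 / (4.0 * (1.0 - rel_ext) ** 2) - 0.25 + rel_ext)

for x in (0.2, 0.5, 0.8, 0.95):
    print(f"x/L = {x:4.2f}  ->  f = {marko_siggia_force(x):7.3f} pN")
```
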
Item: Boosting Max-Pressure Signal Control Into Practical Implementation: Methodologies and Simulation Studies in City Networks (2023-08). Xu, Te.
This dissertation presents innovative modifications to the max-pressure (MP) control policy, an adaptive traffic signal control strategy tailored to various urban traffic conditions. MP control offers two pivotal advantages that underscore its significance for in-depth research and future implementation: first, it operates on a decentralized basis, enabling real-time solutions; second, it guarantees maximum stability, meaning it can accommodate as much demand as any alternative signal timing strategy. Initially, the MP control policy was adapted for transit signal priority (MP-TSP); it delivered improved bus travel times, outperforming both fixed-time signal control with TSP and other adaptive signal controls in efficiency. Subsequently, a pedestrian-friendly max-pressure signal controller (Ped-MP) was developed, a pioneering effort to craft an MP control that improves pedestrian access without compromising vehicle throughput. The Ped-MP, backed by an analytical proof of maximum stability, exhibited an inverse relation between pedestrian delay and tolerance time in simulations on the Sioux Falls network, suggesting the potential for more pedestrian-oriented urban spaces even in areas of heavy pedestrian traffic. The third innovation addressed the practical feasibility of the position-weighted back-pressure (PWBP) controller. Although the original PWBP controller was effective in simulations, it was impractical because it requires density information from every point along a road link. This observation motivated the approximate position-weighted back-pressure (APWBP) control, which reduces sensor requirements to two loop detectors per link (one downstream and one upstream). A comparative analysis revealed that APWBP's efficacy closely parallels the original PWBP, validating its practicality. Finally, recognizing the MP controller's lack of coordinated phase selection, the Smoothing-MP approach was conceptualized. By incorporating signal coordination, this strategy not only retains the maximum stability property but also improves traffic flow efficiency, as confirmed by mathematical proofs and numerical studies on both the Grid Network and the Downtown Austin Network.
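
The core max-pressure rule is compact enough to sketch directly. The toy example below assumes a single intersection with a two-phase plan and invented queue counts; practical MP controllers also weight movements by saturation flow and turning ratios, which is omitted here.

```python
# Minimal sketch of max-pressure phase selection at one intersection
# (illustrative queues and phase definitions, not the dissertation's code).
# Pressure of a movement = upstream queue - downstream queue; the controller
# activates the phase whose served movements carry the largest total pressure.
queues = {"N": 12, "S": 8, "E": 3, "W": 5}      # upstream queues by approach
downstream = {"N": 2, "S": 1, "E": 4, "W": 0}   # queues on receiving links

# Each phase serves a set of non-conflicting movements (assumed two-phase plan).
phases = {"NS": ["N", "S"], "EW": ["E", "W"]}

def pressure(phase):
    return sum(queues[m] - downstream[m] for m in phases[phase])

best = max(phases, key=pressure)
print({p: pressure(p) for p in phases}, "-> activate", best)
```

Because the rule needs only local queue information, each intersection can run it independently, which is the decentralization advantage the abstract highlights.
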
Item: Cluster Weather Vanes: Radio Galaxies as Indicators of Galaxy Cluster Dynamics (2020-07). Nolting, Chris.
We report the results of multiple three-dimensional magnetohydrodynamic (MHD) simulation studies focusing on the dynamics of active galactic nuclei (AGN) jets. The goal of these studies is to understand the interactions of such jets with dynamical features of galaxy cluster media, such as winds and shock waves, and to use the recognizable observed features as diagnostic tools when examining real radio sources in clusters, in order to better understand the environment surrounding radio galaxies (RG). We describe in detail the physics of jet propagation in a wind, including the rate of advance of the jet head in the presence of a head or tail wind, the bending of a jet by ram pressure due to a cross wind, and the combination of the two effects at intermediate alignments. We show simulation results that confirm analytic predictions for jet-head advance and bending, as well as the effects on an RG jet of an encounter with a shock wave at various angles and shock strengths. In a perfectly aligned case, a sufficiently strong shock can strip a jet of its cocoon material, slow its forward progression, and in some cases reverse the jet entirely. Shocks propagating through low-density cocoons produce vorticity, which can result in a vortex ring that may disrupt the jet and create a distinct ring structure. These rings can remain visible for a significant time because of magnetic field amplification, but if they overlap active jets in projection, emission from the fresh cosmic-ray electrons (CRe) in the jets may overwhelm that from the aged electrons in the ring. The dynamics of vortex rings are discussed, and theoretical predictions are confirmed in simulation. Re-energization of aged CRe by shocks is examined; adiabatic compression is sufficient to explain the brightened and spectrally flattened sources, and diffusive shock acceleration is not required. Lastly, looking toward future simulations, we outline an ongoing comparison of two methods for solving the MHD equations: a second-order Total Variation Diminishing (TVD) method and a fifth-order Weighted Essentially Non-Oscillatory (WENO) method. Appendices include other work on numerical methods used in the completion of this dissertation. We find that RG jets make excellent "weather vanes" for understanding the dynamics of their surrounding media, and useful tools for understanding motions in galaxy clusters.
Item: A comparative study of item-level fit indices in item response theory (2009-07). Davis, Jennifer Paige.
Item-level fit indices (IFIs) in item response theory (IRT) are designed to assess the degree to which an estimated item response function approximates an observed item response pattern. There are numerous IFIs whose theoretical sampling distributions are specified; however, in some cases little is known about the degree to which these indices follow their theoretical distributions in practice. If an IFI departs substantially from its theoretical distribution, the degree of misfit will be misestimated, and test developers will have little idea whether their models accurately depict true item response behavior. Therefore, a Monte Carlo simulation study was conducted to assess the degree to which many available IFIs follow their theoretical distributions. The IFIs examined were (1) Infit (VI) and Outfit (VO), two IFIs commonly used with the Rasch model; (2) Yen's (1981) χ² (Q1) and Orlando and Thissen's (2000) χ² (QO); (3) three Lagrange multiplier statistics [LM(a), LM(b), and LM(ab)] proposed by Glas (1999); and (4) Drasgow, Levine, and Williams' (1985) person-fit Lz, modified by Reise (1990) to assess item fit. The primary research objective was to determine how a number of factors (listed below) affect IFI Type I error rates and empirical sampling distributions. The relationship between IFIs and item parameters was also examined. The crossed between-subjects conditions were: IRT model (1-, 2-, and 3-parameter); data noise, operationalized as strictly unidimensional vs. essentially unidimensional data; item discrimination (high and low); test length (n = 15 and n = 75); and sample size (N = 500 and N = 1,500). There were also two crossed within-subjects factors to capture the impact of item and person parameter estimation error. The dependent variables were IFI Type I error rates and empirical sampling distribution moments across 18,750 replicated items. Data were analyzed and summarized using ANOVA, Pearson correlations, and graphical procedures; the Kolmogorov-Smirnov test was used to directly assess distributional assumptions. The results indicated that QO was the only statistic to adhere closely to its theoretical sampling distribution across all study conditions. For the VI, VO, Lz, and Q1 statistics, sampling distributions were strongly influenced by test length, parameter estimation error, and, to a lesser degree, sample size. In the absence of parameter estimation error, all statistics more closely approximated their theoretical sampling distributions and were affected little by other study conditions. The presence of person parameter estimation error tended to inflate sampling distribution means, whereas the presence of item parameter estimation error tended to deflate sampling distribution variances. VI, VO, and Lz functioned very similarly to one another, with Type I error rates tending to be grossly inflated for n = 15 and deflated for n = 75 when both person and item parameter error were present. Q1 Type I error rates were also grossly inflated for n = 15 but were near nominal levels for n = 75. Finally, the LM statistics generally exhibited inflated Type I error rates and were moderately influenced by IRT model and discrimination; only for LM(b) did empirical sampling distributions tend to approach theoretical distributions, primarily when discrimination was lower or for the 3-parameter model at both levels of discrimination.
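
A minimal Monte Carlo sketch in the spirit of the study: simulate Rasch responses and compute the Outfit mean-square for one item using the true (not estimated) parameters, under which the statistic should hover near 1. The sample size mirrors the study's larger condition; everything else is illustrative.

```python
import numpy as np

# Sketch of one replication: simulate Rasch data, compute Outfit MSQ for one
# item from the TRUE parameters (the study shows it is estimation error that
# distorts the null distribution). Sizes and seeds are illustrative.
rng = np.random.default_rng(0)
N, n = 1500, 15
theta = rng.normal(0, 1, N)            # person abilities
b = rng.normal(0, 1, n)                # item difficulties

p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))   # Rasch probabilities
x = (rng.random((N, n)) < p).astype(float)             # simulated 0/1 responses

item = 0
# Outfit = mean squared standardized residual for the item.
z2 = (x[:, item] - p[:, item]) ** 2 / (p[:, item] * (1 - p[:, item]))
print(f"Outfit MSQ for item {item}: {z2.mean():.3f}  (about 1.0 under fit)")
```
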
Item: A computer simulation of school district economics: modeling allocation effects of choice programs (2015-02). Kirwin, Peter Carl.
Evaluators can use cost-effectiveness analysis to help policy makers choose a course for improving student achievement. However, existing cost estimates of one such course, school choice programs, ignore significant non-linear allocation effects created by these policies. These effects are too complex to describe with systems of equations; a computer simulation is required to create a more accurate and portable method of estimating these costs. Although the numeric findings are specific to a single state in a single year, the general behavior of the system indicates that allocation costs do exist for school choice programs, and such programs cannot be considered cost-neutral.

Item: Cooperation in Games (2019-05). Damer, Steven.
This dissertation explores several problems related to social behavior, a complex and difficult problem. We describe ways to solve problems for agents interacting with opponents, specifically (1) identifying cooperative strategies, (2) acting on fallible predictions, and (3) determining how much to compromise with the opponent. In a multi-agent environment, an agent's interactions with its opponent can significantly affect its performance. However, it is not always possible for the agent to fully model the behavior of the opponent and compute a best response. We present three algorithms for agents to use when interacting with an opponent too complex to be modeled. An agent that wishes to cooperate with its opponent must first identify what strategy constitutes a cooperative action. We address the problem of identifying cooperative strategies in repeated randomly generated games by modeling an agent's intentions with a real number, its attitude, which is used to produce a modified game; the Nash equilibria of the modified game implement the strategies described by the intentions used to generate it. We demonstrate how these values can be learned, and show how they can be used to achieve cooperation through reciprocation in repeated randomly generated normal-form games. Next, an agent that has formed a possibly incorrect prediction of opponent behavior needs to take advantage of that prediction without adopting a strategy that is overly vulnerable to exploitation. We have developed Restricted Stackelberg Response with Safety (RSRS), an algorithm that produces a strategy responding to a prediction while balancing three priorities: performance against the prediction, worst-case performance, and performance against a best-responding opponent. By balancing those concerns appropriately, the agent can perform well against an opponent it cannot reliably predict. Finally, we look at how an agent can manipulate an opponent into choosing actions that benefit the agent. This problem is often complicated by the difficulty of analyzing the game the agent is playing. To address this issue, we develop a new game, the Gift Exchange game, which is trivial to analyze; the only question is how the opponent will react. We develop a variety of strategies the agent can use when playing the game, and explore how the best strategy is affected by the agent's discount factor and its prior over opponents.
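
The attitude device described above can be sketched concretely: a player with attitude a values its own payoff plus a times the opponent's, and the modified game's equilibria shift accordingly. The payoff matrix below is an illustrative Prisoner's Dilemma, not the randomly generated games used in the dissertation.

```python
import numpy as np

# Sketch of the "attitude" transformation: a player with attitude a maximizes
# (own payoff + a * opponent payoff). Illustrative Prisoner's Dilemma payoffs;
# rows/cols: 0 = cooperate, 1 = defect.
A = np.array([[3, 0],          # row player's payoffs
              [5, 1]])
B = A.T                        # symmetric game: column player's payoffs

a_row, a_col = 0.8, 0.8        # generous attitudes (assumed values)
A_mod = A + a_row * B          # row player's modified payoffs
B_mod = B + a_col * A          # column player's modified payoffs

# Brute-force search for pure Nash equilibria of the modified game.
eqs = [(r, c) for r in range(2) for c in range(2)
       if A_mod[r, c] >= A_mod[1 - r, c] and B_mod[r, c] >= B_mod[r, 1 - c]]
print("Pure equilibria of modified game:", eqs)  # attitudes near 1 -> (0, 0)
```

With attitudes of 0.8, mutual cooperation becomes the unique pure equilibrium of the modified game, which is exactly the sense in which an attitude encodes a cooperative intention.
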
Item: Developing and Validating a Model of Left-Turn Crashes to Support Safer Design and Operations (Center for Transportation Studies, University of Minnesota, 2018-09). Davis, Gary; Gao, Jingru; Mudgal, Abhisek.
This report documents work done to advance the state of the art in crash simulation, including: (1) a field study collecting data on drivers' left-turn gap acceptance and turning times, and the development of statistical models that can be incorporated into a crash simulation model; (2) the use of Markov chain Monte Carlo computational tools to quantify uncertainty in planar impact reconstruction of two-vehicle crashes; (3) a method for combining the results of planar impact reconstruction with event-data-recorder pre-crash data to estimate descriptive features of actual left-turn crashes, applied to several left-turn crashes from the National Highway Traffic Safety Administration's NASS/CDS database; and (4) a left-turn crash simulation model incorporating the above results. Initial model checking was performed using estimates from the reconstructed NASS/CDS cases as well as results from a previous study on left-turn crash risk. Also described is a method for simulating crash modification effects without having to first simulate crashes as rare outcomes in very large numbers of gap acceptances.

Item: Development and Validation of a Turbulence Wall Model for Compressible Flows with Heat Transfer (2016-08). Komives, Jeffrey.
The computational cost of modeling high Reynolds number flows of engineering interest scales poorly with problem size and is excessively expensive. This fact motivates the development of turbulence wall models to lessen the computational burden. These models aim to provide accurate wall-flux quantification on computational meshes that would otherwise be unable to estimate these quantities accurately. The benefit of such an approximation is that the height of the wall-adjacent computational elements can be increased by one to two orders of magnitude, allowing comparable increases in the stable explicit timestep. This increase in timestep is critically necessary for large eddy simulation of high Reynolds number turbulent flows. To date, most research on the application of wall models has focused on incompressible flows or flows with very weak compressibility; very few studies examine their applicability to flows with significant compressibility and heat transfer. The present work details the derivation of a wall model appropriate for compressible flows with heat transfer. The model framework allows the inclusion of non-equilibrium terms in determining wall shear and heat transfer. The model is applied to a variety of supersonic and hypersonic flows, and is studied in both Reynolds-averaged simulations and large eddy simulations. The impact of several modeling approaches and model terms is examined. The wall-modeled calculations show excellent agreement with wall-resolved calculations and experimental data. For time-accurate calculations, the wall model allows explicit timesteps more than 20 times larger than those of the wall-resolved calculation, significantly reducing both the cost of the calculation and the time required to converge the solution.
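
A minimal sketch of the equilibrium wall-model idea this thesis generalizes: recover the friction velocity from the incompressible log law by Newton iteration, given one off-wall velocity sample, then form the wall shear stress. Constants and flow values are illustrative; the thesis's compressible, non-equilibrium closure is considerably richer.

```python
import math

# Equilibrium wall-model sketch: given LES velocity U at wall distance Y,
# solve the incompressible log law  U/u_tau = (1/kappa) ln(Y u_tau / nu) + B
# for u_tau by Newton iteration. Standard equilibrium closure, NOT the
# compressible heat-transfer model developed in the thesis.
KAPPA, B = 0.41, 5.2
RHO, NU = 1.0, 1.5e-5        # density (kg/m^3) and kinematic viscosity (air-like)
U, Y = 20.0, 1e-3            # sampled velocity (m/s) and wall distance (m)

def residual(ut):
    return U / ut - (math.log(Y * ut / NU) / KAPPA + B)

def residual_prime(ut):
    return -U / ut**2 - 1.0 / (KAPPA * ut)

u_tau = 1.0                  # initial guess
for _ in range(50):
    u_tau -= residual(u_tau) / residual_prime(u_tau)

tau_wall = RHO * u_tau**2    # wall shear stress returned to the outer solver
print(f"u_tau = {u_tau:.4f} m/s, tau_wall = {tau_wall:.4f} Pa")
```

The wall-flux boundary condition computed this way is what lets the first off-wall cell sit far outside the viscous sublayer, which is the source of the timestep savings quoted above.
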
Item: Development of Freeway Operational Strategies with IRIS-in-Loop Simulation (Minnesota Department of Transportation, 2012-01). Kwon, Eil; Park, Chongmyung.
This research produced several important tools for managing and operating freeway corridors. First, a computer-based off-line process was developed to automatically estimate a set of traffic measures for a given freeway corridor using historical detector data. Second, a prototype on-line estimation procedure was designed to calculate selected traffic measures in real time to assist operators in identifying abnormal traffic patterns. Third, the IRIS-in-loop simulation system was developed by linking IRIS, the freeway control system developed by MnDOT, to microscopic simulation software through a data communication module, so that new operational strategies can be coded directly into IRIS and evaluated in a realistic simulation environment. Finally, two new freeway operational strategies, variable speed limit control and a density-based adaptive ramp metering strategy, were developed and evaluated with the IRIS-in-loop simulation system.

Item: Development of novel schemes for treating subsystem boundaries and electrostatic potentials in simulations of complex systems (2014-01). Wang, Bo.
Fragmentation schemes provide a powerful strategy for calculating the potential energy surfaces of complex systems; the combined quantum mechanical and molecular mechanical (QM/MM) method, the electrostatically embedded many-body (EE-MB) method, and the molecular tailoring approach (MTA) are three examples. Two critical issues to be addressed in these methods are the treatment of the boundary between subsystems when it passes between bonded atoms, and the inclusion of the electrostatic potential of one subsystem in the Hamiltonian of another. This thesis involves the development and application of new schemes to treat both issues. The first part focuses on the development of a tuned pseudoatom scheme with a balanced redistributed-charge algorithm to accurately model a QM-MM boundary that passes through a covalent bond, especially a polar covalent bond. Various redistribution schemes and ways of tuning the boundary treatment are tested and compared for the QM/MM method and the EE-MTA method. The second part involves the development of screened charge models that include charge penetration and screening effects when generating electrostatic potentials for use in various methods, including the QM/MM and EE-MB methods. The screened charge models are also used to derive partial atomic charges by fitting electrostatic potentials.
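
The redistributed-charge idea can be sketched in a few lines: move the MM boundary atom's point charge onto the midpoints of its bonds with the remaining MM atoms, so total charge is conserved and no bare point charge sits directly on the cut bond. The geometry and charges below are invented for illustration; the thesis's balanced and tuned schemes additionally preserve bond dipoles and tune the boundary treatment.

```python
import numpy as np

# Toy redistributed-charge sketch for a QM/MM boundary (invented geometry and
# charges; NOT the thesis's balanced/tuned algorithm). The MM boundary atom's
# charge is split evenly onto the midpoints of its bonds with the remaining
# MM neighbors, conserving total charge while moving the charge off the cut bond.
m1_pos = np.array([0.0, 0.0, 0.0])               # MM boundary atom position
m1_q = -0.3                                      # its point charge (e)
neighbors = [np.array([1.5, 0.0, 0.0]),          # remaining MM neighbor atoms
             np.array([0.0, 1.5, 0.0])]

sites = [(0.5 * (m1_pos + n), m1_q / len(neighbors)) for n in neighbors]

total = sum(q for _, q in sites)
print("redistributed total charge:", total, "(original:", m1_q, ")")
for pos, q in sites:
    print("  site at", pos, "carries charge", q)
```
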
Item: Differential item functioning in computerized adaptive testing: can CAT self-adjust enough? (2014-04). Piromsombat, Chayut.
Two issues related to differential item functioning (DIF) in the context of computerized adaptive testing (CAT) were addressed in this study: 1) the effect of DIF in operational items on the accuracy of the ability estimate (θ̂_CAT), and 2) the accuracy of detecting DIF in pretest items when DIF occurred in operational items and examinees were matched on the number-correct score (NCS), the ability estimate obtained from nonadaptive computer-based testing (θ̂_CBT), or θ̂_CAT. To investigate the first issue, a series of simulations was conducted varying the level of DIF magnitude (0, .4, 1, and 1.6); DIF type (uniform and nonuniform); DIF contamination, i.e., the number of DIF items (6, 15, and 24 items on a 30-item test); and DIF occurrence (first, middle, last, and across stages of CAT). For the second issue, test impact (μ_R - μ_F = 0 and 1) and sample-size ratio (N_R:N_F = 1:1 and 9:1) were added to the simulation. The first simulation found that CAT could adjust for the effect of DIF in operational items if DIF occurred in the early stages of CAT, albeit with some restrictions: CAT successfully adjusted for early DIF when the number of DIF items and the magnitude of DIF were moderate. In other situations, CAT only reduced the effect of DIF, as seen in the trend of standard errors, which increased when DIF items were delivered and decreased after CAT administered a new DIF-free item; this self-adjustment was not enough to recover θ̂_CAT from DIF effects. The second simulation suggested that matching examinees on θ̂_CAT did not provide notable advantages over the NCS or θ̂_CBT in most conditions. Overall, when operational items were contaminated with DIF of moderate magnitude, the three matching variables yielded comparable results for detecting DIF in pretest items. However, as the level of DIF contamination in operational items increased, matching examinees on θ̂_CAT led to the worst performance in detecting DIF in pretest items, especially when large-uniform-DIF items were used in the operational test. It was also evident that DIF in operational items, especially CAT items, led to false identification of DIF type: pretest items exhibiting uniform DIF were mistakenly identified as having nonuniform DIF when the matching variable was obtained from operational items with nonuniform DIF.
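
The uniform-DIF generating step used in simulations of this kind is easy to sketch: shift an item's difficulty for the focal group under a 2PL model while holding the ability distributions equal. All parameter values below are illustrative, not the study's.

```python
import numpy as np

# Sketch of a uniform-DIF generating step: the item is harder for the focal
# group by a difficulty shift "dif" under a 2PL model. Illustrative values.
rng = np.random.default_rng(42)
N = 1000
group = rng.integers(0, 2, N)          # 0 = reference, 1 = focal
theta = rng.normal(0, 1, N)            # no impact: identical ability distributions

a, b, dif = 1.2, 0.0, 0.4              # discrimination, difficulty, DIF magnitude
b_eff = b + dif * group                # uniform DIF: a constant difficulty shift
p = 1 / (1 + np.exp(-a * (theta - b_eff)))
x = (rng.random(N) < p).astype(int)

# With equal ability distributions, the focal group's lower proportion-correct
# reflects DIF rather than impact.
print("P(correct) reference:", x[group == 0].mean())
print("P(correct) focal    :", x[group == 1].mean())
```
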
Item: Effects of high-fidelity human patient simulation on self-efficacy, motivation and learning of first semester associate degree nursing students (2009-06). Kuznar, Kathleen A.
One of the newest methodologies in nursing education is high-fidelity human patient simulation (HPS). Many nursing educators have embraced the method, as it offers a strategy to facilitate cognitive, affective, and psychomotor outcomes. Despite their popularity, however, HPS systems are costly, and in an era of cost containment and tuition increases, research is needed to determine their effectiveness and guide their use. The purpose of this study was to determine how associate degree nursing students' self-efficacy, motivation, and learning in the simulated environment compare with nursing educational experiences without simulation. A mixed-method, quasi-experimental design was chosen, with a sample of first-semester associate degree nursing students at two technical colleges: 54 in the experimental group and 30 in the comparison group. Results indicated that measures of self-efficacy and motivation increased throughout the semester for both groups. The simulation group had a statistically significant increase in general self-efficacy but no significant increase in nursing-specific academic and clinical self-efficacy; in contrast, the comparison group had a significant increase in nursing academic self-efficacy but not in clinical or more general self-efficacy. Motivation measures were relatively consistent between the groups, with only the measure of extrinsic motivation declining for the experimental group. When comparing the two groups on differences between pretest and posttest measures of self-efficacy and motivation, there were no significant differences. The experimental group scored significantly higher on the posttest knowledge examination. Interviews (n = 16) revealed specific themes, some unique to the simulation group and some common to both groups. The simulation students reported the importance of comprehensive skill practice, risk-free practice, group participation, and debriefing with instructor feedback; they were often able to identify a specific learning experience in the simulation lab that had an impact on their practice. Technical skill knowledge was highly important for both groups. Students in both groups related the importance of a variety of first-semester courses in increasing their nursing knowledge, self-efficacy, and motivation. Simulation was found to be an acceptable learning strategy for novice associate degree nursing students.

Item: Elastic transmission of identical particles through a strongly correlated Bose-Einstein condensate (2008-12). Lutsyshyn, Yaroslav.
Atomic transmission experiments on superfluid helium-4 may provide information about its structure. It was proposed in the past that a transmission channel is possible in which impinging atoms couple directly to the condensate fraction in helium-II; such a mechanism would provide an important direct probe of the off-diagonal long-range order in helium-II. We have developed a method based on the diffusion Monte Carlo technique to simulate elastic transmission of atoms through a slab of helium-4 at zero temperature. The scattering process is represented as a sum of appropriate standing-wave scattering states. The phase factors for each scattering state are determined by matching the diffusion Monte Carlo results with the correct energy of the scattering state. The scattering states effectively set the boundary conditions for the problem and in this way determine a phase factor and momentum for the incoming particle. Diffusion Monte Carlo is then performed in its fixed-node flavor. Our results suggest the possibility of complete transparency of small unbound helium films over a broad range of incoming-particle energies. Wavepacket analysis of the computed transmission coefficient's phase dependence on the incoming particle's wavevector was used to obtain transmission times, and time-delay analysis suggests the presence of anomalously fast transmission. Such results strongly support the original condensate-mediated transmission hypothesis.

Item: Establishing a Repeatable Method for Presenting Nontraditional Traffic Treatments to Maximize Stakeholder Support (Minnesota Department of Transportation, 2023-08). Morris, Nichole L.; Schwieters, Katelyn R.; Craig, Curtis M.; Tian, Disi.
A novel infrastructure design known as the J-turn intersection reduces the risk of serious and fatal crashes at thru-STOP intersections by decreasing points of conflict, restricting crossing movements from the minor road. Despite their demonstrated safety efficacy, J-turns have not been met with uniformly positive support. In this research, we first examine novice drivers' baseline attitudes and driving behaviors on J-turns using a driving simulator study. Results demonstrate that critical errors decrease with driving exposure to the J-turn; attitudes toward J-turns, however, are not improved by exposure alone. A series of studies then evaluates the efficacy of various messaging strategies and educational materials in improving attitudes toward J-turns. The findings identify the combined use of educational materials and persuasive, customized messaging as an effective method for increasing acceptance of J-turns across diverse resident populations (rural, suburban, and urban) and among stakeholders in Minnesota. This work demonstrates the importance of proactive educational programs and community initiatives in promoting acceptance of, and buy-in toward, novel roadway treatments such as J-turns among diverse drivers, communities, and stakeholder groups.

Item: Estimating a noncompensatory IRT model using a modified Metropolis algorithm (2009-12). Babcock, Benjamin Grant Eugene.
Two classes of dichotomous multidimensional item response theory (MIRT) models, compensatory and noncompensatory, are reviewed. A review of the literature concludes that relatively little research has been conducted with the noncompensatory class of models. A Monte Carlo simulation study was conducted exploring the estimation of a 2-parameter noncompensatory IRT model. The estimation method was a modification of the Metropolis-Hastings algorithm that uses multivariate prior distributions to help determine whether a newly sampled value is retained or rejected. Results showed that the noncompensatory model required a sample size of 4,000 people, 6 unidimensional items per dimension, and latent traits that are not highly correlated for acceptable item parameter estimation using the modified Metropolis method. It is then argued that the noncompensatory model might not warrant further research, given these demanding requirements for acceptable estimation. The multidimensional interactive IRT model (MIIM) is proposed, which is more flexible than previous multidimensional models and explicitly accounts for correlated latent traits by using an interaction term within the logit. Item response surfaces for the MIIM model can be shaped like either compensatory or noncompensatory IRT response surfaces.
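
A generic sketch of a Metropolis sampler in which the prior enters the accept/reject ratio, applied to a two-dimensional noncompensatory model, illustrates the flavor of the approach; it is not the dissertation's actual estimator, and the item parameters and responses below are invented.

```python
import numpy as np

# Random-walk Metropolis with the prior in the acceptance ratio (generic
# illustration, not the dissertation's estimator). Target: the posterior of a
# 2D ability theta under a noncompensatory model, where success requires
# "passing" every dimension, so P = prod_k logistic(a_k (theta_k - b_k)).
rng = np.random.default_rng(7)
a = np.array([[1.0, 1.2], [0.8, 1.5], [1.3, 0.9]])    # discriminations (invented)
b = np.array([[0.0, 0.5], [-0.5, 0.0], [0.3, -0.2]])  # difficulties (invented)
x = np.array([1, 0, 1])                               # observed responses (invented)

def loglik(theta):
    p = np.prod(1 / (1 + np.exp(-a * (theta - b))), axis=1)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def logprior(theta):                                  # standard normal prior
    return -0.5 * np.sum(theta ** 2)

theta, samples = np.zeros(2), []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.5, 2)              # symmetric proposal
    log_ratio = (loglik(prop) + logprior(prop)) - (loglik(theta) + logprior(theta))
    if np.log(rng.random()) < log_ratio:              # prior shapes accept/reject
        theta = prop
    samples.append(theta.copy())

print("posterior mean of theta:", np.mean(samples[1000:], axis=0))
```
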
Item: Evaluating the performance of two competing models of school suspension under simulation - the zero-inflated negative binomial and the negative binomial hurdle (2013-05). Desjardins, Christopher David.
In many educational settings, count data arise that should not be considered realizations of the Poisson model. School days suspended represent an exemplary case of count data that may be zero-inflated and overdispersed relative to the Poisson model after controlling for explanatory variables. This study examined the performance of two models of school days suspended: the zero-inflated negative binomial and the negative binomial hurdle. It aimed to understand whether the conditions considered would elicit comparable or disparate performance between these models, and to understand the consequences of misspecifying the data-generating mechanism. The negative binomial hurdle performed better in both simulation studies. Based on the conditions considered here, it is recommended that researchers prefer the negative binomial hurdle model over the zero-inflated negative binomial, especially if the structural-zero/zero parameters are to be treated as nuisance parameters or the presence of structural zeros is unknown. If structural zeros are expected and interest is in those parameters, the zero-inflated negative binomial should still be considered. Additionally, if interest is in the non-structural-zero/count parameters, the results suggest model misspecification has little effect on these parameters, and a researcher may select a model based on the parameters of interest. (A minimal data-generating sketch contrasting the two models appears after the final item below.)

Item: Exploring learning during a business ethics simulation (2011-04). Revoir, Richard Leonard.
The purpose of this study was to explore a simulation incorporating online collaborative technologies in a business ethics course and to examine whether it affects student learning. A qualitative case-study method of inquiry was used to develop an in-depth description and analysis of student learning during a business ethics simulation, using data collected through a questionnaire, student ratings of the simulation, focus groups, and a review of videos by the researcher. The results provide insight into themes that may affect students' moral sensitivity and judgment. Three key themes emerged during data analysis: 1) working in groups, 2) watching YouTube videos, and 3) experiencing less nervousness. Working in groups appeared to affect moral sensitivity because students were exposed to more perspectives from classmates, who helped them interpret the case simulation and identify ethical issues. Students reported that being able to rewind and review the YouTube videos was helpful to learning; the videos also provided more perspectives and multiple approaches to reasoning, which may have affected students' moral sensitivity in interpreting the simulations and identifying ethical issues. Students reported being less nervous while recording their YouTube videos than if they had completed the assignment in class in front of their peers. In addition, because students came to class with their YouTube videos completed, they had class time to reflect on other students' performances rather than focus on an impending performance of their own. The findings add to the literature on business ethics by describing how the integration of technology for ethics simulations may affect student learning. With the three themes identified, the results have implications for college instructors teaching business ethics courses.
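
Returning to the zero-inflated vs. hurdle comparison two items above, the structural difference between the two data-generating mechanisms can be shown directly (all parameter values are illustrative): a ZINB's zeros come from both the mixing point mass and the count component, while a hurdle model's zeros come only from the hurdle.

```python
import numpy as np

# Data-generating sketch contrasting the two count models (illustrative
# parameters). ZINB mixes structural zeros into a negative binomial that also
# produces its own zeros; the hurdle model makes every zero structural and
# draws positives from a zero-truncated negative binomial.
rng = np.random.default_rng(3)
N, r, p_nb, pi0 = 10_000, 2.0, 0.4, 0.3   # sample size, NB shape/prob, P(zero part)

def nb(n):
    # Negative binomial via the gamma-Poisson mixture; here P(0) = p_nb ** r.
    return rng.poisson(rng.gamma(r, (1 - p_nb) / p_nb, n))

def truncated_nb(n):
    # Rejection-sample the zero-truncated negative binomial.
    out = np.empty(0, dtype=np.int64)
    while out.size < n:
        d = nb(n)
        out = np.concatenate([out, d[d > 0]])
    return out[:n]

zinb = np.where(rng.random(N) < pi0, 0, nb(N))   # zeros from BOTH components

hurdle = np.zeros(N, dtype=np.int64)             # zeros ONLY from the hurdle
keep = rng.random(N) >= pi0
hurdle[keep] = truncated_nb(int(keep.sum()))

print("P(zero) ZINB  :", (zinb == 0).mean())     # ~ pi0 + (1 - pi0) * p_nb**r
print("P(zero) hurdle:", (hurdle == 0).mean())   # ~ pi0 by construction
```
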