University Digital Conservancy
Browsing by Subject "Reliability"

Now showing 1 - 20 of 61
  • Attack-Resistance and Reliability Analysis of Feed-Forward and Feed-Forward XOR PUFs
    (2019-05) Avvaru, Satya Venkata Sandeep
    Physical unclonable functions (PUFs) are lightweight hardware security primitives that are used to authenticate devices or generate cryptographic keys without using non-volatile memories. This is accomplished by harvesting the inherent randomness in manufacturing process variations (e.g. path delays) to generate random yet unique outputs. A multiplexer (MUX) based arbiter PUF comprises two parallel delay chains with MUXs as switching elements. An input to a PUF is called a challenge vector and comprises the select bits of all the MUX elements in the circuit. The output bits are referred to as responses. In other words, when queried with a challenge, the PUF generates a response based on the uncontrollable physical characteristics of the underlying PUF hardware. Thus, the overall path delays of these delay chains are random and unique functions of the challenge.

    The contributions in this thesis can be classified into four main ideas. First, a novel approach to estimate delay differences of each stage in MUX-based standard arbiter PUFs, feed-forward PUFs (FF PUFs) and modified feed-forward PUFs (MFF PUFs) is presented. Test data collected from PUFs fabricated using a 32 nm process are used to learn models that characterize the PUFs. The delay differences of individual stages of arbiter PUFs correspond to the model parameters. This was accomplished by employing the least mean squares (LMS) adaptive algorithm. The models trained to learn the parameters of two standard arbiter PUF chips were able to predict responses with 97.5% and 99.5% accuracy, respectively. Additionally, it was observed that perceptrons can be used to attain approximately 100% prediction accuracy. A comparison shows that the perceptron model parameters are scaled versions of the model derived by the LMS algorithm. Since the delay differences are challenge-independent, these parameters can be stored on the server, which enables the server to issue random challenges whose responses need not be stored. By extending this analysis to 96 standard arbiter PUFs, we confirm that the delay differences of each MUX stage of the PUFs follow a Gaussian probability distribution.

    Second, artificial neural network (ANN) models are trained to predict hard and soft responses of the three configurations: standard arbiter PUFs, FF PUFs and MFF PUFs. These models were trained using silicon data extracted from 32-stage arbiter PUF circuits fabricated using the IBM 32 nm HKMG process and achieve a response-prediction accuracy of 99.8% in the case of standard arbiter PUFs, approximately 97% in the case of FF PUFs and approximately 99% in the case of MFF PUFs. Also, a probability-based thresholding scheme is used to define soft responses, and artificial neural networks were trained to predict these soft responses. If the response of a given challenge has at least 90% consistency on repeated evaluation, it is considered stable. It is shown that the soft-response models can be used to filter out unstable challenges from a randomly chosen independent test set. From the test measurements, it is observed that the probability of a stable challenge is typically in the range of 87% to 92%. However, if a challenge is chosen with the proposed soft-response model, then its probability of being stable is found to be 99% compared to the ground truth.

    Third, we provide the first systematic empirical analysis of the effect of FF PUF design choices on their reliability and attack resistance. FF PUFs consist of feed-forward loops that enable internally generated responses to be used as select bits, making them slightly more secure than standard arbiter PUFs. While FF PUFs have been analyzed earlier, no prior study has addressed the effect of loop positions on the security and reliability. After evaluating the performance of hundreds of PUF structures in various design configurations, it is observed that the locations of the arbiters and their outputs can have a substantial impact on the security and reliability of FF PUFs. By appropriately choosing the input and output locations of the FF loops, the amount of data required for an attack can be increased by 7 times, and can be further increased by 15 times if two intermediate arbiters are used. It is observed that adding more loops makes PUFs more susceptible to noise; FF PUFs with 5 intermediate arbiters can have reliability values that are as low as 81%. It is further demonstrated that a soft-response thresholding strategy can significantly increase the reliability during authentication to more than 96%.

    It is known that XOR arbiter PUFs (XOR PUFs) were introduced as more secure alternatives to standard arbiter PUFs. XOR PUFs typically contain multiple standard arbiter PUFs as their components, and the outputs of the component PUFs are XOR-ed to generate the final response. Finally, we propose the design of feed-forward XOR PUFs (FFXOR PUFs), where each component PUF is an FF PUF instead of a standard arbiter PUF. Attack-resistance analysis of FFXOR PUFs was carried out by employing artificial neural networks with 2-3 hidden layers and compared with XOR PUFs. It is shown that FFXOR PUFs cannot be accurately modeled if the number of component PUFs is more than 5. However, the increase in attack resistance comes at the cost of degraded reliability. We also show that the soft-response thresholding strategy can increase the reliability of FFXOR PUFs by about 30%.
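    As a concrete illustration of the LMS modeling step described above, the sketch below fits the standard additive-delay (parity-feature) model of an arbiter PUF to simulated challenge-response pairs. The stage count, step size, and Gaussian "true" delay differences are assumptions chosen for illustration; this is not the thesis's data or code.

      import numpy as np

      rng = np.random.default_rng(0)
      n_stages = 32

      def parity_features(challenges):
          # Phi_i = prod_{j >= i} (1 - 2*c_j); a constant column models the final arbiter offset.
          signs = 1 - 2 * challenges                         # 0/1 bits -> +/-1
          phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
          return np.hstack([phi, np.ones((challenges.shape[0], 1))])

      w_true = rng.normal(0.0, 1.0, n_stages + 1)            # per-stage delay differences (Gaussian)
      C = rng.integers(0, 2, size=(20000, n_stages))         # random challenges
      Phi = parity_features(C)
      r = np.sign(Phi @ w_true)                              # hard responses in {-1, +1}

      w = np.zeros(n_stages + 1)                             # LMS estimate of the delay-difference model
      mu = 1e-3                                              # LMS step size
      for phi_k, r_k in zip(Phi, r):
          e = r_k - phi_k @ w                                # prediction error
          w += mu * e * phi_k                                # LMS update

      accuracy = np.mean(np.sign(Phi @ w) == r)
      print(f"response-prediction accuracy: {accuracy:.3f}")

    On data like this, the learned weight vector's sign predictions should closely track the simulated PUF, in line with the high prediction accuracies reported above.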
  • Baldr Flight 1
    (2014-12-06) Venkataraman, Raghu
  • Between-person and within-person subscore reliability: comparison of unidimensional and multidimensional IRT models
    (2013-06) Bulut, Okan
    The importance of subscores in educational and psychological assessments is undeniable. Subscores yield diagnostic information that can be used for determining how each examinee's abilities/skills vary over different content domains. One of the most common criticisms about reporting and using subscores is insufficient reliability of subscores. This study employs a new reliability approach that allows the evaluation of between-person subscore reliability as well as within-person subscore reliability. Using this approach, the unidimensional IRT (UIRT) and multidimensional IRT (MIRT) models are compared in terms of subscore reliability in simulation and real data studies. The simulation conditions are subtest length, correlations among subscores, and number of subtests. Both unidimensional and multidimensional subscores are estimated with the maximum a posteriori probability (MAP) method. Subscore reliability of ability estimates is evaluated in light of between-person reliability, within-person reliability, and total profile reliability. The results of this study suggest that the MIRT model performs better than the UIRT model under all simulation conditions. Multidimensional subscore estimation benefits from correlations among subscores as ancillary information, and it yields more reliable subscore estimates than unidimensional subscore estimation. The subtest length is positively associated with both between-person and within-person reliability. Higher correlations among subscores improve between-person reliability, while they substantially decrease within-person reliability. The number of subtests seems to influence between-person reliability slightly but it has no effect on within-person reliability. The two estimation methods provide similar results with real data as well.
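    For orientation, the between-person and within-person coefficients compared here both build on the classical definition of reliability as the share of observed-score variance that is true (error-free) variance; applied across examinees it gives between-person reliability, and applied to the profile of subscores within an examinee it gives within-person (profile) reliability. In generic notation (not the study's own):

      \rho = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}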
  • CMOS Reliability Characterization Techniques and Spintronics-Based Mixed-Signal Circuits
    (2015-09) Choi, Won Ho
    Plasma-Induced Damage (PID) has been an important reliability concern for equipment vendors and fabs in both traditional SiO2-based and advanced high-k dielectric-based processes. Plasma etching and ashing are extensively used in a typical CMOS back-end process. During the plasma steps, the metal interconnect, commonly referred to as an “antenna,” collects plasma charges, and if the junction of the driver is too small to quickly discharge the node voltage, extra traps are generated in the gate dielectric of the receiver, thereby worsening device reliability mechanisms such as Bias Temperature Instability (BTI) and Time Dependent Dielectric Breakdown (TDDB). The foremost challenge to an effective PID mitigation strategy is the collection of massive TDDB or NBTI data within a short test time. In this dissertation, we have developed two array-based on-chip monitoring circuits for characterizing latent PID: (1) an array-based PID-induced TDDB characterization circuit and (2) a PID-induced BTI characterization circuit, both implemented in a 65nm CMOS process. As research interest in analog circuit reliability has increased recently, a few studies analyzed the impact of short-term Vth shifts, as opposed to permanent Vth shifts, on a Successive Approximation Register (SAR) Analog-to-Digital Converter (ADC) and suggested that even short-term Vth shifts on the order of 1mV, caused by short stress pulses (e.g., 1μs) on the comparator input transistors, may degrade the resolution of the SAR ADC even for a fresh chip (though this had not been experimentally verified). In this dissertation, we quantified this effect through test-chip studies and propose two simple circuit approaches that can be used to mitigate short-term Vth instability issues in SAR ADCs. The proposed techniques were implemented in a 10-bit SAR ADC using the 65nm CMOS process. Spintronic circuits and systems have several unique properties, including inherent non-volatility, that can be exploited to achieve functional capabilities not obtainable in conventional systems. Magnetic Tunnel Junction (MTJ) technology has matured to the point where commercial spin transfer torque MRAM (STT-MRAM) chips are currently being developed. This work aims at leveraging and complementing ongoing development efforts in MTJ technology for non-memory mixed-signal applications. In this dissertation, we developed two spintronics-based mixed-signal circuit designs: (1) an MTJ-based True Random Number Generator (TRNG) and (2) an MTJ-based ADC. The proposed TRNG and ADC have the potential to achieve a more compact area, a simpler design, and more reliable operation compared to their CMOS counterparts.
  • A Comparison and Validation of Traditional and Three-Dimensional Anthropometric Methods for Measuring the Hand through Reliability, Precision, and Visual Analysis
    (2020-12) Seifert, Emily
    This study examines the reliability and precision of three (3) different tools for collecting anthropometric data of the hand: traditional anthropometric tools (caliper and tape measure) and two (2) full-color hand-held three-dimensional scanners (Occipital Structure Sensor and Artec Leo). A visual analysis of the three-dimensional models provided by the two (2) full-color hand-held three-dimensional scanners (Occipital Structure Sensor and Artec Leo) took place during the post-processing stage to determine the three-dimensional visual reliability and precision. Twelve (12) three-dimensional hand scans, from a more extensive database taken by the Human Dimensioning Lab at the University of Minnesota, were three-dimensionally printed. Eight (8) defined measurements were analyzed for the Anthropometric Tool Reliability Analysis and the Anthropometric Tool Precision Analysis. This study found that the Artec Leo scanner was more reliable than traditional methods (caliper and tape measure) and the Occipital Structure Sensor. The Occipital Structure Sensor was more reliable than traditional methods (caliper and tape measure) and less reliable than the Artec Leo. Within the Anthropometric Tool Precision Analysis, the Artec Leo captured comparable measurements to those collected using traditional methods (caliper and tape measure). The Occipital Structure Sensor captured comparable measurements, except for the Index Finger Length and Index Finger Circumference at the Distal Interphalangeal Joint measurements, compared to traditional methods (caliper and tape measure) and the Artec Leo. The Anthropometric Tool Precision Analysis included independent identification of landmarks at Fingertips of Digits 2 and 3 for six (6) out of twelve (12) Occipital Structure scans, which impacted two (2) measurements, Hand Length and Index Finger Length. Due to this, a Secondary Anthropometric Tool Precision Analysis took place for the six (6) participants with complete landmarks. During the Secondary Anthropometric Tool Precision Analysis, no statistical significance was found when comparing scans that did not require independent landmark identification. The scans provided by the two (2) three-dimensional scanners (the Occipital Structure Sensor and Artec Leo) were analyzed during the post-processing stage for the Three-Dimensional Visual Reliability Analysis and Three-Dimensional Visual Precision Analysis using a Post-Processing Visual Analysis Likert Scale (Juhnke, Pokorny, and Griffin, 2021). The Three-Dimensional Visual Reliability Analysis and the Three-Dimensional Visual Precision Analysis found that the Occipital Structure Sensor and Artec Leo are comparable for all locations, except for the Visibility of Landmark location. This study validates the Artec Leo for use in further anthropometric data collection for the hand. The results provided by the Occipital Structure Sensor were promising compared to those collected using traditional methods (caliper and tape measure) when visible landmarking is used. The use of visual analysis as a form of evaluation for the validation of three-dimensional scanners was crucial to understanding where the scan’s quality might affect the data collection outcomes and should be considered within future studies.
  • Economics of Water Pollution: Permit Trading, Reliability of Pollution Control, and Asymmetric Information
    (2017-06) Wang, Zhiyu
    This dissertation analyzes three aspects of the economics of water pollution and is organized in three essays. The first essay examines permit trading for water pollutants that differ in the persistence of their environmental damage. The second essay examines the problem of reliably meeting a water quality standard under environmental uncertainty. The third essay considers the problem of reliably meeting a water quality standard under asymmetric information. The first essay analyzes how to properly design water pollution permit trading with pollutants which are non-uniformly mixed across space and have different persistence in environmental damages. The efficient solution to water pollution abatement involves integrating the difference in the environmental persistence caused by pollutants and setting trading ratios in permit trading accordingly. The second essay analyzes the problem of meeting a water quality standard with a certain degree of reliability given environmental stochasticity, where the distribution of environmental stochasticity is unknown. The essay develops the use of a reliability target that caps the probability of not attaining the target in any period at α, where 1 − α is the level of reliability. A single-tailed version of Chebyshev's inequality is used that bounds the maximum probability of being in the right tail of the probability distribution. The essay also examines a margin of safety in Total Maximum Daily Loads (TMDL) and concludes that if a given level of reliability is desirable, the margin of safety should vary with the level of TMDL. The third essay considers the problem of reliably achieving a water quality standard where water pollution is generated by multiple sources and there is asymmetric information. Asymmetric information comes from privately observable actions like fertilizer application and private information on profits. This essay develops a Vickrey-Clarke-Groves (VCG) subsidy auction and incorporates a fine/reward scheme based on whether the water quality standard is met. This subsidy auction can achieve an efficient solution to the problem of achieving a reliability standard under asymmetric information.
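    For reference, the single-tailed Chebyshev bound mentioned above is usually written in the following (Cantelli) form, shown here for a pollutant load L with mean μ and standard deviation σ; the notation is illustrative rather than the essay's own:

      P\left(L \ge \mu + k\sigma\right) \le \frac{1}{1 + k^{2}}, \qquad k > 0

    Setting 1/(1 + k²) = α gives k = √(1/α − 1), so a cap of μ + σ·√(1/α − 1) guarantees, for any load distribution with that mean and variance, that the standard is exceeded with probability at most α; the σ-dependent term plays the role of a margin of safety.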
  • Functional Magnetic Resonance Imaging of Goal Maintenance in Schizophrenia: Activation, Functional Connectivity, and Reliability
    (2016-06) Poppe, Andrew
    Cognitive deficits are some of the most debilitating and difficult to treat symptoms of schizophrenia. Goal maintenance is a facet of cognitive control that has been shown to be impaired in schizophrenia patients as well as their unaffected first-degree relatives. Previous fMRI activation studies found less activation in dorsolateral prefrontal cortex (dlPFC) in schizophrenia patients compared with healthy controls during the completion of a goal maintenance task. This dissertation consisted of a series of studies employing a large, multisite retest dataset of schizophrenia patients and healthy control subjects. These studies sought to replicate previous activation findings using a newer goal maintenance task, to use group independent component analysis (ICA) to determine if schizophrenia patients also exhibited dysfunctional functional connectivity or functional network connectivity (FNC) compared with healthy controls during the performance of that task, and to evaluate the test-retest reliability of each of these metrics, directly compare them, and assess the influence of subject group and data collection site on reliability estimates. It replicated previous activation study findings of reduced dlPFC activity during goal maintenance. It additionally found that the temporal association between a frontoparietal executive control network and a salience network was stronger in healthy controls than in schizophrenia patients and that the strength of this relationship predicted performance on the goal maintenance task. It also found that the task-modulation of the relationship between left- and right-lateralized executive control networks was stronger in healthy controls than in schizophrenia patients and that the strength of this task-modulation predicted goal maintenance task performance in healthy controls. Finally, the reliability analysis found that ICA and tonic FNC had acceptable overall reliability and that they minimized site-related variance in reliability compared with dynamic FNC and the general linear model. These results indicate that ICA and tonic FNC may provide better tools for group contrast fMRI studies examining schizophrenia, especially those that incorporate a multisite design.
  • Human Factors of Vehicle-Based Lane Departure Warning Systems
    (Minnesota Department of Transportation, 2015-06) Edwards, Christopher; Cooper, Jennifer; Ton, Alice
    Run-off-road (ROR) crashes are a concern for two-lane rural and urban roadways throughout Minnesota due to the frequency with which they contribute to fatal crashes (Minnesota Crash Facts, 2013). Mitigating the severity of ROR events is an ongoing research goal in order to help reduce the number of ROR crashes. Examining countermeasures that may reduce ROR crashes is important to determine the most efficient and effective method of warning. Behavioral responses were examined through the use of an in-vehicle haptic-based lane departure warning system (LDWS) using a driving simulator. The study incorporated systematic variation to both the reliability of the warning and the sequence of treatment conditions. An additional analysis examined the presence of behavioral adaptation after repeated exposure to the system. Severity of a ROR event was measured as the total time out of lane (TTL) and maximum lane deviation (MLD). Covariates (e.g. road shape) were examined to determine the influence they may have on the severity of a ROR event. The results reveal overall LDWS efficacy. TTL was significantly longer when no system was active compared to when it was active. The LDWS led to shorter durations of ROR events. Greater velocity was found to be highly predictive of longer TTL. MLD was also greater for baseline drives compared to treatment drives. No behavioral adaptation or system overreliance was detected, suggesting long-term benefits of the LDWS. Drivers who actively engaged in a distraction task were at far greater risk of traveling longer and more dangerous distances out of lane.
  • Incorporation of Reliability in Minnesota Mechanistic-Empirical Pavement Design
    (Minnesota Department of Transportation, 1999-07) Timm, David H.; Newcomb, David E.; Birgisson, Bjorn; Galambos, Theodore V.
    This report documents the research that incorporated reliability analysis into the existing mechanistic-empirical (M-E) flexible pavement design method for Minnesota. Reliability in pavement design increases the probability that a pavement structure will perform as intended for the duration of its design life. The report includes a comprehensive literature review of the state-of-the-art research. The Minnesota Road Research Project (Mn/ROAD) served as the primary source of data, in addition to the literature review. This research quantified the variability of each pavement design input and developed a rational method of incorporating reliability analysis into the M-E procedure through Monte Carlo simulation. Researchers adapted the existing computer program, ROADENT, to allow the designer to perform reliability analysis for fatigue and rutting. A sensitivity analysis, using ROADENT, identified the input parameters with the greatest influence on design reliability. Comparison designs were performed to check ROADENT against the 1993 AASHTO guide and the existing Minnesota granular equivalency methods. Those comparisons showed that ROADENT produced very similar design values for rutting. However, data suggests that the fatigue performance equation will require further modification to accurately predict fatigue reliability.
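    As a hedged illustration of how Monte Carlo simulation turns variable design inputs into a reliability estimate, the sketch below samples two hypothetical inputs and counts how often a simple performance model meets a 20-year design life. The distributions, the performance model, and the parameter names are invented stand-ins, not ROADENT's actual inputs or transfer functions.

      import numpy as np

      rng = np.random.default_rng(1)
      n_trials = 100_000
      design_life_years = 20.0

      # Sample uncertain design inputs (hypothetical distributions).
      modulus = rng.lognormal(mean=np.log(3000.0), sigma=0.15, size=n_trials)   # asphalt modulus, MPa
      thickness = rng.normal(loc=150.0, scale=10.0, size=n_trials)              # layer thickness, mm

      # Hypothetical performance model: predicted life grows with stiffness and thickness.
      predicted_life = design_life_years * (modulus / 3000.0) ** 0.8 * (thickness / 150.0) ** 1.5

      # Design reliability = probability that predicted life meets or exceeds the design life.
      reliability = np.mean(predicted_life >= design_life_years)
      print(f"estimated reliability: {reliability:.3f}")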
  • Long term mechanical performance of MEMS in liquid environments
    (2009-04) Ali, Shaikh Mubassar
    Micro-electro-mechanical-systems (MEMS) are exposed to a variety of liquid environments in applications such as chemical and biological sensors, and microfluidic devices. Environmental interactions between the liquids and micron-sized structures can lead to unpredictable long-term performance of MEMS in liquid environments. The present understanding of long-term mechanical performance of MEMS is based on studies conducted in air or vacuum. The objective of this study was to extend the present understanding of long-term mechanical performance of MEMS to liquid environments. Two broad categories of long-term mechanical failures reported in the literature were experimentally investigated: operational failures and structural fatigue failures. Typically, operational failures are observed to occur at low stress levels, while fatigue failures are reported at higher stress levels. In order to investigate these failure modes, two different designs of test specimens and experimental techniques were developed. Low stress level (0-5 MPa) tests to investigate operational failures of MEMS in liquids were performed on microcantilever test specimens. Higher stress level (~0.2 GPa) tests were conducted on MEMS tensile specimens for investigating fatigue failures in liquids. Microcantilever specimens were made of silicon and silicon nitride. In addition, the performance of silicon microcantilevers coated with common MEMS coating materials such as titanium and SU-8 was also investigated. Microcantilever specimens were tested in liquids such as de-ionized water, saline, and glucose solution and compared with results in air. The microcantilevers were subjected to long-term cyclic actuation (10^8 to 10^9 cycles) in liquid-filled enclosures. Mechanical performance of the microcantilevers was evaluated by periodically monitoring changes in resonant frequency. Any unpredictable change in resonant frequency was deemed to constitute an operational failure. Despite low stress levels, mechanical performance of microcantilever test specimens was affected to a varying degree depending on environmental interactions between the structural/coating material and the liquid environment. The changes in resonant frequency, often to the extent of ~1%, were attributed to factors such as mineral deposition, corrosion fatigue, water absorption, and intrinsic stresses. Tensile-tensile fatigue tests (high stress level) were performed on aluminum MEMS tensile specimens, in air and saline solution. Fatigue life was observed to range between 1.2 × 10^6 and 2.2 × 10^6 cycles at mean and alternating stresses of 0.13 GPa. The effect of the saline environment on fatigue failures of aluminum tensile specimens was inconclusive from the experiments performed in this study. In conclusion, experimental results indicate subtle operational failures to be a potential critical failure mode for MEMS operating in liquid environments. Long-term mechanical failures in MEMS are expected to depend on the particular combination of material, stress level, and environment.
  • Methodologies for Statistical Characterization of Circuit Reliability in Advanced Silicon Processes
    (2012-07) Jain, Pulkit
    Rising electric fields and imperfections due to atomic-level scaling create non-ideal and stochastic electrodynamics inside a transistor. These appear as reliability mechanisms such as Bias Temperature Instability (BTI), Time Dependent Dielectric Breakdown (TDDB) and Random Telegraph Noise (RTN) at the transistor level, and as a convolved statistical manifestation in performance and functionality at the circuit level. Compounded by shrinking operating margins under process variability and power constraints, these reliability issues have been propelled from the device research arena to the forefront of chip design. The first part of my thesis will explore these different reliability issues in three dedicated test chips. While device-level probing has been the de facto estimation method for reliability engineers due to legacy and simplicity, the approach has become cumbersome due to the time and effort needed to cover the required statistics. Instead, we demonstrate circuit-based reliability monitors, which are a more scalable and representative alternative. The latter also enable superior timing resolution, which is critical to record phenomena such as BTI and RTN without measurement noise. For example, leveraging on-chip methods and intelligent timing control, we demonstrate an SRAM reliability macro with BTI estimation at three orders of magnitude smaller measurement times than possible using conventional approaches. On-chip logic can also be used to control tests on a large number of blocks, resulting in a large experiment-time speedup, which is the basis for our TDDB macro. The second part of my thesis will focus on 3D integration, a breakthrough technology for reducing interconnect delays and chip form factors. In particular, we measure the impact of chip stacking on power delivery and propose schemes to mitigate it through a statistical framework, fabricated in an actual 3D technology. Overall, the ideas here can pave the way not only for accurate empirical modeling and robust guard-banding in the pre-silicon phase but also for post-silicon adaptive tuning, and thus we can better reap the benefits of these new silicon technologies.
  • A Mixed Methods Evaluation of the Reliability and Validity of an Entrustable Professional Activities-Based System for Assessing the Clinical Competence of Medical Students
    (2024) Gauer, Jacqueline
    Introduction: The question of how to assess the clinical competence of medical students has posed a challenge throughout the history of the field. Entrustable Professional Activities (EPAs) describe discrete activities that can be assessed in the workplace by a preceptor, who indicates the degree to which they “entrust” the student to perform the activity. Recently, the University of Minnesota Medical School (UMMS) implemented a system based on the Core Entrustable Professional Activities for Entering Residency (Core-EPAs) to assess the clinical skills of third-year medical students during their clinical clerkships. The purpose of this study was to evaluate the validity and reliability of this system, using quantitative data, qualitative data, and the integration of the two. Method: This study employed a two-phase, sequential explanatory mixed methods design to obtain evidence regarding the reliability, predictive validity, construct validity, and face validity of the Core-EPA system. Quantitative EPA assessment data from AY 22-23 were analyzed via interrater agreement analysis, linear regression modeling to predict scores on an Objective Structured Clinical Examination (OSCE), and growth curve modeling. A purposive sample of eight students was selected from the quantitative data to describe their experiences via semi-structured interviews. The qualitative data were analyzed using a Reflexive Thematic Analysis framework. Results: The interrater reliability analysis found that levels of interrater agreement were acceptable given the complexity of the clinical context (47.35% or 69.40% depending on the definition of interrater agreement), but with room for improvement. The linear regression analysis did not find convincing evidence that EPA ratings are predictive of OSCE scores. The growth curve analysis found that growth curves aligned with those expected by learning curve theory. The qualitative analysis generated five themes and two subthemes describing students’ experiences with the Core-EPA system, their perspectives on its validity and reliability including factors that contribute to those dimensions, and comparisons between the Core-EPA system and OSCEs. Discussion: The findings of this study indicate that the Core-EPA system holds promise as a tool for assessing the clinical competence of medical students. Recommendations, limitations of the study, and ideas for next steps are described.
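    For readers unfamiliar with interrater agreement on ordinal entrustment ratings, the sketch below shows two common ways it can be operationalized: exact agreement and agreement within one level. Whether these correspond to the study's two definitions (the 47.35% and 69.40% figures) is an assumption, and the ratings are invented examples.

      import numpy as np

      rater_a = np.array([3, 2, 4, 3, 1, 4, 2, 3])
      rater_b = np.array([3, 3, 4, 2, 1, 4, 3, 3])

      exact_agreement = np.mean(rater_a == rater_b)                 # identical entrustment level
      adjacent_agreement = np.mean(np.abs(rater_a - rater_b) <= 1)  # within one level
      print(f"exact: {exact_agreement:.2%}, within one level: {adjacent_agreement:.2%}")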
  • The nomological network of self-efficacy and psychometric properties of its general and specific measures
    (2013-02) Seltzer, Benjamin K.
    Since its proposal in 1977, self-efficacy (SE) has been applied to almost every behavioral undertaking imaginable. Over 30,000 studies have been conducted on SE since its introduction, and meta-analyses exist in abundance. Unfortunately, the self-efficacy literature tends to suffer from several common oversights: 1) neglecting measurement properties of self-efficacy scales; 2) inappropriate compartmentalization of self-efficacy by domain; and 3) inappropriate categorization of criteria/outcomes of interest. Accordingly, the goal of the present research was to address the criticisms raised above through meta-analyses of five distinct areas: 1) the reliability of scores from SE scales; 2) the convergence of SE scales within and across behavioral domains; 3) the potentially differential relationships between SE scales and personality traits; 4) the potentially differential correlations between SE scales and cognitive ability; and 5) the potentially differential correlations between SE scales and outcomes. General and specific SE scales were examined for potentially differing relationships with variables of interest. Scales of self-efficacy exceeded basic standards of internal consistency reliability (though these scales were most consistent when at least 5-8 items in length) and displayed strong relationships with one another, even at differing levels of specificity and across behavioral domains. Additionally, self-efficacy scales demonstrated similar patterns of relationships with personality across domains. While measures of self-efficacy displayed more variable patterns of relationships with specific criteria, most scales - even those not tailored for the specific criterion - still functioned as acceptable predictors of academic and organizational performance.
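    As a small illustration of the internal-consistency statistic meta-analyzed in the first area above, the sketch below computes Cronbach's alpha for a simulated 8-item scale; the data and item count are invented, not drawn from the studies reviewed.

      import numpy as np

      rng = np.random.default_rng(2)
      n_people, n_items = 200, 8
      ability = rng.normal(size=(n_people, 1))
      items = ability + rng.normal(scale=1.0, size=(n_people, n_items))   # items share a common factor

      k = n_items
      item_variances = items.var(axis=0, ddof=1)
      total_variance = items.sum(axis=1).var(ddof=1)
      alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
      print(f"Cronbach's alpha: {alpha:.2f}")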
  • A Novel Method for Assessing Leg Compartmental Body Composition Using Dual Energy X-Ray Absorptiometry
    (2016-05) Raymond, Christiana
    PURPOSE: Investigate the validity and reliability of a novel lateral dual energy X-ray absorptiometry (DXA) scanning method for total, fat, and lean tissue mass quantification of the anterior and posterior thigh compartments. METHODS: Twenty-one (11 female; X̅age=20.3±1.3 yrs) college athletes participated, with segmentation of anterior/posterior thigh compartments completed via laterally-positioned DXA scans. Three technicians created custom regions of interest (ROIs) using bony landmarks for each scan via enCore™ software. Paired t-tests (lateral vs. standard frontal position) evaluated the validity of this novel method while intra-class correlations (ICCs) and coefficients of variation (CV) examined intra-/inter-rater reliability. RESULTS: Total, fat, and lean mass comparisons between frontal and lateral DXA scans were non-significant (p-values: 0.15-0.74). High ICCs were observed between-/within-raters (0.983-0.999 and 0.954-0.999, respectively), with low variation across all measures (CVs: <5%). CONCLUSION: DXA measures using lateral positioning and custom ROIs for tissue mass quantification are valid and reliable versus standard frontal positioning.
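    For illustration, the sketch below computes a simple one-way ICC and per-subject coefficients of variation for three raters measuring the same subjects; the one-way ICC(1,1) form shown may differ from the specific ICC model used in the study, and the data are simulated.

      import numpy as np

      rng = np.random.default_rng(3)
      n_subjects, n_raters = 12, 3
      true_size = rng.normal(180.0, 8.0, size=(n_subjects, 1))                  # e.g., a length measure, mm
      measurements = true_size + rng.normal(0.0, 2.0, size=(n_subjects, n_raters))

      grand_mean = measurements.mean()
      subject_means = measurements.mean(axis=1)
      ms_between = n_raters * ((subject_means - grand_mean) ** 2).sum() / (n_subjects - 1)
      ms_within = ((measurements - subject_means[:, None]) ** 2).sum() / (n_subjects * (n_raters - 1))

      icc = (ms_between - ms_within) / (ms_between + (n_raters - 1) * ms_within)
      cv = measurements.std(axis=1, ddof=1) / subject_means                     # per-subject CV
      print(f"ICC(1,1): {icc:.3f}, mean CV: {cv.mean() * 100:.1f}%")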
  • Reliability assessment for low-cost unmanned aerial vehicles
    (2014-11) Freeman, Paul Michael
    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.
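    As a loose illustration of the qualitative likelihood-and-severity assessment described above, the sketch below ranks a few made-up effector failure modes by a simple likelihood x severity risk score; the modes, scales, and scores are invented for illustration and are not taken from the thesis's FMEA worksheet.

      # Each entry: (failure mode, likelihood 1-5, severity 1-5) -- hypothetical values.
      failure_modes = [
          ("servo jammed at fixed deflection", 2, 5),
          ("servo floating (no torque)",       3, 4),
          ("control linkage disconnect",       2, 4),
          ("position-feedback drift",          4, 2),
      ]

      ranked = sorted(failure_modes, key=lambda m: m[1] * m[2], reverse=True)
      for mode, likelihood, severity in ranked:
          print(f"{mode:35s} risk score = {likelihood * severity}")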
  • Reliability-aware and variation-aware CAD techniques.
    (2009-11) Kumar, Sanjay V.
    Technology scaling into the sub-100nm domain means that process, voltage, and temperature variations have a pronounced effect on the performance of digital circuits. An increase in complexity has resulted in challenges in the design and manufacturing of these circuits, as well as in guaranteeing their accurate and reliable performance. Two key challenges in present-day circuit design are to ensure the long term reliability of circuits and to accurately estimate the arrival times and margins of the various paths in the circuit during timing analysis, under the presence of variations, at all operating conditions. Bias Temperature Instability (BTI), a long-term transistor degradation mechanism, has escalated into a growing threat for circuit reliability; hence its exact modeling and the estimation of its effects on circuit performance degradation have become imperative. Techniques to mitigate BTI and ensure that the circuits are robust over their lifetime are also becoming vital. Similarly, the impact of process variations has prompted new areas of research to determine the timing and power estimates of the circuit as accurately as possible. Since the effects of variations can no longer be ignored in high-performance microprocessor design, the sensitivity of timing slacks to parameter variations has become a highly desirable feature to allow designers to quantify the robustness of the circuit at any design point. Further, the effect of varying on-chip temperatures, particularly in low voltage operation, has led to inverted temperature dependence (ITD), in which the circuit delay may actually decrease with an increase in temperature. This thesis addresses some of these major issues in present-day design. We propose a model to estimate the long term effects of Negative Bias Temperature Instability (NBTI). We initially present a simple model based on an infinitely thick gate oxide in transistors, to compute the asymptotic threshold voltage of a PMOS transistor after several cycles of stress and recovery. We then augment the model to handle the effects of finite oxide thickness, and explain several other experimental observations, particularly during the recovery phase of NBTI action. Our model is robust and can be efficiently used in a circuit analysis setup to determine the long-term asymptotic temporal degradation of a PMOS device over several years of operation. In the next chapter, we use this model to quantify the effect of NBTI, as well as Positive Bias Temperature Instability (PBTI), the dual degradation mechanism in NMOS devices, on the temporal degradation of a digital logic circuit. We provide a technique for gauging the impact of signal probability on the delay degradation numbers, and investigate the problem of determining the maximal temporal degradation of a circuit under all operating conditions. In this regard, we conclude that the amount of overestimation using a simple pessimistic worst-case model is not significantly large. We also propose a method to handle the effect of correlations, in order to obtain a more accurate estimation of the impact of aging on circuit delay degradation. The latter part of this chapter proposes several circuit optimization techniques that can be used to ensure reliable operation of circuits, despite temporal degradation caused by BTI. We first present a procedure for combating the effects of NBTI during technology mapping in synthesis, thereby guardbanding our circuits with a minimal area and power overhead.
An adaptive compensation scheme to overcome the effects of BTI through the use of adaptive body bias (ABB) over the lifetime of the circuit is explored. A combination of adaptive compensation and BTI-aware technology mapping is then used to optimally design circuits that meet the highest target frequency over their entire lifetime, and with a minimal overhead in area, average active power, and peak leakage power consumption. Lastly, we present a simple cell-flipping technique that can mitigate the impact of NBTI on the static noise margin (SNM) of SRAM cells. In the final chapter of this thesis, we first present a framework for block based timing sensitivity analysis, where the parameters are specified as ranges - rather than statistical distributions which are hard to know in practice. The approach is validated on circuit blocks extracted from a commercial 45nm microprocessor design. While the above approach considers process and voltage variations, we also show that temperature variations, particularly in the low voltage design space can cause an anomalous behavior, i.e., the circuit delays can decrease with temperature. Thus, it is now necessary to estimate the maximum delay of the circuit, which can occur at any operating temperature, and not necessarily at a worst case corner setting. Accordingly we propose a means to efficiently compute the maximum delay of the circuit, under all operating conditions, and compare its performance with an existing pessimistic worst-case approach.
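    For context, the long-term NBTI degradation modeled in this work is often summarized, within the reaction-diffusion picture, by a fractional power law in stress time. The form below is a generic illustration with assumed notation, not the thesis's closed-form result:

      \Delta V_{th}(t) \approx A\, t^{n}, \qquad n \approx 1/6

    where the prefactor A lumps the dependence on oxide field, temperature, and the fraction of time the PMOS device spends under stress.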
  • Scalable methods for reliability analysis in digital circuits using physics-based device-level models
    (2012-10) Fang, Jianxin
    As technology has scaled aggressively, device reliability issues have become a growing concern in digital CMOS very large scale integrated (VLSI) circuits. There are three major effects that result in degradation of device reliability over time, namely, time-dependent dielectric breakdown (TDDB), bias-temperature instability (BTI), and hot carrier (HC) effects. Over the past several years, considerable success has been achieved at the level of individual devices in developing new models that accurately reconcile the empirical behavior of a device with the physics of reliability failure. However, there is a tremendous gulf between these achievements at the device level and the more primitive models that are actually used by circuit designers to drive the analysis and optimization of large systems. By and large, the latter models are decades old and fail to capture the intricacies of the major advances that have been made in understanding the physics of failure; hence, they cannot provide satisfactory accuracy. The few approaches that can be easily extended to handle new device models are primarily based on simulation at the transistor level, and are computationally prohibitive for large circuits. This thesis addresses the circuit-level analysis of these reliability issues from a new perspective. The overall goal of this body of work is to attempt to bridge the gap between device-level physics-based models and circuit analysis and optimization for digital logic circuits. This is achieved by assimilating updated device-level models into these approaches and developing appropriate algorithms and methodologies that admit scalability, resulting in the ability to handle large circuits. A common thread that flows through many of the analysis approaches involves performing accurate and computationally feasible cell-level modeling and characterization, once for each device technology, and then developing probabilistic techniques to utilize the properties of these characterized libraries to perform accurate analysis at the circuit level. Based on this philosophy, it is demonstrated that the proposed approaches for circuit reliability analysis can achieve accuracy, while simultaneously being scalable to handle large problem instances. The remainder of the abstract presents a list of specific contributions to addressing individual mechanisms at the circuit level.

    Gate oxide TDDB is an effect that can result in circuit failure as devices carry unwanted and large amounts of current through the gate due to oxide breakdown. Realistically, this results in catastrophic failures in logic circuits, and a useful metric for circuit reliability under TDDB is the distribution of the failure probability. The first part of this thesis develops an analytic model to compute this failure probability, and differs from previous area-scaling based approaches that assumed that any device failure results in circuit failure. On the contrary, it is demonstrated that the location and circuit environment of a TDDB failure is critical in determining whether a circuit fails or not. Indeed, it is shown that a large number of device failures do not result in circuit failure due to the inherent resilience of logic circuits. The analysis begins by addressing the nominal case and extends this to analyze the effects of gate oxide TDDB in the more general case where process variations are taken into account. The resulting derivations demonstrate that the circuit failure probability is a Weibull function of time in the nominal case, and follows a lognormal distribution at a specified time instant under process variations. This is then incorporated into a method that performs gate sizing to increase the robustness of a circuit to the TDDB effect.

    Unlike gate oxide TDDB, which results in catastrophic failures, both BTI and HC effects result in temporal increases in the transistor threshold voltages, causing a circuit to degrade over time, and eventually resulting in parametric failures as the circuit violates its timing specifications. Traditional analyses of the HC effects are based on the so-called lucky electron model (LEM), and all known circuit-level analysis tools build upon this model. The LEM predicts that as device geometries and supply voltages reduce to the level of today's technology nodes, the HC effects should disappear; however, this has clearly not been borne out by empirical observations on small-geometry devices. An alternative energy-based formulation to explain the HC effects has emerged from the device community: this thesis uses this formulation to develop a scalable methodology for hot carrier analysis at the circuit level. The approach is built upon an efficient one-time library characterization to determine the age gain associated with any transition at the input of a gate in the cell library. This information is then utilized for circuit-level analysis using a probabilistic method that captures the impact of HC effects over time, while incorporating the effect of process variations. This is combined with existing models for BTI, and simulation results show the combined impact of both BTI and HC effects on circuit delay degradation over time.

    In the last year or two, the accepted models for BTI have also gone through a remarkable shift, and this is addressed in the last part of the thesis. The traditional approach to analyzing BTI, also used in earlier parts of this thesis, was based on the reaction-diffusion (R-D) model, but lately, the charge trapping (CT) model has gained a great deal of traction since it is capable of explaining some effects that R-D cannot; at the same time, there are some effects, notably the level of recovery, that are better explained by the R-D model. Device-level research has proposed that a combination of the two models can successfully explain BTI; however, most work on BTI has been carried out under the R-D model. One of the chief properties of the CT model is the high level of susceptibility of CT-based mechanisms to process variations: for example, it was shown that CT models can result in alarming variations of several orders of magnitude in device lifetime for small-geometry transistors. This work therefore develops a novel approach for BTI analysis that incorporates the effect of the combined R-D and CT model, including variability effects, and determines whether the alarming level of variations at the device level is manifested in large logic circuits or not. The analysis techniques are embedded into a novel framework that uses library characterization and temporal statistical static timing analysis (T-SSTA) to capture process variations and variability correlations due to spatial or path correlations.
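    For reference, the Weibull time dependence referred to above has the standard cumulative form (generic notation assumed here, not the thesis's):

      F(t) = 1 - \exp\left[-\left(t/\eta\right)^{\beta}\right]

    where β is the shape (slope) parameter and η the characteristic life; under process variations, the abstract's claim is that the failure probability evaluated at a fixed time instant instead varies across chips according to a lognormal distribution.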
  • Striking A Balance Between Psychometric Integrity and Efficiency for Assessing Reinforcement Learning and Working Memory in Psychosis-Spectrum Disorders
    (2021-06) Pratt, Danielle
    Cognitive deficits are well-established in psychosis-spectrum disorders and are highly related to functional outcomes for those individuals. Therefore, it is imperative to measure cognition in reliable and replicable ways, particularly when assessing for change over time. Notably, despite revolutionizing our measurement of specific cognitive abilities, parameters from computational models are rarely psychometrically assessed. Cognitive tests often include vast numbers of trials in order to increase psychometric properties; however, long tests cause undue stress on the participant, limit the amount of data that can be collected in a study, and may even result in a less accurate measurement of the domain of interest. Thus, balancing psychometrics with efficiency can lead to better assessments of cognition in psychosis. The goal of this dissertation is to establish the psychometric properties and replicability of reinforcement learning and working memory tasks and determine the extent to which they could be made more efficient without sacrificing psychometric integrity. The results provide support that these tests of reinforcement learning are appropriate for use in studies with only one time point but may not currently be appropriate for retest studies due to the inherent learning that occurs the first time the task is performed. The working memory tasks are ready for use in intervention studies, with the computational parameters of working memory appearing slightly less reliable than observed measures, but potentially more sensitive to detecting group differences. Lastly, these reinforcement learning and working memory tasks can be made 25%-50% more efficient without sacrificing reliability, and can be optimized by focusing on items yielding the most information. Altogether, this dissertation provides guidance for using reinforcement learning and working memory tests in studies of cognition in psychosis in the most appropriate, efficient, and effective ways.
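    As a point of reference for the shortened-test result, classical test theory's Spearman-Brown prophecy formula predicts how reliability changes when a test is shortened or lengthened by a factor n; it is offered here as a standard illustration, not as the dissertation's own psychometric model:

      \rho_{new} = \frac{n\,\rho}{1 + (n - 1)\,\rho}

    For example, halving a test (n = 0.5) whose reliability is 0.90 predicts a reliability of about 0.82, which is consistent with the idea that substantial shortening need not sacrifice much reliability.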
  • Time series analysis of cardiometabolic parameters: reliability and energy drink response
    (2013-12) Nelson, Michael T.
    Cardiometabolic data are currently analyzed primarily by the use of averages. While this method can provide some information, further analysis by time series (variability) methods can provide more physiologic insights. Historically, time series analysis has been performed primarily using heart rate data in the form of heart rate variability (HRV) analysis. This was done to determine the status of the autonomic nervous system via changes in parasympathetic and sympathetic output. Researchers have used different methods of analysis, but a lack of reproducibility studies raises questions about the validity of these methods when applied to heart rate (HR) data. Currently in the literature, these methods have not been applied to metabolic data such as the respiratory exchange ratio (RER). This dissertation will investigate the reliability of time series assessments of cardiometabolic parameters. We hypothesize that in healthy individuals, HRV analyses performed on the same RR intervals but by two different measurement systems are indeed interchangeable. We further hypothesize that the time series analysis of metabolic data such as the RER will be stable and repeatable over two trials conducted under the same conditions. Lastly, we hypothesize that under conditions of physical stress (e.g. ride time-to-exhaustion) and biochemical stress (e.g. energy drink), resting HR and HR variability pre-exercise will be altered and the ride time-to-exhaustion will be increased after subjects consume an energy drink (standardized to 2.0 mg/kg caffeine) compared to a taste-matched placebo. The results of this dissertation will provide further insight into the repeatability of these time series analyses, which could be utilized for future research to determine metabolic flexibility.
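    As a small example of the kind of time-domain heart rate variability analysis discussed above, the sketch below computes two standard HRV statistics (SDNN and RMSSD) from a list of RR intervals; the RR series is invented, and the dissertation's specific time-series methods are not reproduced here.

      import numpy as np

      rr_ms = np.array([812, 790, 805, 798, 820, 815, 801, 793, 808, 817], dtype=float)

      sdnn = rr_ms.std(ddof=1)                        # overall RR-interval variability
      rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # beat-to-beat (short-term) variability
      print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")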
  • Time series analysis of cardiometabolic parameters: reliability and energy drink response
    (2013-12) Nelson, Michael Thomas
    Cardiometabolic data are currently analyzed primarily by the use of averages. While this method can provide some information, further analysis by time series (variability) methods can provide more physiologic insights. Historically, time series analysis has been performed primarily using heart rate data in the form of heart rate variability (HRV) analysis. This was done to determine the status of the autonomic nervous system via changes in parasympathetic and sympathetic output. Researchers have used different methods of analysis, but a lack of reproducibility studies raises questions about the validity of these methods when applied to heart rate (HR) data. Currently in the literature, these methods have not been applied to metabolic data such as the respiratory exchange ratio (RER). This dissertation will investigate the reliability of time series assessments of cardiometabolic parameters. We hypothesize that in healthy individuals, HRV analyses performed on the same RR intervals but by two different measurement systems are indeed interchangeable. We further hypothesize that the time series analysis of metabolic data such as the RER will be stable and repeatable over two trials conducted under the same conditions. Lastly, we hypothesize that under conditions of physical stress (e.g. ride time-to-exhaustion) and biochemical stress (e.g. energy drink), resting HR and HR variability pre-exercise will be altered and the ride time-to-exhaustion will be increased after subjects consume an energy drink (standardized to 2.0 mg/kg caffeine) compared to a taste-matched placebo. The results of this dissertation will provide further insight into the repeatability of these time series analyses, which could be utilized for future research to determine metabolic flexibility.
