Browsing by Subject "Optimization"
Now showing 1 - 20 of 54
Item: Adjoint-based Data Assimilation for Ocean Waves (2022-05)
Wu, Jie
The exchange between the atmosphere and oceans, which regulates weather and climate processes and upper ocean dynamics, critically depends on the spatial and temporal evolution of ocean surface waves. In reality, directly measuring the entire wave state is an expensive and difficult task. First, we propose a method for the reconstruction and prediction of nonlinear wave fields from coarse-resolution measurement data. We adopt the adjoint-based data assimilation framework to search for the optimal wave states to match the given measurement data and governing physics. The performance of our method is assessed by considering a variety of wave steepness and noise levels for nonlinear irregular wave fields. It is found that our method shows significantly improved performance in the reconstruction and prediction of instantaneous surface elevation, surface velocity potential, and high-order wave statistics. Second, we propose a method to infer coastal bathymetry from the spatial variations of surface waves by combining a high-order spectral method for wave simulation with an adjoint-based variational data assimilation method. The proposed bottom detection method is applied to a realistic coastal environment involving complex two-dimensional bathymetry, non-periodic incident waves, and nonlinear broadband multi-directional waves. We also address issues related to surface wave data quality, including limited sampling frequency and noise. Both laboratory-scale and field-scale bathymetries with monochromatic and broadband irregular waves are tested, and satisfactory detection accuracy is obtained. Last, we investigate the impact of oceanic internal wave and surface wave parameters on the surface roughness signature using a two-fluid solver. We use the ratio of the mean surface slope between the rough and smooth bands, which are identified in the simulated surface field, to systematically investigate their response to the internal wave forcing across all our simulation cases. We find that the strongest surface heterogeneity is caused by varying upper-lower layer density ratios. Our results also show that it is necessary to include the wave steepness statistics to account for the internal wave-induced surface wave modulation.
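The adjoint-based framework above amounts to minimizing a data-misfit cost over candidate wave states, with the gradient supplied by an adjoint (transpose) operation. As a rough, self-contained illustration of that idea (not the thesis code), the sketch below recovers the amplitudes of a toy linear wave field from sparse, noisy elevation samples by gradient descent; the wave basis, grid, and noise level are made-up placeholders.

```python
# Minimal sketch of variational (adjoint-style) data assimilation: recover the
# amplitudes of a toy linear wave field from sparse, noisy surface-elevation
# samples by gradient descent on the data-misfit cost.  All numbers are
# illustrative placeholders, not values from the thesis.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 400)                       # spatial grid [m]
k = 2.0 * np.pi * np.arange(1, 9) / 100.0              # model wavenumbers
Phi = np.hstack([np.sin(np.outer(x, k)), np.cos(np.outer(x, k))])  # wave basis

amps_true = rng.normal(scale=0.1, size=Phi.shape[1])
eta_true = Phi @ amps_true                             # "true" surface elevation

obs = np.arange(0, x.size, 20)                         # coarse measurement locations
H = Phi[obs, :]                                        # (linear) observation operator
y = eta_true[obs] + rng.normal(scale=0.005, size=obs.size)

amps = np.zeros(Phi.shape[1])                          # first guess
for _ in range(2000):
    residual = H @ amps - y
    grad = H.T @ residual                              # adjoint of H applied to the misfit
    amps -= 0.01 * grad

err = np.linalg.norm(amps - amps_true) / np.linalg.norm(amps_true)
print(f"relative error in recovered amplitudes: {err:.3f}")
```

For a nonlinear wave model the observation operator and its adjoint would come from linearizing the wave solver, but the descent loop keeps the same shape.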
Item: Algorithm Optimization of non-DMSO Cryopreservation Protocols to Improve Mesenchymal Stem Cell Post-Thaw Function (2016-09)
Pollock, Kathryn
Mesenchymal stem cells (MSCs) are a common transfusion cell therapy that have been used in over 300 clinical trials to treat over 2000 patients with diseases ranging from Crohn's disease to heart failure. These cells are frequently cryopreserved to better coordinate the timing of cell administration with patient care regimens and to accommodate transport of samples between different sites of collection, processing, and administration. However, cryopreservation with DMSO (the current gold standard) can result in poor cell function post-thaw and adverse reactions upon infusion. We hypothesize that non-DMSO cryopreservative molecules, including sugars, sugar alcohols, amino acids, and other small-molecule additives, can be used in combination to protect cell viability and function post-thaw.
This research demonstrates that some combinations of non-DMSO cryopreservatives preserve cell functionality better than others, and that these effects depend not on osmotic or physical changes in solution but on biological changes that affect the cell during the freezing process. We observe that there is likely a "sweet spot" concentration combination that produces maximum recovery for each combination of molecules, and demonstrate that an evolutionary algorithm can be used to identify optimized combinations of molecules that yield high cell recovery post-thaw. Additionally, we demonstrate that these novel solutions maintain MSC functionality when evaluated using surface markers, attachment, proliferation, actin alignment, RNA expression, and DNA hydroxymethylation. These advances in cryopreservation can improve cell therapy, and ultimately patient care.
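The evolutionary search mentioned above can be pictured in a few lines: a population of candidate cocktails, a fitness score from a post-thaw recovery assay, and selection, crossover, and mutation. The sketch below is only schematic; the recovery() surrogate, component count, and concentration bounds are invented stand-ins for the experimental measurements, not values from the thesis.

```python
# Toy evolutionary search over cryoprotectant concentrations, illustrating the
# kind of algorithm described above; recovery() is a made-up stand-in for the
# experimental post-thaw recovery assay.
import numpy as np

rng = np.random.default_rng(1)
N_COMPONENTS = 4                 # e.g. sugar, sugar alcohol, amino acid, additive
BOUNDS = (0.0, 300.0)            # hypothetical concentration range, mM

def recovery(c):
    """Surrogate with an interior 'sweet spot'; replace with assay results."""
    sweet_spot = np.array([100.0, 150.0, 50.0, 20.0])
    return np.exp(-np.sum(((c - sweet_spot) / 80.0) ** 2))

pop = rng.uniform(*BOUNDS, size=(30, N_COMPONENTS))
for generation in range(40):
    fitness = np.array([recovery(c) for c in pop])
    parents = pop[np.argsort(fitness)[::-1][:10]]            # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        w = rng.random(N_COMPONENTS)
        child = w * a + (1 - w) * b                          # blend crossover
        child += rng.normal(scale=10.0, size=N_COMPONENTS)   # Gaussian mutation
        children.append(np.clip(child, *BOUNDS))
    pop = np.vstack([parents, children])

best = pop[np.argmax([recovery(c) for c in pop])]
print("best candidate concentrations (mM):", np.round(best, 1))
```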
Item: Big Temporally-Detailed Graph Data Analytics (2015-06)
Gunturi, Venkata Maruti Viswanath
Increasingly, temporally-detailed graphs are of a size, variety, and update rate that exceed the capability of current computing technologies. Such datasets can be called Big Temporally-Detailed Graph (Big-TDG) Data. Examples include temporally-detailed (TD) roadmaps which provide typical travel speeds experienced on every road segment for thousands of departure-times in a typical week. Likewise, we have temporally-detailed (TD) social networks which contain a temporal trace of social interactions among the individuals in the network over a time window. Big-TDG data has transformative potential. For instance, a 2011 McKinsey Global Institute report estimates that location-aware data could save consumers hundreds of billions of dollars annually by 2020 by helping vehicles avoid traffic congestion via next-generation routing services such as eco-routing. However, Big-TDG data presents big challenges for the current computer science state of the art. First, Big-TDG data violates the cost-function decomposability assumption of current conceptual models for representing and querying temporally-detailed graphs. Second, the voluminous nature of Big-TDG data can violate the stationary ranking-of-candidate-solutions assumption of dynamic-programming-based techniques such as Dijkstra's shortest path algorithm. This thesis proposes novel approaches to address these challenges. To address the first challenge, this thesis proposes a novel conceptual model called "Lagrangian Xgraphs," which models non-decomposability through a series of overlapping (in space and time) relations, each representing a single atomic unit which retains the required semantics. An initial study shows that Lagrangian Xgraphs are more convenient for representing diverse temporally-detailed graph datasets and comparing candidate travel itineraries. For the second challenge, this thesis proposes a novel divide-and-conquer technique called the "critical-time-point (CTP) based approach," which efficiently divides the given time interval (over which non-stationary ranking is observed) into disjoint sub-intervals over which dynamic-programming-based techniques can be applied. Theoretical and experimental analysis shows that CTP-based approaches outperform the current state of the art.

Item: Changeability Planning Of Modular Fixtures In A Robotic Assembly System To Manage Product Variety (2016-07)
Tohidi, Hossein
Changeability in manufacturing systems has been implemented in various industries over the last forty years to meet the challenge of change in product demand and design.
Changeability is an umbrella paradigm and consists of different manufacturing system characteristics, such as changeover-ability, reconfigurability, flexibility, transformability, and agility. On the system level, flexibility is enabled particularly by implementing modular equipment designs such as modular fixtures that can hold a variety of product geometries. On the operational level, optimizing the changeability plan of those modular fixtures improves the performance of the manufacturing system. This thesis considers a hole-pattern modular fixture to increase the changeability of an automated assembly system. In this assembly setting, a robot located above a conveyor belt loop places different parts on the modular fixture and secures them by inserting four pegs around each part. The more peg replacements occur, the longer the fixture setup time required. Hence, the researchers of this study developed five different mathematical models to effectively determine the best part and peg locations on the fixture, minimizing the number of peg replacements under different conditions and constraints. They then presented a new fixture design that improves the fixture's modularity to hold more products with different geometries. With a few modifications, the five previous models were extended to this new fixture design. Subsequently, three case studies with different parameter sizes were proposed to investigate the models' efficiency. The results show that the different models are able to significantly reduce fixture setup time.

Item: Control and optimization with dimensionality constraints (2008-10)
Takyar, Mir Shahrouz
The purpose of this thesis is to develop synthesis tools for control design with dimensionality constraints. In particular, given a model for a physical process, the goal is to characterize all possible controllers of a certain dimension which satisfy given performance criteria. In classical feedback design, the complexity of the controller adversely affects the robustness of the regulatory mechanisms of the feedback and adds to the fragility of the system. The complexity is often due to the difficulty in imposing performance specifications in a natural mathematical context. Typically, this is done using "weight functions" which encapsulate the specifications, and then introducing those in a suitable optimization problem. A contribution of this work is to address a certain type of optimization problem and the choice of weight functions. More precisely, we develop a new design approach which leads to a controller achieving both requirements, the performance specifications and low complexity, at the same time. Further, this thesis generalizes the previous methods to multivariable systems by developing analogous theory and techniques. The main contribution in multivariable analytic interpolation is to characterize a family of minimal McMillan degree solutions by a choice of spectral-zero dynamics. In addition to the application of this theory to model-matching in control design, we show how to use the same techniques for maximum power transfer in circuit theory and for spectral estimation in signal analysis. Also in this thesis we give new results on the implementation of controllers with some very specific elements. One such element, which is hard to simulate on a digital computer, is what could be described as a "half capacitor".
It implements a "fractional integration" and can be used to great advantage in classical feedback design, providing gain but without introducing time-lag.
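The "half capacitor" remark is easy to check numerically: a fractional integrator 1/s^0.5 keeps low-frequency gain while contributing only -45 degrees of phase, versus the -90 degrees of an ordinary integrator. The few lines below simply evaluate both frequency responses; they illustrate the element's behavior and are not the implementation studied in the thesis.

```python
# Numerical illustration of the "half capacitor" idea: the fractional
# integrator 1/s**0.5 provides gain with a constant -45 degree phase,
# compared with -90 degrees for the ordinary integrator 1/s.
import numpy as np

w = np.logspace(-2, 2, 5)                 # rad/s
half_int = 1.0 / (1j * w) ** 0.5
full_int = 1.0 / (1j * w)

for wi, h, f in zip(w, half_int, full_int):
    print(f"w={wi:8.2f}  |1/s^0.5|={abs(h):7.2f}  phase={np.degrees(np.angle(h)):6.1f} deg"
          f"   |1/s|={abs(f):8.2f}  phase={np.degrees(np.angle(f)):6.1f} deg")
```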
Item: Control Paradigms For Distributed Energy Resources (Ders) To Engineer Inertial And Primary Frequency Response In Power Grids (2020-02)
Guggilam, Swaroop
Power networks have to withstand a variety of disturbances that affect system frequency, and the problem is compounded by the increasing integration of intermittent renewable generation. Following a large-signal generation or load disturbance, system frequency is arrested by leveraging the primary frequency control provided by governor action in synchronous generators. This work proposes a framework for distributed energy resources (DERs) deployed in transmission and distribution networks to provide (supplemental) inertial and primary frequency response. The work is divided into two parts: the transmission-level and distribution-level design processes. In particular, we demonstrate how synthetic inertia and power-frequency droop slopes for individual DERs can be designed so that the system frequency conforms to prescribed transient and steady-state performance specifications at the transmission level, and the distribution feeder presents guaranteed frequency-regulation characteristics at the feeder head. Furthermore, the droop and inertial coefficients are engineered such that the injections of individual DERs conform to a well-defined fairness objective that does not penalize them for their location on the distribution feeder. At the transmission level, our approach is grounded in a second-order lumped-parameter model that captures the dynamics of synchronous generators and frequency-responsive DERs endowed with inertial and droop control. A key feature of this reduced-order model is that its parameters can be related to those of the originating higher-order dynamical model. This allows one to systematically design the DER inertial and droop-control coefficients leveraging classical frequency-domain response characteristics of second-order systems. At the distribution level, the approach we adopt leverages an optimization-based perspective and suitable linearizations of the power-flow equations to embed notions of fairness, and information regarding the physics of the power flows within the distribution network, into the droop slopes. This optimization-based method is centered around classical economic dispatch. The accuracy of the model-reduction method, and a demonstration of how DER controllers can be designed to meet steady-state-regulation and transient-performance specifications, are presented for an illustrative network composed of a combined transmission and distribution network with frequency-responsive DERs.
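As a rough feel for the second-order design logic described above (and not the specific reduced-order model derived in the thesis), the sketch below uses a textbook aggregate swing equation with a first-order governor. From the characteristic polynomial one can read off the natural frequency, damping ratio, and steady-state frequency deviation, and see how supplemental DER droop shrinks the steady-state deviation; all parameter values are assumed for illustration.

```python
# Back-of-the-envelope sketch (not the thesis model): aggregate swing dynamics
# M*dw/dt = -D*w + g + dP, with a first-order governor g(s) = -(1/R)*w(s)/(1 + T*s).
# The characteristic polynomial M*T*s^2 + (M + D*T)*s + (D + 1/R) gives the
# natural frequency, damping ratio, and steady-state frequency deviation.
import numpy as np

M, T, R = 8.0, 0.5, 0.05        # inertia, governor time constant, droop (assumed per-unit values)
dP = 0.1                        # step load disturbance (per unit)

for D_der in (0.0, 10.0, 20.0): # supplemental droop provided by DERs
    D = 1.0 + D_der             # load damping plus DER droop
    a2, a1, a0 = M * T, M + D * T, D + 1.0 / R
    wn = np.sqrt(a0 / a2)                       # natural frequency
    zeta = a1 / (2.0 * np.sqrt(a2 * a0))        # damping ratio
    df_ss = dP / a0                             # steady-state frequency deviation magnitude
    print(f"DER droop {D_der:5.1f}: wn={wn:5.2f} rad/s  zeta={zeta:5.2f}  |df_ss|={df_ss:.4f} pu")
```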
Item: Convection-Enhanced Evaporation: Modeling and Optimal Control for Modular Cost-Effective Brine Management (2022-11)
Kaddoura, Mustafa
This dissertation proposes mathematical modeling and novel cost-optimal control methods for Convection-Enhanced Evaporation (CEE) systems. CEE is the approach of evaporating water from saline films (brine) on packed evaporation surfaces by air convection, and actively controlling the operation variables to minimize the process cost. The developed approach represents a modular, cost-effective solution for brine management in decentralized and/or small-scale desalination plants and industrial processes which currently lack safe and effective brine management options.
Forced convection across packed, wetted evaporation surfaces is induced by means of a fan, natural wind, or a combination of both (a hybrid approach). As air flows over the liquid films, the difference in vapor pressure between the air and liquid surfaces induces evaporation. The work contains three major parts. The first part develops a generalized mathematical model of CEE systems to simulate the heat and mass transfer associated with convection-driven evaporation of saline films. The model is derived from the fundamental conservation equations of mass and energy and solved numerically using the finite difference method to predict the evaporation rate and the spatial distribution of humidity, temperature, and salinity along the evaporation surfaces, based on the ambient condition, the liquid (brine) inlet condition, and the design configuration. The model-predicted performance is in good agreement with experimental pilot CEE system performance and with values published in the literature. The developed model is used to explore and compare the performance of three design aspects: (1) the liquid-air flow configuration (cross-flow vs. parallel-flow), (2) the alignment and wetting of the surfaces (vertically aligned with double-sided wet surfaces vs. horizontally aligned with single-sided wet surfaces), and (3) hybrid wind-fan operation, a novel operation mode aimed at reducing the electrical energy demand of the fan by harnessing the natural drying power of the wind. The second part of this dissertation focuses on cost optimization. It proposes a method for formulating objective functions using cost ratios to generalize the optimization results to applications with varying material and energy prices and scenarios. The problem of identifying the cost-optimal operating settings is then solved as a multi-objective optimization using the genetic algorithm. The optimization revealed and characterized two distinct operation modes: an "all-electric" mode and a "heating" mode. Finally, the last part of this dissertation proposes a data-driven optimal control method. The controller is based on a large dataset consisting of Pareto fronts, obtained in advance by solving a set of optimization problems. The method allows three optimal operation strategies: (1) real-time selection of operating variables, (2) predictive scheduled operation, and (3) hybrid wind-fan operation. The effectiveness of the proposed strategies was assessed through two case studies with distinct geographical locations and weather conditions. The results showed significant cost-saving potential relative to static operation. The presented control strategies enable CEE to adjust its operation under various weather conditions. The models and methods developed in this dissertation are conducive to the study and control of other configurations of CEE systems.
They have the potential to be applied to other desalination and renewable energy systems, particularly those involving a trade-off between thermal and electric energy demand.
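The real-time selection step of a Pareto-front-based controller can be sketched very simply: given a table of precomputed Pareto-optimal operating points, pick the one that is cheapest per unit of water evaporated at the current energy prices. The table entries and prices below are invented for illustration; the actual fronts in the dissertation come from the full heat-and-mass-transfer model.

```python
# Illustrative selection step for a Pareto-front-based controller: given a
# precomputed set of Pareto-optimal operating points (made-up numbers here),
# pick the one with the lowest cost per unit of water evaporated at the
# current electricity and heat prices.
import numpy as np

# columns: fan electric power [kW], heating power [kW], evaporation rate [m3/h]
pareto_points = np.array([
    [2.0,  0.0, 0.08],
    [4.0,  0.0, 0.12],
    [3.0, 10.0, 0.20],
    [2.0, 25.0, 0.28],
])

def best_operating_point(price_elec, price_heat):
    """Return the Pareto point minimizing cost per m3 evaporated ($/m3)."""
    cost_per_hour = price_elec * pareto_points[:, 0] + price_heat * pareto_points[:, 1]
    cost_per_m3 = cost_per_hour / pareto_points[:, 2]
    i = np.argmin(cost_per_m3)
    return pareto_points[i], cost_per_m3[i]

point, cost = best_operating_point(price_elec=0.12, price_heat=0.03)  # $/kWh
print("selected [fan kW, heat kW, m3/h]:", point, " cost $/m3:", round(cost, 2))
```

Cheap heat shifts the selection toward the "heating"-style points, while expensive heat keeps it in the "all-electric" corner, mirroring the two operation modes identified in the dissertation.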
Item: Deterministic Integrated Production and Maintenance Scheduling Models with Cost Coupling (2019-12)
Shutts, Callen; Zhang, Qi

Item: Developing an Intelligent Decision Support System for the Proactive Implementation of Traffic Safety Strategies (Intelligent Transportation Systems Institute, Center for Transportation Studies, 2013-03)
Chen, Hongyi; Chen, Fang; Anderson, Chris
The growing number of traffic safety strategies, including Intelligent Transportation Systems (ITS) and low-cost proactive safety improvements (LCPSI), calls for an integrated approach to optimize resource allocation systematically and proactively. While most of the currently used standard methods, such as the six-step method that identifies and eliminates hazardous locations, serve their purpose well, they represent a reactive approach that seeks improvement after crashes happen. In this project, a decision support system with a Geographic Information System (GIS) interface is developed to proactively optimize the resource allocation of traffic safety improvement strategies. With its optimization function, the decision support system is able to suggest a systematically optimized implementation plan, together with the associated cost, once the areas of concern and possible countermeasures are selected. It proactively improves overall traffic safety by implementing the most effective safety strategies that meet the budget, decreasing the total number of crashes to the maximum degree. The GIS interface of the decision support system enables users to select areas of concern directly from the map and calculates certain inputs automatically from parameters related to geometric design and traffic control features. An associated database is also designed to support the system so that, as more data are input into the system, the calibration factors and crash modification functions used to calculate the expected number of crashes are continuously updated and refined.
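At its core, the resource-allocation step described above is a budget-constrained selection problem: choose the set of countermeasures whose combined expected crash reduction is largest without exceeding the budget. The sketch below solves a toy instance as a 0/1 knapsack by dynamic programming; the candidate measures, costs, and crash-reduction figures are made up and are not from the project.

```python
# Hypothetical core of the resource-allocation step: choose a subset of
# candidate countermeasures (each with a cost and an expected crash reduction)
# that maximizes total expected reduction within a budget, solved as a 0/1
# knapsack by dynamic programming.  Numbers are illustrative only.
candidates = [                      # (name, cost in $1000s, expected crashes avoided / yr)
    ("edge line rumble strips", 40, 6.0),
    ("curve warning signs",     15, 2.5),
    ("intersection lighting",   60, 7.0),
    ("shoulder widening",      120, 9.0),
]
budget = 130                        # $1000s

best = {0: (0.0, [])}               # spent -> (reduction, chosen names)
for name, cost, benefit in candidates:
    for spent, (red, chosen) in list(best.items()):
        s = spent + cost
        if s <= budget and (s not in best or best[s][0] < red + benefit):
            best[s] = (red + benefit, chosen + [name])

reduction, plan = max(best.values())
print(f"expected crashes avoided: {reduction:.1f}; plan: {plan}")
```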
Item: The development of a power management strategy for a hydraulic hybrid passenger vehicle (2014-07)
Meyer, Jonathan James
The amount of energy being consumed is increasing each year, with the highest sector being the transportation industry. Within the transportation sector, the highest area of oil consumption is the small and lightweight vehicle category. With increasing oil prices and decreasing supply, methods of reducing oil consumption have been studied. One is developing a hybrid vehicle, which combines the internal combustion engine with an additional power source. For lightweight vehicles, electric hybrid vehicles have been thoroughly studied. While hydraulic hybrids have been studied for larger applications such as delivery trucks and buses, little research has been done in the area of small, lightweight vehicles. Hydraulics have a higher power density than electronics, so hydraulic hybrids can achieve better performance than electric hybrids while reducing fuel consumption. In this research, series and power-split architectures are studied for a passenger vehicle. Because of the additional hydraulic power source along with energy storage, the optimal way to control these vehicles is not known.
Therefore, an energy management strategy must be developed to determine the optimal strategy for splitting the power between the engine and the hydraulics. Three different methods are used to develop the energy management strategy: a rule-based strategy based on dynamic programming results, stochastic dynamic programming, and model predictive control. An experimental hardware-in-the-loop setup is used to replicate a series hybrid on which the different energy management strategies are tried. Through simulation and experimentation, it was found that no one strategy works best in all scenarios, and variables such as knowledge of the duty cycle and energy storage must be taken into account when developing the strategy. An input-coupled power-split hybrid was also studied, which combines the mechanical efficiency of the parallel hybrid with the engine management of a series hybrid. Through a series of simulations, a strategy that declutched the engine from the drivetrain while the vehicle is stopped gave a significant reduction in fuel consumption. Another advantage of the power-split architecture is the ability to operate the vehicle in different modes by declutching the engine and removing hydraulic units through the use of valves. By using this strategy, the fuel economy can be almost doubled over a baseline strategy which operates only in power-split mode. Finally, the size of the accumulator can have an effect on fuel consumption, with a smaller accumulator leading to less fuel consumed; however, if the accumulator is too small, performance starts to degrade with a downsized engine. The results of this research can be used to develop a toolbox for designing energy management strategies, in which the user enters a model, objective function, and duty cycle for a system. By using other information, such as knowledge of the duty cycle, the toolbox can determine the best method of developing the control strategy, reducing the amount of time and resources required to develop an optimal control strategy.
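To make the dynamic-programming flavor of these strategies concrete, the toy sketch below runs a backward DP over a discretized accumulator state of charge for a short, known power-demand profile, with an invented convex fuel map. It is a schematic of the technique only; the vehicle models, drive cycles, and fuel maps in the thesis are far more detailed.

```python
# Toy dynamic-programming energy management: backward DP over a discretized
# accumulator state of charge (SOC) for a known demand profile, with a
# made-up fuel map.  Illustrative of the method, not the thesis models.
import numpy as np

demand = [10.0, 25.0, 5.0, 30.0, 0.0, 15.0]     # kW required at the wheels
soc_grid = np.linspace(0.0, 1.0, 21)            # accumulator state of charge
eng_grid = np.linspace(0.0, 40.0, 9)            # engine power choices, kW
dt, capacity = 1.0, 100.0                       # s, kJ of usable storage

def fuel_rate(p_eng):                           # g/s, fictitious convex fuel map
    return 0.05 + 0.02 * p_eng + 0.0005 * p_eng ** 2 if p_eng > 0 else 0.0

# cost_to_go[i] = minimal fuel from this time onward when starting at soc_grid[i]
cost_to_go = np.zeros(len(soc_grid))
for p_dem in reversed(demand):
    new_cost = np.full(len(soc_grid), np.inf)
    for i, soc in enumerate(soc_grid):
        for p_eng in eng_grid:
            p_acc = p_dem - p_eng               # positive = discharging, negative = charging
            soc_next = soc - p_acc * dt / capacity
            if not 0.0 <= soc_next <= 1.0:
                continue                        # infeasible accumulator state
            j = int(round(soc_next * (len(soc_grid) - 1)))
            cost = fuel_rate(p_eng) * dt + cost_to_go[j]
            new_cost[i] = min(new_cost[i], cost)
    cost_to_go = new_cost

print("fuel (g) from 50% charge over the profile:", round(cost_to_go[10], 2))
```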
Item: Development of a semivolatile aerosol dichotomous sampler (2008-12)
Kim, Seung Won
This dissertation consists of three main sections which report the procedures used to develop the semivolatile aerosol dichotomous sampler (SADS) and their results. The first section describes the theoretical background of the SADS and the validation of its performance using numerical simulations and experimental data. The SADS was proposed as an alternative method to overcome some of the problems of existing personal sampling methods, such as evaporative loss during filter sampling. The main difference between virtual impactors and the SADS is the inverted flow ratio between the major and the minor flow. Sampling in the SADS settings gave a lower cut size in both numerical simulations and experimental results. The second section reports the results of an optimization procedure for the SADS and experimental confirmation with the optimized sampler. Using numerical modeling, the relationships between four major design and operating parameters significantly affecting the performance of the SADS and four performance parameters were expressed as polynomial equations. Using an optimization procedure, values for the major parameters giving the best performance were determined and used as the base model for optimizing minor parameters. Five minor parameters were then investigated for their possible contribution to better performance of the SADS.
Experimental tests confirmed that the performance of the new sampler was improved, although not as much as expected from the numerical simulation. In the third section, the sampling performance of the SADS was compared with existing vapor and particle sampling methods. Seven different test fluids were used to generate test droplets, and the concentrations and composition in each phase were evaluated using gas chromatography. Combined vapor and particle concentrations for each test aerosol were not statistically different from one another as a function of test method. However, the particle concentrations estimated using the SADS were statistically higher than those from the other methods. In the tests of a chemical mixture and oil mists, a vapor/particle concentration ratio pattern similar to that of the individual compounds was observed. The SADS worked better than a filtration method and measured higher particle concentrations than the other methods.

Item: Efficient Methods for Distributed Machine Learning and Resource Management in the Internet-of-Things (2019-06)
Chen, Tianyi
Undoubtedly, this century evolves in a world of interconnected entities, where the notion of the Internet-of-Things (IoT) plays a central role in the proliferation of linked devices and objects. In this context, the present dissertation deals with large-scale networked systems, including IoT, that consist of heterogeneous components and can operate in unknown environments. The focus is on the theoretical and algorithmic issues at the intersection of optimization, machine learning, and networked systems. Specifically, the research objectives and innovative claims include: (T1) scalable distributed machine learning approaches for efficient IoT implementation; and (T2) enhanced resource management policies for IoT by leveraging machine learning advances. Conventional machine learning approaches require centralizing the users' data on one machine or in a data center. Considering the massive number of IoT devices, centralized learning becomes computationally intractable and raises serious privacy concerns. The widespread consensus today is that besides data centers in the cloud, future machine learning tasks have to be performed starting from the network edge, namely mobile devices. The first contribution offers innovative distributed learning methods tailored to heterogeneous IoT setups and with reduced communication overhead. The resultant distributed algorithm can afford provably reduced communication complexity in distributed machine learning. From learning to control, reinforcement learning will play a critical role in many complex IoT tasks such as autonomous vehicles. In this context, the thesis introduces a distributed reinforcement learning approach featuring high communication efficiency. Optimally allocating computing and communication resources is a crucial task in IoT. The second novelty pertains to learning-aided optimization tools tailored to resource management tasks. To date, most resource management schemes are based on a pure optimization viewpoint (e.g., the dual (sub)gradient method), which incurs suboptimal performance. From the vantage point of IoT, the idea is to leverage the abundant historical data collected by devices and formulate the resource management problem as an empirical risk minimization task, a central topic in machine learning research. By cross-fertilizing advances in optimization and learning theory, a learn-and-adapt resource management framework is developed.
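One common way to reduce communication in distributed learning, sketched below purely as an illustration (it is not claimed to be the specific algorithm in this dissertation), is to let each worker skip an upload whenever its gradient has barely changed since the last one it sent, with the server reusing the stale copy.

```python
# Minimal sketch of communication-reduced distributed gradient descent: a
# worker re-sends its gradient only when it has changed appreciably since the
# last upload; otherwise the server reuses the stale copy.  This illustrates
# the flavor of communication-skipping rules, not the thesis algorithm.
import numpy as np

rng = np.random.default_rng(2)
d, n_workers, lr, thresh = 5, 4, 0.1, 1e-3
A = [rng.normal(size=(20, d)) for _ in range(n_workers)]          # local data
b = [a @ rng.normal(size=d) + 0.01 * rng.normal(size=20) for a in A]

w = np.zeros(d)
last_sent = [np.zeros(d) for _ in range(n_workers)]
uploads = 0
for t in range(200):
    for m in range(n_workers):
        g = A[m].T @ (A[m] @ w - b[m]) / len(b[m])                # local gradient
        if np.linalg.norm(g - last_sent[m]) > thresh:             # worth re-sending?
            last_sent[m] = g
            uploads += 1
    w -= lr * np.mean(last_sent, axis=0)                          # server update

print(f"uploads: {uploads} of {200 * n_workers} possible")
```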
An upshot of the second part is its ability to account for the feedback-limited nature of tasks in IoT. Typically, solving resource allocation problems necessitates knowledge of the models that map a resource variable to its cost or utility. Targeting scenarios where models are not available, a model-free learning scheme is developed in this thesis, along with its bandit version. These algorithms come with provable performance guarantees, even when knowledge about the underlying systems is obtained only through repeated interactions with the environment. The overarching objective of this dissertation is to wed state-of-the-art optimization and machine learning tools with the emerging IoT paradigm, in a way that they can inspire and reinforce the development of each other, with the ultimate goal of benefiting daily life.

Item: Efficient numerical algorithms for virtual design in nanoplasmonics (2017-03)
Ortan, Alexandra
Nanomaterials have given rise to many devices, from high-density data storage to optical bio-sensors capable of detecting specific biochemicals. The design of new nanodevices relies increasingly on numerical simulations, driving a need for efficient numerical methods. In this work, integral equations are used to efficiently solve the electromagnetic transmission problem at the interface of a dielectric and a periodic metal nanostructure. Derivative-free trust-region methods are then used to optimize the geometry of the nanostructure.

Item: Empirical Analysis of Optimization and Generalization of Deep Neural Networks (2022-03)
Li, Xinyan
Deep neural networks (DNNs) have gained increasing attention and popularity in the past decade. This is mainly due to their tremendous success in numerous commercial, scientific, and societal tasks. Despite the success of DNNs in practice, several aspects of their optimization dynamics and generalization are still not well understood. In practice, DNNs are usually heavily over-parameterized, with far more parameters than training samples, making it easy for them to memorize all the training examples without learning. In fact, Zhang et al. have shown that DNNs indeed can fit training data perfectly. Training DNNs also requires first-order optimization methods such as gradient descent (GD) and stochastic gradient descent (SGD) to solve a highly non-convex optimization problem. The fact that such heavily over-parameterized DNNs trained by simple GD/SGD are still able to learn and generalize well deeply puzzles the deep learning community. In this thesis, we explore the optimization dynamics and generalization behavior of over-parameterized DNNs trained by SGD from two unique directions. First, we focus on studying the topology of the loss landscape of those DNNs through analysis of the Hessian of the training loss (with respect to the parameters). We empirically study the second-moment matrix $M_t$ constructed from the outer product of the stochastic gradients (SGs), as well as the Hessian of the loss $H_f(\theta_t)$. With the help of existing tools such as the Lanczos method and the R-operator, we can compute the eigenvalues and the corresponding eigenvectors of both the Hessian matrix and the second-moment matrix efficiently. This allows us to reveal the relationship between the Hessian of the loss and the second moment of the SGs. Besides, we discover a "low-rank" structure in both the eigenvalue spectrum of the Hessian and in the stochastic gradients themselves.
Such observations directly lead to the development of a new PAC-Bayes generalization bound which considers the structure of the Hessian at minima obtained from SGD, as well as a novel noisy truncated stochastic gradient descent (NT-SGD) algorithm aiming to tackle the communication bottleneck in the large-scale distributed setting. Next, we dive into the debate on whether it is sufficient to explain the success of DNNs in practice by their behavior in the infinite-width limit. On one hand, there is a rich literature on the infinite-width limit of DNNs. Such analysis simplifies the learning dynamics of very wide neural networks by a linear model obtained from the first-order Taylor expansion around the initial parameters. As a result, DNN training occurs in a "lazy" regime. On the other hand, both theoretical and empirical evidence has been presented pointing out the limitations of lazy training. These results suggest that training DNNs with gradient descent actually occurs in a "rich" regime which captures much richer inductive biases, and the behavior of such models cannot be fully described by their infinite-width kernel equivalents. As an empirical complement to recent work studying the transition from the lazy regime to the rich regime, we study the generalization and optimization behaviors of commonly used DNNs, focusing on varying the width and, to some extent, the depth, and show what happens in typical DNNs used in practice in the rich regime. We also extensively study the smallest eigenvalues of the Neural Tangent Kernel, a crucial element that appears in many recent theoretical analyses related to both the training and generalization of DNNs. We hope our empirical study can provide fodder for new theoretical advances in understanding generalization and optimization in both the rich and lazy regimes.
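The Hessian spectra mentioned above are typically obtained with Lanczos-type iterations that only need Hessian-vector products. The sketch below shows that pattern with scipy's eigsh on a LinearOperator; a random low-rank surrogate matrix stands in for the Hessian-vector product that an R-operator (Pearlmutter) call would supply for a real network.

```python
# Sketch of Lanczos-based spectrum computation: scipy's eigsh only needs
# Hessian-vector products, so the (huge) Hessian never has to be formed.
# Here a random low-rank PSD matrix stands in for the product an R-operator /
# double-backprop call would provide for a real DNN loss.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(3)
n = 500
B = rng.normal(size=(n, 20))
H_surrogate = B @ B.T / n                     # low-rank PSD stand-in "Hessian"

def hess_vec(v):
    # In practice: a second backward pass through the loss (Pearlmutter trick).
    return H_surrogate @ v

op = LinearOperator((n, n), matvec=hess_vec, dtype=np.float64)
top_eigvals = eigsh(op, k=5, which="LA", return_eigenvectors=False)
print("largest Hessian eigenvalues (surrogate):", np.sort(top_eigvals)[::-1])
```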
Item: Enabling Distributed Renewable Energy and Chemical Production through Process Systems Engineering (2018-12)
Allman, William
New renewable energy technologies offer the promise of preserving a sustainable energy supply for future generations. Coupling chemical production with renewable energy production can help address many of the challenges associated with the large-scale implementation of renewable energy, including the short-time-scale variability of electricity production, potential mismatch of supply and demand, and energy stranded in areas of low population density. However, such co-production systems necessitate considering chemical production at scales smaller than what is typically seen in today's infrastructure, owing to the geospatially dispersed nature of renewable energy resources. This small-scale production is economically prohibitive due to economies of scale, which promote building large-scale facilities whenever possible. These economic challenges motivate the use of process systems engineering to develop decision-making frameworks which minimize the costs of building and operating new renewable energy and chemical production systems. The application of process systems engineering methods to systems producing renewable energy and chemicals presented in this thesis centers around three major themes. First, decomposition, an approach which breaks down large optimization problems into smaller, easier-to-solve subproblems, is used to solve a challenging problem which finds the optimal design of a combined biorefinery and microgrid system.
By doing so, hydrogen production is identified as a critical cost bottleneck in the combined system design. A method for automatically finding subproblems for decomposition for a broad class of optimization problems is also presented. Second, a framework is proposed for considering where new facilities should be built within an existing chemical supply chain. Here, policies and market conditions, such as a carbon tax, are identified that can have a strong effect on reducing emissions from ammonia production. Finally, the connection between the optimal design and operation of renewable energy and chemical systems is analyzed. Here, a framework is developed that determines operating strategies which minimize cost, and the optimal operation of a wind-powered ammonia system in different electricity market structures is analyzed. This framework is used to generate correlations between system design and operating cost which are embedded in a design optimization problem to improve solution efficiency.
Item: Equitable Loss Allocation in Distribution Systems (2017-12)
Santhanam, Sukumar
This thesis discusses how losses can be equitably allocated to photovoltaic inverters in a residential system. One possible strategy discussed here is to optimize the active-power injections to equitably allocate losses to these inverters in distribution networks based on their kVA ratings. To achieve this, we first obtain a closed-form approximation of the nonlinear power-flow solution. A circuit-theoretic method called power tracing then uses this power-flow solution to determine a set of loss coefficients that allocate losses to the inverters present in the network. Finally, these loss coefficients are used in an equitable loss-allocation method to allocate losses fairly in the distribution network. To discuss the benefits of our method, the losses allocated to the inverters by this method are compared with the losses allocated by power tracing using a power-flow solution obtained from a nonlinear power-flow method.
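The bookkeeping behind the fairness target is simple to state: each inverter's share of the total feeder losses should be proportional to its kVA rating. The toy numbers below only illustrate that bookkeeping and a contrast with a location-dependent (tracing-style) allocation; they are not results from the thesis.

```python
# Tiny numerical illustration of the fairness target: allocate total feeder
# losses to inverters in proportion to their kVA ratings.  The loss
# coefficients that power tracing would produce are replaced by made-up
# numbers, purely to show the bookkeeping.
kva_ratings = {"inv_A": 5.0, "inv_B": 10.0, "inv_C": 7.5}     # kVA
total_losses = 1.8                                            # kW, from a power-flow solution

total_kva = sum(kva_ratings.values())
equitable_share = {name: total_losses * s / total_kva for name, s in kva_ratings.items()}

# For contrast: a location-dependent allocation (e.g., from tracing), invented here.
traced_share = {"inv_A": 0.30, "inv_B": 0.95, "inv_C": 0.55}  # kW

for name in kva_ratings:
    print(f"{name}: equitable {equitable_share[name]:.2f} kW  vs traced {traced_share[name]:.2f} kW")
```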
Item: Expanding the Success of Salt-Tolerant Roadside Turfgrasses through Innovation and Education (Minnesota Department of Transportation, 2020-02)
Watkins, Eric; Trappe, Jon; Moncada, Kristine; Bauer, Sam; Reyes, Jonah
Our project was based on the need to water new roadside installations more efficiently to ensure that the turfgrasses, especially the new salt-tolerant mixes, establish more successfully with a predictable and uniform amount of water during the establishment period. The first objective of this project was to conduct a preliminary investigation of alternative means of irrigating new installations of salt-tolerant seed and sod mixtures. We completed the testing of four drip-tape-style irrigation systems placed both above and below sod, two above-ground sprinkler system configurations, and eight water truck nozzles. We then evaluated these new irrigation methods against current practices. We also developed an online voluntary training and education program for installers of roadside turf. Finally, we developed online maintenance training for homeowners to maintain new roadside turf installations.
Based on our research, we recommend the use of 18-inch (45.7-cm) irrigation tape laid above the germination blanket (when seeding) or above sod, used with a hydrant adapter and a programmable irrigation system, as this system is easier and cheaper to install, can be removed and possibly reused after establishment, and results in reduced water use.

Item: A high performance framework for coupled urban microclimate models (2014-11)
Overby, Matthew Charles
Urban form modifies the microclimate and may trap heat and pollutants. This causes a rise in energy demands to heat and cool building interiors. Mitigating these effects is a growing concern due to the increasing urbanization of major cities. Researchers, urban planners, and city architects rely on sophisticated simulations to investigate how to reduce building and air temperatures. However, the complex interactions between urban form and the microclimate are not well understood. Many factors shape the microclimate, such as solar radiation, atmospheric convection, longwave interaction between nearby buildings, and more. As science evolves, new models are developed and existing ones are improved. More accurate and sophisticated models often impose higher computational overhead. This paper introduces QUIC EnvSim (QES), a scalable, high-performance framework for coupled urban microclimate models. QES allows researchers to develop and modify such models, providing tools to facilitate input/output communications, model interaction, and the utilization of computational resources for efficient simulations. Common functionality of urban microclimate modeling is optimally handled by the system. By employing Graphics Processing Units (GPUs), simulations within QES can be substantially accelerated. Models for computing view factors, surface temperatures, and radiative exchange between urban materials and vegetation have been implemented and coupled into larger, more sophisticated simulations. These models can be applied to complex domains such as large forests and dense cities. Visualizations, statistics, and analysis tools provide a detailed view of experimental results. Performance increases with additional GPUs and hardware availability. Several diverse examples have been implemented to provide details on utilizing the features of QES for a wide range of applications.
Item: Imposing Physical Structure within Input-Output Analysis of Fluid Flows: Methods and Applications (2023-07)
Mushtaq, Talha
Input-output (I/O) methods have recently been proposed as simulation-free methods for identifying and quantifying fluid flow instabilities. Recent developments in I/O methods have focused on imposing additional physical structure within the I/O framework by (i) accounting for the structure of the nonlinear terms (i.e., structured I/O), or (ii) promoting sparsity in the identified instability mechanisms (i.e., spatially-localized modal analysis). This dissertation contributes to the state of the art by formulating and applying I/O analysis algorithms that are both computationally efficient and impose this physical structure in a mathematically consistent manner. First, we propose algorithms for performing the structured I/O analysis, which involves computing structured singular value (SSV) bounds (worst-case gains) and mode shapes by exploiting the underlying mathematical structure of the convective nonlinearity in the incompressible Navier-Stokes equations (NSE).
The analysis yields physical insights into the global flow mechanisms, which are useful for identifying flow instabilities. We demonstrate the analysis on a laminar channel flow and a turbulent channel flow over riblets. For both models, we identify various relevant flow mechanisms that are consistent with those predicted in high-fidelity numerical simulations, e.g., Kelvin-Helmholtz (KH) vortices, lift-up effects, and Near-Wall (NW) cycles. Second, we propose computationally efficient algorithms for spatially-localized modal analysis. Unlike state-of-the-art methods that promote sparsity, the methods proposed here solve a cardinality-constrained optimization problem. The solutions to this optimization are sparse modes that highlight the flow quantities most pertinent to triggering instabilities. We demonstrate the analysis on a laminar channel flow, where the sparse modes identify various spatially-localized flow mechanisms that contribute to the kinetic energy growth of the flow, e.g., the lift-up effect and Tollmien-Schlichting instabilities.

Item: Improving Sustainability Management Decisions with Modified Life Cycle Assessment Methods (2016-12)
Pelton, Rylie
Across the globe, organizations and institutions have publicly committed to reducing the environmental impacts associated with their operations. Despite these commitments, progress in integrating sustainability measurement information into core business processes and decision criteria has been limited. To avoid many of the ecological tipping points that society currently faces, it is essential that sustainability measurement systems and decision support tools be improved so they can be more practically incorporated into business operations and decisions. This dissertation explores the specific sustainability challenges and information gaps that are faced by core business operations in three separate case studies, thereby expanding the sustainability operations management literature in the areas of environmentally preferable procurement, sustainable manufacturing, and green supply chain management. Modified life cycle assessment (LCA) methods are demonstrated in the methodological designs of the respective case studies, which help reduce the complexity of environmental information and the costs of assessment, and increase specificity to organizations by considering spatial and temporal aspects of operations. Altogether, the methods enable optimal management options and trade-offs to be identified and priorities to be set, and increase overall understanding of, and ability to manage, risks. These elements greatly enhance the actionability of sustainability information for organizational decision makers, helping to lower the barriers to integration into core organizational processes for reducing the environmental burden of production and consumption systems.