Browsing by Subject "Scalability"
Now showing 1 - 4 of 4
Item: Adopting Markov Logic Networks for Big Spatial Data and Applications (2020-01) by Sabek, Ibrahim

Markov Logic Networks (MLN) have become a de-facto statistical learning and inference framework for performing efficient and user-friendly analysis on massive data, with many applications in knowledge base construction, data cleaning, and other domains. Meanwhile, large-scale spatial data analysis has gained much interest in recent years due to the need to extract insights from spatial data. However, analyzing spatial data with existing solutions typically cannot satisfy the scalability requirements of most applications, as these solutions were not originally designed for the huge volumes of spatial data being generated today. Unfortunately, none of these existing solutions exploits the power of the MLN framework to boost the usability, scalability, and accuracy of spatial analysis applications. The main goal of this thesis is to provide the first research effort to combine the two worlds of MLN and spatial data analysis. We address the two main challenges that face any spatial analysis application when using MLN. The first challenge is how to modify the core processing and functionality of MLN to make it aware of the distinguishing features of spatial data. The core of MLN is composed of two main components, namely grounding using factor graphs and inference using Gibbs sampling. The factor graph is the main data structure for learning and inferring the weights of the MLN features, while Gibbs sampling infers the values of model variables and computes their associated probabilities using the weighted MLN features. The second challenge is how to efficiently represent spatial analysis problems (e.g., spatial regression) using MLN. This requires finding an equivalent first-order logic representation for any input spatial analysis problem that ensures the problem can be executed appropriately using MLN.

This thesis makes the following contributions. First, we present Sya, the first spatial probabilistic knowledge base construction system based on a spatial-aware MLN framework. We show our spatial extensions to the different MLN layers, including language, grounding, and inference, implemented inside Sya. We then introduce three scalable spatial analysis systems, namely TurboReg, RegRocket, and Flash, which are equipped with efficient first-order logic representations for different spatial analysis problems using MLN.
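The grounding-and-inference pipeline described in this abstract can be pictured with a toy sketch. The Python below is not Sya's rule language or engine; it hand-grounds a single hypothetical spatial-smoothing rule over a 2x2 grid (the weight, cells, and observation are made up) and runs plain Gibbs sampling to estimate the marginal probability of each cell.

```python
import math
import random

# Hypothetical 2x2 grid of cells and one observed label (toy data, assumed).
cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
observed = {(0, 0): 1}                      # cell (0,0) is known to be "hot"

def neighbors(a, b):
    """4-connected grid adjacency."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

# Grounding: each neighboring pair yields one weighted ground factor for the
# soft spatial-smoothing feature  w : Hot(x) ^ Hot(y) ^ Neighbor(x, y).
w = 1.5                                      # assumed rule weight
factors = [(a, b) for a in cells for b in cells if neighbors(a, b) and a < b]

def local_energy(state, cell):
    """Total weight of ground factors touching `cell` if Hot(cell) = 1."""
    return sum(w for (a, b) in factors
               if (a == cell and state[b] == 1) or
                  (b == cell and state[a] == 1))

# Gibbs sampling: resample each unobserved variable from its conditional
# distribution given its Markov blanket, then count how often it is 1.
state = {c: observed.get(c, random.randint(0, 1)) for c in cells}
counts = {c: 0 for c in cells}
sweeps = 5000
for _ in range(sweeps):
    for c in cells:
        if c in observed:
            continue
        e1 = local_energy(state, c)          # energy contribution if Hot(c)=1
        p1 = math.exp(e1) / (math.exp(e1) + 1.0)
        state[c] = 1 if random.random() < p1 else 0
    for c in cells:
        counts[c] += state[c]

for c in cells:
    print(c, "P(Hot) ~", round(counts[c] / sweeps, 3))
```

A real MLN system would learn the rule weights and ground far larger first-order programs, but the two stages, building ground factors and sampling variables against them, are the same ones this thesis extends with spatial awareness.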
Item: Advanced Simulation Techniques for Evaluating Emerging Magnetoresistive Random Access Memory Technologies for Next Generation Non-Volatile Memory (2020-08) by Song, Jeehwan

Magnetoresistive random access memory (MRAM) offers non-volatility, zero static power consumption, CMOS compatibility, and high endurance, which make it a strong candidate for the next generation of non-volatile memory (NVM) technology. MRAM stores data in a magnetic tunnel junction (MTJ) device, which consists of a free ferromagnetic layer, an oxide barrier, and a fixed ferromagnetic layer; the intrinsic properties of the MTJ device play critical roles in write and read operations under thermal fluctuation. Because of the importance of the MTJ device, academic and industrial groups have researched MTJ device models for reliable MRAM applications; however, there is still no standard model in common use in the design process. Moreover, diverse types of MRAM have been researched over the last few decades. For example, spin-transfer torque (STT)-MRAM, voltage-controlled magnetic anisotropy (VCMA)-MRAM, and spin-Hall effect (SHE)-MRAM have been evaluated in order to commercialize more effective MRAM applications. STT-MRAM, which utilizes bidirectional current flow for switching, has almost reached commercialization with mass production. SHE-MRAM pairs the MTJ with a spin Hall metal (SHM) to generate spin current efficiently, whereas VCMA-MRAM utilizes the VCMA effect to lower the energy barrier of magnetization for faster switching and lower energy consumption. This thesis focuses on different modeling approaches, namely a SPICE-based compact model and a Fokker-Planck model, for representative MRAM types. Using these simulation models, we provide practical analyses of MRAM applications as well as a comparison of the models.

First, a SPICE-based MTJ compact model is introduced for the VCMA-MRAM application. For this study, we developed a physics-based SPICE model that includes key VCMA parameters such as the VCMA coefficient, energy barrier time constant, and external magnetic field. Using realistic material and device parameters, we evaluate the operating margin and switching probability of the VCMA-MRAM. Based on Monte Carlo simulation, the highest switching probabilities were 94.9%, 84.8%, and 53.5% for VCMA coefficient values of 33, 105, and 290 fJ·V⁻¹·m⁻¹, respectively. For practical memory applications, the switching probability must be improved by incorporating additional physics.

Second, the Fokker-Planck (FP) numerical model is utilized for an efficient analysis of the STT-MRAM application; it allows parametric variation and evaluates its impact on switching. We analyze the impact of MTJ material and geometric parameter variations, namely saturation magnetization (MS), magnetic anisotropy (HK), damping factor (α), spin polarization efficiency factor (η), oxide thickness (tOX), free layer thickness (tF), tunnel magnetoresistance (TMR), and free-layer cross-sectional area (AF), on the Write Error Rate (WER) and Read Disturbance Rate (RDR) for reliable write and read operations. Both WER and RDR are analyzed over a wide range of MTJ diameters, from 90 nm down to 30 nm, to evaluate the scalability of MRAM devices. Even though the effect of material and geometric parameter variations on WER decreases as the MTJ scales down, the effect can still be considerable at small MTJ diameters, and the most influential parameters are η, MS, HK, and α, in that order. On the other hand, the impact of parameter variations on RDR increases with MTJ scaling, and negative variations of HK and MS could be major problems at 30 nm and 40 nm MTJ diameters. This efficient FP-model-based study emphasizes the need for WER and RDR analyses that consider parameter variations under MTJ scaling for practical STT-MRAM development.

Third, MRAM is also expected to replace embedded cache memories in the near future. For MRAM-based embedded cache memory, MRAM's high write current, scaling challenges, and variation issues must be studied beforehand. In this work, a physics-based MTJ model is utilized to evaluate the scalability and variability of MRAM-based cache memory. Through these studies, we investigate STT- and SHE-MRAM-based cache memory applications, considering device-, circuit-, layout-, and architecture-level details.
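As a rough illustration of why parameter variation matters for the write error rate, the sketch below runs a toy Monte Carlo experiment. It is not the thesis's SPICE compact model or Fokker-Planck solver; the currents, the 10% variation, and the simplified critical-current expression are all illustrative assumptions.

```python
# Toy Monte Carlo sketch: estimate a write error rate by sampling MTJ
# parameter variations and checking a simplified switching criterion.
import random

N = 100_000
sigma = 0.10            # assumed 10% Gaussian variation on each parameter
I_write = 60e-6         # assumed write current (A)
Ic0_nominal = 45e-6     # assumed nominal critical switching current (A)

def sample(nominal):
    """One Gaussian-distributed parameter sample around its nominal value."""
    return random.gauss(nominal, sigma * nominal)

errors = 0
for _ in range(N):
    # Very rough assumption: critical current grows with Ms, Hk and damping
    # and shrinks with spin-polarization efficiency eta; every factor is
    # normalized to 1 at nominal so Ic0_nominal sets the nominal value.
    ms, hk, alpha, eta = (sample(1.0) for _ in range(4))
    ic = Ic0_nominal * ms * hk * alpha / eta
    if I_write < ic:          # write pulse too weak to switch this sample
        errors += 1

print("estimated WER ~", errors / N)
```

A brute-force sampler like this cannot resolve the very low error rates practical memories require, which is one reason an efficient FP numerical model is used instead.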
Item: Creating scalable, efficient and namespace independent routing framework for future networks. (2011-06) by Jain, Sourabh

In this thesis we propose VIRO -- a novel and paradigm-shifting approach to network routing and forwarding that is not only highly scalable and robust, but also namespace-independent. VIRO provides several advantages over existing network routing architectures: i) VIRO directly and simultaneously addresses the challenges faced by IP networks as well as those associated with traditional layer-2 technologies such as Ethernet, while retaining Ethernet's "plug-&-play" feature. ii) VIRO provides a uniform convergence layer that integrates and unifies the routing and forwarding performed by the traditional layer-2 (data link layer) and layer-3 (network layer), as prescribed by the conventional local-area/wide-area network dichotomy and layered architecture. iii) Perhaps most importantly, VIRO decouples routing from addressing and is thus namespace-independent. Hence VIRO allows new (global or local) addressing and naming schemes (e.g., HIP or a flat-id namespace) to be introduced into networks without the need to modify core router/switch functions, and can easily and flexibly support inter-operability between existing and new addressing schemes/namespaces.

In the second part of this thesis, we present the Virtual Ethernet Id Layer, in short VEIL, a practical realization of the VIRO routing protocol for building large-scale Ethernet networks. VEIL aims to simplify the management of large-scale enterprise networks by requiring minimal manual configuration overhead: it makes it tremendously easy to plug a new routing node or host device into the network without any manual configuration. It builds on the highly scalable and robust routing substrate provided by VIRO, and supports many advanced features such as seamless mobility support, built-in multi-path routing, and fast re-routing in case of link/node failures, without requiring any specialized topologies. To demonstrate the feasibility of VEIL, we have built a prototype, called veil-click, using the Click Modular Router framework; it can be co-deployed with existing Ethernet switches and does not require any changes to the host devices connecting to the network.
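VIRO's namespace independence comes from forwarding on fixed-length virtual ids (vids) rather than on IP or MAC addresses. The sketch below assumes a longest-common-prefix distance rule and a made-up per-level routing table; it is meant only to show the flavor of vid-based next-hop selection, not VIRO's actual routing-table construction or failure handling.

```python
# Minimal sketch of namespace-independent forwarding over fixed-length
# virtual ids (vids); all table contents and node names are hypothetical.
L = 8  # assumed vid length in bits

def level(my_vid: str, dst_vid: str) -> int:
    """Logical distance: vid length minus the length of the common prefix."""
    common = 0
    for a, b in zip(my_vid, dst_vid):
        if a != b:
            break
        common += 1
    return L - common

# Hypothetical routing table: one gateway (next hop) per logical level.
routing_table = {1: "n07", 2: "n12", 3: "n03", 4: "n03",
                 5: "n21", 6: "n21", 7: "n30", 8: "n30"}

def next_hop(my_vid: str, dst_vid: str) -> str:
    lvl = level(my_vid, dst_vid)
    if lvl == 0:
        return "deliver locally"        # destination vid equals our own
    return routing_table[lvl]

print(next_hop("10110010", "10110010"))   # -> deliver locally
print(next_hop("10110010", "10111100"))   # first mismatch at bit 4 -> level-4 gateway
```

Because the forwarding decision depends only on the vid, any higher-layer namespace (IP, flat ids, HIP) can be mapped onto vids without touching the forwarding core, which is the decoupling the abstract describes.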
Item: Scaling Up the Performance of Distributed Key-Value Stores Using Emerging Technologies for Big Data Applications (2021-08) by Eldakiky, Hebatalla

The explosion in the amount of data that has accompanied the growth of the internet and cloud computing has prompted much research into systems that can store and process this data efficiently. As data is generated by different sources with heterogeneous structures, NoSQL databases emerged as a solution due to their flexibility and high performance. Key-value stores, one of the NoSQL database categories, are widely used in many big data applications. This wide usage stems from their efficiency in handling data in key-value format and their flexibility to scale out without significant database redesign. In key-value stores, such a huge amount of data cannot be stored on a single storage server; it has to be partitioned across multiple storage instances. Key-value queries have to access the information about these partitions to locate the target key-value pairs and be directed to the storage node that physically holds the data. This introduces additional forwarding steps on the path to the target storage node, which affect query response time. Recently, the power and flexibility of software-defined networks, together with the evolution of programmable switches, have led to a programmable network infrastructure where in-network computation can help accelerate application performance. This can be achieved by offloading some computational tasks to the network to improve data-access performance when applications access storage through the network. However, what kinds of computational tasks should be delegated to the network to accelerate application performance?

To solve the partition management problem in key-value stores, we developed TurboKV, an in-switch coordination model that utilizes programmable switches as partition management nodes and monitoring stations to scale up the performance of distributed key-value stores. Our in-switch coordination model removes the request-routing load from the storage nodes without introducing any additional forwarding steps on the path to the target storage node.

Moreover, some key-value stores omit transaction support because of its impact on scalability and performance, which are the key targets of any existing key-value store system. This impact is due to the complexity, locking, and starvation introduced by transactions and their interference with non-transactional operations. In order to provide efficient support for transactions in key-value stores, we propose TransKV, an extension of our first work, TurboKV, which introduces network support for transaction processing in distributed key-value stores. TransKV utilizes the programmable switches as a transaction coordinator that decides whether a transaction can proceed to be processed by the storage nodes or should be aborted in the network.

On the storage-node side, Seagate developed a new drive called the "Kinetic drive". The Kinetic drive is an independent active disk accessible over an Ethernet connection, which enables applications to connect directly to the drive via its IP address and retrieve data; the drive can also carry out key-value pair operations on its own. So, in large-scale data management, a set of Kinetic drives can be used to exploit parallelism in satisfying user requests and to relieve the bottleneck caused by the queuing of requests at a storage server that manages multiple HDDs/SSDs. On the other hand, a Kinetic drive has limited bandwidth and capacity. Therefore, a careful allocation scheme is needed to assign key-value pairs to a set of Kinetic drives while taking each drive's limited bandwidth and capacity into account. To this end, we developed a key-value pair allocation strategy for Kinetic drives. This strategy takes into consideration data popularity and the limited capacity and bandwidth of each Kinetic drive to avoid queuing at the drive level.
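To make the capacity-and-bandwidth-aware allocation idea concrete, here is a simplified greedy sketch. It is an assumed illustration, not the thesis's actual allocation algorithm, and the drive names, sizes, and request rates are hypothetical.

```python
# Greedy sketch: place key-value pairs on Kinetic-style drives so that no
# drive exceeds its capacity or its bandwidth (popularity) budget.
from dataclasses import dataclass, field

@dataclass
class Drive:
    name: str
    capacity: int               # bytes the drive can still hold
    bandwidth: float            # requests/sec the drive can serve
    load: float = 0.0           # popularity (req/s) already assigned
    keys: list = field(default_factory=list)

def allocate(pairs, drives):
    """pairs: list of (key, size_bytes, popularity_req_per_sec)."""
    # Place the hottest keys first so they land on drives with spare bandwidth.
    for key, size, popularity in sorted(pairs, key=lambda p: -p[2]):
        candidates = [d for d in drives
                      if d.capacity >= size and d.load + popularity <= d.bandwidth]
        if not candidates:
            raise RuntimeError(f"no drive can host {key}")
        # Pick the drive with the most remaining bandwidth headroom.
        best = max(candidates, key=lambda d: d.bandwidth - d.load)
        best.capacity -= size
        best.load += popularity
        best.keys.append(key)
    return drives

drives = [Drive("kinetic-0", 10_000, 100.0), Drive("kinetic-1", 10_000, 100.0)]
pairs = [("user:1", 500, 80.0), ("user:2", 500, 70.0), ("user:3", 500, 10.0)]
for d in allocate(drives=drives, pairs=pairs):
    print(d.name, d.keys, "load:", d.load, "free bytes:", d.capacity)
```

A popularity-aware greedy placement like this respects both constraints at once, which is the spirit of what the abstract describes, though the thesis's actual strategy may differ substantially.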