Edge computing is one method of pushing applications, data, and computation away from a centralized system. Each edge computing component has its own computation ability and storage capacity, and can communicate with the central server through the cloud and the Internet to send pre-computed data to the center. With the growing interest in the Internet of Things (IoT), more and more edge computing nodes will be attached to the cloud, forming a distributed edge computing infrastructure. In this infrastructure, the central server, with its storage systems, manages the data and stores the captured data. Moreover, tens or hundreds of edge computing nodes are attached to the infrastructure cloud. Each edge computing node has the computation ability to pre-compute data captured by back-end sensors and sends the processed data to the central server via the infrastructure cloud. In this thesis, three aspects of the distributed edge computing infrastructure are investigated: low-hardware-cost design for edge computing nodes, performance evaluation of the central server, and reliability of the central server. First, for each edge computing component, hardware cost is an extremely important factor due to the limited power supply and computation ability. Stochastic computing is a promising technology for achieving low-power, low-area designs. Taking neural networks as the applications at edge computing nodes, we propose different arithmetic operations in the stochastic domain for neural networks to achieve low-hardware-cost designs for these edge computing components. Second, as more and more edge computing nodes are added, the network and storage traffic on the central server side will increase tremendously. Therefore, it is important for system designers to know how large a workload (i.e., how many edge computing nodes) the central server can tolerate.
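As context for the stochastic-domain arithmetic mentioned above: the thesis's specific operation designs are not given here, but the basic principle of stochastic computing can be illustrated with the textbook unipolar multiplier, in which a single AND gate multiplies two values encoded as random bitstreams. The stream length `n` and the example probabilities are illustrative choices, not taken from the thesis.

```python
import random

def to_bitstream(p, n, rng):
    # Encode a value p in [0, 1] as an n-bit Bernoulli stream:
    # each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(xs, ys):
    # In unipolar stochastic computing, a single AND gate multiplies
    # two independent streams: P(x AND y) = P(x) * P(y).
    return [a & b for a, b in zip(xs, ys)]

def value(bits):
    # Decode a stream back to a probability estimate.
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
x = to_bitstream(0.6, n, rng)
y = to_bitstream(0.5, n, rng)
z = sc_multiply(x, y)
# value(z) approximates 0.6 * 0.5 = 0.30
```

The hardware appeal is that a conventional multiplier shrinks to one gate, at the cost of long streams and stochastic error, which is why it suits power- and area-constrained edge nodes.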
In our work, we propose replayer tools that replay traces against a target system in order to measure its performance. Doing so makes clear whether the system can tolerate a given workload, and guides the system designer in adding resources to the central server, such as more storage devices or higher-frequency CPUs. Finally, the central server in the infrastructure receives the data sent from each edge computing node and stores them in its storage system. Therefore, it is important to protect the data while achieving high performance. In this thesis, we propose new RAID-6 codes that improve write and degraded-read performance while preserving the reliability of the central server.
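The new RAID-6 codes proposed in the thesis are not reproduced here; as a baseline for comparison, the sketch below shows the standard P+Q Reed-Solomon construction over GF(2^8) that conventional RAID-6 uses, where P is the XOR of the data blocks and Q weights each block by a power of the generator 2. The example data blocks are arbitrary.

```python
def gf_mul(a, b):
    # Multiply in GF(2^8) with the RAID-6 polynomial
    # x^8 + x^4 + x^3 + x^2 + 1 (0x11d).
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def raid6_parity(blocks):
    # blocks: equal-length byte strings, one per data disk.
    # Returns the P (XOR) and Q (Reed-Solomon) parity blocks.
    p = bytearray(len(blocks[0]))
    q = bytearray(len(blocks[0]))
    for i, blk in enumerate(blocks):
        g = gf_pow(2, i)  # generator 2 raised to the disk index
        for j, byte in enumerate(blk):
            p[j] ^= byte
            q[j] ^= gf_mul(g, byte)
    return bytes(p), bytes(q)

data = [b"abcd", b"wxyz", b"1234"]
p, q = raid6_parity(data)
# Degraded read with one data disk lost: the missing block is
# the XOR of P with the surviving data blocks.
recovered = bytes(pb ^ b0 ^ b2 for pb, b0, b2 in zip(p, data[0], data[2]))
```

With both P and Q, any two simultaneous disk failures are recoverable; the write and degraded-read costs of updating these two parities are exactly what the thesis's proposed codes aim to reduce.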
University of Minnesota Ph.D. dissertation. August 2018. Major: Electrical/Computer Engineering. Advisor: David Lilja. 1 computer file (PDF); xi, 133 pages.
Distributed Edge Computing Infrastructure with Low Hardware Cost, Performance Evaluation and Reliability.
Retrieved from the University of Minnesota Digital Conservancy,
Content distributed via the University of Minnesota's Digital Conservancy may be subject to additional license and use restrictions applied by the depositor.