Li, Peng
Submitted: 2013-08-27; Issued: 2013-06
https://hdl.handle.net/11299/155955
University of Minnesota Ph.D. dissertation. June 2013. Major: Electrical Engineering. Advisor: David J. Lilja. 1 computer file (PDF); x, 111 pages.

Most digital systems operate on a positional representation of data, such as binary encoding. An alternative is to operate on random bit streams in which the signal value is encoded by the probability of obtaining a one versus a zero. This representation is much less compact than binary encoding; however, complex operations can be performed with very simple logic. Furthermore, since the representation is uniform, with all bits weighted equally, it is highly tolerant of soft errors (i.e., bit flips). Complex algorithms, such as artificial neural networks (ANNs), low-density parity-check (LDPC) error-correcting codes, and kernel density estimation (KDE)-based image segmentation, can be implemented using stochastic encoding with much lower hardware cost and greater fault tolerance. For example, the hardware area of the stochastic implementation of KDE-based image segmentation is only 1.2% of that of the corresponding deterministic implementation, and it can tolerate more than 30% soft errors. Compared to conventional fault-tolerant techniques, such as triple modular redundancy (TMR), the stochastic implementations of these complex algorithms normally consume equivalent or less energy and have better fault tolerance. For example, the TMR implementation of the KDE-based algorithm can tolerate only up to 10% soft errors while consuming the same energy as the stochastic implementation. In addition, the simple construction of the stochastic computing elements makes routing much easier, which is a major issue for very-large-scale integration (VLSI) deterministic implementations of these complex algorithms. Both combinational and sequential constructs have been proposed for operating on stochastic bit streams.
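As a concrete illustration of the encoding described above, here is a minimal Python sketch (illustrative only, not from the dissertation; the stream length, seed, and values are arbitrary choices) showing how a single AND gate multiplies, how a multiplexer performs scaled addition, and why bit flips degrade the result only gracefully:

```python
import random

def to_stream(p, n, rng):
    """Encode value p (a probability in [0, 1]) as n random bits."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def estimate(stream):
    """Decode a stream back to a value: the fraction of ones."""
    return sum(stream) / len(stream)

rng = random.Random(42)
n = 100_000
a = to_stream(0.5, n, rng)
b = to_stream(0.4, n, rng)

# Multiplication: ANDing two independent streams multiplies their
# probabilities, so a single AND gate implements a multiplier.
product = [x & y for x, y in zip(a, b)]  # encodes ~0.5 * 0.4 = 0.2

# Scaled addition: a multiplexer with a 0.5-probability select stream
# computes 0.5*a + 0.5*b.
sel = to_stream(0.5, n, rng)
scaled_sum = [x if s else y for s, x, y in zip(sel, a, b)]  # ~0.45

# Soft-error tolerance: flipping 5% of the bits shifts the estimate
# only slightly (toward 0.5) rather than failing catastrophically,
# as a flipped high-order bit would in positional binary encoding.
flipped = [bit ^ (rng.random() < 0.05) for bit in product]

print(estimate(product), estimate(scaled_sum), estimate(flipped))
```

Because every bit carries equal weight, a flip fraction f merely moves an encoded value p to p + f(1 - 2p), which is the graceful degradation the abstract refers to.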
Prior work has shown that combinational logic can implement multiplication and scaled addition effectively, while finite-state machines (FSMs) can implement complex functions such as exponentiation and tanh effectively. Although combinational logic-based stochastic computing elements have been well studied, they are inefficient for complex operations. FSM-based stochastic computing elements are very efficient for complex operations; however, only three FSM-based stochastic computing elements were proposed in prior work, which limits the applications of stochastic computing. To implement more applications and functions stochastically, this dissertation focuses on FSM-based stochastic computing. We first analyze the FSM-based stochastic computing elements proposed in prior work, which had largely been validated only empirically. In this dissertation, we provide a rigorous mathematical treatment of these FSM-based stochastic computing elements, which gives us intuition about how to construct arbitrary functions stochastically using FSMs. Then, based on the existing stochastic computing elements, we implement five digital image processing algorithms as case studies. To the best of our knowledge, this is the first time these digital image processing algorithms have been implemented stochastically. For all five algorithms, the stochastic implementation has much lower hardware cost and better fault tolerance than the corresponding deterministic implementation. Finally, we present a general method to synthesize a given target function stochastically using FSMs. We propose three FSM topologies and discuss how to use these FSMs to synthesize given target functions, and we examine the trade-offs among the different FSM topologies.
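The FSM-based tanh element from prior work is typically realized as a saturating up/down counter. A minimal behavioral sketch (an illustrative model only, not the dissertation's design; the state count, stream length, and seed are assumed parameters):

```python
import math
import random

def stochastic_tanh(p_in, n_states=8, n_bits=200_000, seed=1):
    """Saturating up/down counter FSM: count up on an input 1, down
    on a 0, and output 1 whenever the state is in the upper half.
    In bipolar encoding (x = 2p - 1) this approximates
    tanh((n_states / 2) * x)."""
    rng = random.Random(seed)
    state = n_states // 2
    ones = 0
    for _ in range(n_bits):
        if rng.random() < p_in:                # next input bit is 1
            state = min(state + 1, n_states - 1)
        else:                                  # next input bit is 0
            state = max(state - 1, 0)
        ones += state >= n_states // 2         # FSM output bit
    return ones / n_bits

p = 0.6                          # bipolar input x = 2*0.6 - 1 = 0.2
y = stochastic_tanh(p)           # output probability of the FSM
print(2 * y - 1, math.tanh(4 * 0.2))  # decoded output vs. tanh(4x)
```

The entire function costs only a small counter and a comparison on the state's high bit, which is why FSM-based elements are so much cheaper than a deterministic tanh datapath.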
Based on this synthesis method, more applications can be implemented stochastically to achieve lower hardware cost and better fault tolerance.

Language: en-US
Keywords: Fault tolerance; Logic synthesis; Sequential logic; Stochastic computing
Title: Analysis, design, and logic synthesis of finite-state machine-based stochastic computing
Type: Thesis or Dissertation