As the end of Moore's law draws near, the search for low-power alternatives to CMOS technology is intensifying. Among the various post-CMOS candidates, spintronic devices have gained special attention due to their unique features, such as zero static power, compact size, and instant wake-up, while also enabling entirely new classes of architectures such as processor-in-memory, logic-in-memory, and neuromorphic computing. However, traditional spintronics research has been largely confined to the materials and single-device levels. The main aim of this dissertation is therefore to characterize spin-based logic and memory technologies by exploring the trade-offs across different levels of design abstraction (i.e., device, circuit, and architecture).

For spin-based logic, we benchmark the system-level capability of spin-logic technology using a hypothetical spintronic Intel Core i7 as a test vehicle. We describe how spin-based components are integrated into a computing system and the advantages that result. Despite early promises such as zero static power, lower device count, and lower supply voltage, technical barriers of spin devices, such as low spin-injection efficiency, limited spin-diffusion length, and an intrinsically high activity factor, result in higher active power than CMOS.

For spin-based memory, a key aspect of technology evaluation is the development of a reliable MTJ model, so we first propose a technology-agnostic MTJ model specifically designed for evaluating the scalability and variability of STT-MRAM circuits. Using the proposed model, we evaluate the circuit-level scalability of MTJ technologies, providing detailed scaling methods and projection scenarios down to the 7 nm node. For high-speed on-chip cache applications, we also explore the feasibility of non-traditional MRAMs such as spin-Hall-effect (SHE) MRAM, which offers superior switching efficiency.
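To make the MTJ modeling concrete, the following is a minimal sketch of the kind of compact resistance model such an evaluation relies on. It is not the dissertation's model: the parameter names and values (resistance-area product `ra_ohm_um2`, zero-bias TMR `tmr0`, bias roll-off voltage `vh`) are illustrative assumptions, and the bias dependence uses the common TMR(V) = TMR0 / (1 + V^2/Vh^2) approximation.

```python
def mtj_resistance(area_nm2, ra_ohm_um2=5.0, tmr0=1.0, v_bias=0.0, vh=0.5):
    """Hypothetical compact MTJ model (illustrative parameters only).

    Returns (R_P, R_AP) in ohms for a junction of the given area.
    R_P follows from the resistance-area (RA) product; R_AP follows
    from a bias-dependent TMR roll-off, a widely used approximation.
    """
    area_um2 = area_nm2 * 1e-6          # convert nm^2 -> um^2
    r_p = ra_ohm_um2 / area_um2         # parallel-state resistance
    tmr = tmr0 / (1.0 + (v_bias / vh) ** 2)  # TMR degrades with bias voltage
    r_ap = r_p * (1.0 + tmr)            # anti-parallel-state resistance
    return r_p, r_ap
```

A model in this form makes scaling studies straightforward: shrinking the junction area (e.g., from a 40 nm to a 20 nm node) directly raises both resistance states, which is the kind of circuit-level effect the proposed model is designed to capture.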
In addition to the spintronics research, we design a logic-compatible embedded-flash (eflash) based neuromorphic core that provides a highly efficient architecture for neural computing. A logic-compatible embedded flash memory stores the synaptic weights, enabling a simple implementation of the restricted Boltzmann machine (RBM), a well-known neural network algorithm for digit recognition. With the proposed current-based architecture, a neuron operation is accomplished by simply comparing two currents, corresponding to the excitatory and inhibitory weights, without the large digital neuron circuits used in previous works.
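The current-comparison neuron described above can be sketched behaviorally as follows. This is an illustrative abstraction, not the dissertation's circuit: each active input line is assumed to draw a cell current proportional to its stored excitatory or inhibitory weight, and the comparator's decision reduces to which summed current is larger.

```python
def neuron_fires(inputs, w_exc, w_inh):
    """Behavioral sketch of a current-comparison neuron (assumed model).

    inputs : list of 0/1 values from the previous layer.
    w_exc  : excitatory cell currents (one per input), arbitrary units.
    w_inh  : inhibitory cell currents (one per input), arbitrary units.

    Active inputs sum their cell currents on two lines; the neuron
    fires when the excitatory line carries more current, replacing
    a multi-bit digital accumulate-and-compare datapath.
    """
    i_exc = sum(w for x, w in zip(inputs, w_exc) if x)
    i_inh = sum(w for x, w in zip(inputs, w_inh) if x)
    return i_exc > i_inh
```

The design point this illustrates is that the "large digital neuron circuits" of prior work (adders, accumulators, comparators) collapse into a single analog comparison of two summed currents.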