Browsing by Subject "CNN (Convolutional Neural Network)"
Item: Non-Volatile Neuromorphic Computing based on Logic-Compatible Embedded Flash Memory Technology (2020-07)
KIM, MINSU

Deep neural networks (DNNs) contain multiple computation layers, each performing a massive number of multiply-and-accumulate (MAC) operations between the input data and trained weights. Because of the huge amount of data processing and computation required, the performance and energy efficiency of DNN chips can be limited by the available memory bandwidth and MAC engines. A promising approach to alleviate this issue is the compute-in-memory (CIM) paradigm, in which computation occurs where the data is stored, using massively parallel analog MAC engines. In this thesis, we realize CIM with analog MAC engines based on logic-compatible embedded flash (eflash) memory, which is non-volatile and supports multi-level storage through program-verify operations.

First, we demonstrated a neuromorphic core using analog MAC engines based on eflash in a 65nm standard CMOS process. A carefully designed program-verify sequence, along with a bitline voltage regulation scheme, allows individual cell currents to be programmed precisely, making it possible to enable a large number of rows in parallel without degrading the current-summation accuracy. Our design stores excitatory and inhibitory weights in adjacent bitlines whose voltage levels are regulated for accurate current programming and measurement. Output spikes are generated by comparing the excitatory and inhibitory bitline currents. Our logic-compatible eflash-based spiking neuromorphic core achieves 91.8% handwritten-digit recognition accuracy, close to that of a software model with the same number of weight levels.

Second, we demonstrated a convolutional neural network (CNN) core that can be readily mapped to a 3D NAND flash array, using eflash NAND cells in a standard 65nm CMOS process. Logic-compatible embedded flash memory cells store multi-level synaptic weights, while a bit-serial architecture enables 8-bit multiply-and-accumulate operations. A novel back-pattern-tolerant program-verify scheme reduces cell-current variation. In the eNAND 16-stack string design, positive and negative weights are stored in eflash cells on adjacent bitlines, generating a differential output signal; in the eNAND 128-stack string design, we additionally explored merging the positive and negative weights onto a single bitline. The eNAND 16-stack-based neural network represents the first physical demonstration of an embedded NAND flash-based neuromorphic chip in a standard logic process.

Lastly, we explored hardware security with a physical unclonable function (PUF) based on electromigration, using SRAM and metal fuses. We realized a novel hybrid SRAM and metal-fuse cell, demonstrated in a 65nm CMOS technology. The hybrid cell's perfectly symmetric schematic and layout ensure that the PUF output is unbiased. Experimental data from a 65nm test chip shows no errors and a near-ideal inter-chip Hamming distance of 0.497. We also evaluated the program time of metal-fuse structures with different metal lead configurations.
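As a rough illustration of the program-verify idea described above, the sketch below applies short program pulses and re-reads the cell until its current falls within a tolerance band around the target. The toy cell model, pulse step size, and tolerance are illustrative assumptions, not the scheme used on the 65nm test chip.

```python
# Generic program-verify loop: pulse, re-measure, stop once the cell current is
# within a tolerance band around the target. All numbers are illustrative.

class EflashCell:
    """Toy cell model: each program pulse reduces the cell current slightly."""
    def __init__(self, initial_current=2.0e-6, step=0.05e-6):
        self.current = initial_current
        self.step = step

    def read(self):
        return self.current

    def program_pulse(self):
        self.current = max(self.current - self.step, 0.0)


def program_verify(cell, target, tolerance=0.03, max_pulses=100):
    for pulse in range(max_pulses):
        if abs(cell.read() - target) <= tolerance * target:
            return pulse, cell.read()       # verified within tolerance
        cell.program_pulse()                # nudge the cell toward the target
    return max_pulses, cell.read()          # give up after max_pulses


cell = EflashCell()
pulses, final = program_verify(cell, target=1.0e-6)
print(f"verified after {pulses} pulses, cell current = {final * 1e6:.3f} uA")
```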
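The excitatory/inhibitory bitline scheme can be modeled functionally as two parallel current summations followed by a comparator that emits the output spike. The sketch below assumes illustrative values for the per-level cell current, residual current variation, and array size; it is a behavioral model, not the circuit described in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ROWS = 128            # wordlines enabled in parallel (assumed)
I_LSB = 50e-9           # assumed current per weight level (50 nA)
SIGMA = 0.02            # assumed residual cell-current variation (2%)

# Signed multi-level weights, split into excitatory (+) and inhibitory (-) parts
# stored on adjacent bitlines.
weights = rng.integers(-7, 8, size=N_ROWS)
w_exc = np.clip(weights, 0, None)
w_inh = np.clip(-weights, 0, None)

# Binary input spikes drive the wordlines.
inputs = rng.integers(0, 2, size=N_ROWS)

# Each selected cell sources its programmed current (with small variation);
# the bitline sums the currents of all enabled rows in parallel.
i_exc = np.sum(inputs * w_exc * I_LSB * (1 + SIGMA * rng.standard_normal(N_ROWS)))
i_inh = np.sum(inputs * w_inh * I_LSB * (1 + SIGMA * rng.standard_normal(N_ROWS)))

spike = i_exc > i_inh   # comparator on the two bitline currents
print(f"I_exc = {i_exc * 1e6:.2f} uA, I_inh = {i_inh * 1e6:.2f} uA, spike = {spike}")
```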
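The bit-serial 8-bit MAC can be expressed as eight 1-bit MAC cycles whose partial sums are shifted and accumulated digitally. The functional model below assumes unsigned 8-bit activations and signed multi-level weights; the exact input and weight encodings of the CNN core may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
weights = rng.integers(-7, 8, size=N)         # multi-level signed weights
acts = rng.integers(0, 256, size=N)           # unsigned 8-bit activations

acc = 0
for b in range(8):                            # stream bit 0 (LSB) .. bit 7
    bit_plane = (acts >> b) & 1               # one input bit per row per cycle
    partial = int(np.dot(bit_plane, weights)) # analog summation for this cycle
    acc += partial << b                       # shift-and-add in the digital back end

assert acc == int(np.dot(acts, weights))      # matches the full-precision MAC
print("bit-serial MAC result:", acc)
```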
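For context on the reported Hamming-distance figure, the snippet below computes the two standard PUF metrics: intra-chip Hamming distance (reproducibility across repeated reads, ideally 0) and inter-chip Hamming distance (uniqueness across chips, ideally 0.5). It uses randomly generated stand-in responses and an assumed read-noise rate, not measured data from the test chip.

```python
import numpy as np

rng = np.random.default_rng(2)
N_CHIPS, N_BITS, N_READS = 20, 256, 50

# One "golden" response per chip, plus repeated reads with an assumed flip rate.
golden = rng.integers(0, 2, size=(N_CHIPS, N_BITS))
flip_prob = 0.01
reads = np.repeat(golden[:, None, :], N_READS, axis=1)
reads ^= (rng.random(reads.shape) < flip_prob)

# Intra-chip HD: fraction of bits that differ between repeated reads and golden.
intra = np.mean(reads != golden[:, None, :])

# Inter-chip HD: fractional HD between every pair of distinct chips' goldens.
pairs = [(i, j) for i in range(N_CHIPS) for j in range(i + 1, N_CHIPS)]
inter = np.mean([np.mean(golden[i] != golden[j]) for i, j in pairs])

print(f"intra-chip HD = {intra:.3f} (ideal 0), inter-chip HD = {inter:.3f} (ideal 0.5)")
```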