Edge artificial intelligence (AI) systems that learn in the field must combine inference and learning with high energy efficiency, and no current memory technology fully meets these needs. Memristor arrays are well suited to AI inference but suffer from limited endurance and high programming energy, whereas ferroelectric capacitors (FeCAPs) are well suited to learning but have destructive reads, which makes them unsuitable for inference. The same hafnium-based device can, however, be optimized to operate either as a FeCAP or as a memristor, depending on its operating conditions.
This thesis develops such a dual-use device by integrating a 10 nm silicon-doped hafnium oxide film with a titanium oxygen-scavenging layer between two metal levels of a 130 nm CMOS process. An application-specific integrated circuit built on this hybrid memory is validated, combining FeCAPs and memristors in a single back-end-of-line 130 nm CMOS array. Based on this array, an on-chip learning scheme is proposed and validated that, without batching, performs competitively with floating-point-precision software models across several benchmarks. A second, more flexible circuit, fabricated in 22 nm CMOS, further explores the benefits of combining ferroelectric and resistive memories for inference and training in binarized neural networks.
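To make the hybrid-memory idea concrete, the sketch below is a minimal, hypothetical software analogue (not the thesis circuit or its exact algorithm): a single binarized linear layer trained sample by sample, without batching. High-precision latent weights accumulate small updates, playing the role of the high-endurance learning memory (FeCAP-like), while their signs form the binary weights actually used at inference (memristor-like readout). All names and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of batch-free binarized-network training with latent weights.
import numpy as np

rng = np.random.default_rng(0)

def sign(x):
    # Binarize to {-1, +1}; map 0 to +1 so no weight is zero.
    return np.where(x >= 0, 1.0, -1.0)

class BinarizedLinear:
    def __init__(self, n_in, n_out, lr=0.01):
        # Latent full-precision weights accumulate updates during learning.
        self.w_latent = rng.normal(0.0, 0.1, size=(n_out, n_in))
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.w_bin = sign(self.w_latent)   # binary weights used for inference
        return self.w_bin @ x

    def backward(self, grad_out):
        # Straight-through estimator: treat sign() as identity for gradients,
        # and stop updating latent weights that have saturated (|w| > 1).
        grad_w = np.outer(grad_out, self.x)
        mask = (np.abs(self.w_latent) <= 1.0).astype(float)
        self.w_latent -= self.lr * grad_w * mask
        return self.w_bin.T @ grad_out     # gradient with respect to the input

# Toy usage: recover a random binary target mapping one sample at a time.
layer = BinarizedLinear(n_in=16, n_out=4)
target_w = sign(rng.normal(size=(4, 16)))
for step in range(2000):
    x = rng.normal(size=16)
    err = layer.forward(x) - target_w @ x  # mean-squared-error gradient
    layer.backward(err)
print("weight sign agreement:",
      float((sign(layer.w_latent) == target_w).mean()))
```

In this toy setting the split between an accumulating latent state and a binarized readout mirrors, only conceptually, why pairing a learning-oriented memory with an inference-oriented one is attractive for on-chip training.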
This technology opens opportunities for applications that require adaptive, local on-chip training, in which neural network parameters can be tailored in the field.