University of Illinois at Urbana-Champaign
Last Reviewed: 03/10/2017
Electronic design automation must evolve in response to increasingly ambitious goals for low power and high performance, which are accompanied by decreasing design cycle times. There is an unmet need for models, methods, and tools that enable fast and accurate design and verification while protecting intellectual property. A behavioral approach to systems modeling will meet these objectives. CAEML will pioneer the application of emerging machine-learning techniques to the modeling of microelectronics and microsystems. Existing methods fall short when applied to systems that have many ports, contain reliability hazards, exhibit non-linear responses, and are subject to variability. This problem will be addressed jointly with our microelectronics industry partners, whose diverse products include electronic design automation tools, integrated circuits, and mobile systems. Close engagement with these industry partners will ensure that the Center provides models and tools that facilitate communication between customers and suppliers across the entire industry value chain while protecting the proprietary information of all parties. This will lead to more efficient and reliable production, and better yields.
CAEML will develop new domain-specific machine-learning algorithms to extract models using limited training data. Designers' prior knowledge will be utilized to speed up learning and to impose physical constraints on the models.
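As a minimal illustration of imposing a physical constraint during learning, the sketch below fits a linear model by projected gradient descent so that every coefficient stays non-negative. The data, the non-negativity prior, and the model itself are hypothetical stand-ins, not a CAEML algorithm:

```python
import numpy as np

# Hypothetical sketch: fit y = X @ w by least squares while enforcing a
# physical prior that all coefficients are non-negative (e.g., lossy,
# passive elements). Projected gradient descent keeps every iterate feasible.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([0.5, 2.0, 1.0])           # non-negative ground truth
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.zeros(3)
step = 1.0 / np.linalg.norm(X.T @ X, 2)      # conservative, safe step size
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)        # least-squares gradient
    w = np.maximum(w - step * grad, 0.0)     # project onto the constraint set

print(w)  # close to w_true, and guaranteed non-negative
```

The projection step is what encodes the prior: unconstrained least squares could return slightly negative coefficients under noise, whereas the projected iterate never leaves the physically meaningful set.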
Modular Machine Learning for Behavioral Modeling of Microelectronic Circuits and Systems
This project focuses on theoretical foundations and modular algorithmic solutions for ML-driven design, simulation, and verification of high-complexity, multifunctional electronic systems. Behavioral system modeling provides a systematic approach to reconciling the variety of physics-based and simulation-based models, expert knowledge, and other possible means of component description commonly introduced in electronic systems modeling. In complex electronic systems, each component model comes with its own sources of errors, uncertainty, and variability, and the same applies to the way components and subsystems are connected and interact with each other in the integrated system. The modularity offered by the behavioral approach will be leveraged to develop mathematical tools for assessing the performance and minimal data requirements for learning a low-complexity representation of the system behavior, one component or subsystem at a time, from measured and simulated data, even in highly complex and uncertain settings. We will develop and implement the full ML algorithmic pipeline and quantify its end-to-end performance in applications pertinent to multifunctional electronic system design, simulation, and verification.
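The one-component-at-a-time idea can be sketched numerically: learn a low-complexity surrogate for each component separately, then compose the surrogates to predict the cascaded system. The component behaviors and the polynomial surrogate form below are illustrative assumptions standing in for transistor-level data:

```python
import numpy as np

# Hypothetical sketch: learn behavioral models for two cascaded components
# independently, then compose them into a system model.
f1 = lambda x: np.tanh(1.5 * x)          # component 1 (e.g., a driver)
f2 = lambda x: 0.8 * x + 0.1 * x**3      # component 2 (e.g., a channel)

x = np.linspace(-1, 1, 101)
# Fit each component from its own input/output data (polynomial surrogates).
p1 = np.polyfit(x, f1(x), 7)
p2 = np.polyfit(x, f2(x), 7)

# Modular composition: cascade the learned component models.
behav = np.polyval(p2, np.polyval(p1, x))
truth = f2(f1(x))
err = np.max(np.abs(behav - truth))
print(err)  # small end-to-end error from per-component fits
```

The point of the sketch is that each surrogate needs only its own component's data; the end-to-end error is then bounded by the per-component fit errors propagated through the cascade.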
Behavioral Model Development for High-Speed Links
High-speed links consist of driver and receiver circuits connected to each other through interconnections in the chip, package, and printed circuit board. Over several decades, as the speed of the channel has increased, the driver and receiver circuits have become quite complex to compensate for any shortcomings of the channel; e.g., they contain pre-distortion, pre-emphasis, adaptive control, and equalization circuitry. This project's goal is to apply machine-learning methods to systematically develop a hierarchy of behavioral models of the driver/receiver circuits that have the same accuracy as the transistor-level models, but require 25–50X less CPU time and memory. The behavioral models will be suitably parameterized to include a range of channel conditions that can be used for design verification and optimization. We will use three approaches for developing the behavioral models: 1) using time domain data obtained directly from the transistor-level models, 2) using X-parameters of the transistor-level circuits, and 3) building receiver models using system identification and surrogate modeling. We will compare the three approaches as part of this project.
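The third approach, system identification, can be sketched in a few lines: fit a discrete-time model to input/output waveforms by least squares. The ARX structure and the toy "true" system below are illustrative assumptions, not the project's actual receiver models:

```python
import numpy as np

# Hypothetical sketch of system identification: fit a linear ARX model
# y[n] = a1*y[n-1] + b0*u[n] + b1*u[n-1] from simulated input/output
# waveforms standing in for transistor-level receiver data.
rng = np.random.default_rng(1)
u = rng.normal(size=500)                 # stimulus (e.g., a PRBS-like input)
y = np.zeros(500)
for n in range(1, 500):                  # the "true" system to be identified
    y[n] = 0.6 * y[n - 1] + 0.3 * u[n] + 0.1 * u[n - 1]

# Build the regression matrix from lagged samples; solve by least squares.
Phi = np.column_stack([y[:-1], u[1:], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # recovers the coefficients [0.6, 0.3, 0.1]
```

Real drivers and receivers are non-linear and adaptive, so the project's models would need non-linear regressors and richer lag structures; the sketch only shows the data-to-model workflow.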
Design Rule Checking with Deep Networks
In this seed project, we will investigate the feasibility of training a deep convolutional network to perform Design Rule Checking (DRC). By replacing DRC with a recognition network, we hope to greatly speed it up. After showing initial feasibility, in following years, we will investigate tying DRC to interactive layout tools.
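A trained convolutional network would learn its own kernels; as a stand-in, the sketch below applies a single hand-set sliding window to a binary layout grid to check one illustrative rule (minimum feature width of 2 pixels). The layout, the rule, and the pixel abstraction are all assumptions made for the sketch:

```python
import numpy as np

# DRC as pattern recognition, in miniature: a pixel satisfies a min-width-2
# rule if it lies inside at least one fully-on 2x2 window of the layout.
layout = np.zeros((8, 8), dtype=int)
layout[1:4, 1:5] = 1          # 3x4 rectangle: wide enough, should pass
layout[6, 1:6] = 1            # 1-pixel-wide wire: violates min-width

# Slide a 2x2 window over the layout (the "convolution" a CNN would learn).
win = np.zeros((7, 7), dtype=bool)
for i in range(7):
    for j in range(7):
        win[i, j] = layout[i:i+2, j:j+2].all()

# Mark every pixel covered by some fully-on window as rule-clean.
covered = np.zeros_like(layout, dtype=bool)
for i in range(7):
    for j in range(7):
        if win[i, j]:
            covered[i:i+2, j:j+2] = True

violations = (layout == 1) & ~covered
print(int(violations.sum()))  # the 5 pixels of the thin wire are flagged
```

The appeal of the learned version is that the network discovers such windows for many rules at once from labeled layout clips, instead of requiring each rule to be hand-coded.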
Optimization of Power Delivery Networks for Maximizing Signal Integrity
Power distribution is a system-level problem in which the contributions from the chip, package, and printed circuit board are equally important. When combined with signal lines, this can lead to models that take a long time to simulate. With optimization being an integral part of design, co-optimization of the signal and power delivery network becomes necessary. As the number of control parameters increases, this co-optimization process can be very time-consuming. The objective of this project is to explore and develop machine learning (ML) based software to optimize the output response of the system based on a large set of input (or control) parameters. The focus is on using expert ML methods that allow for fast convergence with little data (rather than big data). We will focus on two key applications as part of this project: 1) DDR4 and other emerging memory channels where the speed is being increased beyond 3 GHz, the voltage is being scaled below 1.2 V, and timing margins less than 100 ps are required; and 2) High Bandwidth Memory (HBM) integrated in close proximity to the processor through use of 3D technology, where temperature gradients on the PDN affect signal integrity.
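The little-data theme can be sketched with surrogate-assisted optimization: fit a cheap model to a handful of expensive evaluations and let the surrogate propose the next design point. The analytic "simulator", the single decap-value parameter, and the quadratic surrogate are hypothetical stand-ins for a real PDN co-optimization:

```python
import numpy as np

# Sketch of surrogate-assisted optimization with few samples. A cheap
# analytic function stands in for an expensive PDN simulation.
simulate = lambda c: (c - 2.7)**2 + 0.5      # hypothetical cost vs. decap value

xs = [0.5, 2.0, 4.0]                         # small initial design of experiments
ys = [simulate(x) for x in xs]
for _ in range(5):
    a, b, c0 = np.polyfit(xs, ys, 2)         # quadratic surrogate of the cost
    x_next = np.clip(-b / (2 * a), 0.0, 5.0) # surrogate minimizer, bounded
    xs.append(float(x_next))
    ys.append(simulate(x_next))              # one new expensive evaluation

print(xs[-1])  # converges near the true optimum at 2.7
```

Each loop iteration costs exactly one simulator call, which is the property that matters when a single PDN/signal co-simulation takes hours; practical versions replace the quadratic with Gaussian-process or similar surrogates over many parameters.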
Intellectual Property Reuse Through Machine Learning
This project will demonstrate a tool flow that permits IP blocks to be migrated from one technology node to another and re-optimized for the new technology and application. This is a constrained optimization problem that is suitably addressed using goal-programming techniques. High Dimensional Model Representation (HDMR) approaches will be developed to manage the large number of design dimensions.
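HDMR tames dimensionality by expressing a response as a constant plus low-order component functions of individual variables. The sketch below shows first-order cut-HDMR on a toy two-variable response; the response function and cut point are illustrative assumptions, and the decomposition is exact here only because the toy response happens to be additive:

```python
import numpy as np

# First-order cut-HDMR: f(x1, x2) ~ f0 + f1(x1) + f2(x2), with the component
# functions evaluated along axes through a chosen cut point c.
f = lambda x1, x2: np.sin(x1) + x2**2        # hypothetical design response
c = np.array([0.0, 0.0])                     # cut point

f0 = f(*c)                                   # constant term
f1 = lambda x1: f(x1, c[1]) - f0             # 1-D component along dimension 1
f2 = lambda x2: f(c[0], x2) - f0             # 1-D component along dimension 2
hdmr = lambda x1, x2: f0 + f1(x1) + f2(x2)

grid = np.linspace(-1, 1, 21)
err = max(abs(hdmr(a, b) - f(a, b)) for a in grid for b in grid)
print(err)  # zero: first-order HDMR is exact for additive responses
```

The payoff in high dimension is sampling cost: each first-order component needs only evaluations along one axis, so the number of samples grows linearly, not exponentially, with the number of design dimensions; coupled variables require higher-order terms.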
Models to Enable System-level Electrostatic Discharge Analysis
This project seeks to develop accurate but computationally efficient models so that simulation may be used to assess the ESD response of different combinations of integrated circuits, on-board protection elements, and circuit board designs. Simulation will be used to predict if any component within the system will be driven outside its safe operating area or whether there is a high likelihood of soft failures. A system identification approach will be used to learn the model from data. Acquisition of suitable training data for ESD model learning requires significant time and expertise; therefore, active learning will be exploited to minimize the amount of training data needed and focus the data collection on regions of the input space that are most relevant to ESD conditions. The behavioral model used to represent an IC pin's transient voltage response to an incoming ESD current pulse will contain multiple ports to account for the multiple return paths among the many supply and ground pins and the influence of the board-level power delivery network (PDN) on the pin's I-V characteristic. Stability of the on-chip power supply is similarly affected by the board-level PDN and has a major impact on the occurrence of soft failures. Behavioral models of the on-chip supply that extend to ESD current/voltage levels (where the power supply clamps are activated) will be developed. Methods to obtain a probabilistic description of the ESD soft failure occurrence will be investigated.
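The active-learning loop can be sketched with uncertainty sampling: fit an ensemble of models to the measurements taken so far and place the next measurement where the ensemble disagrees most. The 1-D device curve, the jittered-polynomial ensemble, and the budget below are hypothetical stand-ins for real TLP-style ESD characterization:

```python
import numpy as np

# Active learning sketch for a 1-D pin I-V curve: query the stress current
# where an ensemble of fitted models is most uncertain.
rng = np.random.default_rng(2)
device = lambda i: np.tanh(i) + 0.05 * i     # hypothetical pin V(I) response

pool = np.linspace(0, 5, 51)                 # candidate stress current levels
x = list(pool[[0, 17, 34, 50]])              # small initial experiment design
y = [device(v) for v in x]
for _ in range(6):
    # An ensemble of cubic fits with jittered targets approximates model
    # uncertainty without any extra measurements.
    X, Y = np.array(x), np.array(y)
    preds = [np.polyval(np.polyfit(X, Y + rng.normal(0, 0.01, len(Y)), 3), pool)
             for _ in range(10)]
    spread = np.std(preds, axis=0)
    q = pool[int(np.argmax(spread))]         # most uncertain candidate
    x.append(q)
    y.append(device(q))                      # take one new measurement there

print(len(x))  # 10 labeled points: 4 initial + 6 actively chosen
```

Because each labeled point corresponds to an expensive bench measurement, concentrating queries in the high-uncertainty region is exactly the data-efficiency the project is after; multi-port models would replace the scalar curve with vector-valued responses.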
Georgia Tech has deep and long-standing experience in electronics design as well as micro- and nanofabrication. Examples include silicon and compound semiconductor devices, design of silicon and non-silicon integrated circuits, micro-electromechanical systems, photovoltaics, and electronic systems packaging. This extensive activity is now further strengthened by the formation of the Institute for Electronics and Nanotechnology (IEN), a cluster of nine research centers, each topically focused on a key enabling area of electronics and nanotechnology: the Center for Compound Semiconductors (CCS), the Center for MEMS and Microsystems Technologies (CMMT), the Georgia Electronic Design Center (GEDC), the Georgia Tech Quantum Institute (GTQI), the Georgia Tech Research Institute on Microelectronics and Nanotechnology (GTRI), the Center for Co-Design of Chip, Package, System (C3PS), the Epitaxial Graphene Science and Engineering Center (MRSEC), the Packaging Research Center (PRC), and the University Center of Excellence for Photovoltaics (UCEP). IEN provides not only a fabrication and facilities infrastructure that supports all of these topical centers, but also an intellectual infrastructure that supports faculty efforts in resource acquisition, discovery, innovation, and realization of relevant technologies. Finally, the IEN provides a common platform to enhance interdisciplinary interaction across all of electronics and nanotechnology, and a common intersection point for representing efforts in this area to both internal and external stakeholders. CAEML will make use of the measurement capabilities at some of these centers to validate the models developed. These capabilities include RF characterization up to 325 GHz, 40 Gb/s digital measurements, automated and scan-based testing, and direct probing of ICs and substrates.
Georgia Tech also maintains a strategic investment in a comprehensive HPC environment called the Partnership for an Advanced Computing Environment (PACE). Via the PACE program, the executive leadership of Georgia Tech invests in data center infrastructure, technical services, systems administration, and procurement assistance. PACE maintains an extensive HPC support infrastructure that includes high-performance scratch storage, networking, file backups, and software licenses for common tools. Through PACE, there is access to a shared pool of existing capacity, including GPUs, so that researchers can begin working without the delays associated with acquiring equipment. Faculty can also augment the shared pool with equipment purchased from their own research funding, which is then prioritized for their use. Faculty contributions can be run as exclusively dedicated resources while retaining many of the benefits of the shared PACE infrastructure. By participating in PACE, faculty benefit from the efficient acquisition, careful deployment, proper maintenance, and thoughtful management of HPC resources, which are critical factors for successful utilization. Furthermore, the Institute has invested in an HPC resource called the "FoRCE Research Computing Environment" - commonly known as the "FoRCE". The FoRCE began with an initial Institute investment of approximately 1,600 CPU cores, including some NVIDIA Tesla-based GPU nodes. Over time it has grown to become a diverse and heterogeneous resource. The FoRCE also includes a small subset of nodes that can serve as a development sandbox for debugging codes before execution on the full cluster. Nodes in the FoRCE conform to a baseline configuration that specifies minimum processor/memory/networking ratios, allowing for some amount of predictability in a heterogeneous environment. The use of the test environment is open to all PACE participants, but the use of FoRCE is determined by a faculty governance committee.
Faculty can request access to the FoRCE for specific projects and courses via a lightweight proposal process. When purchasing nodes, faculty have the option to share the unused computation time of their equipment in exchange for access to the idle time on other shared resources, including the FoRCE. The contributing faculty member, and users authorized by the same, enjoy a scheduling priority on their contributions that far exceeds that of users from other research groups. Faculty who share their resources can also run jobs larger than their own investment. In addition to the shared 2,332 CPU cores in the FoRCE cluster, faculty have purchased over 6,000 CPU cores to contribute to the shared pool. PACE manages approximately 1,200 nodes comprising nearly 30,000 CPU cores, 90 terabytes of memory, 2 petabytes of online commodity storage, and 215 terabytes of high-performance scratch storage. The High Performance Computing (HPC) datacenter facilities at Georgia Tech (GT) include two 5,000-square-foot computer rooms in the Rich Computer Center and a 4,000-square-foot computer room in the Business Continuity Data Center (BCDC). All three datacenters are owned and managed by Georgia Tech and located on campus. These facilities provide general IT as well as HPC services to the entire campus. Since CAEML can gain from a parallelized computing environment for model development, both PACE and FoRCE represent important resources for conducting research.
In addition, CAEML will make use of Prof. Swaminathan's Mixed Signal Design and Computational Laboratory, located in the Klaus Building at Georgia Tech, which consists of a multi-core computer cluster and several desktop computers. The computers are loaded with the latest design software, including ADS, Momentum, Ansys, HSPICE, MATLAB, CST, APD, Allegro, and Sonnet. The measurement laboratory, also located in the Klaus Building, houses a 4-port mixed-mode S-parameter VNA (40 GHz), an 8-channel time-domain reflectometry/transmission (TDR/TDT) system, a large-area 3-D test probe station, a 6-inch 2-D probe station, a digital sampling oscilloscope (80 GHz), a spectrum analyzer with phase-noise measurement capability (30 GHz), a near-field EMI measurement system (3 GHz), and an LCR analyzer (30 MHz).
Facilities available for the CAEML program at North Carolina State University include:
Microelectronic Systems Laboratory: The mission of this laboratory is to support the design, test, and measurement of microelectronic systems. Paul Franzon is the lab director.
Facilities include the following:
Further detail can be found at www.ece.ncsu.edu/msl.
Electronics Research Laboratory. The mission of this laboratory is to support the design and measurement of RF and microwave components. Facilities include:
Further detail can be found at www.ece.ncsu.edu/erl. Paul Franzon is co-director of this laboratory.
Analytical Instrumentation Facility. The mission of the AIF is to support the characterization of micro- and nanostructures. Available equipment includes SEM, STEM, FIB, SIMS, and XPS instruments, among others. More information can be found at www.ncsu.edu/aif.
EDA tools for IC and circuit board design are maintained by the Department’s IT Group. The college maintains a site license for IC design tools from Cadence (e.g., Virtuoso) and from Mentor Graphics (e.g., Calibre); circuit board design is performed using Cadence Allegro PCB Designer and CadSoft Eagle PCB Design Software. A site license is also provided for the TCAD device simulator from Synopsys and for a variety of electromagnetic field solvers, including ANSYS HFSS, ANSYS Q3D, and CST Microwave Studio.