
AI-driven design automation


AI-driven design automation is the use of artificial intelligence (AI) to automate and improve different parts of the electronic design automation (EDA) process. This applies especially to designing integrated circuits (chips) and complex electronic systems. The field has become important because it can help address the growing complexity, high costs, and shortening time-to-market pressures of the semiconductor industry. AI-driven design automation draws on several methods, including machine learning, expert systems, and reinforcement learning. These are applied to many tasks, from planning a chip's architecture and logic synthesis to its physical design and final verification.

Large circuit models can be trained using data from both the front end (blue) and the back end (yellow) of the EDA flow, either to enhance existing EDA tools or to enable novel EDA applications.[1]

History

1980s–1990s: Expert systems and early experiments

The use of AI for design automation gained popularity in the 1980s and 1990s, mainly through the creation of expert systems. These systems tried to capture the knowledge and practical rules of human design experts, using rule sets and reasoning engines to direct the design process.[2]

A notable early project was the ULYSSES system from Carnegie Mellon University. ULYSSES was a CAD tool integration environment that let expert designers turn their design methods into scripts that could be run automatically. It treated design tools as sources of knowledge that a scheduler could manage.[3]

Another example is the ADAM (Advanced Design AutoMation) system at the University of Southern California, which used an expert system called the Design Planning Engine. This engine worked out design strategies on the fly and handled different design tasks by organizing specialized knowledge into structured formats called frames.[4]

Other systems like DAA (Design Automation Assistant) used a rule-based approach for specific jobs, such as register transfer level (RTL) design for systems like the IBM 370.[2] Researchers at Carnegie Mellon University also created TALIB, an expert system for mask layout that used over 1,200 rules, and EMUCS/DAA for CPU architectural design, which used about 70 rules. These projects showed that AI worked best for problems where a few rules could handle a large amount of data. At the same time, there was a surge of silicon compilers such as MacPitts, Arsenic, and Palladio, which used algorithms and search techniques to explore different design paradigms. This was another way to automate design, even if it was not always based on expert systems.[5] Early experiments with neural networks in VLSI design also took place during this period, although they were less common than rule-based systems.

2000s: Introduction of machine learning

In the 2000s, interest in AI for design automation increased, driven largely by better machine learning (ML) algorithms and more available data from design and manufacturing. ML techniques were used, for example, to model and mitigate the effects of small manufacturing variations in semiconductor devices, which became increasingly important as feature sizes shrank. The large amount of data created during chip design provided the foundation needed to train better ML models, allowing prediction and optimization in areas that had been hard to automate before.

2016–2020: Reinforcement learning and large scale initiatives

A major turning point happened in the mid to late 2010s, sparked by successes in other areas of AI. The success of DeepMind's AlphaGo in mastering the game of Go inspired researchers. They began to apply reinforcement learning (RL) to difficult EDA problems. These problems often require searching through many options and making a series of decisions.

In 2018, the U.S. Defense Advanced Research Projects Agency (DARPA) started the Intelligent Design of Electronic Assets (IDEA) program. A main goal of IDEA was to create a fully automated layout generator, requiring no human intervention, that could produce a chip design ready for manufacturing from RTL specifications within 24 hours. Another major initiative was the OpenROAD project, a large effort under IDEA led by UC San Diego with industry and university partners, which aimed to build an open source, autonomous toolchain using machine learning, parallelization, and divide-and-conquer approaches.[6]

A clear demonstration of RL's potential came from Google researchers between 2020 and 2021. They created a deep reinforcement learning method for planning the layout of a chip, known as floorplanning. They reported that this method created layouts that were as good as or better than those made by human experts, and it did so in less than six hours. This method used a type of network called a graph convolutional neural network. It showed that it could learn general patterns that could be applied to new problems, getting better as it saw more chip designs. The technology was later used to design Google's Tensor Processing Unit (TPU) accelerators.[7]

2020s: Autonomous systems and agents

Entering the 2020s, the industry saw the commercial launch of autonomous AI-driven EDA systems. For example, Synopsys launched DSO.ai (Design Space Optimization AI) in early 2020, calling it the industry's first autonomous artificial intelligence application for chip design.[8][9] The system uses reinforcement learning to search the huge space of possible solutions for the best ways to optimize a design, aiming to improve power, performance, and area (PPA).[9] By 2023, DSO.ai had been used in over 100 commercial chip tape-outs, indicating broad industry adoption.[10] Synopsys later expanded its AI tools into a suite called Synopsys.ai, with the goal of applying AI across the entire EDA workflow, including verification and testing.[11][12]

These advancements, which combine modern AI methods with cloud computing and large data resources, have prompted discussion of a new phase in EDA, which industry experts and participants sometimes call 'EDA 4.0'.[13][14] This new era is defined by the widespread use of AI and machine learning to deal with growing design complexity, automate more of the design process, and help engineers handle the huge amounts of data that EDA tools create.[13][15] The goal of EDA 4.0 is to optimize product performance, shorten time to market, and streamline development and manufacturing through intelligent automation.[14]

Applications

Artificial intelligence (AI) is being used in many stages of the electronic design workflow. It aims to improve efficiency, get better results, and handle the growing complexity of modern integrated circuits.[16] AI helps designers from the very first ideas about architecture all the way to manufacturing and testing.[1]

High level synthesis and architectural exploration

In the first phases of chip design, AI helps with high level synthesis (HLS) and system level design space exploration (DSE). These processes are key for turning general ideas into detailed hardware plans.[16] AI algorithms, often based on supervised learning, are used to build fast surrogate models that can quickly estimate important design metrics like area, performance, and power for many different architectural options or HLS settings.[1] Fast estimation reduces the need for lengthy simulations and allows a wider range of possible designs to be explored.[16] For example, the Ithemal tool uses deep neural networks to estimate how fast basic code blocks will run, which helps in making processor architecture decisions.[17] Similarly, PRIMAL uses machine learning to predict power use at the register transfer level (RTL), giving early information about how much power the chip will use.[18] Reinforcement learning (RL) and Bayesian optimization are also used to guide the DSE process, helping search through the many parameters to find the best HLS settings or architectural details like cache sizes.[19] Large language models (LLMs) are also being tested for creating architectural plans or initial C code for HLS, as seen with GPT4AIGChip.[20][1]
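
The idea can be made concrete with a small sketch: a fast surrogate model trained on a table of HLS parameter settings and measured outcomes, then queried over many candidate settings. The knob names and synthetic data below are hypothetical, not taken from any published tool.

```python
# Minimal surrogate-model sketch for HLS design-space exploration.
# Assumes a pre-collected dataset of HLS settings and measured latencies;
# feature names and data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [loop_unroll_factor, pipeline_ii, array_partition_factor, clock_mhz]
X = rng.integers(1, 16, size=(500, 4)).astype(float)
# Stand-in for latencies measured by slow HLS runs (synthetic here).
y = 1e4 / (X[:, 0] * X[:, 2]) + 5.0 * X[:, 1] + rng.normal(0, 10, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Once trained, the surrogate replaces slow simulation when scoring
# thousands of new candidate settings.
candidates = rng.integers(1, 16, size=(10000, 4)).astype(float)
best = candidates[np.argmin(model.predict(candidates))]
print("Predicted-best HLS setting:", best)
```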

Logic synthesis and optimization

Logic synthesis is the process of turning a high level hardware description into an optimized list of electronic gates, known as a gate level netlist, that is ready for a specific manufacturing process. AI methods help with different parts of this process, including logic optimization, technology mapping, and post-mapping improvement.[16][19] Supervised learning, especially with graph neural networks (GNNs), which are well suited to data that can be represented as graphs, such as circuit netlists, helps create models that predict design properties like power or error rates in approximate circuits.[1]

Reinforcement learning is also used to perform logic optimization directly. For example, agents are trained to choose a series of logic transformations that reduce area while meeting timing goals.[16][1] AlphaSyn uses Monte Carlo tree search with RL to optimize logic for smaller area.[21] FlowTune uses a multi-armed bandit strategy to choose synthesis flows.[1] These methods can also tune parameters for entire synthesis flows, learning from past designs to recommend the best tool settings for new ones.[16]
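
The multi-armed bandit idea can be sketched as follows: treat each candidate synthesis script as an arm and balance exploration against exploitation with a UCB1 policy. The script names echo common ABC optimization aliases, but the evaluate() stub and its reward values are invented stand-ins for real tool runs, not FlowTune's actual implementation.

```python
# Sketch of multi-armed-bandit selection among logic synthesis flows.
import math
import random

flows = ["resyn", "resyn2", "compress2rs", "dc2"]  # candidate scripts (arms)

def evaluate(flow):
    """Stand-in for running a flow and measuring a reward such as
    normalized area reduction; a real system would invoke the tool."""
    base = {"resyn": 0.50, "resyn2": 0.70, "compress2rs": 0.65, "dc2": 0.55}
    return base[flow] + random.gauss(0, 0.05)

counts = {f: 0 for f in flows}
totals = {f: 0.0 for f in flows}

for t in range(1, 201):
    def ucb(f):  # UCB1: mean reward plus an exploration bonus
        if counts[f] == 0:
            return float("inf")
        return totals[f] / counts[f] + math.sqrt(2 * math.log(t) / counts[f])
    choice = max(flows, key=ucb)
    reward = evaluate(choice)
    counts[choice] += 1
    totals[choice] += reward

print("Best flow by mean reward:",
      max(flows, key=lambda f: totals[f] / counts[f]))
```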

Physical design

Physical design turns the gate level netlist into a physical layout, which defines exactly where each component goes and how they are all connected. AI is used extensively in this area to improve PPA metrics.[16][19]

Placement

Placement is the task of finding the best locations for large circuit blocks, called macros, and smaller standard cells. Reinforcement learning has famously been used for macro placement, where an agent learns how to position blocks to reduce wire length and improve timing;[22] the GoodFloorplan method is another example.[23] Supervised learning models, including convolutional neural networks (CNNs) that treat the layout as an image, are used to predict routing problems such as design rule violations (DRVs) (e.g., RouteNet[24]) or post-routing timing directly from placement information.[1] RL-Sizer uses deep RL to optimize gate sizes during placement to meet timing goals.[25]
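
Placement objectives are usually expressed through proxy metrics. One standard proxy is half-perimeter wirelength (HPWL), the sum over all nets of the half-perimeter of the bounding box around the net's pins, which RL placers commonly fold into their reward. A minimal sketch with illustrative cells and nets:

```python
# Half-perimeter wirelength (HPWL), a standard placement proxy metric.
def hpwl(nets, positions):
    """nets: list of nets, each a list of cell names.
    positions: dict mapping cell name -> (x, y) coordinates."""
    total = 0.0
    for pins in nets:
        xs = [positions[c][0] for c in pins]
        ys = [positions[c][1] for c in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

positions = {"macro_a": (0, 0), "macro_b": (4, 3), "cell_c": (2, 5)}
nets = [["macro_a", "macro_b"], ["macro_b", "cell_c"], ["macro_a", "cell_c"]]
print(hpwl(nets, positions))  # lower is better; often used as negative reward
```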

Complexity of board games compared to floorplanning in chip design: chess is estimated to have a complexity of ≈ 10¹²⁶ and Go about 10³⁶⁰, while arranging hundreds of hard macros on a silicon die exceeds 10²⁵⁰⁰ possibilities. The arrow highlights this steep rise in combinatorial complexity.[7][26]

Clock network synthesis

AI helps in clock tree synthesis (CTS) by optimizing the network that distributes the clock signal. GANs, sometimes combined with RL (e.g., GAN-CTS), are used to predict and improve clock tree structures, with the goal of reducing clock skew and power use.[19][1]

Routing

Routing creates the physical wire connections. AI models predict routing congestion using methods like GANs to help guide the routing algorithms.[1] RL is also used to optimize the order in which nets are routed to reduce violations.[16]

Power/ground network synthesis and analysis

AI models, including CNNs and tree-based methods, help in designing and analyzing the power delivery network (PDN) by quickly estimating static and dynamic IR drop. This guides the creation of the PDN and reduces the number of design cycles.[16][19][1]
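
The CNN approach can be sketched as a small fully convolutional network that maps a per-tile power-density grid to a predicted IR-drop grid. The layer sizes, grid resolution, and random training data below are placeholders rather than any published estimator:

```python
# Illustrative fully convolutional model: power-density grid -> IR-drop grid.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

# Stand-ins for per-tile power maps and golden IR-drop maps from signoff.
power_maps = torch.rand(64, 1, 32, 32)
ir_drop_maps = torch.rand(64, 1, 32, 32)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(power_maps), ir_drop_maps)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```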

Verification and validation

Verification and validation are critical steps in the design and fabrication of a semiconductor device. These processes often take a long time, and AI is used to make them more efficient.[16] LLMs are used to turn plain language requirements into formal SystemVerilog assertions (SVAs) (e.g., AssertLLM)[1] and to help with security verification.[1] Some methods focus on making timing checks much faster by predicting timing analysis results from circuit structure,[27] an approach later improved with transformer models like TF-Predictor.[28] Another approach is DeepGate2, which learns circuit representations that can in turn help with verification tasks.[29]

Analog and mixed signal design

AI methods are being used more often in the complex field of analog and mixed signal circuit design. They help in choosing the circuit topology, sizing components, and automating the layout.[16] AI models, including variational autoencoders (VAEs) and RL, help explore and create new circuit structures.[1] For instance, graph embeddings can be used to optimize the topology of operational amplifiers.[30] Machine learning surrogate models give fast performance estimates for component sizing, while RL directly optimizes the component parameters.[16]

Test, manufacturing and yield optimization

AI can also help in the stages after the silicon is manufactured, including testing, design for manufacturability (DFM), and yield optimization.[16] In lithography, AI models like CNNs and GANs are used for sub-resolution assist feature (SRAF) generation (e.g., GAN-SRAF[31]) and optical proximity correction (OPC) (e.g., GAN-OPC[32]) to improve yield. AI also predicts lithography problems, known as hotspots, directly from the layout.[33] For tuning the broader design flow for manufacturability, FIST uses tree-based methods to select parameters.[34]

Hardware-software co-design

Hardware-software co-design optimizes the hardware and software parts of a system at the same time. LLMs are starting to be used as tools to help with this. For example, they assist in designing compute-in-memory (CiM) DNN accelerators, where the software mapping and the hardware configuration are closely connected.[35][1] LLMs can also create architectural plans (e.g., SpecLLM[36]) or HDL code, evaluated with benchmarks like VerilogEval[37] and RTLLM,[38] or with tools like AutoChip.[39] Additionally, LLM-based agents like ChatEDA make it easier to interact with EDA tools across different design stages.[40]

AI methods

Artificial intelligence techniques are increasingly used to solve difficult problems in electronic design automation. These methods analyze large amounts of design data, learn complex patterns, and automate decisions. The goal is to improve the quality of designs, speed up the design process, and handle the increasing complexity of semiconductor manufacturing. Important approaches include supervised learning, unsupervised learning, reinforcement learning, and generative AI.

Supervised learning

Supervised learning is a type of machine learning where algorithms learn from data that is already labeled.[41] This means every piece of input data in the training set has a known correct answer or ground-truth.[42] The algorithm learns to connect inputs to outputs by finding the patterns and connections in the training data.[43] After it is trained, the model can then make predictions on new data it has not seen before.[44]

In electronic design automation, supervised learning is useful for tasks where past data can predict future results or spot certain problems. This includes estimating design metrics like performance, power, and timing. For example, Ithemal estimates CPU performance,[17] PRIMAL predicts power use at the RTL stage,[18] and other methods predict timing delays in circuits by analyzing their structure.[27][28] It is also used to classify parts of a design to find potential problems, like lithography hotspots[33] or predicting how easy a design will be to route.[24] Learning circuit representations that are aware of their function also often uses supervised methods.[29]
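
As a toy illustration of the classification use case, the sketch below trains a classifier on synthetic, imbalanced "hotspot" data; the features and labels are invented, whereas real detectors such as the one cited above operate on actual layout clips.

```python
# Toy supervised classifier for hotspot-style detection on imbalanced data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((2000, 8))                    # e.g., local pattern-density stats
y = (X[:, 0] + X[:, 3] > 1.6).astype(int)    # rare positive class (~8%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=1)  # weighting counters imbalance
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```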

Unsupervised learning

Unsupervised learning involves training algorithms on data without any labels. This lets the models find hidden patterns, structures, or connections in the data by themselves.[45] Common tasks are clustering (which groups similar data together), dimensionality reduction (which reduces the number of variables but keeps important information), and association rule mining (which finds relationships between variables).[46]

In EDA, these methods are valuable for looking through complex design data to find insights that are not obvious. For instance, clustering can group design settings or tool configurations, which helps in automatically tuning the design process, as seen in the FIST tool.[34] A major use is in representation learning, where the aim is to automatically learn useful and often simpler representations (features or embeddings) of circuit data. This could involve learning embeddings for analog circuit structures using methods based on graphs[30] or understanding the function of netlists through contrastive learning methods.[47]
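
A minimal sketch of the clustering idea applied to flow tuning: group past tool configurations and evaluate one representative per cluster first, cutting the number of expensive full-flow runs. The parameter vectors are hypothetical, and this shows the general concept rather than FIST's actual method.

```python
# Clustering past EDA tool configurations to seed design-flow tuning.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
configs = rng.random((300, 6))   # rows: past runs; columns: normalized knobs

scaled = StandardScaler().fit_transform(configs)
km = KMeans(n_clusters=4, n_init=10, random_state=2).fit(scaled)

# Evaluating one representative configuration per cluster first avoids
# re-running near-duplicate settings through the full flow.
for label in range(4):
    members = configs[km.labels_ == label]
    print(f"cluster {label}: {len(members)} runs, "
          f"centroid {members.mean(axis=0).round(2)}")
```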

Reinforcement learning

Reinforcement learning (RL) is a kind of machine learning where an agent, or a computer program, learns to make the best decisions by trying things out in a simulated environment. The agent takes actions, moves between different states, and gets rewards or penalties as feedback. The main goal is to get the highest total reward over time.[48] RL is different from supervised learning because it does not need labeled data. It also differs from unsupervised learning because it learns by trial and error to achieve a specific goal.[49]

In EDA, RL is especially good for tasks that require making a series of decisions to find the best solution in very complex situations with many variables. Its adoption by commercial EDA products shows its growing importance.[50] RL has been used for physical design problems like chip floorplanning. In this task, an agent learns to place blocks to improve things like wire length and performance.[22][23] In logic synthesis, RL can guide how optimization steps are chosen and in what order they are applied to get better results, as seen in methods like AlphaSyn.[21] Another example where RL agents can learn effective strategies is adjusting the size of gates to optimize timing.[25]
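
The trial-and-error loop can be illustrated with tabular Q-learning on a toy gate sizing problem. The delay/power model here is a stand-in for a real static timing engine, and practical systems like RL-Sizer use deep networks over far richer state:

```python
# Toy Q-learning for discrete gate sizing (illustrative, not RL-Sizer).
import random

sizes = [1, 2, 4, 8]          # available drive strengths for one gate
actions = [-1, 0, 1]          # shrink, keep, or grow the size index

def reward(idx):
    """Hypothetical trade-off: bigger gates cut delay but cost power."""
    delay = 10.0 / sizes[idx]
    power = 0.8 * sizes[idx]
    return -(delay + power)   # maximizing reward minimizes delay + power

Q = {(s, a): 0.0 for s in range(len(sizes)) for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.2
state = 0
for _ in range(5000):
    a = (random.choice(actions) if random.random() < eps
         else max(actions, key=lambda x: Q[(state, x)]))
    nxt = min(max(state + a, 0), len(sizes) - 1)
    Q[(state, a)] += alpha * (reward(nxt)
                              + gamma * max(Q[(nxt, x)] for x in actions)
                              - Q[(state, a)])
    state = nxt

best = max(range(len(sizes)), key=lambda s: max(Q[(s, x)] for x in actions))
print("Preferred drive strength:", sizes[best])
```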

Comparison of macro-placement strategies for a system-on-chip floorplan: a handcrafted layout created by a human designer (left) and a layout generated by an AI-assisted placer (right). The AI approach combines explicit design-rule constraints with heuristic search and reinforcement-learning optimization, allowing it to evaluate placements that are not obvious to humans while still meeting power, performance and area targets.[51][7]

Generative AI

Generative AI means artificial intelligence models that can create new content, like text, images, or code, instead of just analyzing or working with existing data.[52] These models learn the underlying patterns and structures from the data they are trained on. They then use this knowledge to create new and original outputs.[53]

In EDA, generative AI is being used in many ways, especially through Large Language Models (LLMs) and other architectures like Generative Adversarial Networks (GANs).

Large language models (LLMs)

Large language models are deep learning models, often based on the transformer architecture, that are pre-trained on huge amounts of text and code.[54] They are very good at understanding, summarizing, creating, and predicting human language and programming languages.[55]

Their abilities are being used in EDA for tasks such as:

  • RTL Code Generation: LLMs are used to automatically write code in a Hardware Description Language (HDL) based on written instructions or requirements. Benchmarks like VerilogEval[37] and RTLLM[38] have been created to check these abilities, and tools like AutoChip aim to automate this process.[39]
  • EDA Script Generation and Tool Interaction: Agents based on LLMs, like ChatEDA, can turn plain language commands into runnable scripts for controlling EDA tools.[40]
  • Architectural Design and Exploration: LLMs help in the early stages of design. They can generate high level synthesis code (for example, GPT4AIGChip[20]), explore design options for special hardware like Compute in Memory accelerators,[35] or help create and review design requirements (SpecLLM[36]).
  • Verification Assistance: Researchers are exploring the use of LLMs to create verification components such as SystemVerilog Assertions (SVAs) from plain language descriptions.

Other generative models

Besides LLMs, other generative models like Generative Adversarial Networks (GANs) are also used in EDA. A GAN has two neural networks, a generator and a discriminator, which are trained in a competition against each other.[56] The generator learns to make data samples that look like the training data, while the discriminator learns to tell the difference between real and generated samples.[57]
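
The competition can be made concrete with a compact training loop on toy two-dimensional data; this only illustrates the generator/discriminator dynamic described above and is not a layout or lithography model:

```python
# Minimal GAN training loop on toy 2-D data (illustration only).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):  # stand-in for real training samples
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(1000):
    # Discriminator step: label real samples 1, generated samples 0.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(f"final losses  D: {loss_d.item():.3f}  G: {loss_g.item():.3f}")
```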

In physical design, GANs have been used for tasks like creating sub-resolution assist features (SRAFs) to make chips easier to manufacture in lithography (GAN-SRAF[31]) and for optimizing masks (GAN-OPC[32]).

Industry adoption and ecosystem

The use of artificial intelligence in electronic design automation is a widespread trend, with many different players in the semiconductor industry helping to create and use these technologies. These include EDA tool vendors that develop software with AI, semiconductor design companies and foundries that use these tools to design and manufacture chips, and very large technology companies that design their own chips using AI-driven methods.

EDA tool vendors

Major EDA companies are leading the way in adding AI to their tool suites to handle growing design complexity. Their strategies often involve creating complete AI platforms. These platforms use machine learning in many different steps of the design and manufacturing process.

Synopsys provides a set of tools in its Synopsys.ai initiative, which aims to improve design metrics and productivity from the system architecture stage all the way to manufacturing. A main component, DSO.ai, uses reinforcement learning to improve power, performance, and area (PPA) across the flow from the initial design description to the final manufacturing files. Other parts use AI to speed up verification, optimize test pattern generation for manufacturing, and improve the design of analog circuits across different operating conditions.[12]

Cadence has created its Cadence.AI platform. The company says it uses "agentic AI workflows" to cut down on the design engineering time for complex SoCs.[58] Key platforms use AI to optimize the digital design flow (Cadence Cerebrus), improve verification productivity (Verisium), design custom and analog ICs (Virtuoso Studio), and analyze systems at a high level (Optimality Intelligent System Explorer).[59]

Siemens EDA directs its AI strategy at improving its current software engines and workflows to give engineers better design insights. AI is used inside its Calibre platform to speed up manufacturing tasks like Design for Manufacturability (DFM), Resolution Enhancement Techniques (RET), and Optical Proximity Correction (OPC). AI is also used in its Questa suite to close coverage faster in digital verification and in its Solido suite to lessen the characterization work for analog designs.[60]

Semiconductor design and FPGA companies

Companies that design semiconductor chips, like FPGAs and adaptive SoCs, are major users and creators of EDA methods that are improved with AI to make their design processes more efficient.

AMD offers a suite of tools for its adaptive hardware that uses different AI approaches. The AMD Vitis platform is an environment for developing designs on its SoCs and FPGAs. It includes a component, Vitis AI, which provides libraries and pre-trained models to speed up AI inference.[61] The related Vivado Design Suite uses machine learning methods to improve the quality of results (QoR) and to help with timing closure and power estimation for the hardware design.[62]

NVIDIA has a specific Design Automation Research group to look into new EDA methods.[63] The group focuses on EDA tools that are accelerated by GPUs and using AI methods like Bayesian optimization and reinforcement learning for EDA problems. One example of their research is AutoDMP, a tool that automates macro placement using multi objective Bayesian optimization and a GPU accelerated placer.[64]

Cloud providers and hyperscalers

Large cloud service providers and hyperscale companies have two main roles. They provide the powerful and flexible computing power needed to run difficult AI and EDA tasks, and many also design their own custom silicon, often using AI in their internal design processes.

Google Cloud, for example, provides a platform that supports EDA workloads with flexible computing resources, special storage solutions, and high speed networking.[65] At the same time, Google's internal chip design teams have contributed to EDA research, especially by using reinforcement learning for physical design tasks like chip floorplanning.[22]

IBM provides infrastructure on its cloud platform that is focused on EDA, with a strong emphasis on secure environments for foundries and high performance computing. Their solutions include high performance parallel storage and tools for managing large scale jobs. These are designed to help design houses manage the complex simulation and modeling tasks that are part of modern EDA.[66]

Limitations and challenges

Data quality and availability

A main challenge for using AI effectively in EDA is the availability and quality of data.[16][1] Machine learning models, especially deep learning ones, usually need large, varied, and high quality datasets to be trained. This ensures they can work well on new designs they have not seen before.[1] However, a lot of the detailed design data in the semiconductor industry is secret and very sensitive. This makes companies unwilling to share it.[16][1] This lack of public, detailed examples makes it difficult for university researchers and for the development of models that can be widely used. Even when data is available, it might have problems like being noisy, incomplete, or unbalanced. For instance, having many more examples of successful designs than ones with problems can lead to biased or poorly performing AI models.[19] The work and cost of collecting, organizing, and correctly labeling large EDA datasets also create big obstacles.[1] Solving these data related problems is key for moving AI forward in EDA. Possible solutions include creating strong data augmentation methods, generating realistic synthetic data, and building community platforms for sharing data securely and for benchmarking.[1]

Integration and compute cost

Deploying AI solutions in the EDA field poses major challenges, including integrating AI into the complex tool chains that already exist and handling the high cost of computing power.[16][19] Adding new AI models and algorithms into established EDA workflows, which often consist of many connected tools and proprietary formats, takes significant engineering work and can cause interoperability problems.[1] Also, training and running complex AI models, such as deep learning models, requires substantial computing resources, including powerful GPUs or special AI accelerators, large amounts of memory, and long processing times.[16] These needs raise the cost of both creating and using AI models.[19] Making AI methods scale to the ever growing size and complexity of modern chip designs, while staying efficient and using a reasonable amount of memory, remains an ongoing challenge.[19][1]

Intellectual property and confidentiality

The use of AI in EDA, especially with sensitive design data, raises serious concerns about protecting intellectual property (IP) and keeping data confidential. Chip designs are highly valuable IP, and there is always a risk in exposing this confidential information to AI models, particularly models built by third parties or run on cloud platforms.[16] It is extremely important to ensure that design data used for training or decision making is not compromised, leaked, or used to accidentally reveal proprietary knowledge. While strategies like fine tuning open source models on private data are being tried to reduce some privacy risks, it is essential to establish secure data handling procedures, strong access controls, and clear data management policies. The reluctance to share detailed design data because of these IP and privacy concerns also slows down collaborative research and the creation of better AI models for the EDA industry.[1]

Human oversight and interpretability

Even with the push for more automation, the role of human designers is still vital, and making AI models understandable continues to be a challenge.[16] Many advanced deep learning systems can act like "black boxes", which makes it hard for engineers to understand why they make certain predictions or design choices.[16] This lack of clarity can prevent adoption, as designers might not want to trust or use solutions whose decision making process is not clear, especially in critical applications or when fixing unexpected problems.[19] Human engineers are still needed to set design goals, check the results from AI, handle new or unusual situations where AI might fail, and provide the specialized knowledge that often guides AI development.[16] Effective use of AI in EDA therefore requires human engineers and intelligent tools to work together, which in turn requires designers to learn new skills for working with and supervising AI systems.[1]

References

  1. ^ a b c d e f g h i j k l m n o p q r s t u v w x Chen, Lei; Chen, Yiqi; Chu, Zhufei; Fang, Wenji; Ho, Tsung-Yi; Huang, Ru; Huang, Yu; Khan, Sadaf; Li, Min (1 May 2024), The Dawn of AI-Native EDA: Opportunities and Challenges of Large Circuit Models, arXiv, doi:10.48550/arXiv.2403.07257, arXiv:2403.07257, retrieved 14 June 2025
  2. ^ a b Parker, A.C.; Hayati, S. (June 1987). "Automating the VLSI design process using expert systems and silicon compilation". Proceedings of the IEEE. 75 (6): 777–785. doi:10.1109/PROC.1987.13799. ISSN 1558-2256.
  3. ^ Bushnell, M.L.; Director, S.W. (June 1986). "VLSI CAD Tool Integration Using the Ulysses Environment". 23rd ACM/IEEE Design Automation Conference: 55–61. doi:10.1109/DAC.1986.1586068.
  4. ^ Granacki, J.; Knapp, D.; Parker, A. (June 1985). "The ADAM Advanced Design Automation System: Overview, Planner and Natural Language Interface". 22nd ACM/IEEE Design Automation Conference: 727–730. doi:10.1109/DAC.1985.1586023.
  5. ^ Kirk, R. S. (1985). The impact of AI technology on VLSI design. Managing Requirements Knowledge, International Workshop on, CHICAGO. p. 125. doi:10.1109/AFIPS.1985.63.
  6. ^ Ajayi, T.; Blaauw, D. (January 2019). "OpenROAD: Toward a Self-Driving, Open-Source Digital Layout Implementation Tool Chain". Proceedings of Government Microcircuit Applications and Critical Technology Conference.
  7. ^ a b c Mirhoseini, Azalia; Goldie, Anna; Yazgan, Mustafa; Jiang, Joe Wenjie; Songhori, Ebrahim; Wang, Shen; Lee, Young-Joon; Johnson, Eric; Pathak, Omkar; Nova, Azade; Pak, Jiwoo; Tong, Andy; Srinivasa, Kavya; Hang, William; Tuncer, Emre (June 2021). "A graph placement methodology for fast chip design". Nature. 594 (7862): 207–212. doi:10.1038/s41586-021-03544-w. ISSN 1476-4687.
  8. ^ "Synopsys Advances State-of-the-Art in Electronic Design with Revolutionary Artificial Intelligence Technology". news.synopsys.com. Retrieved 14 June 2025.
  9. ^ a b "DSO.ai: AI-Driven Design Applications | Synopsys AI". www.synopsys.com. Retrieved 14 June 2025.
  10. ^ Ward-Foxton, Sally (10 February 2023). "AI-Powered Chip Design Goes Mainstream". EE Times. Retrieved 14 June 2025.
  11. ^ Freund, Karl. "Synopsys.ai: New AI Solutions Across The Entire Chip Development Workflow". Forbes. Retrieved 14 June 2025.
  12. ^ a b "Synopsys.ai – Full Stack, AI-Driven EDA Suite" (PDF). Synopsys. Retrieved 7 June 2025.
  13. ^ a b Yu, Dan (1 June 2023). "Welcome To EDA 4.0 And The AI-Driven Revolution". Semiconductor Engineering. Retrieved 14 June 2025.
  14. ^ a b "EDA 4.0 And The AI-Driven Revolution" (PDF). unipv.news (reporting on a Siemens presentation). 29 November 2023. Retrieved 7 June 2025.
  15. ^ Dahad, Nitin (10 November 2022). "How AI-based EDA will enable, not replace the engineer". Embedded. Retrieved 14 June 2025.
  16. ^ a b c d e f g h i j k l m n o p q r s t u Rapp, Martin; Amrouch, Hussam; Lin, Yibo; Yu, Bei; Pan, David Z.; Wolf, Marilyn; Henkel, Jörg (October 2022). "MLCAD: A Survey of Research in Machine Learning for CAD Keynote Paper". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 41 (10): 3162–3181. doi:10.1109/TCAD.2021.3124762. ISSN 1937-4151.
  17. ^ a b Mendis, Charith; Renda, Alex; Amarasinghe, Saman; Carbin, Michael (21 August 2018). "Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks". arXiv.org. Retrieved 14 June 2025.
  18. ^ a b Zhou, Yuan; Ren, Haoxing; Zhang, Yanqing; Keller, Ben; Khailany, Brucek; Zhang, Zhiru (June 2019). "PRIMAL: Power Inference using Machine Learning". 2019 56th ACM/IEEE Design Automation Conference (DAC): 1–6.
  19. ^ a b c d e f g h i j Gubbi, Kevin Immanuel; Beheshti-Shirazi, Sayed Aresh; Sheaves, Tyler; Salehi, Soheil; PD, Sai Manoj; Rafatirad, Setareh; Sasan, Avesta; Homayoun, Houman (6 June 2022). "Survey of Machine Learning for Electronic Design Automation". Proceedings of the Great Lakes Symposium on VLSI 2022. GLSVLSI '22. New York, NY, USA: Association for Computing Machinery: 513–518. doi:10.1145/3526241.3530834. ISBN 978-1-4503-9322-5.
  20. ^ a b Fu, Yonggan; Zhang, Yongan; Yu, Zhongzhi; Li, Sixu; Ye, Zhifan; Li, Chaojian; Wan, Cheng; Lin, Yingyan Celine (October 2023). "GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–9. doi:10.1109/ICCAD57390.2023.10323953.
  21. ^ a b Pei, Zehua; Liu, Fangzhou; He, Zhuolun; Chen, Guojin; Zheng, Haisheng; Zhu, Keren; Yu, Bei (October 2023). "AlphaSyn: Logic Synthesis Optimization with Efficient Monte Carlo Tree Search". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–9. doi:10.1109/ICCAD57390.2023.10323856.
  22. ^ a b c Mirhoseini, Azalia; Goldie, Anna; Yazgan, Mustafa; Jiang, Joe Wenjie; Songhori, Ebrahim; Wang, Shen; Lee, Young-Joon; Johnson, Eric; Pathak, Omkar; Nova, Azade; Pak, Jiwoo; Tong, Andy; Srinivasa, Kavya; Hang, William; Tuncer, Emre (June 2021). "A graph placement methodology for fast chip design". Nature. 594 (7862): 207–212. doi:10.1038/s41586-021-03544-w. ISSN 1476-4687.
  23. ^ a b Xu, Qi; Geng, Hao; Chen, Song; Yuan, Bo; Zhuo, Cheng; Kang, Yi; Wen, Xiaoqing (October 2022). "GoodFloorplan: Graph Convolutional Network and Reinforcement Learning-Based Floorplanning". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 41 (10): 3492–3502. doi:10.1109/TCAD.2021.3131550. ISSN 1937-4151.
  24. ^ a b Xie, Zhiyao; Huang, Yu-Hung; Fang, Guan-Qi; Ren, Haoxing; Fang, Shao-Yun; Chen, Yiran; Hu, Jiang (November 2018). "RouteNet: Routability prediction for Mixed-Size Designs Using Convolutional Neural Network". 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD): 1–8. doi:10.1145/3240765.3240843.
  25. ^ a b Lu, Yi-Chen; Nath, Siddhartha; Khandelwal, Vishal; Lim, Sung Kyu (December 2021). "RL-Sizer: VLSI Gate Sizing for Timing Optimization using Deep Reinforcement Learning". 2021 58th ACM/IEEE Design Automation Conference (DAC): 733–738. doi:10.1109/DAC18074.2021.9586138.
  26. ^ Asianometry (12 December 2021). Google’s Chip Designing AI. Retrieved 17 June 2025 – via YouTube.
  27. ^ a b Kahng, Andrew B.; Mallappa, Uday; Saul, Lawrence (October 2018). "Using Machine Learning to Predict Path-Based Slack from Graph-Based Timing Analysis". 2018 IEEE 36th International Conference on Computer Design (ICCD): 603–612. doi:10.1109/ICCD.2018.00096.
  28. ^ a b Cao, Peng; He, Guoqing; Yang, Tai (July 2023). "TF-Predictor: Transformer-Based Prerouting Path Delay Prediction Framework". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 42 (7): 2227–2237. doi:10.1109/TCAD.2022.3216752. ISSN 1937-4151.
  29. ^ a b Shi, Zhengyuan; Pan, Hongyang; Khan, Sadaf; Li, Min; Liu, Yi; Huang, Junhua; Zhen, Hui-Ling; Yuan, Mingxuan; Chu, Zhufei; Xu, Qiang (October 2023). "DeepGate2: Functionality-Aware Circuit Representation Learning". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–9. doi:10.1109/ICCAD57390.2023.10323798.
  30. ^ a b Lu, Jialin; Lei, Liangbo; Yang, Fan; Shang, Li; Zeng, Xuan (March 2022). "Topology Optimization of Operational Amplifier in Continuous Space via Graph Embedding". 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE): 142–147. doi:10.23919/DATE54114.2022.9774676.
  31. ^ a b Alawieh, Mohamed Baker; Lin, Yibo; Zhang, Zaiwei; Li, Meng; Huang, Qixing; Pan, David Z. (February 2021). "GAN-SRAF: Subresolution Assist Feature Generation Using Generative Adversarial Networks". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 40 (2): 373–385. doi:10.1109/TCAD.2020.2995338. ISSN 1937-4151.
  32. ^ a b Yang, Haoyu; Li, Shuhe; Ma, Yuzhe; Yu, Bei; Young, Evangeline F. Y. (24 June 2018). "GAN-OPC: mask optimization with lithography-guided generative adversarial nets". Proceedings of the 55th Annual Design Automation Conference. DAC '18. New York, NY, USA: Association for Computing Machinery: 1–6. doi:10.1145/3195970.3196056. ISBN 978-1-4503-5700-5.
  33. ^ a b Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei. "Imbalance aware lithography hotspot detection: a deep learning approach". SPIE Digital Library. doi:10.1117/1.JMM.16.3.033504.
  34. ^ a b Xie, Zhiyao; Fang, Guan-Qi; Huang, Yu-Hung; Ren, Haoxing; Zhang, Yanqing; Khailany, Brucek; Fang, Shao-Yun; Hu, Jiang; Chen, Yiran; Barboza, Erick Carvajal (January 2020). "FIST: A Feature-Importance Sampling and Tree-Based Method for Automatic Design Flow Parameter Tuning". 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC): 19–25. doi:10.1109/ASP-DAC47756.2020.9045201.
  35. ^ a b Yan, Zheyu; Qin, Yifan; Hu, Xiaobo Sharon; Shi, Yiyu (September 2023). "On the Viability of Using LLMs for SW/HW Co-Design: An Example in Designing CiM DNN Accelerators". 2023 IEEE 36th International System-on-Chip Conference (SOCC): 1–6. doi:10.1109/SOCC58585.2023.10256783.
  36. ^ a b Li, Mengming; Fang, Wenji; Zhang, Qijun; Xie, Zhiyao (24 January 2024). "SpecLLM: Exploring Generation and Review of VLSI Design Specification with Large Language Model". arXiv.org. Retrieved 14 June 2025.
  37. ^ a b Liu, Mingjie; Pinckney, Nathaniel; Khailany, Brucek; Ren, Haoxing (October 2023). "Invited Paper: VerilogEval: Evaluating Large Language Models for Verilog Code Generation". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–8. doi:10.1109/ICCAD57390.2023.10323812.
  38. ^ a b Lu, Yao; Liu, Shang; Zhang, Qijun; Xie, Zhiyao (January 2024). "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model". 2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC): 722–727. doi:10.1109/ASP-DAC58780.2024.10473904.
  39. ^ a b Thakur, Shailja; Blocklove, Jason; Pearce, Hammond; Tan, Benjamin; Garg, Siddharth; Karri, Ramesh (4 June 2024), AutoChip: Automating HDL Generation Using LLM Feedback, arXiv, doi:10.48550/arXiv.2311.04887, arXiv:2311.04887, retrieved 14 June 2025
  40. ^ a b Wu, Haoyuan; He, Zhuolun; Zhang, Xinyun; Yao, Xufeng; Zheng, Su; Zheng, Haisheng; Yu, Bei (October 2024). "ChatEDA: A Large Language Model Powered Autonomous Agent for EDA". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 43 (10): 3184–3197. doi:10.1109/TCAD.2024.3383347. ISSN 1937-4151.
  41. ^ Belcic, Ivan; Stryker, Cole (28 December 2024). "What Is Supervised Learning? | IBM". www.ibm.com. Retrieved 14 June 2025.
  42. ^ "What is Supervised Learning?". Google Cloud. Retrieved 14 June 2025.
  43. ^ "A guide to machine learning algorithms and their applications". www.sas.com. Retrieved 14 June 2025.
  44. ^ "Supervised Learning". www.mathworks.com. Archived from the original on 12 February 2025. Retrieved 14 June 2025.
  45. ^ "What Is Unsupervised Learning? | IBM". www.ibm.com. 23 September 2021. Retrieved 14 June 2025.
  46. ^ Yasar, Kinza; Gillis, Alexander S.; Pratt, Mary K. "What is Unsupervised Learning? | Definition from TechTarget". Search Enterprise AI. Retrieved 14 June 2025.
  47. ^ Wang, Ziyi; Bai, Chen; He, Zhuolun; Zhang, Guangliang; Xu, Qiang; Ho, Tsung-Yi; Yu, Bei; Huang, Yu (23 August 2022). "Functionality matters in netlist representation learning". Proceedings of the 59th ACM/IEEE Design Automation Conference. DAC '22. New York, NY, USA: Association for Computing Machinery: 61–66. doi:10.1145/3489517.3530410. ISBN 978-1-4503-9142-9.
  48. ^ "Reinforcement Learning". GeeksforGeeks. 25 April 2018. Retrieved 14 June 2025.
  49. ^ "Deep RL Bootcamp - Lectures". sites.google.com. Retrieved 14 June 2025.
  50. ^ "Synopsys.ai Unveiled as Industry's First Full-Stack, AI-Driven EDA Suite for Chipmakers". news.synopsys.com. Retrieved 14 June 2025.
  51. ^ Attar, Janet (11 January 2024). "AI-Driven Macro Placement Boosts PPA". Semiconductor Engineering. Retrieved 17 June 2025.
  52. ^ "What is ChatGPT, DALL-E, and generative AI? | McKinsey". www.mckinsey.com. Retrieved 14 June 2025.
  53. ^ Routley, Nick. "What is generative AI? An AI explains". World Economic Forum. Archived from the original on 12 May 2025. Retrieved 14 June 2025.
  54. ^ "What is LLM? - Large Language Models Explained - AWS". Amazon Web Services, Inc. Retrieved 14 June 2025.
  55. ^ "What are Large Language Models? | NVIDIA Glossary". NVIDIA. Retrieved 14 June 2025.
  56. ^ Robinson, Scott; Yasar, Kinza; Lewis, Sarah. "What is a Generative Adversarial Network (GAN)? | Definition from TechTarget". Search Enterprise AI. Retrieved 14 June 2025.
  57. ^ "What Are Generative Adversarial Networks (GANs)?". Amazon Web Services. Retrieved 7 June 2025.
  58. ^ "Cadence.AI: Transforming Chip Design with Agentic AI Workflows". Cadence Design Systems. Retrieved 7 June 2025.
  59. ^ "What is Electronic Design Automation (EDA)?". Cadence Design Systems. Retrieved 7 June 2025.
  60. ^ "A new era of EDA powered by AI". Siemens Digital Industries Software. Retrieved 7 June 2025.
  61. ^ "Vitis AI Developer Hub". AMD. Retrieved 7 June 2025.
  62. ^ "AMD Vivado Design Suite". AMD. Retrieved 7 June 2025.
  63. ^ "Design Automation Research Group". NVIDIA Research. Retrieved 7 June 2025.
  64. ^ Agnesina, Anthony; Ren, Mark (27 March 2023). "AutoDMP Optimizes Macro Placement for Chip Design with AI and GPUs". NVIDIA Developer Blog. Retrieved 7 June 2025.
  65. ^ "Scaling Your Chip Design Flow" (PDF). Google Cloud. Retrieved 7 June 2025.
  66. ^ "Leveraging IBM Cloud for electronic design automation (EDA) workloads". IBM. 31 October 2023. Retrieved 7 June 2025.


External links

The OpenROAD Project – Official website for the open-source autonomous layout generator.

Design Automation Conference (DAC) – Premier academic and industry conference for EDA.

Category:Electronic design automation
Category:Applications of artificial intelligence
Category:Semiconductor device fabrication
Category:Integrated circuits