AI-driven design automation

From Wikipedia, the free encyclopedia
'''AI-driven design automation''' is the use of [[artificial intelligence]] (AI) to automate and improve parts of the [[electronic design automation]] (EDA) process, especially in the design of [[integrated circuit]]s (chips) and complex electronic systems. The field has grown in importance because it helps address the rising complexity, high costs, and time-to-market pressures of the [[semiconductor industry]]. AI-driven design automation draws on several methods, including [[machine learning]], [[expert system]]s, and [[reinforcement learning]], applied to tasks ranging from planning a chip's architecture and [[logic synthesis]] to its [[physical design (electronics)|physical design]] and final [[verification and validation|verification]]. This article covers the history of AI in EDA, explains its main methods, discusses important applications, and looks at how it affects chip design and the semiconductor industry.
[[File:EDA Large Circuit Models.png|thumb|Large circuit models can be trained using the front-end (blue) and back-end (yellow) of the EDA flow, either to enhance existing EDA tools or to create new EDA applications.<ref name=":0">{{Citation |last=Chen |first=Lei |title=The Dawn of AI-Native EDA: Opportunities and Challenges of Large Circuit Models |date=2024-05-01 |url=http://arxiv.org/abs/2403.07257 |access-date=2025-06-14 |publisher=arXiv |doi=10.48550/arXiv.2403.07257 |id=arXiv:2403.07257 |last2=Chen |first2=Yiqi |last3=Chu |first3=Zhufei |last4=Fang |first4=Wenji |last5=Ho |first5=Tsung-Yi |last6=Huang |first6=Ru |last7=Huang |first7=Yu |last8=Khan |first8=Sadaf |last9=Li |first9=Min}}</ref>]]


==History==
Other systems like DAA (Design Automation Assistant) showed how rule-based approaches could handle specific tasks, such as [[Register-transfer level|register transfer level]] (RTL) design for systems like the IBM 370.<ref name="ParkerHayati1987"/> Research at Carnegie Mellon University also produced TALIB, an expert system for mask layout that used over 1,200 rules, and EMUCS/DAA, for CPU architectural design with about 70 rules. These projects suggested that AI worked best on problems where a modest set of rules could process a large amount of data.<ref name="Kirk1985"/> At the same time, [[silicon compiler]]s such as MacPitts, Arsenic, and Palladio appeared. They used algorithms and search techniques to explore different design possibilities, offering another route to design automation that was not always based on expert systems.<ref name="Kirk1985"/> Early experiments with [[artificial neural network|neural networks]] in [[Very Large Scale Integration|VLSI]] design also took place during this period, although they were less common than rule-based systems.


===2000s: Introduction of machine learning===
In the 2000s, interest in AI for design automation came back. This was mostly because of better [[machine learning]] (ML) algorithms and more available data from design and manufacturing. ML methods started being used for complex problems. For example, they were used to model and reduce the effects of small manufacturing differences in semiconductor devices. This became very important as the size of components on chips became smaller. The large amount of data created during chip design provided the foundation needed to train smarter ML models. This allowed for predicting outcomes and optimizing in areas that were hard to automate before.


===2016–2020: Reinforcement learning and large scale initiatives===
A major turning point happened in the mid to late 2010s, sparked by successes in other areas of AI. The success of [[DeepMind]]'s [[AlphaGo]] in mastering the game of [[Go (game)|Go]] inspired researchers. They began to apply [[Reinforcement learning|reinforcement learning (RL)]] to difficult EDA problems. These problems often require searching through many options and making a series of decisions.


A clear demonstration of RL's potential came from Google researchers between 2020 and 2021. They created a deep reinforcement learning method for planning the layout of a chip, known as [[Floorplan (microelectronics)|floorplanning]]. They reported that this method created layouts that were as good as or better than those made by human experts, and it did so in less than six hours.<ref name="MirhoseiniNature2021"/> This method used a type of network called a [[graph neural network|graph convolutional neural network]]. It showed that it could learn general patterns that could be applied to new problems, getting better as it saw more chip designs. The technology was later used to design Google's [[Tensor Processing Unit|Tensor Processing Unit (TPU)]] accelerators.<ref name="MirhoseiniNature2021"/>


===2020s: Autonomous systems and agents===
Entering the 2020s, the industry saw the commercial launch of autonomous AI-driven EDA systems. For example, Synopsys launched DSO.ai (Design Space Optimization AI) in early 2020, calling it the industry's first autonomous artificial intelligence application for chip design.<ref name="SynopsysDSOlaunch2020" /><ref name="SynopsysDSOsite" /> The system uses reinforcement learning to search the huge space of possible solutions for the best ways to optimize a design, aiming to improve power, performance, and area (PPA).<ref name="SynopsysDSOsite" /> By 2023, DSO.ai had been used in over 100 commercial chip productions, a sign of wide industry adoption.<ref name="EETimesDSO mainstream" /> Synopsys later expanded its AI tools into a suite called Synopsys.ai, with the goal of applying AI across the entire EDA workflow, including verification and testing.<ref name="ForbesSynopsysAIsuite" /><ref name="VirtualGraffitiSynopsysAI" />


These advancements, which combine modern AI methods with [[cloud computing]] and large data resources, have led to talks about a new phase in EDA. Industry experts and participants sometimes call this 'EDA 4.0'.<ref name="SemiconductorEngineeringEDA4.0" /><ref name="UnipvNewsEDA4.0" /> This new era is defined by the widespread use of AI and machine learning to deal with growing design complexity, automate more of the design process, and help engineers handle the huge amounts of data that EDA tools create.<ref name="SemiconductorEngineeringEDA4.0" /><ref name="EmbeddedAIbasedEDA" /> The purpose of EDA 4.0 is to optimize product performance, get products to market faster, and make development and manufacturing smoother through intelligent automation.<ref name="UnipvNewsEDA4.0" />


== AI methods ==


[[Artificial intelligence]] techniques are increasingly used to solve difficult problems in electronic design automation. These methods analyze large amounts of design data, learn complex patterns, and automate decisions. The goals are to improve design quality, speed up the design process, and manage the growing complexity of semiconductor manufacturing. Important approaches include supervised learning, unsupervised learning, reinforcement learning, and generative AI.


===Supervised learning===
[[Supervised learning]] is a type of machine learning where algorithms learn from data that is already labeled.<ref name="DefSupervisedLearningIBM"/> This means every piece of input data in the training set has a known correct answer or "label."<ref name="DefSupervisedLearningGoogle"/> The algorithm learns to connect inputs to outputs by finding the patterns and connections in the training data.<ref name="DefSupervisedLearningSAS"/> After it is trained, the model can then make predictions on new data it has not seen before.<ref name="DefSupervisedLearningMathWorks"/>


In electronic design automation, supervised learning is useful for tasks where past data can predict future results or spot certain problems. This includes estimating design metrics like performance, power, and timing. For example, Ithemal estimates CPU performance,<ref name="Mendis2019Ithemal"/> PRIMAL predicts power use at the RTL stage,<ref name="Zhou2019Primal"/> and other methods predict timing delays in circuits by analyzing their structure.<ref name="Kahng2018PBAfromGBA"/><ref name="Cao2022TFpredictor"/> It is also used to classify parts of a design to find potential problems, like lithography hotspots<ref name="Yang2017Hotspot"/> or predicting how easy a design will be to route.<ref name="Xie2018RouteNet"/> Learning circuit representations that are aware of their function also often uses supervised methods.<ref name="Shi2023DeepGate2"/>
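The prediction tasks above can be illustrated with a deliberately tiny supervised example. Everything below is invented for illustration (real predictors such as Ithemal or PRIMAL use rich circuit features and deep networks, not a single gate-count feature): a least-squares model is fit on labeled delay data and then queried on an unseen input.

```python
# Toy supervised sketch (synthetic data): predict a path delay (the label)
# from a gate-count feature, using ordinary least squares.

def fit_least_squares(xs, ys):
    """Fit y ≈ a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Synthetic "labeled" training set: gate counts on a path -> measured delay (ns).
gate_counts = [4, 8, 15, 23, 30]
delays_ns = [0.9, 1.7, 3.2, 4.8, 6.1]

a, b = fit_least_squares(gate_counts, delays_ns)
predicted = a * 12 + b   # query the trained model on an unseen 12-gate path
```

The trained model generalizes to the unseen input because it has captured the (here, linear) relationship in the labeled examples, which is the essence of the supervised setting.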


===Unsupervised learning===
[[Unsupervised learning]] involves training algorithms on data without any labels. This lets the models find hidden patterns, structures, or connections in the data by themselves.<ref name="DefUnsupervisedLearningAWS"/> Common tasks are clustering (which groups similar data together), dimensionality reduction (which reduces the number of variables but keeps important information), and association rule mining (which finds relationships between variables).<ref name="DefUnsupervisedLearningTechTarget"/>


In EDA, these methods are valuable for looking through complex design data to find insights that are not obvious. For instance, clustering can group design settings or tool configurations, which helps in automatically tuning the design process, as seen in the FIST tool.<ref name="Xie2020Fist"/> A major use is in representation learning, where the aim is to automatically learn useful and often simpler representations (features or embeddings) of circuit data. This could involve learning embeddings for analog circuit structures using methods based on graphs<ref name="Lu2022AnalogTopologyOpt"/> or understanding the function of netlists through contrastive learning methods.<ref name="Wang2022NetlistFunctionality"/>
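As a toy sketch of the clustering use case (the configuration vectors and cluster count below are invented), a minimal k-means can group similar tool configurations without any labels:

```python
# Minimal k-means sketch (hypothetical data): group EDA tool-configuration
# vectors by similarity, in the spirit of clustering-based flow tuning.
import math

def kmeans(points, k, iters=20):
    centers = list(points[:k])  # naive init: first k points as centers
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            tuple(sum(p[d] for p in cl) / len(cl) for d in range(len(points[0])))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical configurations: (effort level, target utilization).
configs = [(1, 0.60), (1, 0.62), (2, 0.61), (9, 0.85), (8, 0.88), (9, 0.90)]
centers, clusters = kmeans(configs, k=2)
```

On this made-up data the algorithm separates the low-effort and high-effort configurations into two groups, with no labels supplied.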


===Reinforcement learning===
[[Reinforcement learning]] (RL) is a kind of machine learning where an agent, or a computer program, learns to make the best decisions by trying things out in a simulated environment. The agent takes actions, moves between different states, and gets rewards or penalties as feedback. The main goal is to get the highest total reward over time.<ref name="DefReinforcementLearningMicrosoft"/> RL is different from supervised learning because it does not need labeled data. It also differs from unsupervised learning because it learns by trial and error to achieve a specific goal.<ref name="DefReinforcementLearningBerkeley"/>


In EDA, RL is especially good for tasks that require making a series of decisions to find the best solution in very complex situations with many variables. Its adoption by commercial EDA products shows its growing importance.<ref name="SynopsysAI2023"/> RL has been used for physical design problems like chip floorplanning. In this task, an agent learns to place blocks to improve things like wire length and performance.<ref name="Mirhoseini2021Floorplanning"/><ref name="Xu2021GoodFloorplan"/> In logic synthesis, RL can guide how optimization steps are chosen and in what order they are applied to get better results, as seen in methods like AlphaSyn.<ref name="Pei2023AlphaSyn"/> Adjusting the size of gates to optimize timing is another area where RL agents can learn effective strategies.<ref name="Lu2021RLSizer"/>
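A minimal, entirely synthetic sketch of the RL formulation: a tabular Q-learning agent chooses which of two hypothetical optimization passes to apply at each step of a fixed budget, with made-up rewards standing in for area savings reported by a tool. Real systems use far larger state spaces and deep networks rather than a table.

```python
# Toy Q-learning sketch (synthetic rewards, not a real synthesis engine):
# learn an order for two hypothetical passes over a 3-step budget.
import random

ACTIONS = ["rewrite", "refactor"]
STEPS = 3

def reward(step, action):
    # Made-up area savings: "rewrite" pays off early, "refactor" pays off late.
    table = {("rewrite", 0): 5, ("rewrite", 1): 2, ("rewrite", 2): 1,
             ("refactor", 0): 1, ("refactor", 1): 3, ("refactor", 2): 4}
    return table[(action, step)]

random.seed(0)
q = {(s, a): 0.0 for s in range(STEPS) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    for s in range(STEPS):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a)
        future = max(q[(s + 1, x)] for x in ACTIONS) if s + 1 < STEPS else 0.0
        q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])

# Greedy policy after training: which pass to run at each step.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(STEPS)]
```

The agent learns the sequence by trial and error from reward feedback alone, with no labeled examples of good pass orders, which is what distinguishes RL from the supervised setting.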
In EDA, generative AI is being used in many ways, especially through Large Language Models (LLMs) and other architectures like Generative Adversarial Networks (GANs).


====Large language models (LLMs)====
[[Large language model|Large Language Models]] are deep learning models, often based on the transformer architecture. They are pre trained on huge amounts of text and code.<ref name="DefLLMAWS"/> They are very good at understanding, summarizing, creating, and predicting human language and programming languages.<ref name="DefLLMNvidia"/>

Their abilities are being used in EDA for jobs such as:




'''Verification Assistance:''' Researchers are looking into using LLMs to create verification parts like SystemVerilog Assertions (SVAs) from plain language descriptions.
====Other generative models====
Besides LLMs, other generative models like [[Generative adversarial network|Generative Adversarial Networks]] (GANs) are also used in EDA. A GAN has two neural networks, a generator and a discriminator, which are trained in a competition against each other.<ref name="DefGANOracle"/> The generator learns to make data samples that look like the training data, while the discriminator learns to tell the difference between real and generated samples.<ref name="DefGANAWS"/>

In physical design, GANs have been used for tasks like creating sub resolution assist features (SRAFs) to make chips easier to manufacture in lithography (GAN SRAF<ref name="Alawieh2020GANSRAF" />) and for optimizing masks (GAN OPC<ref name="Yang2018GANOPC" />).


== Applications ==


Artificial intelligence (AI) is being used in many stages of the electronic design workflow. It aims to improve efficiency, get better results, and handle the growing complexity of modern integrated circuits.<ref name="Rapp2021MLCAD"/> <ref name="Gubbi2022Survey"/> <ref name=":0" /> AI helps designers from the very first ideas about architecture all the way to manufacturing and testing.


===High level synthesis and architectural exploration===
In the first phases of chip design, AI helps with [[High-level synthesis|High Level Synthesis (HLS)]] and exploring different system level design options (DSE). These processes are key for turning general ideas into detailed hardware plans.<ref name="Rapp2021MLCAD"/> AI algorithms, often using [[Supervised learning|supervised learning]], are used to build simpler, substitute models. These models can quickly guess important design measurements like area, performance, and power for many different architectural options or HLS settings.<ref name="Rapp2021MLCAD"/><ref name=":0" /> Being able to guess quickly reduces the need for lengthy simulations. This allows for exploring a wider range of possible designs.<ref name="Rapp2021MLCAD"/> For example, the Ithemal tool uses deep neural networks to estimate how fast basic code blocks will run, which helps in making processor architecture decisions.<ref name="Mendis2019Ithemal"/> Similarly, PRIMAL uses machine learning for guessing power use at the register transfer level (RTL), giving early information about how much power the chip will use.<ref name="Zhou2019Primal"/> [[Reinforcement learning|Reinforcement learning (RL)]] and [[Bayesian optimization]] are also used to guide the DSE process. They help search through the many parameters to find the best HLS settings or architectural details like cache sizes.<ref name="Rapp2021MLCAD"/><ref name="Gubbi2022Survey"/> LLMs are also being tested for creating architectural plans or initial C code for HLS, as seen with GPT4AIGChip.<ref name="Fu2023GPT4AIGChip"/><ref name=":0" />
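The surrogate-driven exploration described above can be shown schematically. The two "HLS knobs" and the analytic cost function below are invented, standing in for an ML-predicted latency/area estimate; real flows use learned models and smarter search strategies such as Bayesian optimization rather than exhaustive enumeration.

```python
# Schematic DSE sketch (invented parameter space and cost model): enumerate
# a small grid of HLS-style knobs and score each point with a fast
# surrogate estimate instead of a slow synthesis run.
from itertools import product

def surrogate_cost(unroll, ii):
    # Made-up analytic stand-in for an ML-predicted (latency, area) estimate.
    latency = 1000 / (unroll * (4 / ii))   # more unrolling, lower II -> faster
    area = 50 * unroll + 100 / ii          # ...at the price of more area
    return latency + 0.5 * area            # single weighted objective

space = list(product([1, 2, 4, 8], [1, 2, 4]))   # (unroll factor, initiation interval)
best_point = min(space, key=lambda p: surrogate_cost(*p))
```

Because each surrogate evaluation is cheap, all twelve candidate designs are scored instantly; with a real simulator each point might take hours, which is why fast substitute models widen the explorable design space.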


===Logic synthesis and optimization===
[[Logic synthesis]] is the process of changing a high level hardware description into an optimized list of electronic gates, known as a [[Netlist|gate level netlist]], that is ready for a specific manufacturing process. AI methods help with different parts of this process, including logic optimization, technology mapping, and making improvements after mapping.<ref name="Rapp2021MLCAD"/><ref name="Gubbi2022Survey"/> Supervised learning, especially with [[Graph neural network|Graph Neural Networks (GNNs)]] which are good at handling data that looks like a circuit diagram, helps create models to predict design properties like power or error rates in approximate circuits.<ref name="Rapp2021MLCAD"/><ref name=":0" /> These predictions then guide the optimization algorithms. Reinforcement learning is used to perform logic optimization directly. For example, agents are trained to choose a series of logic changes to reduce area while meeting timing goals.<ref name="Rapp2021MLCAD"/><ref name=":0" /> AlphaSyn uses Monte Carlo Tree Search with RL to optimize logic for smaller area.<ref name="Pei2023AlphaSyn"/> FlowTune uses a multi armed bandit strategy to choose synthesis flows.<ref name=":0" /> AI can also adjust parameters for entire synthesis flows, learning from old designs to recommend the best tool settings for new ones.<ref name="Rapp2021MLCAD"/>
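The flow-selection idea can be sketched as a multi-armed bandit. The flow names and quality-of-results numbers below are invented, and the simple epsilon-greedy rule is only one of many bandit strategies:

```python
# Toy bandit sketch (invented flows and rewards): learn which synthesis
# flow tends to give the best quality of results.
import random

random.seed(1)
FLOWS = ["flow_a", "flow_b", "flow_c"]

def run_flow(flow):
    # Stand-in for a real synthesis run: noisy area reduction in percent.
    mean = {"flow_a": 2.0, "flow_b": 8.0, "flow_c": 5.0}[flow]
    return random.gauss(mean, 1.0)

counts = {f: 0 for f in FLOWS}
values = {f: 0.0 for f in FLOWS}   # running mean reward per flow

for trial in range(300):
    if random.random() < 0.1:                   # explore occasionally
        flow = random.choice(FLOWS)
    else:                                        # otherwise exploit the leader
        flow = max(FLOWS, key=lambda f: values[f])
    r = run_flow(flow)
    counts[flow] += 1
    values[flow] += (r - values[flow]) / counts[flow]

best = max(FLOWS, key=lambda f: values[f])
```

The bandit framing fits flow selection because each "pull" (a full synthesis run) is expensive, so the learner must balance trying under-sampled flows against re-running the current best one.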

===Physical design===
[[Physical design (electronics)|Physical design]] turns the gate-level netlist into a physical layout. This layout defines exactly where each component goes and how they are all connected. AI is used extensively in this area to improve power, performance, and area (PPA) metrics.<ref name="Rapp2021MLCAD"/><ref name="Gubbi2022Survey"/>
====Placement====
[[Place and route#Placement|Placement]] is the task of finding the best spots for large circuit blocks, called macros, and smaller standard cells. Reinforcement learning has notably been used for macro placement, where an agent learns to position blocks to reduce wire length and improve timing, as shown by Google's research<ref name="Mirhoseini2021Floorplanning"/> and the GoodFloorplan method.<ref name="Xu2021GoodFloorplan"/> Supervised learning models, including [[Convolutional neural network|CNNs]] that treat the layout like an image, are used to predict routing problems such as [[Design rule checking|DRVs]] (e.g., RouteNet<ref name="Xie2018RouteNet"/>) or post-routing timing directly from the placement information.<ref name="Rapp2021MLCAD"/><ref name=":0" /> RL-Sizer uses deep RL to optimize gate sizes during placement to meet timing goals.<ref name="Lu2021RLSizer"/>
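The objective that both classical and RL-based placers optimize is usually a proxy for routed wirelength. A standard such proxy is half-perimeter wirelength (HPWL), sketched below with hypothetical cell coordinates (the reward an RL placement agent maximizes is typically the negated wirelength, possibly combined with congestion and timing terms):

```python
def hpwl(placement, nets):
    """Half-perimeter wirelength: for each net, the half perimeter of the
    bounding box enclosing its pins. A fast, widely used proxy for routed
    wirelength in placement optimization."""
    total = 0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Toy example: three cells on a grid and two nets (hypothetical data).
placement = {"A": (0, 0), "B": (4, 3), "C": (2, 1)}
nets = [["A", "B"], ["B", "C"]]
reward = -hpwl(placement, nets)  # an RL agent would maximize this
```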
====Clock network synthesis====
AI helps in clock tree synthesis (CTS) by optimizing the network that distributes the clock signal. GANs, sometimes combined with RL (e.g., GAN-CTS), are used to predict and improve clock tree structures, with the goal of reducing clock skew and power consumption.<ref name="Rapp2021MLCAD"/><ref name="Gubbi2022Survey"/><ref name="Chen2024AINativeEDA"/>
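The quantity such optimizers minimize is the clock skew, i.e. the spread in clock arrival times across the sequential elements. A minimal sketch with hypothetical arrival times (picoseconds):

```python
def clock_skew(arrivals_ps):
    """Clock skew: difference between the latest and earliest arrival of
    the clock edge at the sinks (flip-flops), in picoseconds."""
    return max(arrivals_ps.values()) - min(arrivals_ps.values())

# Hypothetical arrival times at three flip-flops after CTS.
arrivals = {"ff1": 102.0, "ff2": 98.5, "ff3": 100.0}
skew = clock_skew(arrivals)
```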
====Routing====
[[Place and route#Routing|Routing]] creates the physical wire connections. AI models predict routing congestion using methods such as GANs to help guide the routing algorithms.<ref name="Rapp2021MLCAD"/><ref name=":0" /> RL is also used to optimize the order in which nets are routed so as to reduce routing errors.<ref name="Rapp2021MLCAD"/>
====Power/ground network synthesis and analysis====
AI models, including CNNs and tree-based methods, help in designing and analyzing the power delivery network (PDN) by quickly estimating static and dynamic [[IR drop]]. This guides the creation of the PDN and reduces the number of design iterations.<ref name="Rapp2021MLCAD"/><ref name="Gubbi2022Survey"/><ref name=":0" />
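The common thread in these estimators is replacing slow sign-off analysis with a fast model fitted to past results. As a drastically simplified stand-in for the CNN and tree-based models in the literature, the sketch below fits a one-variable least-squares line predicting IR drop from local current density; the training numbers are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ≈ a*x + b, in closed form."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical training data: local current density (mA) vs. measured
# static IR drop (mV) from a few previously analyzed tiles.
density = [1.0, 2.0, 3.0, 4.0]
drop = [2.1, 4.0, 6.1, 8.0]
a, b = fit_line(density, drop)

def predict_ir_drop(x):
    """Cheap surrogate evaluation, replacing a full PDN analysis run."""
    return a * x + b
```

Once fitted, the surrogate can be evaluated millions of times during PDN exploration at negligible cost, which is what makes the reduced design-iteration count possible.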

===Verification and validation===
[[Verification and validation (software)|Verification and validation]] are essential to ensure a chip works correctly, and AI is used to make these often time-consuming processes more efficient.<ref name="Rapp2021MLCAD"/> LLMs are used to turn plain-language requirements into formal [[SystemVerilog]] assertions (SVAs) (e.g., AssertLLM)<ref name=":0" /> and to help with security verification.<ref name=":0" /> Methods that predict timing analysis results from circuit structure, like those first developed by Kahng et al.<ref name="Kahng2018PBAfromGBA"/> and improved with transformer models such as TF-Predictor,<ref name="Cao2022TFpredictor"/> make the final timing checks much faster. DeepGate2 provides a way to learn functionality-aware circuit representations, which can help with verification tasks.<ref name="Shi2023DeepGate2"/>

===Analog and mixed signal design===
AI methods are increasingly used in the complex field of [[Analog circuit|analog]] and [[Mixed-signal integrated circuit|mixed-signal circuit]] design, helping to choose circuit topologies, size components, and automate layout.<ref name="Rapp2021MLCAD"/><ref name=":0" /> AI models, including variational autoencoders (VAEs) and RL, help to explore and create new circuit structures.<ref name=":0" /> For instance, graph embeddings can be used to optimize the structure of operational amplifiers.<ref name="Lu2022AnalogTopologyOpt"/> Machine learning surrogate models give fast performance estimates for component sizing, while RL directly optimizes the component parameters.<ref name=":0" /><ref name="Rapp2021MLCAD"/>
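The sizing loop pairs a cheap performance model with an optimizer. The sketch below uses a hypothetical analytic surrogate for amplifier gain (in practice the surrogate would be an ML model trained on SPICE simulations) and plain random search in place of the RL or Bayesian optimizers used in the literature:

```python
import random

def surrogate_gain(width_um):
    """Hypothetical cheap surrogate for simulated amplifier gain (dB),
    peaking at a transistor width of 12 µm. A stand-in for an ML model
    trained on circuit-simulation data."""
    return 40.0 - 0.5 * (width_um - 12.0) ** 2

def random_search(lo, hi, iters=500, seed=1):
    """Optimize the surrogate by uniform random sampling of the width."""
    random.seed(seed)
    best_w, best_g = None, float("-inf")
    for _ in range(iters):
        w = random.uniform(lo, hi)
        g = surrogate_gain(w)
        if g > best_g:
            best_w, best_g = w, g
    return best_w, best_g

best_w, best_g = random_search(5.0, 20.0)
```

Because each surrogate evaluation is nearly free compared to a transistor-level simulation, the optimizer can afford hundreds of candidate sizings per second.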

===Test, manufacturing and yield optimization===
AI helps in the manufacturing and post-silicon stages, including testing, [[design for manufacturability]] (DFM), and improving the [[Semiconductor device fabrication#Wafer processing|production yield]].<ref name="Rapp2021MLCAD"/> In lithography, AI models such as CNNs and GANs are used for sub-resolution assist feature (SRAF) generation (e.g., GAN-SRAF<ref name="Alawieh2020GANSRAF"/>) and [[Optical proximity correction|OPC]] (e.g., GAN-OPC<ref name="Yang2018GANOPC"/>) to improve how well the layout prints. AI also predicts lithography problems, known as hotspots, from the layout, as shown by Yang et al.<ref name="Yang2017Hotspot"/> For tuning the broader design flow for manufacturability, FIST uses tree-based methods to select parameters.<ref name="Xie2020Fist"/>

===Hardware-software co-design===
Hardware-software co-design optimizes the hardware and software parts of a system at the same time, and LLMs are starting to be used as tools to help with it. For example, they help in designing Compute-in-Memory (CiM) DNN accelerators, where the software mapping and the hardware configuration are tightly coupled, as explored by Yan et al.<ref name="Yan2023LLMsForCiM"/><ref name="Chen2024AINativeEDA"/> LLMs can also create architectural plans (e.g., SpecLLM<ref name="Li2024SpecLLM"/>) or HDL code, evaluated with benchmarks such as VerilogEval<ref name="Liu2023VerilogEval"/> and RTLLM,<ref name="Lu2024RTLLM"/> or generated with tools like AutoChip.<ref name="Thakur2023AutoChip"/> Additionally, LLM-based agents such as ChatEDA make it easier to interact with EDA tools across design stages.<ref name="Wu2024ChatEDA"/> For netlist representation, work by Wang et al. focuses on learning embeddings that capture the function of the circuit.<ref name="Wang2022NetlistFunctionality"/>

==Industry adoption and ecosystem==

The use of artificial intelligence in electronic design automation is a widespread trend. Many different players in the semiconductor world are helping to create and use these technologies. This includes companies that sell EDA tools and develop software with AI, semiconductor design companies and foundries that use these tools to make and manufacture chips, and very large technology companies that might design their own chips using AI driven methods.

===EDA tool vendors===
Major EDA companies are leading the way in adding AI to their tool suites to handle growing design complexity. Their strategies often involve creating complete AI platforms. These platforms use machine learning in many different steps of the design and manufacturing process.

[[Synopsys]] provides a set of tools under its Synopsys.ai initiative, which aims to improve design metrics and productivity from the system architecture stage all the way to manufacturing.<ref name="VirtualGraffitiSynopsysAI" /> A main component (DSO.ai) uses reinforcement learning to improve power, performance, and area (PPA) across the flow from the initial design description to the final manufacturing database. Other parts use AI to speed up verification, optimize test pattern generation for manufacturing, and improve the design of analog circuits across operating conditions.<ref name="VirtualGraffitiSynopsysAI" />

[[Cadence Design Systems|Cadence]] has created its Cadence.AI platform. The company says it uses "agentic AI workflows" to cut down on the design engineering time for complex SoCs.<ref name="CadenceAIOverview" /> Key platforms use AI to optimize the digital design flow (Cadence Cerebrus), improve verification productivity (Verisium), design custom and analog ICs (Virtuoso Studio), and analyze systems at a high level (Optimality Intelligent System Explorer).<ref name="CadenceAIOverview" /><ref name="CadenceEDAWhatIs" />
[[Siemens|Siemens EDA]] directs its AI strategy at improving its current software engines and workflows to give engineers better design insights.<ref name="SiemensEDAWhitePaper" /> AI is used inside its Calibre platform to speed up manufacturing tasks like Design for Manufacturability (DFM), Resolution Enhancement Techniques (RET), and Optical Proximity Correction (OPC). AI is also used in its Questa suite to close coverage faster in digital verification and in its Solido suite to lessen the characterization work for analog designs.<ref name="SiemensEDAWhitePaper" />

===Semiconductor design and FPGA companies===
Companies that design semiconductor chips, like FPGAs and adaptive SoCs, are major users and creators of EDA methods that are improved with AI to make their design processes more efficient.

[[Nvidia|NVIDIA]] has a dedicated Design Automation Research group that investigates new EDA methods.<ref name="NVIDIADAResearch" /> The group focuses on GPU-accelerated EDA tools and on applying AI methods such as Bayesian optimization and reinforcement learning to EDA problems. One example of its research is AutoDMP, a tool that automates macro placement using multi-objective Bayesian optimization and a GPU-accelerated placer.<ref name="NVIDIAAutoDMPBlog" />

===Cloud providers and hyperscalers===
Large cloud service providers and hyperscale companies have two main roles. They provide the powerful and flexible computing power needed to run difficult AI and EDA tasks, and many also design their own custom silicon, often using AI in their internal design processes.

[[IBM]] provides infrastructure on its cloud platform that is focused on EDA, with a strong emphasis on secure environments for foundries and high performance computing.<ref name="IBMEDABlog" /> Their solutions include high performance parallel storage and tools for managing large scale jobs. These are designed to help design houses manage the complex simulation and modeling tasks that are part of modern EDA.<ref name="IBMEDABlog" />

== Limitations and challenges ==

===Data quality and availability===
A main challenge for using AI effectively in EDA is the availability and quality of data.<ref name="Rapp2021MLCAD"/><ref name=":0" /> Machine learning models, especially deep learning models, usually need large, varied, high-quality datasets for training so that they generalize to designs they have not seen before.<ref name=":0" /> However, much of the detailed design data in the semiconductor industry is proprietary and highly sensitive, which makes companies unwilling to share it.<ref name="Rapp2021MLCAD"/><ref name=":0" /> This lack of public, detailed examples hampers academic research and the development of broadly applicable models. Even when data is available, it may be noisy, incomplete, or unbalanced; for instance, having many more examples of successful designs than problematic ones can lead to biased or poorly performing AI models.<ref name="Gubbi2022Survey"/> The effort and cost of collecting, organizing, and correctly labeling large EDA datasets also create significant obstacles.<ref name=":0" /> Solving these data-related problems is key to advancing AI in EDA; possible approaches include strong data augmentation methods, realistic synthetic data generation, and community platforms for secure data sharing and benchmarking.<ref name=":0" />

===Integration and compute cost===
Deploying AI solutions in EDA faces major challenges, including integrating AI into the complex toolchains that already exist and handling the high cost of computing power.<ref name="Rapp2021MLCAD"/><ref name="Gubbi2022Survey"/> Integrating new AI models and algorithms into established EDA workflows, which often consist of many interconnected tools and proprietary formats, takes significant engineering effort and can create interoperability problems.<ref name="Rapp2021MLCAD"/><ref name=":0" /> Training and running complex AI models, especially deep learning ones, also requires substantial computing resources: powerful GPUs or dedicated AI accelerators, large amounts of memory, and long processing times.<ref name="Rapp2021MLCAD"/><ref name=":0" /> These needs raise the cost of both creating and using the AI models.<ref name="Gubbi2022Survey"/> Scaling AI methods to the ever-growing size and complexity of modern chip designs, while staying efficient in runtime and memory, remains an ongoing challenge.<ref name="Gubbi2022Survey"/><ref name=":0" />

===Intellectual property and confidentiality===
The use of AI in EDA, especially with sensitive design data, raises serious concerns about protecting intellectual property (IP) and keeping data confidential.<ref name="Rapp2021MLCAD"/><ref name="Chen2024AINativeEDA"/> Chip designs are highly valuable IP, and exposing this confidential information to AI models carries risk, particularly if the models are built by third parties or run on cloud platforms.<ref name="Rapp2021MLCAD"/> It is critical to ensure that design data used for training or decision making is not compromised, leaked, or used to inadvertently reveal proprietary knowledge.<ref name="Chen2024AINativeEDA"/> While strategies such as fine-tuning open source models on private data are being explored to reduce some privacy risks,<ref name="Chen2024AINativeEDA"/> secure data handling rules, strong access controls, and clear data governance policies remain essential. The reluctance to share detailed design data because of these IP and privacy concerns also slows collaborative research and the development of better AI models for the EDA industry.<ref name="Rapp2021MLCAD"/><ref name="Chen2024AINativeEDA"/>

===Human oversight and interpretability===
Even with the push for more automation, the role of human designers remains vital, and making AI models interpretable continues to be a challenge.<ref name="Rapp2021MLCAD"/><ref name=":0" /> Many advanced AI models, especially deep learning systems, act as "black boxes", which makes it hard for engineers to understand why they make certain predictions or design choices.<ref name="Rapp2021MLCAD"/> This lack of transparency can hinder adoption, as designers may be reluctant to trust solutions whose decision making is opaque, especially in critical applications or when debugging unexpected problems.<ref name="Gubbi2022Survey"/> While AI can automate many tasks, human expertise is still essential: it is needed to set design goals, check AI-generated results, handle novel or unusual situations where AI might fail, and provide the specialized knowledge that often guides AI development.<ref name="Rapp2021MLCAD"/><ref name=":0" /> Effective use of AI in EDA therefore depends on human engineers and intelligent tools working well together, which requires designers to learn new skills for operating and supervising AI systems.<ref name=":0" />
==References==
<references>
<ref name="ParkerHayati1987">{{Cite journal |last=Parker |first=A.C. |last2=Hayati |first2=S. |date=1987-06 |title=Automating the VLSI design process using expert systems and silicon compilation |url=https://ieeexplore.ieee.org/document/1458066/ |journal=Proceedings of the IEEE |volume=75 |issue=6 |pages=777–785 |doi=10.1109/PROC.1987.13799 |issn=1558-2256}}</ref>
<ref name="BushnellDirector1986">{{Cite journal |last=Bushnell |first=M.L. |last2=Director |first2=S.W. |date=1986-06 |title=VLSI CAD Tool Integration Using the Ulysses Environment |url=https://ieeexplore.ieee.org/document/1586068/ |journal=23rd ACM/IEEE Design Automation Conference |pages=55–61 |doi=10.1109/DAC.1986.1586068}}</ref>
<ref name="GranackiKnappParker1985">{{Cite journal |last=Granacki |first=J. |last2=Knapp |first2=D. |last3=Parker |first3=A. |date=1985-06 |title=The ADAM Advanced Design Automation System: Overview, Planner and Natural Language Interface |url=https://ieeexplore.ieee.org/document/1586023/ |journal=22nd ACM/IEEE Design Automation Conference |pages=727–730 |doi=10.1109/DAC.1985.1586023}}</ref>
<ref name="Kirk1985">{{Cite book |last=Kirk |first=R. S. |url=https://doi.ieeecomputersociety.org/10.1109/AFIPS.1985.63 |title=The impact of AI technology on VLSI design |date=1985 |series=International Workshop on Managing Requirements Knowledge |publisher=IEEE Computer Society |location=Chicago |pages=125 |doi=10.1109/AFIPS.1985.63}}</ref>
<ref name="AjayiBlaauw2019">{{Cite journal |last=Ajayi |first=T. |last2=Blaauw |first2=D. |date=2019-01 |title=OpenROAD: Toward a Self-Driving, Open-Source Digital Layout Implementation Tool Chain |url=https://par.nsf.gov/biblio/10171024-openroad-toward-self-driving-open-source-digital-layout-implementation-tool-chain |journal=Proceedings of Government Microcircuit Applications and Critical Technology Conference |language=en}}</ref>
<ref name="MirhoseiniNature2021">{{Cite journal |last=Mirhoseini |first=Azalia |last2=Goldie |first2=Anna |last3=Yazgan |first3=Mustafa |last4=Jiang |first4=Joe Wenjie |last5=Songhori |first5=Ebrahim |last6=Wang |first6=Shen |last7=Lee |first7=Young-Joon |last8=Johnson |first8=Eric |last9=Pathak |first9=Omkar |last10=Nova |first10=Azade |last11=Pak |first11=Jiwoo |last12=Tong |first12=Andy |last13=Srinivasa |first13=Kavya |last14=Hang |first14=William |last15=Tuncer |first15=Emre |date=2021-06 |title=A graph placement methodology for fast chip design |url=https://www.nature.com/articles/s41586-021-03544-w |journal=Nature |language=en |volume=594 |issue=7862 |pages=207–212 |doi=10.1038/s41586-021-03544-w |issn=1476-4687}}</ref>
<ref name="SynopsysDSOlaunch2020">{{Cite web |title=Synopsys Advances State-of-the-Art in Electronic Design with Revolutionary Artificial Intelligence Technology |url=https://news.synopsys.com/2020-03-11-Synopsys-Advances-State-of-the-Art-in-Electronic-Design-with-Revolutionary-Artificial-Intelligence-Technology |access-date=2025-06-14 |website=news.synopsys.com |language=en}}</ref>
<ref name="SynopsysDSOsite">{{Cite web |title=DSO.ai: AI-Driven Design Applications {{!}} Synopsys AI |url=https://www.synopsys.com/ai/ai-powered-eda/dso-ai.html |access-date=2025-06-14 |website=www.synopsys.com |language=en}}</ref>
<ref name="EETimesDSO mainstream">{{Cite web |last=Ward-Foxton |first=Sally |date=2023-02-10 |title=AI-Powered Chip Design Goes Mainstream |url=https://www.eetimes.com/ai-powered-chip-design-goes-mainstream/ |access-date=2025-06-14 |website=EE Times}}</ref>
<ref name="ForbesSynopsysAIsuite">{{Cite web |last=Freund |first=Karl |title=Synopsys.ai: New AI Solutions Across The Entire Chip Development Workflow |url=https://www.forbes.com/sites/karlfreund/2023/03/29/synopsysai-new-ai-solutions-across-the-entire-chip-workflow/ |access-date=2025-06-14 |website=Forbes |language=en}}</ref>
<ref name="VirtualGraffitiSynopsysAI">{{cite web |title=Synopsys.ai – Full Stack, AI-Driven EDA Suite |url=https://www.synopsys.com/content/dam/synopsys/solutions/synopsys-ai-brochure.pdf |accessdate=June 7, 2025 |publisher=Synopsys}}</ref>
<ref name="SemiconductorEngineeringEDA4.0">{{Cite web |last=Yu |first=Dan |date=2023-06-01 |title=Welcome To EDA 4.0 And The AI-Driven Revolution |url=https://semiengineering.com/welcome-to-eda-4-0-and-the-ai-driven-revolution/ |access-date=2025-06-14 |website=Semiconductor Engineering |language=en-US}}</ref>
<ref name="UnipvNewsEDA4.0">{{cite web |date=November 29, 2023 |title=EDA 4.0 And The AI-Driven Revolution |url=https://www.unipv.news/sites/magazine/files/2023-11/2023_11_29_Locandina_EDA%204.0.pdf |accessdate=June 7, 2025 |publisher=unipv.news (reporting on a Siemens presentation)}}</ref>
<ref name="EmbeddedAIbasedEDA">{{Cite web |last=Dahad |first=Nitin |date=2022-11-10 |title=How AI-based EDA will enable, not replace the engineer |url=https://www.embedded.com/how-ai-based-eda-will-enable-not-replace-the-engineer/ |access-date=2025-06-14 |website=Embedded |language=en-US}}</ref>
<ref name="DefSupervisedLearningIBM">{{Cite web |last=Belcic |first=Ivan |last2=Stryker |first2=Cole |date=2024-12-28 |title=What Is Supervised Learning? {{!}} IBM |url=https://www.ibm.com/think/topics/supervised-learning |access-date=2025-06-14 |website=www.ibm.com |language=en}}</ref>
<ref name="DefSupervisedLearningGoogle">{{Cite web |title=What is Supervised Learning? |url=https://cloud.google.com/discover/what-is-supervised-learning |access-date=2025-06-14 |website=Google Cloud |language=en-US}}</ref>
<ref name="DefSupervisedLearningSAS">{{Cite web |title=A guide to machine learning algorithms and their applications |url=https://www.sas.com/en_gb/insights/articles/analytics/machine-learning-algorithms-guide.html |access-date=2025-06-14 |website=www.sas.com |language=en-GB}}</ref>
<ref name="DefSupervisedLearningMathWorks">{{Cite web |title=Supervised Learning |url=https://www.mathworks.com/discovery/supervised-learning.html |archive-url=http://web.archive.org/web/20250212214655/https://www.mathworks.com/discovery/supervised-learning.html |archive-date=2025-02-12 |access-date=2025-06-14 |website=www.mathworks.com |language=en}}</ref>
<ref name="Mendis2019Ithemal">{{Cite conference |last1=Mendis |first1=Charith |last2=Renda |first2=Alex |last3=Amarasinghe |first3=Saman |last4=Carbin |first4=Michael |year=2019 |title=Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks |book-title=Proceedings of the 36th International Conference on Machine Learning |publisher=PMLR |pages=4505–4515 |arxiv=1808.07412}}</ref>
<ref name="Zhou2019Primal">{{Cite journal |last=Zhou |first=Yuan |last2=Ren |first2=Haoxing |last3=Zhang |first3=Yanqing |last4=Keller |first4=Ben |last5=Khailany |first5=Brucek |last6=Zhang |first6=Zhiru |date=2019-06 |title=PRIMAL: Power Inference using Machine Learning |url=https://ieeexplore.ieee.org/document/8806775 |journal=2019 56th ACM/IEEE Design Automation Conference (DAC) |pages=1–6}}</ref>
<ref name="Kahng2018PBAfromGBA">{{Cite journal |last=Kahng |first=Andrew B. |last2=Mallappa |first2=Uday |last3=Saul |first3=Lawrence |date=2018-10 |title=Using Machine Learning to Predict Path-Based Slack from Graph-Based Timing Analysis |url=https://ieeexplore.ieee.org/document/8615746 |journal=2018 IEEE 36th International Conference on Computer Design (ICCD) |pages=603–612 |doi=10.1109/ICCD.2018.00096}}</ref>
<ref name="Cao2022TFpredictor">{{Cite journal |last=Cao |first=Peng |last2=He |first2=Guoqing |last3=Yang |first3=Tai |date=2023-07 |title=TF-Predictor: Transformer-Based Prerouting Path Delay Prediction Framework |url=https://ieeexplore.ieee.org/document/9927326 |journal=IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems |volume=42 |issue=7 |pages=2227–2237 |doi=10.1109/TCAD.2022.3216752 |issn=1937-4151}}</ref>
<ref name="Yang2017Hotspot">{{Cite journal |last=Yang |first=Haoyu |last2=Luo |first2=Luyang |last3=Su |first3=Jing |last4=Lin |first4=Chenxi |last5=Yu |first5=Bei |date=2017 |title=Imbalance aware lithography hotspot detection: a deep learning approach |url=https://www.spiedigitallibrary.org/journals/Journal-of-MicroNanolithography-MEMS-and-MOEMS/volume-16/issue-3/033504/Imbalance-aware-lithography-hotspot-detection-a-deep-learning-approach/10.1117/1.JMM.16.3.033504.short |journal=Journal of Micro/Nanolithography, MEMS, and MOEMS |volume=16 |issue=3 |pages=033504 |doi=10.1117/1.JMM.16.3.033504}}</ref>
<ref name="Xie2018RouteNet">{{Cite journal |last=Xie |first=Zhiyao |last2=Huang |first2=Yu-Hung |last3=Fang |first3=Guan-Qi |last4=Ren |first4=Haoxing |last5=Fang |first5=Shao-Yun |last6=Chen |first6=Yiran |last7=Hu |first7=Jiang |date=2018-11 |title=RouteNet: Routability prediction for Mixed-Size Designs Using Convolutional Neural Network |url=https://ieeexplore.ieee.org/document/8587655 |journal=2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) |pages=1–8 |doi=10.1145/3240765.3240843}}</ref>
<ref name="Shi2023DeepGate2">{{Cite journal |last=Shi |first=Zhengyuan |last2=Pan |first2=Hongyang |last3=Khan |first3=Sadaf |last4=Li |first4=Min |last5=Liu |first5=Yi |last6=Huang |first6=Junhua |last7=Zhen |first7=Hui-Ling |last8=Yuan |first8=Mingxuan |last9=Chu |first9=Zhufei |last10=Xu |first10=Qiang |date=2023-10 |title=DeepGate2: Functionality-Aware Circuit Representation Learning |url=https://ieeexplore.ieee.org/document/10323798 |journal=2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD) |pages=1–9 |doi=10.1109/ICCAD57390.2023.10323798}}</ref>
<ref name="DefUnsupervisedLearningAWS">{{Cite web |date=2021-09-23 |title=What Is Unsupervised Learning? {{!}} IBM |url=https://www.ibm.com/think/topics/unsupervised-learning |access-date=2025-06-14 |website=www.ibm.com |language=en}}</ref>
<ref name="DefUnsupervisedLearningTechTarget">{{Cite web |last=Yasar |first=Kinza |last2=Gillis |first2=Alexander S. |last3=Pratt |first3=Mary K. |title=What is Unsupervised Learning? {{!}} Definition from TechTarget |url=https://www.techtarget.com/searchenterpriseai/definition/unsupervised-learning |access-date=2025-06-14 |website=Search Enterprise AI |language=en}}</ref>
<ref name="Xie2020Fist">{{Cite journal |last=Xie |first=Zhiyao |last2=Fang |first2=Guan-Qi |last3=Huang |first3=Yu-Hung |last4=Ren |first4=Haoxing |last5=Zhang |first5=Yanqing |last6=Khailany |first6=Brucek |last7=Fang |first7=Shao-Yun |last8=Hu |first8=Jiang |last9=Chen |first9=Yiran |last10=Barboza |first10=Erick Carvajal |date=2020-01 |title=FIST: A Feature-Importance Sampling and Tree-Based Method for Automatic Design Flow Parameter Tuning |url=https://ieeexplore.ieee.org/abstract/document/9045201 |journal=2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC) |pages=19–25 |doi=10.1109/ASP-DAC47756.2020.9045201}}</ref>
<ref name="Lu2022AnalogTopologyOpt">{{Cite journal |last=Lu |first=Jialin |last2=Lei |first2=Liangbo |last3=Yang |first3=Fan |last4=Shang |first4=Li |last5=Zeng |first5=Xuan |date=2022-03 |title=Topology Optimization of Operational Amplifier in Continuous Space via Graph Embedding |url=https://ieeexplore.ieee.org/document/9774676 |journal=2022 Design, Automation & Test in Europe Conference & Exhibition (DATE) |pages=142–147 |doi=10.23919/DATE54114.2022.9774676}}</ref>
<ref name="Wang2022NetlistFunctionality">{{Cite journal |last=Wang |first=Ziyi |last2=Bai |first2=Chen |last3=He |first3=Zhuolun |last4=Zhang |first4=Guangliang |last5=Xu |first5=Qiang |last6=Ho |first6=Tsung-Yi |last7=Yu |first7=Bei |last8=Huang |first8=Yu |date=2022-08-23 |title=Functionality matters in netlist representation learning |url=https://dl.acm.org/doi/10.1145/3489517.3530410 |journal=Proceedings of the 59th ACM/IEEE Design Automation Conference |series=DAC '22 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=61–66 |doi=10.1145/3489517.3530410 |isbn=978-1-4503-9142-9}}</ref>
<ref name="DefReinforcementLearningGoogle">{{cite web |url=https://developers.google.com/machine-learning/reinforcement-learning/terminology |title=Reinforcement Learning Terminology |publisher=Google Developers |accessdate=June 7, 2025}}</ref>
<ref name="DefReinforcementLearningMicrosoft">{{Cite web |date=2018-04-25 |title=Reinforcement Learning |url=https://www.geeksforgeeks.org/what-is-reinforcement-learning/ |access-date=2025-06-14 |website=GeeksforGeeks |language=en-US}}</ref>
<ref name="DefReinforcementLearningBerkeley">{{Cite web |title=Deep RL Bootcamp - Lectures |url=https://sites.google.com/view/deep-rl-bootcamp/lectures |access-date=2025-06-14 |website=sites.google.com |language=en-US}}</ref>
<ref name="SynopsysAI2023">{{Cite web |title=Synopsys.ai Unveiled as Industry's First Full-Stack, AI-Driven EDA Suite for Chipmakers |url=https://news.synopsys.com/2023-03-29-Synopsys-ai-Unveiled-as-Industrys-First-Full-Stack,-AI-Driven-EDA-Suite-for-Chipmakers |access-date=2025-06-14 |website=news.synopsys.com |language=en}}</ref>
<ref name="Mirhoseini2021Floorplanning">{{Cite journal |last=Mirhoseini |first=Azalia |last2=Goldie |first2=Anna |last3=Yazgan |first3=Mustafa |last4=Jiang |first4=Joe Wenjie |last5=Songhori |first5=Ebrahim |last6=Wang |first6=Shen |last7=Lee |first7=Young-Joon |last8=Johnson |first8=Eric |last9=Pathak |first9=Omkar |last10=Nova |first10=Azade |last11=Pak |first11=Jiwoo |last12=Tong |first12=Andy |last13=Srinivasa |first13=Kavya |last14=Hang |first14=William |last15=Tuncer |first15=Emre |date=2021-06 |title=A graph placement methodology for fast chip design |url=https://www.nature.com/articles/s41586-021-03544-w |journal=Nature |language=en |volume=594 |issue=7862 |pages=207–212 |doi=10.1038/s41586-021-03544-w |issn=1476-4687}}</ref>
<ref name="Xu2021GoodFloorplan">{{Cite journal |last=Xu |first=Qi |last2=Geng |first2=Hao |last3=Chen |first3=Song |last4=Yuan |first4=Bo |last5=Zhuo |first5=Cheng |last6=Kang |first6=Yi |last7=Wen |first7=Xiaoqing |date=2022-10 |title=GoodFloorplan: Graph Convolutional Network and Reinforcement Learning-Based Floorplanning |url=https://ieeexplore.ieee.org/document/9628153 |journal=IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems |volume=41 |issue=10 |pages=3492–3502 |doi=10.1109/TCAD.2021.3131550 |issn=1937-4151}}</ref>
<ref name="Pei2023AlphaSyn">{{Cite journal |last=Pei |first=Zehua |last2=Liu |first2=Fangzhou |last3=He |first3=Zhuolun |last4=Chen |first4=Guojin |last5=Zheng |first5=Haisheng |last6=Zhu |first6=Keren |last7=Yu |first7=Bei |date=2023-10 |title=AlphaSyn: Logic Synthesis Optimization with Efficient Monte Carlo Tree Search |url=https://ieeexplore.ieee.org/abstract/document/10323856 |journal=2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD) |pages=1–9 |doi=10.1109/ICCAD57390.2023.10323856}}</ref>
<ref name="Lu2021RLSizer">{{Cite journal |last=Lu |first=Yi-Chen |last2=Nath |first2=Siddhartha |last3=Khandelwal |first3=Vishal |last4=Lim |first4=Sung Kyu |date=2021-12 |title=RL-Sizer: VLSI Gate Sizing for Timing Optimization using Deep Reinforcement Learning |url=https://ieeexplore.ieee.org/abstract/document/9586138 |journal=2021 58th ACM/IEEE Design Automation Conference (DAC) |pages=733–738 |doi=10.1109/DAC18074.2021.9586138}}</ref>
<ref name="DefGenerativeAIMcKinsey">{{Cite web |title=What is ChatGPT, DALL-E, and generative AI? {{!}} McKinsey |url=https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai |access-date=2025-06-14 |website=www.mckinsey.com}}</ref>
<ref name="DefGenerativeAIWEF">{{Cite web |last=Routley |first=Nick |title=What is generative AI? An AI explains |url=https://www.weforum.org/stories/2023/02/generative-ai-explain-algorithms-work/ |archive-url=http://web.archive.org/web/20250512225553/https://www.weforum.org/stories/2023/02/generative-ai-explain-algorithms-work/ |archive-date=2025-05-12 |access-date=2025-06-14 |website=World Economic Forum |language=en}}</ref>
<ref name="DefLLMAWS">{{Cite web |title=What is LLM? - Large Language Models Explained - AWS |url=https://aws.amazon.com/what-is/large-language-model/ |access-date=2025-06-14 |website=Amazon Web Services, Inc. |language=en-US}}</ref>
<ref name="DefLLMNvidia">{{Cite web |title=What are Large Language Models? {{!}} NVIDIA Glossary |url=https://www.nvidia.com/en-us/glossary/large-language-models/ |access-date=2025-06-14 |website=NVIDIA |language=en-us}}</ref>
<ref name="Liu2023VerilogEval">{{Cite journal |last=Liu |first=Mingjie |last2=Pinckney |first2=Nathaniel |last3=Khailany |first3=Brucek |last4=Ren |first4=Haoxing |date=2023-10 |title=Invited Paper: VerilogEval: Evaluating Large Language Models for Verilog Code Generation |url=https://ieeexplore.ieee.org/abstract/document/10323812 |journal=2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD) |pages=1–8 |doi=10.1109/ICCAD57390.2023.10323812}}</ref>
<ref name="Lu2024RTLLM">{{Cite journal |last=Lu |first=Yao |last2=Liu |first2=Shang |last3=Zhang |first3=Qijun |last4=Xie |first4=Zhiyao |date=2024-01 |title=RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model |url=https://ieeexplore.ieee.org/abstract/document/10473904 |journal=2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC) |pages=722–727 |doi=10.1109/ASP-DAC58780.2024.10473904}}</ref>
<ref name="Thakur2023AutoChip">{{Cite arXiv |last1=Thakur |first1=Shailja |last2=Blocklove |first2=Jason |last3=Pearce |first3=Hammond |last4=Tan |first4=Benjamin |last5=Garg |first5=Siddharth |last6=Karri |first6=Ramesh |date=2024-06-04 |title=AutoChip: Automating HDL Generation Using LLM Feedback |eprint=2311.04887}}</ref>
<ref name="Wu2024ChatEDA">{{Cite journal |last=Wu |first=Haoyuan |last2=He |first2=Zhuolun |last3=Zhang |first3=Xinyun |last4=Yao |first4=Xufeng |last5=Zheng |first5=Su |last6=Zheng |first6=Haisheng |last7=Yu |first7=Bei |date=2024-10 |title=ChatEDA: A Large Language Model Powered Autonomous Agent for EDA |url=https://ieeexplore.ieee.org/abstract/document/10485372 |journal=IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems |volume=43 |issue=10 |pages=3184–3197 |doi=10.1109/TCAD.2024.3383347 |issn=1937-4151}}</ref>
<ref name="Fu2023GPT4AIGChip">{{Cite journal |last=Fu |first=Yonggan |last2=Zhang |first2=Yongan |last3=Yu |first3=Zhongzhi |last4=Li |first4=Sixu |last5=Ye |first5=Zhifan |last6=Li |first6=Chaojian |last7=Wan |first7=Cheng |last8=Lin |first8=Yingyan Celine |date=2023-10 |title=GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models |url=https://ieeexplore.ieee.org/abstract/document/10323953 |journal=2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD) |pages=1–9 |doi=10.1109/ICCAD57390.2023.10323953}}</ref>
<ref name="Yan2023LLMsForCiM">{{Cite journal |last=Yan |first=Zheyu |last2=Qin |first2=Yifan |last3=Hu |first3=Xiaobo Sharon |last4=Shi |first4=Yiyu |date=2023-09 |title=On the Viability of Using LLMs for SW/HW Co-Design: An Example in Designing CiM DNN Accelerators |url=https://ieeexplore.ieee.org/abstract/document/10256783 |journal=2023 IEEE 36th International System-on-Chip Conference (SOCC) |pages=1–6 |doi=10.1109/SOCC58585.2023.10256783}}</ref>
<ref name="Li2024SpecLLM">{{Cite web |last=Li |first=Mengming |last2=Fang |first2=Wenji |last3=Zhang |first3=Qijun |last4=Xie |first4=Zhiyao |date=2024-01-24 |title=SpecLLM: Exploring Generation and Review of VLSI Design Specification with Large Language Model |url=https://arxiv.org/abs/2401.13266v1 |access-date=2025-06-14 |website=arXiv.org |language=en}}</ref>
<ref name="DefGANOracle">{{Cite web |last=Robinson |first=Scott |last2=Yasar |first2=Kinza |last3=Lewis |first3=Sarah |title=What is a Generative Adversarial Network (GAN)? {{!}} Definition from TechTarget |url=https://www.techtarget.com/searchenterpriseai/definition/generative-adversarial-network-GAN |access-date=2025-06-14 |website=Search Enterprise AI |language=en}}</ref>
<ref name="DefGANAWS">{{cite web |url=https://aws.amazon.com/what-is/gan/ |title=What Are Generative Adversarial Networks (GANs)? |publisher=Amazon Web Services |accessdate=June 7, 2025}}</ref>
<ref name="Alawieh2020GANSRAF">{{Cite journal |last=Alawieh |first=Mohamed Baker |last2=Lin |first2=Yibo |last3=Zhang |first3=Zaiwei |last4=Li |first4=Meng |last5=Huang |first5=Qixing |last6=Pan |first6=David Z. |date=2021-02 |title=GAN-SRAF: Subresolution Assist Feature Generation Using Generative Adversarial Networks |url=https://ieeexplore.ieee.org/abstract/document/9094711 |journal=IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems |volume=40 |issue=2 |pages=373–385 |doi=10.1109/TCAD.2020.2995338 |issn=1937-4151}}</ref>
<ref name="Yang2018GANOPC">{{Cite journal |last=Yang |first=Haoyu |last2=Li |first2=Shuhe |last3=Ma |first3=Yuzhe |last4=Yu |first4=Bei |last5=Young |first5=Evangeline F. Y. |date=2018-06-24 |title=GAN-OPC: mask optimization with lithography-guided generative adversarial nets |url=https://dl.acm.org/doi/10.1145/3195970.3196056 |journal=Proceedings of the 55th Annual Design Automation Conference |series=DAC '18 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=1–6 |doi=10.1145/3195970.3196056 |isbn=978-1-4503-5700-5}}</ref>
<ref name="Rapp2021MLCAD">{{Cite journal |last=Rapp |first=Martin |last2=Amrouch |first2=Hussam |last3=Lin |first3=Yibo |last4=Yu |first4=Bei |last5=Pan |first5=David Z. |last6=Wolf |first6=Marilyn |last7=Henkel |first7=Jörg |date=2022-10 |title=MLCAD: A Survey of Research in Machine Learning for CAD Keynote Paper |url=https://ieeexplore.ieee.org/abstract/document/9598835 |journal=IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems |volume=41 |issue=10 |pages=3162–3181 |doi=10.1109/TCAD.2021.3124762 |issn=1937-4151}}</ref>
<ref name="Gubbi2022Survey">{{Cite journal |last=Gubbi |first=Kevin Immanuel |last2=Beheshti-Shirazi |first2=Sayed Aresh |last3=Sheaves |first3=Tyler |last4=Salehi |first4=Soheil |last5=PD |first5=Sai Manoj |last6=Rafatirad |first6=Setareh |last7=Sasan |first7=Avesta |last8=Homayoun |first8=Houman |date=2022-06-06 |title=Survey of Machine Learning for Electronic Design Automation |url=https://dl.acm.org/doi/10.1145/3526241.3530834 |journal=Proceedings of the Great Lakes Symposium on VLSI 2022 |series=GLSVLSI '22 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=513–518 |doi=10.1145/3526241.3530834 |isbn=978-1-4503-9322-5}}</ref>
<ref name="Chen2024AINativeEDA">{{cite arxiv |last1=Chen |first1=L. |last2=Chen |first2=Y. |last3=Chu |first3=Z. |last4=Fang |first4=W. |last5=Ho |first5=T. Y. |last6=Huang |first6=R. |year=2024 |title=The dawn of ai-native eda: Opportunities and challenges of large circuit models |eprint=2403.07257}}</ref>
<ref name="SynopsysAIBrochure">{{cite web |url=https://www.synopsys.com/content/dam/synopsys/solutions/synopsys-ai-brochure.pdf |title=Synopsys.ai – Full Stack, AI-Driven EDA Suite |publisher=Synopsys |accessdate=June 7, 2025}}</ref>
<ref name="AMDVitisAI">{{cite web |url=https://www.amd.com/en/developer/resources/vitis-ai.html |title=Vitis AI Developer Hub |publisher=AMD |accessdate=June 7, 2025}}</ref>
<ref name="NVIDIADAResearch">{{cite web |url=https://research.nvidia.com/labs/electronic-design-automation/ |title=Design Automation Research Group |publisher=NVIDIA Research |accessdate=June 7, 2025}}</ref>
<ref name="NVIDIAAutoDMPBlog">{{cite web |last=Agnesina |first=Anthony |last2=Ren |first2=Mark |date=March 27, 2023 |title=AutoDMP Optimizes Macro Placement for Chip Design with AI and GPUs |url=https://developer.nvidia.com/blog/autodmp-optimizes-macro-placement-for-chip-design-with-ai-and-gpus/ |accessdate=June 7, 2025 |publisher=NVIDIA Developer Blog}}</ref>
<ref name="GoogleCloudEDAWhitePaper">{{cite web |url=https://services.google.com/fh/files/misc/scaling_your_chip_design_flow_v18.pdf |title=Scaling Your Chip Design Flow |publisher=Google Cloud |accessdate=June 7, 2025}}</ref>
<ref name="IBMEDABlog">{{cite web |url=https://www.ibm.com/products/blog/leveraging-ibm-cloud-for-electronic-design-automation-eda-workloads |title=Leveraging IBM Cloud for electronic design automation (EDA) workloads |publisher=IBM |date=October 31, 2023 |accessdate=June 7, 2025}}</ref>

Revision as of 21:03, 14 June 2025

AI Driven Design Automation is the use of artificial intelligence (AI) to automate and improve different parts of the electronic design automation (EDA) process. This applies especially to designing integrated circuits (chips) and complex electronic systems. This field has become important because it can help solve the growing problems of complexity, high costs, and the need to release products faster in the semiconductor industry. AI Driven Design Automation uses several methods, including machine learning, expert systems, and reinforcement learning. These are used for many tasks, from planning a chip's architecture and logic synthesis to its physical design and final verification. This article covers the history of AI in EDA, explains its main methods, discusses important applications, and looks at how it affects chip design and the semiconductor industry.

This figure shows how large circuit models can be trained using the front-end (depicted in blue) and back-end (depicted in yellow) of the EDA flow, either to enhance existing EDA tools or to create novel EDA applications.[1]

History

1980s–1990s: Expert systems and early experiments

The use of AI for design automation began gaining traction in the 1980s and 1990s, mainly with the creation of expert systems. These systems tried to capture the knowledge and practical rules of human design experts, using sets of rules and reasoning engines to direct the design process.[2]

A notable early project was the ULYSSES system from Carnegie Mellon University. ULYSSES was a CAD tool integration environment that let expert designers turn their design methods into scripts that could be run automatically. It treated design tools as sources of knowledge that a scheduler could manage.[3] The ADAM (Advanced Design AutoMation) system at the University of Southern California used an expert system called the Design Planning Engine. This engine figured out design strategies on the fly and handled different design jobs by organizing specialized knowledge into structured formats called frames.[4]

Other systems like DAA (Design Automation Assistant) showed how to use approaches based on rules for specific jobs, such as register transfer level (RTL) design for systems like the IBM 370.[2] Research at Carnegie Mellon University also created TALIB, an expert system for mask layout that used over 1200 rules, and EMUCS/DAA, for CPU architectural design with about 70 rules. These projects showed that AI worked better for problems where a few rules could handle a lot of data.[5] At the same time, tools called silicon compilers like MacPitts, Arsenic, and Palladio appeared. They used algorithms and search techniques to explore different design possibilities. This was another way to automate design, even if it was not always based on expert systems.[5] Early tests with neural networks in VLSI design also happened during this time, although they were not as common as systems based on rules.

2000s: Introduction of machine learning

In the 2000s, interest in AI for design automation revived, driven mostly by better machine learning (ML) algorithms and the growing volume of data available from design and manufacturing. ML methods began to be applied to complex problems. For example, they were used to model and mitigate the effects of small manufacturing variations in semiconductor devices, which became increasingly important as the size of components on chips shrank. The large amount of data created during chip design provided the foundation needed to train more capable ML models. This allowed for predicting outcomes and optimizing in areas that were hard to automate before.

2016–2020: Reinforcement learning and large scale initiatives

A major turning point happened in the mid to late 2010s, sparked by successes in other areas of AI. The success of DeepMind's AlphaGo in mastering the game of Go inspired researchers. They began to apply reinforcement learning (RL) to difficult EDA problems. These problems often require searching through many options and making a series of decisions.

In 2018, the U.S. DARPA started the Intelligent Design of Electronic Assets (IDEA) program. A main goal of IDEA was to create a fully automated layout generator that required no human intervention. It needed to produce a chip design ready for manufacturing from RTL specifications in 24 hours.[6] The OpenROAD project, a large effort under IDEA led by UC San Diego with industry and university partners, aimed to build an open source, independent toolchain. It used machine learning, searching multiple options at once, and breaking down big problems into smaller ones to meet its goals.[6]

A clear demonstration of RL's potential came from Google researchers between 2020 and 2021. They created a deep reinforcement learning method for planning the layout of a chip, known as floorplanning. They reported that this method created layouts that were as good as or better than those made by human experts, and it did so in less than six hours.[7] This method used a type of network called a graph convolutional neural network. It showed that it could learn general patterns that could be applied to new problems, getting better as it saw more chip designs. The technology was later used to design Google's Tensor Processing Unit (TPU) accelerators.[7]

2020s: Autonomous systems and agents

Entering the 2020s, the industry saw the commercial launch of autonomous AI driven EDA systems. For example, Synopsys launched DSO.ai™ (Design Space Optimization AI) in early 2020, calling it the first autonomous artificial intelligence application for chip design in the industry.[8][9] This system uses reinforcement learning to search for the best ways to optimize a design within the huge number of possible solutions, trying to improve power, performance, and area (PPA).[9] By 2023, DSO.ai had been used in over 100 commercial chip productions, which showed that the industry was widely adopting it.[10] Synopsys later grew its AI tools into a suite called Synopsys.ai. The goal was to use AI in the entire EDA workflow, including verification and testing.[11][12]

These advancements, which combine modern AI methods with cloud computing and large data resources, have led to talks about a new phase in EDA. Industry experts and participants sometimes call this 'EDA 4.0'.[13][14] This new era is defined by the widespread use of AI and machine learning to deal with growing design complexity, automate more of the design process, and help engineers handle the huge amounts of data that EDA tools create.[13][15] The purpose of EDA 4.0 is to optimize product performance, get products to market faster, and make development and manufacturing smoother through intelligent automation.[14]

AI methods

Artificial intelligence techniques are increasingly used to solve difficult problems in electronic design automation. These methods analyze large amounts of design data, learn complex patterns, and automate decisions. The goals are to improve the quality of designs, speed up the design process, and handle the increasing complexity of semiconductor manufacturing. Important approaches include supervised learning, unsupervised learning, reinforcement learning, and generative AI.

Supervised learning

Supervised learning is a type of machine learning where algorithms learn from data that is already labeled.[16] This means every piece of input data in the training set has a known correct answer or "label."[17] The algorithm learns to connect inputs to outputs by finding the patterns and connections in the training data.[18] After it is trained, the model can then make predictions on new data it has not seen before.[19]

In electronic design automation, supervised learning is useful for tasks where past data can predict future results or spot certain problems. This includes estimating design metrics like performance, power, and timing. For example, Ithemal estimates CPU performance,[20] PRIMAL predicts power use at the RTL stage,[21] and other methods predict timing delays in circuits by analyzing their structure.[22][23] It is also used to classify parts of a design to find potential problems, like lithography hotspots[24] or predicting how easy a design will be to route.[25] Learning circuit representations that are aware of their function also often uses supervised methods.[26]
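As a minimal sketch of this idea, the snippet below fits a regression model to predict a circuit delay metric from labeled examples. The per-net features (fanout, estimated wire length, driver strength) and the synthetic data are invented for illustration; it is not any specific published model such as Ithemal or PRIMAL.

```python
# A minimal sketch of supervised metric prediction, assuming hypothetical
# per-net features (fanout, estimated wire length, driver strength) and
# measured delays from past designs; not any specific published model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Synthetic training data standing in for features extracted from old designs.
X = rng.uniform(0, 1, size=(200, 3))          # [fanout, est_wirelength, drive]
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.05, 200)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
pred = model.predict(X[:5])                   # delay estimates for five nets
```

Once trained on historical designs, such a model can return estimates in microseconds, replacing simulation runs that might take hours.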

Unsupervised learning

Unsupervised learning involves training algorithms on data without any labels. This lets the models find hidden patterns, structures, or connections in the data by themselves.[27] Common tasks are clustering (which groups similar data together), dimensionality reduction (which reduces the number of variables but keeps important information), and association rule mining (which finds relationships between variables).[28]

In EDA, these methods are valuable for looking through complex design data to find insights that are not obvious. For instance, clustering can group design settings or tool configurations, which helps in automatically tuning the design process, as seen in the FIST tool.[29] A major use is in representation learning, where the aim is to automatically learn useful and often simpler representations (features or embeddings) of circuit data. This could involve learning embeddings for analog circuit structures using methods based on graphs[30] or understanding the function of netlists through contrastive learning methods.[31]
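The clustering use case can be sketched as follows. The two features and the data are invented for the example; real systems such as FIST cluster much richer configuration vectors.

```python
# Illustrative only: k-means clustering of synthetic "tool configuration"
# vectors, in the spirit of grouping similar design settings before tuning;
# the two features (e.g., normalized effort level and target frequency)
# and the data are invented for this sketch.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two invented families of configurations, well separated so the
# unsupervised grouping is easy to see.
configs = np.vstack([rng.normal(0.2, 0.05, size=(20, 2)),
                     rng.normal(0.8, 0.05, size=(20, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(configs)
```

No labels are supplied; the algorithm recovers the two families purely from the structure of the data.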

Reinforcement learning

Reinforcement learning (RL) is a kind of machine learning where an agent, or a computer program, learns to make the best decisions by trying things out in a simulated environment. The agent takes actions, moves between different states, and gets rewards or penalties as feedback. The main goal is to get the highest total reward over time.[32] RL is different from supervised learning because it does not need labeled data. It also differs from unsupervised learning because it learns by trial and error to achieve a specific goal.[33]

In EDA, RL is especially good for tasks that require making a series of decisions to find the best solution in very complex situations with many variables. Its adoption by commercial EDA products shows its growing importance.[34] RL has been used for physical design problems like chip floorplanning. In this task, an agent learns to place blocks to improve things like wire length and performance.[35][36] In logic synthesis, RL can guide how optimization steps are chosen and in what order they are applied to get better results, as seen in methods like AlphaSyn.[37] Adjusting the size of gates to optimize timing is another area where RL agents can learn effective strategies.[38]
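The agent/reward framing above can be illustrated with a toy tabular Q-learning loop. The three-state "optimization" environment is invented and far simpler than real EDA formulations such as floorplanning or gate sizing, but the update rule is the standard one.

```python
# A toy tabular Q-learning loop illustrating the state/action/reward framing;
# the three-state environment is invented for this sketch.
import numpy as np

n_states, n_actions = 3, 2
# Invented deterministic transitions: action 1 advances toward goal state 2,
# and the transition (1, 1) into the goal pays a reward of 1.
T = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 2, (2, 0): 2, (2, 1): 2}
R = {(1, 1): 1.0}

Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)
for _ in range(2000):                          # episodes
    s = 0
    for _ in range(10):                        # steps per episode
        # Epsilon-greedy selection: explore 20% of the time.
        a = int(rng.integers(n_actions)) if rng.random() < 0.2 else int(Q[s].argmax())
        s2, r = T[(s, a)], R.get((s, a), 0.0)
        Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])  # TD update
        s = s2
```

After training, the greedy policy at each state picks the action leading toward the reward, which is the behavior RL-based EDA tools rely on at much larger scale.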

Generative AI

Generative AI means artificial intelligence models that can create new content, like text, images, or code, instead of just analyzing or working with existing data.[39] These models learn the underlying patterns and structures from the data they are trained on. They then use this knowledge to create new and original outputs.[40]

In EDA, generative AI is being used in many ways, especially through Large Language Models (LLMs) and other architectures like Generative Adversarial Networks (GANs).

Large language models (LLMs)

Large Language Models are deep learning models, often based on the transformer architecture. They are pre trained on huge amounts of text and code.[41] They are very good at understanding, summarizing, creating, and predicting human language and programming languages.[42]

Their abilities are being used in EDA for jobs such as:

RTL Code Generation: LLMs are used to automatically write code in a Hardware Description Language (HDL) based on written instructions or requirements. Benchmarks like VerilogEval[43] and RTLLM[44] have been created to check these abilities, and tools like AutoChip aim to automate this process.[45]

EDA Script Generation and Tool Interaction: Agents based on LLMs, like ChatEDA, can turn plain language commands into runnable scripts for controlling EDA tools. This could make complex workflows simpler.[46]

Architectural Design and Exploration: LLMs help in the early stages of design. They can generate high level synthesis code (for example, GPT4AIGChip[47]), explore design options for special hardware like Compute in Memory accelerators,[48] or help create and review design requirements (for example, SpecLLM[49]).

Verification Assistance: Researchers are investigating the use of LLMs to generate verification artifacts, such as SystemVerilog Assertions (SVAs), from plain language descriptions.
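The generate-check-repair loop used by several of the tools above can be sketched schematically. Both `llm_generate` and `check_hdl` below are invented stand-in stubs, not real model or simulator APIs; the point is the control flow of feeding tool errors back into the prompt, in the spirit of systems like AutoChip.

```python
# Schematic of an LLM-in-the-loop HDL generation flow: generate code,
# check it, and feed any errors back into the prompt. Both functions
# below are invented stubs, not real APIs.
def llm_generate(prompt: str) -> str:
    # Stub: a real system would call a language model here.
    return ("module half_adder(input a, b, output s, c);\n"
            "  assign s = a ^ b;\n  assign c = a & b;\nendmodule")

def check_hdl(code: str) -> list:
    # Stub: a real system would run a compiler, linter, or simulator
    # and return its error messages.
    return [] if code.strip().startswith("module") else ["syntax error"]

def generate_with_feedback(spec: str, max_iters: int = 3) -> str:
    prompt = spec
    code = ""
    for _ in range(max_iters):
        code = llm_generate(prompt)
        errors = check_hdl(code)
        if not errors:
            return code
        # Append tool feedback so the next generation can repair the code.
        prompt = spec + "\nFix these errors:\n" + "\n".join(errors)
    return code

out = generate_with_feedback("Write a 1-bit half adder in Verilog")
```

The loop terminates either when the checker reports no errors or after a fixed number of repair attempts.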

Other generative models

Besides LLMs, other generative models like Generative Adversarial Networks (GANs) are also used in EDA. A GAN has two neural networks, a generator and a discriminator, which are trained in a competition against each other.[50] The generator learns to make data samples that look like the training data, while the discriminator learns to tell the difference between real and generated samples.[51]

In physical design, GANs have been used for tasks like creating sub resolution assist features (SRAFs) to make chips easier to manufacture in lithography (GAN SRAF[52]) and for optimizing masks (GAN OPC[53]).
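The adversarial idea can be illustrated with a deliberately simplified numerical toy. Real GANs train both networks jointly by gradient descent; here a one-parameter "generator" shifts noise toward the real data, and a fixed threshold "discriminator" measures how separable real and fake samples are. All numbers are invented for the sketch.

```python
# A purely didactic illustration of the adversarial objective behind GANs:
# when the generator matches the real distribution, the discriminator's
# accuracy falls to chance (0.5). Not a trainable GAN.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(5.0, 1.0, size=1000)         # stand-in "real" samples

def generator(noise, theta):
    return noise + theta                        # shift-only generator

def discriminator_acc(real, fake):
    # Threshold classifier at the midpoint of the two sample means;
    # 0.5 means the discriminator cannot tell real from fake.
    thr = (real.mean() + fake.mean()) / 2
    return ((real > thr).mean() + (fake <= thr).mean()) / 2

noise = rng.normal(0.0, 1.0, size=1000)
acc_untrained = discriminator_acc(real, generator(noise, 0.0))
acc_trained = discriminator_acc(real, generator(noise, 5.0))
```

A well-trained generator drives the discriminator toward chance accuracy, which is the equilibrium the two-network training game aims for.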

Applications

Artificial intelligence (AI) is being used in many stages of the electronic design workflow. It aims to improve efficiency, get better results, and handle the growing complexity of modern integrated circuits.[54][55][1] AI helps designers from the very first ideas about architecture all the way to manufacturing and testing.

High level synthesis and architectural exploration

In the first phases of chip design, AI helps with High Level Synthesis (HLS) and exploring different system level design options (DSE). These processes are key for turning general ideas into detailed hardware plans.[54] AI algorithms, often using supervised learning, are used to build fast surrogate models that estimate important design metrics like area, performance, and power for many different architectural options or HLS settings.[54][1] Quick estimation reduces the need for lengthy simulations and allows a wider range of possible designs to be explored.[54] For example, the Ithemal tool uses deep neural networks to estimate how fast basic code blocks will run, which helps in making processor architecture decisions.[20] Similarly, PRIMAL uses machine learning to predict power use at the register transfer level (RTL), giving early information about how much power the chip will use.[21] Reinforcement learning (RL) and Bayesian optimization are also used to guide the DSE process. They help search through the many parameters to find the best HLS settings or architectural details like cache sizes.[54][55] LLMs are also being tested for creating architectural plans or initial C code for HLS, as seen with GPT4AIGChip.[47][1]
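A surrogate-guided exploration step can be sketched as follows: fit a cheap model on a few "simulated" design points, then query it across the whole space instead of simulating every option. The latency function and the parameter grid (loop unroll factor, buffer size) are invented for this sketch and stand in for a slow HLS run.

```python
# Sketch of surrogate-guided design space exploration; the latency model
# and parameters are invented, standing in for expensive HLS simulations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simulate_latency(unroll, buffer_kb):
    # Stand-in for a slow HLS run: latency improves with unrolling and
    # buffering, with diminishing returns.
    return 100.0 / unroll + 50.0 / buffer_kb

grid = [(u, b) for u in (1, 2, 4, 8, 16) for b in (1, 2, 4, 8)]
rng = np.random.default_rng(0)
sampled = [grid[i] for i in rng.choice(len(grid), size=8, replace=False)]
X = np.array(sampled, dtype=float)
y = np.array([simulate_latency(u, b) for u, b in sampled])

# Fit the cheap surrogate on the 8 simulated points, then score all 20
# grid points without further "simulation".
surrogate = RandomForestRegressor(random_state=0).fit(X, y)
preds = surrogate.predict(np.array(grid, dtype=float))
best = grid[int(np.argmin(preds))]      # candidate for the next real run
```

In a full DSE loop the predicted best point would be simulated for real, added to the training set, and the surrogate refit, which is the pattern Bayesian optimization formalizes.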

Logic synthesis and optimization

Logic synthesis is the process of changing a high level hardware description into an optimized list of electronic gates, known as a gate level netlist, that is ready for a specific manufacturing process. AI methods help with different parts of this process, including logic optimization, technology mapping, and making improvements after mapping.[54][55] Supervised learning, especially with Graph Neural Networks (GNNs) which are good at handling data that looks like a circuit diagram, helps create models to predict design properties like power or error rates in approximate circuits.[54][1] These predictions then guide the optimization algorithms. Reinforcement learning is used to perform logic optimization directly. For example, agents are trained to choose a series of logic changes to reduce area while meeting timing goals.[54][1] AlphaSyn uses Monte Carlo Tree Search with RL to optimize logic for smaller area.[37] FlowTune uses a multi armed bandit strategy to choose synthesis flows.[1] AI can also adjust parameters for entire synthesis flows, learning from old designs to recommend the best tool settings for new ones.[54]

Physical design

Physical design turns the list of electronic gates into a physical layout. This layout defines exactly where each component goes and how they are all connected. AI is used a lot in this area to improve PPA metrics.[54][55]

Placement

Placement is the task of finding the best spots for large circuit blocks, called macros, and smaller standard cells. Reinforcement learning has been famously used for macro placement, where an agent learns how to position blocks to reduce wire length and improve timing, as shown by Google's research[35] and the GoodFloorplan method.[36] Supervised learning models, including CNNs that treat the layout like a picture, are used to predict routing problems like design rule violations (DRVs) (e.g., RouteNet[25]) or timing after routing directly from the placement information.[54][1] RL Sizer uses deep RL to optimize the size of gates during placement to meet timing goals.[38]
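The wire length objective that these placers minimize is usually approximated by half-perimeter wirelength (HPWL), which can be computed as below; the pin coordinates are invented for the example.

```python
# Half-perimeter wirelength (HPWL), the standard proxy objective that
# placement optimizers, learned or classical, try to minimize; the pin
# coordinates below are invented.
def hpwl(pins):
    """Bounding-box half-perimeter of one net's pin locations."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# One net connecting three hypothetical cell pins:
net = [(0.0, 0.0), (3.0, 1.0), (1.0, 4.0)]
total = hpwl(net)   # bounding box spans 3 in x and 4 in y
```

Summing HPWL over all nets gives a fast, differentiable-enough surrogate for routed wire length, which is why both RL agents and analytical placers optimize it.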

Clock network synthesis

AI helps in Clock Tree Synthesis (CTS) by optimizing the network that distributes the clock signal. GANs, sometimes used with RL (e.g., GAN CTS), are used to predict and improve clock tree structures. The goal is to reduce clock skew and power use.[54][55][56]

Routing

Routing creates the physical wire connections. AI models predict routing traffic jams using methods like GANs to help guide the routing algorithms.[54][1] RL is also used to optimize the order in which wires are routed to reduce errors.[54]

Power/ground network synthesis and analysis

AI models, including CNNs and tree based methods, help in designing and analyzing the Power Delivery Network (PDN). They do this by quickly estimating static and dynamic IR drop. This guides the creation of the PDN and reduces the number of design cycles.[54][55][1]

Verification and validation

Verification and validation are very important to make sure a chip works correctly. AI is used to make these processes, which often take a long time, more efficient.[54] LLMs are used to turn plain language requirements into formal SystemVerilog assertions (SVAs) (e.g., AssertLLM)[1] and to help with security verification.[1] Methods for predicting timing analysis results based on circuit structure, like those first developed by Kahng et al.[22] and improved with transformer models like TF Predictor,[23] make the final timing checks much faster. DeepGate2 provides a way to learn circuit representations that understand function, which can help with verification tasks.[26]

Analog and mixed signal design

AI methods are being used more often in the complex field of analog and mixed signal circuit design. They are helping to choose the circuit structure, determine the size of components, and automate the layout.[54][1] AI models, including Variational Autoencoders (VAEs) and RL, help to explore and create new circuit structures.[1] For instance, graph embeddings can be used to optimize the structure of operational amplifiers.[30] Machine learning substitute models give fast performance estimates for component sizing, while RL directly optimizes the component parameters.[1][54]

Test, manufacturing and yield optimization

AI helps in the stages after the silicon chip is made, including testing, design for manufacturability (DFM), and improving the production yield.[54] In lithography, AI models like CNNs and GANs are used for SRAF generation (e.g., GAN SRAF[52]) and OPC (e.g., GAN OPC[53]) to improve how well the chip prints. AI also predicts lithography problems, known as hotspots, from the layout, as shown by Yang et al.[24] For tuning the broader design flow for manufacturing, FIST uses tree based methods to select parameters.[29]

Hardware-software co-design

Hardware-Software Co-design is about optimizing the hardware and software parts of a system at the same time. LLMs are starting to be used as tools to help with this. For example, they help in designing Compute in Memory (CiM) DNN accelerators, where how the software is arranged and how the hardware is set up are closely connected, as explored by Yan et al.[48][56] LLMs can also create architectural plans (e.g., SpecLLM[49]) or HDL code using benchmarks like VerilogEval[43] and RTLLM,[44] or with tools like AutoChip.[45] Additionally, agents based on LLMs like ChatEDA make it easier to interact with EDA tools for different design stages.[46] For netlist representation, work by Wang et al. focuses on learning embeddings that understand the function of the circuit.[31]

Industry adoption and ecosystem

The use of artificial intelligence in electronic design automation is a widespread trend. Many different players in the semiconductor world are helping to create and use these technologies. This includes companies that sell EDA tools and develop software with AI, semiconductor design companies and foundries that use these tools to make and manufacture chips, and very large technology companies that might design their own chips using AI driven methods.

EDA tool vendors

Major EDA companies are leading the way in adding AI to their tool suites to handle growing design complexity. Their strategies often involve creating complete AI platforms. These platforms use machine learning in many different steps of the design and manufacturing process.

Synopsys provides a set of tools in its Synopsys.ai initiative. This initiative aims to improve design metrics and productivity from the system architecture stage all the way to manufacturing.[12] A main component uses reinforcement learning to improve power, performance, and area (PPA) during the process that goes from the initial design description to the final manufacturing file (DSO.ai). Other parts use AI to speed up verification, optimize test pattern generation for manufacturing, and improve the design of analog circuits in different conditions.[12]

Cadence has created its Cadence.AI platform. The company says it uses "agentic AI workflows" to cut down on the design engineering time for complex SoCs.[57] Key platforms use AI to optimize the digital design flow (Cadence Cerebrus), improve verification productivity (Verisium), design custom and analog ICs (Virtuoso Studio), and analyze systems at a high level (Optimality Intelligent System Explorer).[57][58]

Siemens EDA directs its AI strategy at improving its current software engines and workflows to give engineers better design insights.[59] AI is used inside its Calibre platform to speed up manufacturing tasks like Design for Manufacturability (DFM), Resolution Enhancement Techniques (RET), and Optical Proximity Correction (OPC). AI is also used in its Questa suite to close coverage faster in digital verification and in its Solido suite to lessen the characterization work for analog designs.[59]

Semiconductor design and FPGA companies

Companies that design semiconductor chips such as FPGAs and adaptive SoCs are major users and developers of AI-enhanced EDA methods that make their design processes more efficient.

AMD offers a suite of tools for its adaptive hardware that incorporates AI techniques. The AMD Vitis platform is an environment for developing designs on its SoCs and FPGAs. It includes a component, Vitis AI, which provides libraries and pre-trained models to speed up AI inference.[60][61] The related Vivado Design Suite uses machine learning methods to improve quality of results (QoR), helping with timing closure and power estimation for the hardware design.[60]

NVIDIA maintains a dedicated Design Automation Research group to investigate new EDA methods.[62] The group focuses on GPU-accelerated EDA tools and on applying AI methods such as Bayesian optimization and reinforcement learning to EDA problems. One example of its research is AutoDMP, a tool that automates macro placement using multi-objective Bayesian optimization and a GPU-accelerated placer.[63]
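The multi-objective search idea behind a tool like AutoDMP can be illustrated with a minimal sketch. AutoDMP itself uses Bayesian optimization and a GPU-accelerated placer; here, plain random sampling over two made-up objective functions stands in for both, and only the Pareto-filtering step is shown faithfully.

```python
import random

# Illustrative sketch of multi-objective parameter search: sample
# placement-tool parameters, score each trial on competing objectives
# (toy wirelength and congestion models, not real metrics), and keep
# the Pareto-optimal trials.

def evaluate(params):
    density, halo = params
    wirelength = (density - 0.7) ** 2 + 0.1 * halo        # toy objective 1
    congestion = (0.9 - density) ** 2 + 0.2 / (halo + 1)  # toy objective 2
    return wirelength, congestion

def dominates(a, b):
    """True if objective vector a is no worse than b everywhere and
    strictly better somewhere (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(trials):
    """Keep only trials not dominated by any other trial."""
    return [t for t in trials
            if not any(dominates(u[1], t[1]) for u in trials)]

random.seed(0)
trials = []
for _ in range(50):
    params = (random.uniform(0.5, 1.0), random.uniform(0.0, 5.0))
    trials.append((params, evaluate(params)))

front = pareto_front(trials)
print(len(front) >= 1)  # True: the Pareto front is never empty
```

A real flow would replace the random sampler with a surrogate-model-driven Bayesian optimizer and the toy objectives with measurements from an actual placer run.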

Cloud providers and hyperscalers

Large cloud service providers and hyperscale companies have two main roles. They provide the powerful and flexible computing power needed to run difficult AI and EDA tasks, and many also design their own custom silicon, often using AI in their internal design processes.

Google Cloud, for example, provides a platform that supports EDA workloads with flexible computing resources, special storage solutions, and high speed networking.[64] At the same time, Google's internal chip design teams have contributed to EDA research, especially by using reinforcement learning for physical design tasks like chip floorplanning.[35]

IBM provides infrastructure on its cloud platform that is focused on EDA, with a strong emphasis on secure environments for foundries and high performance computing.[65] Their solutions include high performance parallel storage and tools for managing large scale jobs. These are designed to help design houses manage the complex simulation and modeling tasks that are part of modern EDA.[65]

Limitations and challenges

Data quality and availability

A main challenge for using AI effectively in EDA is the availability and quality of data.[54][1] Machine learning models, especially deep learning ones, usually need large, varied, and high quality datasets to be trained so that they generalize to new designs they have not seen before.[1] However, much of the detailed design data in the semiconductor industry is proprietary and highly sensitive, which makes companies unwilling to share it.[54][1] This lack of public, detailed examples hampers academic research and the development of broadly applicable models. Even when data is available, it may be noisy, incomplete, or unbalanced; for instance, having many more examples of successful designs than problematic ones can lead to biased or poorly performing AI models.[55] The effort and cost of collecting, organizing, and correctly labeling large EDA datasets also create significant obstacles.[1] Solving these data related problems is key to advancing AI in EDA. Possible solutions include robust data augmentation methods, realistic synthetic data generation, and community platforms for secure data sharing and benchmarking.[1]
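One common mitigation for the imbalance problem described above is random oversampling of the minority class, so a classifier sees rare "hotspot" layout clips as often as common "non-hotspot" ones. The sketch below uses purely synthetic stand-in data; real hotspot detectors work on image-like layout clips.

```python
import random

# Minimal sketch of random oversampling for an imbalanced dataset,
# such as lithography hotspot detection, where defective examples are
# far rarer than clean ones. The samples here are synthetic placeholders.

random.seed(1)
non_hotspots = [("clip", 0) for _ in range(950)]  # majority class (label 0)
hotspots = [("clip", 1) for _ in range(50)]       # rare minority class (label 1)

def oversample(minority, target_size):
    """Duplicate minority samples (with replacement) up to target_size."""
    return [random.choice(minority) for _ in range(target_size)]

balanced = non_hotspots + oversample(hotspots, len(non_hotspots))
labels = [y for _, y in balanced]
print(labels.count(0), labels.count(1))  # 950 950
```

Oversampling is only one option; class-weighted losses and synthetic data generation (as with the GAN-based approaches discussed earlier in the article) address the same issue.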

Integration and compute cost

Deploying AI solutions in the EDA field poses major challenges, including integrating the AI into the complex tool chains that already exist and handling the high cost of computing power.[54][55] Adding new AI models and algorithms into established EDA workflows, which often consist of many interconnected tools and proprietary formats, takes substantial engineering work and can raise interoperability problems.[54][1] Also, training and running complex AI models, especially deep learning ones, requires substantial computing resources: powerful GPUs or dedicated AI accelerators, large amounts of memory, and long processing times.[54][1] These needs raise the cost of both developing and deploying the models.[55] Scaling AI methods to the ever growing size and complexity of modern chip designs, while staying efficient and within reasonable memory budgets, remains an ongoing challenge.[55][1]

Intellectual property and confidentiality

The use of AI in EDA, especially with sensitive design data, raises serious concerns about protecting intellectual property (IP) and keeping data confidential.[54][56] Chip designs are highly valuable IP, and exposing this secret information to AI models carries risk, particularly when the models are built by third parties or run on cloud platforms.[54] It is critical to ensure that design data used for training or decision making is not compromised, leaked, or used to inadvertently reveal proprietary knowledge.[56] While strategies such as fine tuning open source models on private data are being explored to reduce some privacy risks,[56] secure data handling procedures, strong access controls, and clear data governance policies remain essential. The reluctance to share detailed design data because of these IP and privacy concerns also slows collaborative research and the development of better AI models for the EDA industry.[54][56]

Human oversight and interpretability

Even with the push for more automation, the role of human designers remains vital, and making AI models interpretable continues to be a challenge.[54][1] Many advanced AI models, especially deep learning systems, can act as "black boxes," making it hard for engineers to understand why they make certain predictions or design choices.[54] This lack of transparency can hinder adoption, as designers may be reluctant to trust solutions whose decision making process is opaque, especially in critical applications or when debugging unexpected problems.[55] While AI can automate many tasks, human knowledge is still essential: it is needed to set design goals, check the results from AI, handle novel or unusual situations where AI may fail, and provide the specialized expertise that often guides AI development.[54][1] Using AI in EDA effectively therefore requires human engineers and intelligent tools to work in tandem, and designers to learn new skills for working with and supervising AI systems.[1]

References

  1. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z Chen, Lei; Chen, Yiqi; Chu, Zhufei; Fang, Wenji; Ho, Tsung-Yi; Huang, Ru; Huang, Yu; Khan, Sadaf; Li, Min (1 May 2024), The Dawn of AI-Native EDA: Opportunities and Challenges of Large Circuit Models, arXiv, doi:10.48550/arXiv.2403.07257, arXiv:2403.07257, retrieved 14 June 2025
  2. ^ a b Parker, A.C.; Hayati, S. (June 1987). "Automating the VLSI design process using expert systems and silicon compilation". Proceedings of the IEEE. 75 (6): 777–785. doi:10.1109/PROC.1987.13799. ISSN 1558-2256.
  3. ^ Bushnell, M.L.; Director, S.W. (June 1986). "VLSI CAD Tool Integration Using the Ulysses Environment". 23rd ACM/IEEE Design Automation Conference: 55–61. doi:10.1109/DAC.1986.1586068.
  4. ^ Granacki, J.; Knapp, D.; Parker, A. (June 1985). "The ADAM Advanced Design Automation System: Overview, Planner and Natural Language Interface". 22nd ACM/IEEE Design Automation Conference: 727–730. doi:10.1109/DAC.1985.1586023.
  5. ^ a b Kirk, R. S. (1985). The impact of AI technology on VLSI design. Managing Requirements Knowledge, International Workshop on, CHICAGO. p. 125. doi:10.1109/AFIPS.1985.63.
  6. ^ a b Ajayi, T.; Blaauw, D. (January 2019). "OpenROAD: Toward a Self-Driving, Open-Source Digital Layout Implementation Tool Chain". Proceedings of Government Microcircuit Applications and Critical Technology Conference.
  7. ^ a b Mirhoseini, Azalia; Goldie, Anna; Yazgan, Mustafa; Jiang, Joe Wenjie; Songhori, Ebrahim; Wang, Shen; Lee, Young-Joon; Johnson, Eric; Pathak, Omkar; Nova, Azade; Pak, Jiwoo; Tong, Andy; Srinivasa, Kavya; Hang, William; Tuncer, Emre (June 2021). "A graph placement methodology for fast chip design". Nature. 594 (7862): 207–212. doi:10.1038/s41586-021-03544-w. ISSN 1476-4687.
  8. ^ "Synopsys Advances State-of-the-Art in Electronic Design with Revolutionary Artificial Intelligence Technology". news.synopsys.com. Retrieved 14 June 2025.
  9. ^ a b "DSO.ai: AI-Driven Design Applications | Synopsys AI". www.synopsys.com. Retrieved 14 June 2025.
  10. ^ Ward-Foxton, Sally (10 February 2023). "AI-Powered Chip Design Goes Mainstream". EE Times. Retrieved 14 June 2025.
  11. ^ Freund, Karl. "Synopsys.ai: New AI Solutions Across The Entire Chip Development Workflow". Forbes. Retrieved 14 June 2025.
  12. ^ a b c "Synopsys.ai – Full Stack, AI-Driven EDA Suite" (PDF). Synopsys. Retrieved 7 June 2025.
  13. ^ a b Yu, Dan (1 June 2023). "Welcome To EDA 4.0 And The AI-Driven Revolution". Semiconductor Engineering. Retrieved 14 June 2025.
  14. ^ a b "EDA 4.0 And The AI-Driven Revolution" (PDF). unipv.news (reporting on a Siemens presentation). 29 November 2023. Retrieved 7 June 2025.
  15. ^ Dahad, Nitin (10 November 2022). "How AI-based EDA will enable, not replace the engineer". Embedded. Retrieved 14 June 2025.
  16. ^ Belcic, Ivan; Stryker, Cole (28 December 2024). "What Is Supervised Learning? | IBM". www.ibm.com. Retrieved 14 June 2025.
  17. ^ "What is Supervised Learning?". Google Cloud. Retrieved 14 June 2025.
  18. ^ "A guide to machine learning algorithms and their applications". www.sas.com. Retrieved 14 June 2025.
  19. ^ "Supervised Learning". www.mathworks.com. Archived from the original on 12 February 2025. Retrieved 14 June 2025.
  20. ^ a b Mendis, Charith; Renda, Alex; Amarasinghe, Saman; Carbin, Michael (21 August 2018). "Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks". arXiv.org. Retrieved 14 June 2025.
  21. ^ a b Zhou, Yuan; Ren, Haoxing; Zhang, Yanqing; Keller, Ben; Khailany, Brucek; Zhang, Zhiru (June 2019). "PRIMAL: Power Inference using Machine Learning". 2019 56th ACM/IEEE Design Automation Conference (DAC): 1–6.
  22. ^ a b Kahng, Andrew B.; Mallappa, Uday; Saul, Lawrence (October 2018). "Using Machine Learning to Predict Path-Based Slack from Graph-Based Timing Analysis". 2018 IEEE 36th International Conference on Computer Design (ICCD): 603–612. doi:10.1109/ICCD.2018.00096.
  23. ^ a b Cao, Peng; He, Guoqing; Yang, Tai (July 2023). "TF-Predictor: Transformer-Based Prerouting Path Delay Prediction Framework". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 42 (7): 2227–2237. doi:10.1109/TCAD.2022.3216752. ISSN 1937-4151.
  24. ^ a b Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei. "Imbalance aware lithography hotspot detection: a deep learning approach". SPIE Digital Library. doi:10.1117/1.JMM.16.3.033504.
  25. ^ a b Xie, Zhiyao; Huang, Yu-Hung; Fang, Guan-Qi; Ren, Haoxing; Fang, Shao-Yun; Chen, Yiran; Hu, Jiang (November 2018). "RouteNet: Routability prediction for Mixed-Size Designs Using Convolutional Neural Network". 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD): 1–8. doi:10.1145/3240765.3240843.
  26. ^ a b Shi, Zhengyuan; Pan, Hongyang; Khan, Sadaf; Li, Min; Liu, Yi; Huang, Junhua; Zhen, Hui-Ling; Yuan, Mingxuan; Chu, Zhufei; Xu, Qiang (October 2023). "DeepGate2: Functionality-Aware Circuit Representation Learning". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–9. doi:10.1109/ICCAD57390.2023.10323798.
  27. ^ "What Is Unsupervised Learning? | IBM". www.ibm.com. 23 September 2021. Retrieved 14 June 2025.
  28. ^ Yasar, Kinza; Gillis, Alexander S.; Pratt, Mary K. "What is Unsupervised Learning? | Definition from TechTarget". Search Enterprise AI. Retrieved 14 June 2025.
  29. ^ a b Xie, Zhiyao; Fang, Guan-Qi; Huang, Yu-Hung; Ren, Haoxing; Zhang, Yanqing; Khailany, Brucek; Fang, Shao-Yun; Hu, Jiang; Chen, Yiran; Barboza, Erick Carvajal (January 2020). "FIST: A Feature-Importance Sampling and Tree-Based Method for Automatic Design Flow Parameter Tuning". 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC): 19–25. doi:10.1109/ASP-DAC47756.2020.9045201.
  30. ^ a b Lu, Jialin; Lei, Liangbo; Yang, Fan; Shang, Li; Zeng, Xuan (March 2022). "Topology Optimization of Operational Amplifier in Continuous Space via Graph Embedding". 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE): 142–147. doi:10.23919/DATE54114.2022.9774676.
  31. ^ a b Wang, Ziyi; Bai, Chen; He, Zhuolun; Zhang, Guangliang; Xu, Qiang; Ho, Tsung-Yi; Yu, Bei; Huang, Yu (23 August 2022). "Functionality matters in netlist representation learning". Proceedings of the 59th ACM/IEEE Design Automation Conference. DAC '22. New York, NY, USA: Association for Computing Machinery: 61–66. doi:10.1145/3489517.3530410. ISBN 978-1-4503-9142-9.
  32. ^ "Reinforcement Learning". GeeksforGeeks. 25 April 2018. Retrieved 14 June 2025.
  33. ^ "Deep RL Bootcamp - Lectures". sites.google.com. Retrieved 14 June 2025.
  34. ^ "Synopsys.ai Unveiled as Industry's First Full-Stack, AI-Driven EDA Suite for Chipmakers". news.synopsys.com. Retrieved 14 June 2025.
  35. ^ a b c Mirhoseini, Azalia; Goldie, Anna; Yazgan, Mustafa; Jiang, Joe Wenjie; Songhori, Ebrahim; Wang, Shen; Lee, Young-Joon; Johnson, Eric; Pathak, Omkar; Nova, Azade; Pak, Jiwoo; Tong, Andy; Srinivasa, Kavya; Hang, William; Tuncer, Emre (June 2021). "A graph placement methodology for fast chip design". Nature. 594 (7862): 207–212. doi:10.1038/s41586-021-03544-w. ISSN 1476-4687.
  36. ^ a b Xu, Qi; Geng, Hao; Chen, Song; Yuan, Bo; Zhuo, Cheng; Kang, Yi; Wen, Xiaoqing (October 2022). "GoodFloorplan: Graph Convolutional Network and Reinforcement Learning-Based Floorplanning". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 41 (10): 3492–3502. doi:10.1109/TCAD.2021.3131550. ISSN 1937-4151.
  37. ^ a b Pei, Zehua; Liu, Fangzhou; He, Zhuolun; Chen, Guojin; Zheng, Haisheng; Zhu, Keren; Yu, Bei (October 2023). "AlphaSyn: Logic Synthesis Optimization with Efficient Monte Carlo Tree Search". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–9. doi:10.1109/ICCAD57390.2023.10323856.
  38. ^ a b Lu, Yi-Chen; Nath, Siddhartha; Khandelwal, Vishal; Lim, Sung Kyu (December 2021). "RL-Sizer: VLSI Gate Sizing for Timing Optimization using Deep Reinforcement Learning". 2021 58th ACM/IEEE Design Automation Conference (DAC): 733–738. doi:10.1109/DAC18074.2021.9586138.
  39. ^ "What is ChatGPT, DALL-E, and generative AI? | McKinsey". www.mckinsey.com. Retrieved 14 June 2025.
  40. ^ Routley, Nick. "What is generative AI? An AI explains". World Economic Forum. Archived from the original on 12 May 2025. Retrieved 14 June 2025.
  41. ^ "What is LLM? - Large Language Models Explained - AWS". Amazon Web Services, Inc. Retrieved 14 June 2025.
  42. ^ "What are Large Language Models? | NVIDIA Glossary". NVIDIA. Retrieved 14 June 2025.
  43. ^ a b Liu, Mingjie; Pinckney, Nathaniel; Khailany, Brucek; Ren, Haoxing (October 2023). "Invited Paper: VerilogEval: Evaluating Large Language Models for Verilog Code Generation". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–8. doi:10.1109/ICCAD57390.2023.10323812.
  44. ^ a b Lu, Yao; Liu, Shang; Zhang, Qijun; Xie, Zhiyao (January 2024). "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model". 2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC): 722–727. doi:10.1109/ASP-DAC58780.2024.10473904.
  45. ^ a b Thakur, Shailja; Blocklove, Jason; Pearce, Hammond; Tan, Benjamin; Garg, Siddharth; Karri, Ramesh (4 June 2024), AutoChip: Automating HDL Generation Using LLM Feedback, arXiv, doi:10.48550/arXiv.2311.04887, arXiv:2311.04887, retrieved 14 June 2025
  46. ^ a b Wu, Haoyuan; He, Zhuolun; Zhang, Xinyun; Yao, Xufeng; Zheng, Su; Zheng, Haisheng; Yu, Bei (October 2024). "ChatEDA: A Large Language Model Powered Autonomous Agent for EDA". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 43 (10): 3184–3197. doi:10.1109/TCAD.2024.3383347. ISSN 1937-4151.
  47. ^ a b Fu, Yonggan; Zhang, Yongan; Yu, Zhongzhi; Li, Sixu; Ye, Zhifan; Li, Chaojian; Wan, Cheng; Lin, Yingyan Celine (October 2023). "GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models". 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD): 1–9. doi:10.1109/ICCAD57390.2023.10323953.
  48. ^ a b Yan, Zheyu; Qin, Yifan; Hu, Xiaobo Sharon; Shi, Yiyu (September 2023). "On the Viability of Using LLMs for SW/HW Co-Design: An Example in Designing CiM DNN Accelerators". 2023 IEEE 36th International System-on-Chip Conference (SOCC): 1–6. doi:10.1109/SOCC58585.2023.10256783.
  49. ^ a b Li, Mengming; Fang, Wenji; Zhang, Qijun; Xie, Zhiyao (24 January 2024). "SpecLLM: Exploring Generation and Review of VLSI Design Specification with Large Language Model". arXiv.org. Retrieved 14 June 2025.
  50. ^ Robinson, Scott; Yasar, Kinza; Lewis, Sarah. "What is a Generative Adversarial Network (GAN)? | Definition from TechTarget". Search Enterprise AI. Retrieved 14 June 2025.
  51. ^ "What Are Generative Adversarial Networks (GANs)?". Amazon Web Services. Retrieved 7 June 2025.
  52. ^ a b Alawieh, Mohamed Baker; Lin, Yibo; Zhang, Zaiwei; Li, Meng; Huang, Qixing; Pan, David Z. (February 2021). "GAN-SRAF: Subresolution Assist Feature Generation Using Generative Adversarial Networks". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 40 (2): 373–385. doi:10.1109/TCAD.2020.2995338. ISSN 1937-4151.
  53. ^ a b Yang, Haoyu; Li, Shuhe; Ma, Yuzhe; Yu, Bei; Young, Evangeline F. Y. (24 June 2018). "GAN-OPC: mask optimization with lithography-guided generative adversarial nets". Proceedings of the 55th Annual Design Automation Conference. DAC '18. New York, NY, USA: Association for Computing Machinery: 1–6. doi:10.1145/3195970.3196056. ISBN 978-1-4503-5700-5.
  54. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad Rapp, Martin; Amrouch, Hussam; Lin, Yibo; Yu, Bei; Pan, David Z.; Wolf, Marilyn; Henkel, Jörg (October 2022). "MLCAD: A Survey of Research in Machine Learning for CAD Keynote Paper". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 41 (10): 3162–3181. doi:10.1109/TCAD.2021.3124762. ISSN 1937-4151.
  55. ^ a b c d e f g h i j k Gubbi, Kevin Immanuel; Beheshti-Shirazi, Sayed Aresh; Sheaves, Tyler; Salehi, Soheil; PD, Sai Manoj; Rafatirad, Setareh; Sasan, Avesta; Homayoun, Houman (6 June 2022). "Survey of Machine Learning for Electronic Design Automation". Proceedings of the Great Lakes Symposium on VLSI 2022. GLSVLSI '22. New York, NY, USA: Association for Computing Machinery: 513–518. doi:10.1145/3526241.3530834. ISBN 978-1-4503-9322-5.
  56. ^ a b c d e f Chen, L.; Chen, Y.; Chu, Z.; Fang, W.; Ho, T. Y.; Huang, R. (2024). "The Dawn of AI-Native EDA: Opportunities and Challenges of Large Circuit Models". arXiv:2403.07257.
  57. ^ a b "Cadence.AI: Transforming Chip Design with Agentic AI Workflows". Cadence Design Systems. Retrieved 7 June 2025.
  58. ^ "What is Electronic Design Automation (EDA)?". Cadence Design Systems. Retrieved 7 June 2025.
  59. ^ a b "A new era of EDA powered by AI". Siemens Digital Industries Software. Retrieved 7 June 2025.
  60. ^ a b "AMD Vivado Design Suite". AMD. Retrieved 7 June 2025.
  61. ^ "Vitis AI Developer Hub". AMD. Retrieved 7 June 2025.
  62. ^ "Design Automation Research Group". NVIDIA Research. Retrieved 7 June 2025.
  63. ^ Agnesina, Anthony; Ren, Mark (27 March 2023). "AutoDMP Optimizes Macro Placement for Chip Design with AI and GPUs". NVIDIA Developer Blog. Retrieved 7 June 2025.
  64. ^ "Scaling Your Chip Design Flow" (PDF). Google Cloud. Retrieved 7 June 2025.
  65. ^ a b "Leveraging IBM Cloud for electronic design automation (EDA) workloads". IBM. 31 October 2023. Retrieved 7 June 2025.


External links

The OpenROAD Project – Official website for the open-source autonomous layout generator.

Design Automation Conference (DAC) – Premier academic and industry conference for EDA.

Category:Electronic design automation
Category:Applications of artificial intelligence
Category:Semiconductor device fabrication
Category:Integrated circuits