Q 1: What are the goals of synthesis?
Synthesis has three main goals, all without changing the functionality:
Reduce the area (lower chip cost)
Increase the performance
Reduce the power
Q 2: What are the technology-dependent inputs in PNR?
There are three main technology-dependent inputs:
Physical libraries --> format is .lef --> given by vendors
Technology file --> format is .tf --> given by the fabrication (foundry) team
TLU+ file --> format is .tlup --> given by the fabrication (foundry) team
Q 3: What are the design-dependent inputs in PNR?
There are six main design-dependent inputs:
Logical libraries --> format is .lib --> given by vendors
Netlist --> format is .v --> given by the synthesis team
Synopsys Design Constraints --> format is .sdc --> given by the synthesis team
MMMC --> format is .tcl --> given by the top level
UPF --> format is .upf --> given by the top level
SCAN DEF --> format is .def --> given by the synthesis team
Q 4: What are the types of cells in PNR?
There are four main types of PNR cells:
Std cells
Hard macros
IO pads
Physical cells (end cap, well tap, tie cells, decap cells, filler cells)
Q 5: What are the types of IO pads?
Signal pad
Power / ground pads
Filler pads
Corner pads
Bond pads
Q 6: What is the purpose of IO pads?
Answer: The purpose of an IO pad is electrostatic discharge (ESD) protection and level shifting.
Electrostatic discharge (ESD) is a sudden flow of static electricity between two electrically charged objects for a very short duration of time.
A level shifter is an interfacing circuit that can interface the low core voltage to the higher input-output voltage.
Q 7: What is the use of a bond pad?
Answer: A bond pad is used to connect the circuit on a die to a pin on the packaged chip.
Q 8: How does the tool differentiate between std cells, IO pads, and macros?
Answer: The tool identifies std cells, IO pads, and macros by their class attribute:
Std cells --> class CORE
IO pads --> class PAD
Hard macros --> class BLOCK
Q 9: What is the difference between a soft macro and a hard macro?
Answer:
Hard macro:
Examples are SRAM memories and analog macros such as PLLs, digital-to-analog converters, analog-to-digital converters, etc.
The height and width cannot be changed, no optimization is possible, and we cannot see the internal logic.
Physical information is available.
Soft macro:
A soft macro is available as either a netlist or RTL.
We can change the height and width (the dimensions are flexible).
No physical information is available.
Q 10: How does the tool calculate the area of rectilinear blocks?
Answer: The tool calculates a rectangle's area from its lower-left and upper-right coordinates.
For a rectilinear block, the tool first decomposes the shape into smaller rectangles, then applies the same lower-left / upper-right strategy to each rectangle and sums the areas (a small sketch follows below).
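A minimal sketch of this idea in Python, assuming the rectilinear block has already been split into non-overlapping axis-aligned rectangles; the L-shaped example and its coordinates are purely illustrative, not from any real macro.

```python
# Area of a rectilinear block decomposed into non-overlapping rectangles,
# each given by its lower-left (llx, lly) and upper-right (urx, ury) corners.

def rect_area(llx, lly, urx, ury):
    """Area of one rectangle from its lower-left and upper-right corners."""
    return (urx - llx) * (ury - lly)

# Hypothetical L-shaped macro split into two rectangles (units: um)
pieces = [
    (0.0, 0.0, 50.0, 20.0),   # bottom bar
    (0.0, 20.0, 20.0, 60.0),  # left bar
]

total = sum(rect_area(*p) for p in pieces)
print(f"Rectilinear block area = {total:.0f} um^2")  # 50*20 + 20*40 = 1800
```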
Q 11: Can we rotate a macro by 90 or 270 degrees?
Above 90 nm we can rotate the macro, but at lower nodes we cannot, because of the poly orientation restriction.
Q 12: Assume you have three flavors of a block with 7, 9, and 12 metal layers in 28 nm technology. Which one has higher performance, and why?
The 12-metal-layer design has more metal width available, so the power plan and clock tree have less IR drop compared to the 7- and 9-metal-layer designs; hence the 12-layer design gives higher performance.
Q 13: Which input files contain resistance and capacitance values?
The interconnect technology format (ITF) file, or its compiled TLU+ file, and the technology file. The technology file also contains resistance and capacitance values, but they are not as accurate as those in the ITF file.
Q 14: We have different RC corners, right? Why do we have different RC corners?
Metal etching has small variations. At older (higher) technology nodes the impact is small, but at lower nodes it matters a lot, so the ITF file accounts for etching variation through the Cbest, Cworst, RCbest, RCworst, and typical RC corners.
Q 15: How do multi-cut vias increase performance and yield?
A multi-cut via has lower resistance because the parallel via resistances combine to a smaller value, so IR drop is reduced and performance increases (see the sketch below).
If one of the vias fails, the connection is still present through the other via(s); with a single-cut via, a via failure disconnects the net and causes chip failure, so multi-cut vias also improve yield.
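A minimal sketch of the parallel-resistance effect; the 10-ohm per-cut value is an illustrative assumption, not a library number.

```python
# Effective resistance of a multi-cut via modeled as N identical
# via resistances in parallel.

def parallel_resistance(r_per_cut, n_cuts):
    """N equal resistors in parallel: R_eff = R / N."""
    return r_per_cut / n_cuts

r_cut = 10.0  # ohms per via cut (assumed)
for n in (1, 2, 4):
    print(f"{n} cut(s): R_eff = {parallel_resistance(r_cut, n):.2f} ohm")
# 1 cut -> 10.00 ohm, 2 cuts -> 5.00 ohm, 4 cuts -> 2.50 ohm
```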
Q 16: At which stage is a normal flop converted into a scan flop?
In the synthesis stage we convert normal flops into scan flops when DFT is set up.
Command: compile -scan (in the Design Compiler tool)
Q 17: What is the difference between a normal flop and a scan flop?
A scan flop has two extra inputs, scan input (SI) and scan enable (SE); only through these can we apply the DFT test vectors and test the design. A scan flop is essentially a normal flop plus a mux.
Note: refer to the image below for more understanding.
Q 18: What is a scan chain and where is it used?
In a scan chain, each flop's Q pin is connected directly to the next flop's SI pin; the chain is enabled by the scan enable (SE) pin. It is used for DFT testing.
Note: refer to the image below for more understanding.
Q 19: What are the formulas for core, die, and std cell utilization?
Core utilization = (std cell area + macro area) / total core area
Die utilization = (std cell area + macro area + IO area) / total die area
Std cell utilization = std cell area / (core area - macro area)
A worked example follows below.
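A worked example of these formulas in Python; the areas are assumed illustrative values, not numbers from any real design.

```python
# Utilization formulas from the answer above, with assumed areas in um^2.

std_cell_area = 300_000.0
macro_area    = 200_000.0
io_area       = 100_000.0
core_area     = 800_000.0
die_area      = 1_200_000.0

core_util = (std_cell_area + macro_area) / core_area
die_util  = (std_cell_area + macro_area + io_area) / die_area
std_util  = std_cell_area / (core_area - macro_area)

print(f"Core utilization     = {core_util:.2%}")  # 62.50%
print(f"Die utilization      = {die_util:.2%}")   # 50.00%
print(f"Std cell utilization = {std_util:.2%}")   # 50.00%
```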
Q 20: What is the formula for channel spacing?
Channel spacing = (number of pins x pitch of the higher metal layer) / (number of available routing layers / 2)
A worked example follows below.
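A worked example of the channel spacing formula; the pin count, pitch, and layer count are illustrative assumptions, not from any real technology.

```python
# Macro channel spacing estimate from the formula above (assumed numbers).

num_pins     = 200      # pins crossing the channel
pitch_um     = 0.4      # pitch of the higher metal layer, in um
avail_layers = 6        # routing layers available in the channel

spacing = (num_pins * pitch_um) / (avail_layers / 2)
print(f"Required channel spacing = {spacing:.1f} um")  # 200*0.4/3 = 26.7 um
```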
Q 21: How is a cell's drive strength increased?
The drive strength of a cell is increased by fingering.
Let us understand fingering by comparing it with a real-life scenario. Say you have a tap that draws out water. How can you make the tap pull out more water? By increasing the diameter of the tap: the larger the diameter, the more water flows out.
Increasing the diameter of the tap is analogous to increasing the width of the transistor so that it delivers more current.
Now add the constraint that you cannot increase the diameter of the pipe. What do you do to get more water? Simple: put more pipes in parallel to increase the flow.
This is exactly what we do in fingering. We keep the transistor at a fixed size, so the height remains constant, but we put more transistors in parallel to deliver more current to the load.
In this case we join the sources and drains in a chained fashion so that they act like taps in parallel. In general, when we say 2x, 4x, or 8x drive strength buffers, we mean they can deliver 2, 4, or 8 times more current than a normal 1x buffer by employing correspondingly more fingers.
Another advantage of fingering is that the resistance drops drastically. If each finger has a resistance R (as in the figure), then with fingering all these resistances come in parallel, so the resistance reduces by a factor of N. This is another remarkable advantage of fingering.
Note: for more information refer to the image below.
Q 22: How do fabrication teams create the different Vt cells such as HVT, LVT, and SVT?
There are two techniques used to create different Vt cells:
Gate oxide thickness: if the gate oxide thickness is increased, the cell acts like an HVT cell; if it is decreased, the cell acts like an LVT cell. Based on the gate oxide thickness they create different Vt cells.
Doping concentration: different Vt cells are also created by changing the channel (Vt-adjust) doping. A higher doping concentration raises the cell's Vt (acts like HVT), while a lower doping concentration lowers the Vt (acts like LVT).
Q 23: What is clock gating, why do we use it, and what are the types of clock gating?
Clock gating is a power-saving technique in semiconductor microelectronics that enables switching off circuits. Many electronic devices use clock gating to turn off buses, controllers, bridges, and parts of processors in order to reduce dynamic power consumption.
There are two clock gating styles:
1) Latch-based clock gating
2) Latch-free clock gating
Latch-free (AND/OR-based) clock gating: The latch-free clock gating style uses a simple AND or OR gate (depending on the edge on which the flip-flops are triggered). If the enable signal goes inactive in the middle of the clock pulse, or toggles multiple times, the gated clock output can either terminate prematurely or generate multiple clock pulses. This restriction makes the latch-free style inappropriate for a single-clock, flip-flop-based design.
Latch-based clock gating: The latch-based clock gating style adds a level-sensitive latch to the design to hold the enable signal from the active edge of the clock until the inactive edge of the clock. Since the latch captures the state of the enable signal and holds it until the complete clock pulse has been generated, the enable signal only needs to be stable around the rising edge of the clock, just as in the traditional ungated design style.
Specific clock gating cells are required in the library so that the synthesis tools can use them. The availability of clock gating cells and their automatic insertion by the EDA tools makes this a simple low-power technique. An advantage of this method is that clock gating does not require modifications to the RTL description.
Q 24: What is the difference between AND/OR-based and ICG-based clock gating?
AND/OR-based clock gating can produce glitches on the gated clock, whereas ICG (integrated clock gating cell) based clock gating does not. For more information, see the previous answer.
Q 25: What are pad-limited and core-limited designs, and how do we overcome them?
Pad-limited design: A design is pad limited when the die size is determined by the number of pads rather than by the size of the core. This occurs when the number of pads is relatively high and therefore requires more silicon area. To improve core utilization in a pad-limited die, we use IO staggering.
Core-limited design: The die area is decided by the core logic.
Note: the terms pad limited and core limited apply at the chip level, not the block level.
Q 26: What is the difference between IO pads and IO pins?
IO pads exist at the chip level and provide both electrostatic discharge (ESD) protection and level shifting. IO pins exist at the block level and do not provide these functions, which is why we use level shifter cells inside blocks.
Q 27: What is the difference between an IO pin and a terminal?
There is no real difference: the terminal is the physical representation of the pin.
Q 28: What are the ways to place IO pins in a design?
In the ICC2 tool we can place IO pins in three ways:
Using a pin guide → create_pin_guide and place_pin -self are the commands
Using block pin constraints → set_block_pin_constraints, set_individual_pin_constraints, and place_pin -self are the commands
Using a DEF file from the full chip → just source or read the DEF file
Q 29: What are the guidelines for pin placement?
Pins planned for the top and bottom boundaries should be on vertical metal layers.
Pins planned for the left and right boundaries should be on horizontal metal layers.
The terminal has to be aligned to the routing track.
Use the pin depth as the minimum length needed to avoid min-area violations.
The terminal should not cross the offset region.
If you have sufficient track area, try to place pins with interleaving to reduce congestion.
Q 30: Why do we need the MMMC file?
MMMC (multi-mode multi-corner) analysis is very important so that the IC works across different modes and PVT (process, voltage, and temperature) corners. Variations in PVT can add extra delay to the circuits, and because of this delay the timing constraints may not be met. The IC must therefore be robustly checked at every process corner.
Q 31: Which input file contains high fanout information?
No input file contains high fanout information; that is why we need to constrain max_fanout.
Q 32: Which file contains the noise margin information?
The timing library (.lib) contains the noise margin information.
Q 33: Which file contains the crosstalk information?
The SPEF file contains the coupling (crosstalk) information.
Q 34: What are the macro placement guidelines in floorplanning?
Macros should not be placed in the center of the core region; they should be placed along the core boundary.
Macro pins should face toward the std cell region.
Macro channels have to be created when placing multiple macros next to each other.
Avoid criss-cross connections.
Place macros that interact with IO pins near those IO pins.
Macro placement should not block the accessibility of IO pins.
Try to place macros belonging to the same hierarchy as a group; do not split the group, as this may lead to timing issues due to std cell separation.
The std cell region needs to be continuous; avoid trapped pockets.
Q 35: What types of blockages do we use, and why?
Blockages are used in physical design mainly for two purposes: placement and routing.
Placement blockages: there are mainly three types.
Hard blockage - no cell is allowed to be placed in the specified region.
Soft blockage - only buffers/inverters needed for optimization are allowed to be placed in the specified region.
Partial blockage - any type of cell is allowed, but only up to a defined fraction of the specified region (e.g., 60% of the area).
Routing blockages: there are mainly two types.
Hard blockage - no net of any type is allowed to route through the specified region.
Signal blockage - data and clock signal nets are not allowed to route through the specified region, but power nets are allowed.
Q 36: What is a keep-out margin (halo)?
A keep-out margin is a region around the boundary of fixed cells or macros in a block in which no other cells may be placed. The width of the keep-out margin on each side of the fixed cell or macro can be the same or different.
Q 37: What is the difference between a keep-out margin and a blockage?
They serve a similar purpose; the key difference is that a keep-out margin moves along with its std cell or macro, whereas a blockage does not move with the cell or macro.
Q 38: After loading the design, what sanity checks do we run, and what do we observe from them?
check_design -all → gives all design-related information
check_timing → gives any constraint-related and timing-related warnings and errors
A few more commands: check_netlist, check_lib.
Note: all the commands above are for the ICC2 tool.
Q 39: Why do we use boundary (end cap) cells? What happens if we place them after placement?
End cap cells are physical-only cells placed in the design for the following reasons:
To reduce the well proximity effect (WPE: Vt variation of cells placed close to the well edge).
To protect the gate of a standard cell placed near the boundary from damage during manufacturing.
To avoid base-layer DRCs and to properly terminate the Nwell and implant-layer continuity at the boundary.
To make proper alignment with the other blocks.
Some standard cell libraries have end cap cells that also serve as decap cells.
If we place end cap cells after std cell placement, there is no benefit, because std cells will already be sitting next to the boundary.
Q 40: Why do we use well tap and tie cells? What happens if we do not use them?
Well tap cells (tap cells) are used to prevent latch-up in CMOS designs. They connect the Nwell to VDD and the p-substrate to VSS in order to prevent latch-up. A well tap cell has no logical function other than providing tapping to the Nwell and p-substrate, which is why it is called a physical-only cell.
A tie cell is a standard cell designed specifically to provide a constant high or low signal to the input (gate terminal) of a logic gate. The high/low signal cannot be applied directly to the gate of a transistor because of transistor limitations, especially at lower nodes. At lower technology nodes, the gate oxide under the poly gate is very thin and is the most sensitive part of the transistor. We need to take special care of this thin gate oxide during fabrication (the associated issue is the antenna effect) as well as during operation. It has been observed that if the polysilicon gate is connected directly to VDD or VSS for a constant high/low input, any surge or glitch on the supply voltage can damage the sensitive gate oxide. To avoid this damage, we avoid direct connections from VDD or VSS to the inputs of logic gates; a tie cell is used instead to connect the input of the logic to VDD or VSS.
Q 41: How can you estimate the power for your design?
Power calculations:
Number of core power pads required per side of the chip = (total core power) / {(number of sides) x (core voltage) x (maximum allowable current per IO pad)}
Core current (mA) = (core power) / (core voltage)
Core P/G ring width = (total core current) / {(number of sides) x (maximum current density of the metal layer used for the P/G ring)}
Total current (Itotal) = total power consumption of the chip (P) / voltage (V)
Number of power pads/pins (Npads) = Itotal / Ip
where Itotal is the total current and Ip is obtained from the IO library specification.
Total power = static power + dynamic power
= leakage power + [internal power + external switching power]
= leakage power + [(short-circuit power + internal power) + external switching power]
= leakage power + [(Vdd x Isc) + (C x V^2 x F)] + (1/2 x C x V^2 x F)
Note: in some cases the internal power is neglected. A worked example of the power pad estimate follows below.
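A small worked example of the power-pad formula in Python; all the numbers (core power, supply voltage, per-pad current limit) are illustrative assumptions, not values from any real chip or IO library.

```python
# Power pad estimate from the formulas above (assumed numbers).

total_core_power_w = 2.0    # W, total core power (assumed)
core_voltage_v     = 0.9    # V, core supply (assumed)
num_sides          = 4      # power pads distributed over 4 die sides
i_max_per_pad_a    = 0.05   # A, max allowable current per power pad (assumed)

total_core_current_a = total_core_power_w / core_voltage_v
pads_per_side = total_core_power_w / (num_sides * core_voltage_v * i_max_per_pad_a)

print(f"Total core current ~= {total_core_current_a:.2f} A")     # ~2.22 A
print(f"Core power pads needed per side ~= {pads_per_side:.1f}") # ~11.1 -> round up to 12
```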
Q 42: What are the goals of the power plan?
Power planning is typically part of the floorplanning stage, in which a power grid network is created to distribute power uniformly to every part of the chip.
• Power planning means providing power to every macro, standard cell, and all other cells present in the design.
• Power planning is also called pre-routing, as power network synthesis (PNS) is done before the actual signal routing and clock routing.
• A power ring is designed around the core. Power rings contain both VDD and VSS rings. After the ring is placed, a power mesh is designed so that power easily reaches all the cells; the power mesh is simply the horizontal and vertical power lines on the chip.
• During power planning, the VDD and VSS rails also have to be defined.
• The objective of power planning is to meet the IR drop budget.
• Power planning involves calculating the number of power pins required, the number of rings and straps, the widths of rings and straps, and the IR drop.
Q 43: What techniques are used to reduce IR drop?
Methods to reduce IR drop:
Reduce the wire resistance.
Increase the number of VDD and VSS pads on the chip to reduce the current carried by each VDD/VSS pad pair.
Reduce the average current consumption (Iavg) of the logic gates.
Q 44: What are the formulas for dynamic and static power?
Total power = static power + dynamic power
Static power = Vdd x Ileakage
Dynamic power = short-circuit power + external switching power
Short-circuit power = Vdd x Isc
External switching power = 1/2 x C x V^2 x F
A worked example follows below.
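A worked example of these formulas in Python; the leakage current, short-circuit current, switched capacitance, and frequency are illustrative assumptions, not library values.

```python
# Static and dynamic power from the formulas above (assumed numbers).

vdd_v        = 0.9      # supply voltage (V)
i_leak_a     = 2e-3     # total leakage current (A), assumed
i_sc_a       = 1e-3     # average short-circuit current (A), assumed
c_switched_f = 200e-12  # effective switched capacitance per cycle (F), assumed
freq_hz      = 500e6    # clock frequency (Hz)

static_power    = vdd_v * i_leak_a                        # Vdd * Ileakage
short_ckt_power = vdd_v * i_sc_a                          # Vdd * Isc
switching_power = 0.5 * c_switched_f * vdd_v**2 * freq_hz # 1/2 * C * V^2 * F
dynamic_power   = short_ckt_power + switching_power
total_power     = static_power + dynamic_power

print(f"Static power  = {static_power*1e3:.1f} mW")   # 1.8 mW
print(f"Dynamic power = {dynamic_power*1e3:.1f} mW")  # 41.4 mW
print(f"Total power   = {total_power*1e3:.1f} mW")    # 43.2 mW
```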
Q 45: What is the difference between flip-chip and wire-bond designs?
Flip chip means the die is "flipped over" onto the circuit board, facing downward instead of facing up. Flip chips allow a large number of interconnects with shorter distances than wire bonds, which greatly reduces the interconnect length and area. The process of attaching a semiconductor die to a substrate or carrier with the bond pads facing down is referred to as flip chip.
On each die bond pad there is a conductive bump that is used to make the electrical connection. The stand-off space between the die and the substrate is normally filled with a non-conductive adhesive known as underfill once the die is attached. Between the die and the carrier, the underfill relieves stress, increases robustness, and shields the component from moisture infiltration.
Compared to other connection techniques, flip-chip bonding provides a variety of benefits. Since the entire area of the die can be used for connections, flip-chip bonding can increase the number of I/Os. Device speed can be increased since the connection paths are shorter than they would be with wire bonds. Additionally, the removal of wire bond loops results in a reduced form factor.
Q 46: Why do we use decap cells and filler cells?
Decap cells are essentially charge-storage devices made of capacitors, used to supply the instantaneous current requirements of the power delivery network. There are various reasons for sudden large current demands in the circuit, and if no adequate measures are taken to handle them, power droop or ground bounce may occur. Power droop or ground bounce disturbs the constant power supply, and ultimately the delay of standard cells may be affected. To protect the power delivery network from such sudden current demands, decap cells are inserted throughout the design.
Filler cells are primarily non-functional cells used to continue the VDD and VSS rails. They establish the continuity of the Nwell and the implant layers across the standard cell rows. Using filler cells reduces the DRC violations created by the base layers (Nwell, PPlus, and NPlus).
Q 47: What are the floorplan sanity checks?
check_pin_placement → checks that ports are aligned with tracks and reports missing pins, pin shorts, and technology spacing problems
check_boundary_cells → reports any boundary-cell-related violations
check_pg_drc → checks for spacing, width, and via enclosure related violations on power and ground nets
check_pg_missing_vias → reports missing vias at insertion points
check_pg_connectivity → reports shorts and opens on the power/ground nets
Note: the commands above are for the ICC2 tool.
Q 48: Can we do macro placement after the power plan?
No, we can't, because it would create DRC violations related to the power plan.
Q 49: Can we do the power plan after routing?
No. If you preserve the top metal layers for the power plan but the design is already too congested, there is a chance that proper vias cannot be dropped, which causes higher IR drop. And if you do not preserve the top metal layers, you may see even more congestion.
Q 50: What are spare cells, and why do we use them?
Spare cells generally consist of a group of standard cells, mainly inverters, buffers, NAND, NOR, AND, OR, XOR, muxes, flip-flops, and sometimes specially designed configurable spare cells. Ideally, spare cells do not perform any logical operation in the design and act only as filler.
The inputs of spare cells are tied to either VDD or VSS through tie cells, and the outputs are left floating. Inputs cannot be left floating, as a floating input is prone to noise and could cause unnecessary switching in the spare cells, which leads to extra power dissipation.
Spare cells let us fix violations or modify/improve the functionality of a chip with minimal changes to the masks. We can use already placed spare cells from a nearby location and only need to modify the metal interconnect; there is no need to change the base layers. Using a metal ECO we can modify the interconnect metal connections and make use of the spare cells, changing only some metal masks and not the base-layer masks.
Q 51: In which stage do we define spare cells?
Mostly, spare cells are defined in the floorplan stage; in some cases they can also be defined in the routing stage.
Q 52: What is the difference between defining spare cells in the floorplan stage and in the routing stage?
If you define spare cells in the floorplan, they will be placed in clustered/group form, and if a flop is present among the spare cells, its clock pin will be balanced in the CTS stage.
If you define spare cells during routing, they will not be clustered and the spare flops' clock pins will not be balanced. That is why we prefer to place spare cells in the floorplan stage.
Q 53: What are the ways to place std cells in the core region?
In ICC2 there are two ways to place std cells in the core region: one is the create_floorplan command, which just places the std cells in the core region without legalization or optimization; the other is the place_opt command, which optimizes as well as legalizes the placement.
Q 54: What are the five stages of the place_opt command? Explain them.
The five stages of the place_opt command are:
initial_place
initial_drc
initial_opto
final_place
final_opto
initial_place does the coarse placement of cells; the cells are not legally placed, there can be overlaps, and no optimization is done.
initial_drc does high fanout net synthesis (HFNS): the tool addresses max transition and max capacitance on high fanout nets, and all high fanout nets are buffered.
initial_opto is the first optimization stage, improving the timing QoR by fixing setup, max transition, and max capacitance on all nets.
final_place legalizes the placement of all cells (no overlaps).
final_opto does incremental optimization to fix any remaining or newly created violations.
Q 55: What is global routing, and why do we do it?
The whole routing region is divided into an array of rectangular subregions called GRC cells (global route cells or bins), each of which may accommodate tens of routing tracks.
Global routing is used to estimate the interconnect parasitics and to generate the routing congestion map.
The height of a GRC cell is about two std cell row heights, and each GRC has a fixed number of horizontal and vertical tracks (its supply or capacity).
If the demand is more than the supply (capacity), it is called overflow; if the demand is less than the supply, it is called underflow. Overflow causes congestion (see the sketch below).
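A minimal sketch of how per-GRC overflow can be computed from supply and demand; the bin names and track counts are illustrative assumptions, not tool output.

```python
# Overflow per GRC bin = max(0, demand - capacity).

bins = {
    # bin name: (capacity_tracks, demanded_tracks)
    "grc_0_0": (20, 14),
    "grc_0_1": (20, 26),   # demand > capacity -> overflow (congestion hotspot)
    "grc_1_0": (20, 20),
}

for name, (capacity, demand) in bins.items():
    overflow = max(0, demand - capacity)
    status = "overflow" if overflow else "ok"
    print(f"{name}: capacity={capacity} demand={demand} overflow={overflow} ({status})")
```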
Q 56: What are the goals of placement?
There are mainly three goals of placement:
Timing aware
Routing aware
Power aware
Q 57: What are the goals of floorplanning?
Objectives of the floorplan:
Minimize the area.
Minimize the timing (delay).
Reduce the wire length.
Make routing easy.
Reduce IR drop.
Q 58: What are the reasons for congestion, and how do we fix it?
Pin density: too many pins in a small area is called high pin density, and it creates congestion. To fix congestion due to pin density, apply cell padding or keep-out margins.
Insufficient macro channel spacing: this also causes congestion; to fix it, increase the channel spacing.
Rectilinear block corners: these also create congestion; to fix it, create placement blockages there.
Macro corners: these also create congestion; to overcome it, create placement blockages.
Q 59: How can you control std cell placement?
Magnetic placement: pulls the std cell logic toward fixed objects (e.g., IO ports and macro pins can be treated as fixed objects). Command: magnetic_placement
Bounds or place bounds (called regions in Innovus): create a bound for the related logic, with or without coordinates. Command: create_bound
Placement blockages
Q 60: What are the types of bounds?
There are two types of bounds: move bounds and group bounds.
Move bounds are bounds created with definite coordinates. They come in three types:
Soft move bound: during optimization, some cells may move out of the bound to meet the timing QoR, and other logic cells may come and sit inside the bound.
Hard move bound: the assigned cells cannot move out, but other cells are allowed inside the bound.
Exclusive move bound: the assigned cells cannot move out, and other cells are not allowed inside the bound.
Group bounds are bounds created without any location or coordinates. Group bounds come in two types: soft and hard group bounds.
Q 61: If timing is bad in your design after the placement stage, what techniques do you use to overcome it?
Timing is usually bad after placement for two reasons:
Bad placement: if timing violations come from bad macro placement, change the macro placement.
Too many buffers added: then check for constraint-related problems.
To fix timing violations in the placement stage:
Change the placement timing effort to high.
Create group paths and give more weight to the high-frequency clock.
Create bounds if the violations are not fixed by the above solutions.
Q 62: Can we do optimization in the placement stage without cell swapping, upsizing, or adding buffers?
No. Without swapping, upsizing, or adding buffers we cannot really optimize; we can reduce net length, but that does not have much impact.
Q 63: Why do we check only setup in the placement stage and not hold?
Before CTS we do not know the skew. The tool works on local skew, which is what impacts timing; local skew depends on the launch and capture clock delays, and those delays depend on the placement of the flops. Most of the real skew will be positive, which is good for setup but bad for hold timing. With an ideal clock the skew is zero, which is pessimistic for setup timing analysis but not for hold analysis; that is why we check only setup in the placement stage.
Q 64: Why do we do IO buffering? What happens if we don't?
We do IO buffering to maintain good transitions at the block boundary. If we do not do IO buffering, bad transitions can increase the cell delays and cause setup timing violations.
Q 65: What is the pipelining concept, and why do we use it?
If there is too much logic between an input and a register, or between a register and an output, it is difficult to meet timing, so we add registers (pipeline stages) to break up the logic and meet timing.
Q 66: What is a congestion hotspot?
Any region with too many GRC overflows is called a congestion hotspot.
Q 67: What sanity checks do you do in the placement stage, and why?
report_timing → check setup timing
report_qor → check setup, max transition, and max capacitance across scenarios
analyze_design_violation → to check max transition and max capacitance in detail
report_congestion → to check congestion in the design
report_utilization → to observe how much of the core is utilized after placement
report_design → reports complete design-related information
check_legality → to check whether the placed cells are legally placed
Q 68: Why do we have to do scan chain reordering in placement? What happens if we don't?
Scan stitching happens during the synthesis stage based on the connectivity of the flops.
But during actual placement the flops may not be placed close to each other. Because of this, the overall routing length of the scan chain nets can increase.
This may lead to congestion in routing-critical designs.
To improve routability, we can enable scan reordering, which tries to reduce the route length by reordering the scan chain based on the physical locations of the flops. (Note: to enable scan reordering, a SCAN DEF is required.)
Q 69: You don't have pin or cell density issues and the macro placement is also good, but you are still getting congestion. What could be the reason?
If all of the above are clean and you are still getting congestion, the maximum signal routing layer may be set too low (for example, your design has 9 metal layers but the maximum routing layer is set to 3; then you will definitely see congestion).
To overcome this, change the maximum routing layer (ICC2 command: set_ignored_layers).
Q 70: What are the types of CTS?
There are two types of CTS: balanced CTS and unbalanced CTS. Balanced CTS is further divided into OCV-aware CTS and non-OCV CTS, and OCV-aware CTS is further divided into H-tree and clock spine styles.
Q 71: What is the difference between CPPR and CRPR?
CPPR is caused mainly by OCV fluctuations, whereas CRPR is an architectural artifact. Many times a chip is overdesigned due to undue pessimism in the timing calculations. Pessimism in timing analysis makes it difficult for designs to close timing, so it is imperative that the analysis is not overly pessimistic. There is clock-path-related pessimism in timing calculated in on-chip-variation mode, and EDA tools have the capability to remove this pessimism automatically during analysis.
Common Path Pessimism Removal (CPPR): A timing path consists of a launch path and a capture path. The launch path has two further components: the launch clock path and the data path.
In the circuit snippet below, the launch path is c1->c2->c3 -> CP-to-Q of FF1 -> c5 -> FF2/D, and the capture path is c1->c2->c4->FF2/CP. Late and early derates are set on cells and nets when doing timing analysis in on-chip-variation mode. For setup analysis, the STA tool applies the late check to the launch clock path and the data path, and the early check to the capture clock path. However, part of the capture and launch clock paths is the same, up to node n1. In the image, the numbers in red denote the max (late) delays and the numbers in green the min (early) delays; assume the net delays are included in these numbers.
For setup analysis, the launch clock path delay is:
`c1->c2->c3 ->FF1/CP`
`1ns+1ns+1ns = 3ns`
The capture clock path delay is:
`c1->c2->c4->FF2/CP`
`0.8ns+0.8ns+0.8ns = 2.4ns`
However, part of the clock paths is common, up to node n1. It is not realistic for the same cells to have two different delays in the same analysis. Using the late and early timing numbers for the common path creates unwanted pessimism, leading to difficulties in timing closure or to overdesign. Hence this pessimism must be removed.
For the example above we get a "CPPR adjustment" of 0.4 ns, i.e. the skew between the clock paths becomes 0.2 ns instead of 0.6 ns.
`+ CPPR Adjustment 0.4`
Clock Reconvergence Pessimism: In some cases clocks reconverge after taking different paths. In the circuit given below, the clock splits into two different combinational paths before converging through mux m1. The worst-case analysis would take the launch clock path through c3->c4->m1 and the capture clock path through c1->m1->c5. However, if this is not possible by design, the reconvergence pessimism should also be removed to avoid overdesign. In hold checks, since the timing check is at the same clock edge, this pessimism should always be removed in the analysis. The clock convergence point is m1/Y.
Launch clock till convergence:
`c3->c4->m1`
`1ns+1ns+1ns = 3ns`
Capture clock till m1/Y:
`c1->m1`
`0.8ns+0.8ns = 1.6ns`
Clock reconvergence pessimism: 1.4ns
A small numeric check of the CPPR numbers follows below.
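The sketch below re-computes the CPPR numbers from the worked example above in Python. The per-cell late/early delays are exactly the values quoted in the text, and it assumes the common portion of the launch and capture clock paths is c1->c2 (up to node n1).

```python
# CPPR example check: late = max delays, early = min delays (ns).

late_launch   = {"c1": 1.0, "c2": 1.0, "c3": 1.0}   # launch clock uses late delays
early_capture = {"c1": 0.8, "c2": 0.8, "c4": 0.8}   # capture clock uses early delays
common_cells  = ["c1", "c2"]                         # shared path up to node n1

launch_clock  = sum(late_launch.values())    # 3.0 ns
capture_clock = sum(early_capture.values())  # 2.4 ns

# CPPR adjustment = (late - early) delay difference over the common cells
cppr_adjustment = sum(late_launch[c] - early_capture[c] for c in common_cells)  # 0.4 ns

raw_skew      = launch_clock - capture_clock     # 0.6 ns
adjusted_skew = raw_skew - cppr_adjustment       # 0.2 ns
print(f"CPPR adjustment = {cppr_adjustment:.1f} ns")
print(f"Clock skew before/after CPPR = {raw_skew:.1f} / {adjusted_skew:.1f} ns")
```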
Q 72: Why do we build the CTS only after placement?
To build the CTS we need fixed flop locations, and we get fixed flop locations only after placement; during placement the flops are still being moved, and only after final placement are they fixed. That is the main reason we build the CTS after placement.
Q 73: On what basis do you say your skew is good?
Whether my skew is good or bad is decided based on my clock period (mostly we consider a skew of less than 10% of the clock period to be good).
Q 74: Consider two designs, one with more skew and less latency and one with less skew and more latency. Which one would you choose, and why?
To answer this kind of question we need to consider the design's characteristics, i.e., whether it is timing critical or power critical.
If the design is timing critical, then with bad skew and good latency we cannot meet timing, so in that case I would choose the design with good skew and bad latency.
If the design is power critical, then with good skew and bad latency the power consumption would be too high and I could not meet the power requirement, so in that case I would choose the design with bad skew but good latency.
Generally this kind of question is asked to check your critical thinking and whether you consider all the scenarios.
Q 75: Is zero skew good or bad?
Zero skew is good from the timing point of view but bad from the power point of view, because all flops would switch at the same time, making the dynamic (peak) power consumption too high; it may even damage the power routing. That is the main reason we do not aim for zero skew and instead maintain some skew target.
Q 76: What are the types of skew, and which skew does the tool work on?
Skew is the difference between the clock arrival latencies of two flops. There are two types of skew: local and global (see the sketch below).
Local skew: the difference between the clock arrival latencies of the capture and launch flops of a timing path. The tool works on local skew.
Global skew: the difference between the maximum and minimum clock arrival latencies in the whole design.
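A minimal sketch of local versus global skew; the per-flop latencies and the launch/capture pair are illustrative assumptions, not from any real clock tree.

```python
# Local vs. global skew from per-flop clock arrival latencies (ns).

latency = {"ff1": 1.20, "ff2": 1.35, "ff3": 1.05, "ff4": 1.50}

# Local skew: between the launch and capture flops of one timing path
launch, capture = "ff1", "ff2"
local_skew = latency[capture] - latency[launch]               # 0.15 ns

# Global skew: max minus min arrival latency over the whole design
global_skew = max(latency.values()) - min(latency.values())   # 0.45 ns

print(f"Local skew ({launch} -> {capture}) = {local_skew:.2f} ns")
print(f"Global skew = {global_skew:.2f} ns")
```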
Q 77: Consider two designs, one with more latency and one with less latency. Which one would you choose, and why?
I would choose the design with less latency, because the number of buffer levels depends on the latency: higher latency means too many buffers are added to the design, and those buffers increase both power consumption and area. So we should prefer the lower-latency design.
Q 78: With respect to what is the CTS built?
The clock tree is built with respect to the sink pins. There are three types of pins:
Sink pin: flop or macro CP pins are sink pins; clock propagation stops here, and these pins are considered for balancing.
Through pin: when a flop's Q pin or a combinational cell's output drives another flop's CP pin, the clock passes through it; such pins are not considered for balancing.
Ignore pin: D pins and set/reset pins are ignore pins and are not considered for balancing.
Q 79: If I do not define a clock, can the CTS still be built?
Yes, the CTS will still be built; the tool assumes a default frequency and builds the tree, but it may not meet timing at the actual frequency.
We define clocks in the design so that the tree is built against the intended constraints; building without them is more pessimistic, and even if the tree is built we cannot guarantee it will work.
Q 80: What is inter-clock balancing?
When one clock group is balanced against another clock group because there are timing paths between them, it is called inter-clock balancing.
Q 81: What is useful skew, and how does it help timing?
Using skew to fix a setup or hold violation is called useful skew; it means we intentionally play with the launch or capture clock path to meet timing.
Skew is only introduced where the path has enough positive slack with respect to its startpoint; the tool does all of this automatically using CCD/CCOpt (concurrent clock and data optimization) algorithms.
Q 82: A flop has setup and hold values, right? Are those values constant, or can they change?
Those values are not constant; they depend on the data and clock transitions. For better understanding, refer to the image below.
Q 83: Can the flop setup and hold values be negative?
Setup and hold times in digital circuit design are typically
specified as non-negative values. The setup time refers to the minimum amount
of time before the clock edge at which the input data must be stable and
available for the correct operation of the circuit. The hold time refers to the
minimum amount of time after the clock edge during which the input data must
remain stable.
However, in some cases, you may encounter negative setup or hold times. This
can happen due to a variety of reasons, such as:
Clock Skew: Clock skew refers to the variation in arrival times of the clock
signal at different parts of the circuit. If the clock signal arrives earlier
at a particular flip-flop compared to the arrival time at the data input, it
can result in a negative setup time. Similarly, if the clock signal arrives
later at the flip-flop compared to the arrival time at the data input, it can
result in a negative hold time.
Process Variations: Process variations in semiconductor manufacturing can cause
variations in the electrical properties of transistors and interconnects. These
variations can lead to negative setup and hold times in some cases.
Timing Analysis Methodology: Some timing analysis methodologies or tools may
allow for negative setup and hold times to be specified. This can be useful in
certain situations where specific timing constraints need to be met, such as in
certain high-speed designs.
Negative setup and hold times are not common in most digital circuit designs
and are typically avoided. They can introduce additional challenges in timing
analysis and make the design more sensitive to variations. Designers strive to
ensure positive setup and hold times to ensure reliable and robust circuit
operation.
Q 84: What is the difference between a normal buffer and a clock buffer?
In digital circuit design, both normal buffers and clock buffers are
used to amplify and shape signals. However, they serve different purposes and
have specific characteristics related to their usage.
Normal Buffer: A normal buffer, also known as a data buffer or signal buffer,
is used to amplify and buffer a general data signal. It is typically used to
drive signals over longer distances, improve signal integrity, and isolate the
source from the load. Normal buffers are used for non-clock signals in the
circuit, such as data inputs, control signals, or any other signals that need
to be propagated through the circuit.
Clock Buffer: A clock buffer, as the name suggests, is specifically designed to
handle clock signals. Clocks are critical for synchronization in digital
systems, and clock buffers are used to distribute clock signals throughout the
circuit. They ensure that the clock signal maintains its integrity and has
consistent characteristics across different parts of the design. Clock buffers
are optimized for low skew, low jitter, and fast rise/fall times to ensure
accurate clock propagation and synchronization.
The key differences between normal buffers and clock buffers can be summarized
as follows:
Purpose: Normal buffers are used for general data signals, while clock buffers
are specifically designed for clock signals.
Signal Characteristics: Clock buffers are designed to minimize skew, jitter,
and provide fast edges to maintain the integrity and synchronization of clock
signals. Normal buffers do not have the same stringent requirements.
Timing Considerations: Clock buffers play a crucial role in meeting setup and
hold time requirements in synchronous digital systems. They are carefully
placed and sized to ensure proper clock distribution and synchronization.
Normal buffers do not have the same timing considerations.
Design Optimization: Clock buffers are often optimized for low power
consumption, low noise, and high performance to meet the stringent requirements
of clock distribution. Normal buffers may prioritize other design
considerations.
It's important to note that while normal buffers and clock buffers have
different design considerations, they can both be implemented using similar
circuit topologies (e.g., CMOS inverters) with appropriate sizing (for example,
the clock buffer's PMOS is made roughly 2.5 times wider than in a normal buffer
to equalize rise and fall times) and optimization for their specific purposes.
Q 85: For building the CTS, which would you choose: clock buffers or clock inverters?
In clock tree synthesis (CTS) in VLSI design, the choice between
using clock buffers or clock inverters depends on several factors, including
the design specifications, timing requirements, power considerations, and the
specific CTS methodology being employed. Both clock buffers and clock inverters
can be used in CTS, and the selection depends on the design goals and
constraints.
Clock Buffers: Clock buffers are commonly used in CTS for clock distribution. They
are designed to provide low skew, low jitter, and fast rise/fall times, which
are essential for maintaining the integrity and synchronization of clock
signals. Clock buffers help amplify the clock signal and drive it to multiple
clock sinks (flip-flops) in the design. They are typically used in clock trees
where the goal is to minimize clock skew and ensure reliable clock
distribution.
Clock Inverters: Clock inverters, as the name suggests, invert the clock
signal. In certain CTS methodologies, clock inverters can be used for balancing
the clock tree and achieving better clock skew control. By strategically
placing clock inverters, the path lengths of different branches of the clock
tree can be adjusted to reduce clock skew. Clock inverters are used to equalize
the delay of different branches and ensure that the clock signal arrives at the
flip-flops with minimal skew.
The choice between clock buffers and clock inverters in CTS depends on various
considerations, such as:
Clock Skew Control: If minimizing clock skew is a critical objective, clock
inverters may be used strategically to balance the clock tree and equalize path
lengths.
Design Constraints: The specific design constraints, such as power consumption,
area, and timing requirements, may influence the choice of clock buffers or
clock inverters.
CTS Methodology: Different CTS methodologies may have different recommendations
or preferences for using clock buffers or clock inverters. The chosen
methodology and associated tools may guide the decision-making process.
It's worth noting that in many CTS implementations, a combination of clock
buffers and clock inverters may be used to optimize the clock tree and achieve
the desired objectives. The selection of clock buffers, clock inverters, or a
combination thereof should be made based on careful analysis of the design
requirements, timing constraints, and the specific CTS methodology being
employed.
Q 86: What is a CTS SPEC file, and what does it contain?
In VLSI design, a CTS (Clock Tree Synthesis) SPEC file, also known as
a CTS constraints file, is a file that contains the specifications and
constraints for performing clock tree synthesis. It provides important
information to the CTS tool regarding the desired characteristics and
requirements of the clock tree.
The CTS SPEC file typically includes the following information:
Clock Netlist: The CTS SPEC file includes the netlist or connectivity
information related to the clock network. It specifies the clock source(s),
clock sinks (typically flip-flops), and the interconnections between them.
Clock Constraints: The file contains clock-related constraints, such as the
desired clock frequency, clock waveform specifications (rise/fall times, duty
cycle), and any other timing requirements specific to the clock network.
Clock Tree Topology: The CTS SPEC file provides information about the desired
clock tree topology, including the number of levels, the types of buffers or
inverters to be used, and their placement locations.
Buffer Sizing and Placement Constraints: It includes constraints related to
buffer sizing, placement locations, and any specific rules or guidelines for
buffer insertion in the clock tree.
Clock Skew and Jitter Constraints: The file may specify constraints related to
allowable clock skew (the variation in arrival times of the clock signal at
different points in the clock tree) and clock jitter (the variation in the
timing of clock edges).
Power and Area Constraints: The CTS SPEC file may include constraints related
to power consumption and area utilization of the clock tree. This information
helps guide the optimization process during clock tree synthesis.
Miscellaneous Constraints: Any additional constraints or guidelines specific to
the clock tree synthesis process may be included in the CTS SPEC file.
The CTS SPEC file serves as input to the CTS tool, which uses this information
to perform the clock tree synthesis process. The tool analyzes the input
specifications, optimizes the clock tree topology, inserts buffers or
inverters, and performs various optimizations to meet the specified constraints
and requirements.
The specific format and syntax of the CTS SPEC file may vary depending on the
CTS tool and methodology used in the design flow. The tool's documentation or
user guide typically provides information on the required format and the
available options for creating the CTS SPEC file.
Q 87: If the skew is bad, how can you overcome it?
When the arrival times of the clock signal vary at different points in
a circuit, it is called clock skew, which is generally undesirable in VLSI
design. To overcome or minimize clock skew, there are a few approaches:
Balancing the Clock Paths: By adjusting the delays in different branches of the
clock network, such as using buffers or inverters strategically, the path
lengths can be equalized. This helps ensure that the clock signal reaches
various parts of the circuit at similar times, reducing clock skew.
Adding Buffers: Placing buffers at specific locations along the clock paths can
amplify the clock signal and control its arrival time. This helps in reducing
clock skew by making the clock signal more consistent across the circuit.
Considering Skew in Placement: During the physical design phase, careful
placement of circuit elements, such as flip-flops or clock sinks, can help
minimize clock skew. By considering the clock network structure and arranging
these elements close to each other, the impact of delays and variations can be
reduced.
Skew-Aware Routing: Similar to placement, routing techniques can be employed
that take clock skew constraints into account. By optimizing the paths that the
clock signals take, the effects of delays and variations can be minimized,
resulting in lower clock skew.
Compensation Techniques: In some cases, additional circuitry, like delay
elements or phase-locked loops (PLLs), can be used to actively adjust the clock
arrival times and compensate for skew. These techniques can be effective but
may introduce complexity and increased power consumption.
Optimizing Clock Distribution: Improving the clock distribution network itself,
such as using efficient metal layers, reducing parasitic capacitance, and
carefully routing the clock signals, can help reduce clock skew. These
optimizations aim to minimize delays and variations in the clock paths.
It's important to note that completely eliminating clock skew may not always be
possible, especially in complex designs. The goal is to minimize skew to an
acceptable level that meets the design requirements. The specific techniques
used will depend on the design constraints, available resources, and the
trade-offs between performance, power, and size.
Q 88: If the latency is bad, how can you overcome it?
Answer 88: When latency, which is the delay in signal propagation, is considered
undesirable in VLSI design, there are several ways to overcome or reduce it:
Pipeline Design: Breaking down complex operations into smaller stages or steps
helps reduce latency. Each step can be executed concurrently, allowing multiple
operations to be processed simultaneously.
Parallel Processing: Using multiple processing units or functional units in
parallel can reduce latency. This means that different parts of the data are
processed simultaneously, speeding up computations.
Optimized Circuit Design: Careful design techniques, such as optimizing
gate-level implementations and minimizing long interconnects, can help reduce
latency. These optimizations focus on improving the speed and efficiency of
individual logic gates and connections.
Increasing Clock Frequency: Raising the clock frequency can potentially reduce
latency by allowing circuits to operate at higher speeds. However, there are
limitations due to power consumption and timing constraints that need to be
considered.
Memory Hierarchy and Caching: Implementing memory hierarchy and caching
techniques can improve latency in accessing data. Frequently used data is
stored in faster and closer memory levels, reducing the time required for data
retrieval.
Algorithmic Optimization: Analyzing and optimizing algorithms used in the
design can reduce latency. More efficient algorithms or algorithmic
improvements can decrease the number of computational steps or simplify
operations, leading to lower latency.
Balancing Trade-offs: Consider the trade-offs between latency, area utilization,
and power consumption. Reducing latency may require additional hardware, which
can increase area usage and power consumption. Finding the right balance is
important.
It's important to note that reducing latency often involves a combination of
these approaches. The specific techniques used depend on the design
requirements, constraints, and the trade-offs between performance, power, area,
and timing considerations.
Q 89: How does skew affect setup and hold time?
Answer 89: Positive skew means the capture clock is delayed relative to the launch clock; it helps setup time but can violate hold time.
Negative skew means the launch clock is delayed relative to the capture clock; it helps hold time but can violate setup time. A small slack sketch follows below.
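A minimal sketch of how skew enters the setup and hold slack equations; all delay numbers are illustrative assumptions, and the same data path delay is used for both checks for simplicity.

```python
# Effect of positive skew on setup and hold slack (all values in ns, assumed).

clock_period    = 2.0
launch_latency  = 1.00
capture_latency = 1.15            # positive skew = capture - launch = +0.15
skew = capture_latency - launch_latency

clk_to_q   = 0.20
data_delay = 1.60
setup_time = 0.10
hold_time  = 0.05

# Setup: data must arrive before the *next* capture edge (positive skew adds margin)
setup_slack = (clock_period + skew) - (clk_to_q + data_delay + setup_time)
# Hold: data must not change before the *same* capture edge (positive skew eats margin)
hold_slack = (clk_to_q + data_delay) - (hold_time + skew)

print(f"Positive skew = {skew:+.2f} ns")
print(f"Setup slack   = {setup_slack:+.2f} ns")  # +0.25 ns (skew helped setup)
print(f"Hold slack    = {hold_slack:+.2f} ns")   # +1.60 ns (skew reduced hold margin)
```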
Q 90: Why do we use NDRs in CTS?
Answer 90: NDR means non-default rule; for clock nets we typically use double width and double spacing.
Double spacing is used to reduce the coupling capacitance, which in turn reduces crosstalk.
Double width is used to overcome electromigration.
Q 91: You have nine metal layers in your design. Which metal layers would you prefer for CTS routing, and why?
Answer 91: The top two metal layers are preferred for power, so the next layers down, metal 6 and 7, are what I would prefer for CTS routing, because the clock is very important in timing analysis and I have to ensure that its net delay is as small as possible; only then will I meet timing.
Q 92: What are the default clock skew groups in a design?
Answer 92: The tool creates a default skew group for every master clock; if you want to override that, create_clock_skew_group is the ICC2 command.
Q 93: Explain the clock_opt command.
Answer 93: The clock_opt command has three stages:
build_clock → adds the clock buffers based on the flop locations
route_clock → routes the clock nets
final_opto → optimizes the clock tree
Q 94: What happens in CTS if the clock is not propagated?
Answer 94: If the clock is not propagated, the clock latencies are considered zero (ideal), so whatever optimization is done is not effective optimization.
Q 95: What sanity checks do you do after CTS?
Answer 95: report_timing → check setup timing
report_qor → check setup, max transition, and max capacitance across scenarios
analyze_design_violation → to check max transition and max capacitance in detail
report_congestion → to check congestion in the design
report_utilization → to observe how much of the core is utilized
report_design → reports complete design-related information
check_legality → to check whether the placed cells are legally placed
report_clock_qor → to know the clock-tree-related violations
report_clock_timing → to report either skew or latency; it has a -type option to select what to report
Q 96: What is a clock mesh, and where is it used?
Answer 96: Just as we build a power mesh, we can create a mesh for the clock; it is used to minimize the skew in high-frequency designs.
Q 97: What is multi-source CTS, and why do we use it?
Answer 97: We create multiple source (tap) points in the design for CTS in order to minimize the clock latency.
Q 99: What is the antenna effect, and how do we reduce it?
Answer 99: The charge accumulated on a metal shape during the plasma etching steps of fabrication can damage the gate oxide it connects to, which can lead to chip failure.
Fix 1: layer hopping (layer jumping)
Fix 2: antenna diode
Q 100: What is the antenna ratio, and where is this information present?
Answer 100: The tool detects antenna violations based on the antenna ratio; this information is present in the LEF file.
Antenna ratio = metal shape area / gate area (a small example follows below).
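A minimal sketch of an antenna-ratio check; the metal area, gate area, and maximum allowed ratio are illustrative assumptions, not values from any real LEF.

```python
# Antenna ratio = connected metal shape area / gate area.

max_allowed_ratio = 400.0   # hypothetical limit from the technology LEF (assumed)

def antenna_ratio(metal_area_um2, gate_area_um2):
    """Ratio of metal shape area connected to a gate over the gate area."""
    return metal_area_um2 / gate_area_um2

metal_area = 50.0    # um^2 of metal connected to the gate (assumed)
gate_area  = 0.10    # um^2 of gate area (assumed)

ratio = antenna_ratio(metal_area, gate_area)   # 500.0
verdict = "VIOLATION" if ratio > max_allowed_ratio else "ok"
print(f"Antenna ratio = {ratio:.0f} ({verdict}, limit {max_allowed_ratio:.0f})")
```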
Q 101: What sanity checks do you do in the routing stage?
Answer 101: report_timing → check setup timing
report_qor → check setup, max transition, and max capacitance across scenarios
analyze_design_violation → to check max transition and max capacitance in detail
report_utilization → to observe how much of the core is utilized
report_design → reports complete design-related information
check_legality → to check whether the placed cells are legally placed
report_clock_qor → to know the clock-tree-related violations
report_clock_timing → to report either skew or latency; it has a -type option to select what to report
report_routes → to report routing-related warnings and errors
check_lvs → to report opens and shorts in the design
Q 102: What are the types of routing?
Answer 102: There are three types of routing: power routing, clock tree routing, and signal routing.
Q 103: In how many phases is routing done?
Answer 103: Routing is done in three phases:
Global routing (trial route)
Track assignment
Detail routing
Q 104: What is a detour, and how does it affect timing?
Answer 104: Due to GRC overflow, the tool may create a longer net for routing; the extra RC of the long net degrades the signal. This is called a detour. Because of a detour, the net delay increases, and this can create setup violations.
Q 105: What are area recovery and leakage recovery?
Answer 105: Area recovery: if a timing path has a large positive slack margin, its cells can be downsized to recover some area.
Leakage recovery: similarly, cells on timing paths with positive slack can be swapped to SVT/HVT cells to reduce leakage power.
Q 106: What are the types of functional ECO, and why do we do them?
Answer 106: There are two types of functional ECO: pre-mask and post-mask ECO.
Pre-mask ECO: here we can change the base layers, add cells to build new logic, and change the routing, so we can fix timing as well as add new logic to the design.
Post-mask ECO: this is used mostly for fixing timing violations after the base layers have been fabricated. We then use spare cells and metal-connection changes to fix the violations; in this case we cannot touch the base layers and cannot add any new cells to the design.
Q 107: What are the inputs required for STA?
Answer 107: Netlist
MMMC
SPEF file
SDC
TLUplus file
Lib file
Q 108: What is the order of timing fixing?
Answer 108: The order of timing fixing is:
Max cap
Max tran
Max fanout
Setup
Data to Data check
Recovery
Hold
Clock gating check
Removal
Noise or glitch fixing
Q 109: How do we fix max cap violations?
Answer 109: If there are long nets, add buffers.
If there is a weak driver, upsize the cell.
If there is high fanout, split the fanout.
🔍 Q 110: How to fix max tran
🌟 Answer 110: If there are long nets, add buffers
If there is a weak driver, upsize the cell
If there is high fanout, split the fanout
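As a hedged sketch of how such fixes can be applied as manual ECO commands in a Synopsys Tcl shell (PrimeTime and ICC2 provide size_cell and insert_buffer style ECO commands; the instance names and library cells below are hypothetical, and the exact options should be confirmed in the tool documentation):
# upsize a weak driver to a higher drive strength
size_cell U_buf_123 BUFX8
# break a long or heavily loaded net by inserting a buffer at the driver pin
insert_buffer [get_pins U_and_45/Y] BUFX4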
🔍 Q 111: How to fix max fanout
🌟 Answer 111: Most of the fanout violations are fixed in the placement stage, and the high-fanout nets are already handled in the synthesis stage. If a few fanout violations remain, split the fanout; if there are too many fanout violations, go back to the placement stage.
🔍 Q 112: How to fix a setup violation
🌟 Answer 112: A setup violation occurs because the data path has too much delay, so we have to reduce the data path delay in different ways:
Remove redundant or extra buffers
Increase the drive strength of cells in the data path
Swap HVT/SVT cells to LVT
Delay the capture clock path
Reduce the net delay by routing on higher metal layers (set_routing_rules)
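A rough form of the check being fixed (ignoring clock uncertainty, CRPR, and on-chip variation for simplicity):
Setup slack = (clock period + capture clock latency − setup time) − (launch clock latency + clock-to-Q delay + data path delay)
Each item in the list above either reduces the data arrival term (the second bracket) or relaxes the required term (the first bracket), making the slack less negative.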
🔍 Q 113: How to fix a data-to-data check violation
🌟 Answer 113: A data-to-data check is done between two data signals arriving at the same time; mostly this happens between the set and reset signals of a flop. In this condition we have to delay one of the signals with respect to the other; otherwise both are applied at the same time and the flop goes into a metastable state.
Q 114: How to fix a recovery check violation
Answer 114: It is fixed like a setup check:
Remove redundant or extra buffers
Increase the drive strength of cells in the data path
Swap HVT/SVT cells to LVT
Delay the capture clock path
Reduce the net delay by routing on higher metal layers (set_routing_rules)
Q 115: How to fix a hold violation
Answer 115: A hold violation occurs because the data path has too little delay, so to overcome this we have to add delay in the data path:
Add delay buffers
Decrease the drive strength of cells in the data path
Swap LVT/SVT cells to HVT
Delay the launch clock path
Increase the net delay by routing on lower metal layers (set_routing_rules)
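For reference, the corresponding hold check (with the same simplifications as the setup expression above):
Hold slack = (launch clock latency + clock-to-Q delay + data path delay) − (capture clock latency + hold time)
Each item in the list above increases the data arrival term (the first bracket), which makes the hold slack less negative.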
Q 116: How to fix a removal violation
Answer 116: A removal check is fixed like a hold check:
Add delay buffers
Decrease the drive strength of cells in the data path
Swap LVT/SVT cells to HVT
Delay the launch clock path
Increase the net delay by routing on lower metal layers (set_routing_rules)
Q 117: How to fix crosstalk and noise
Answer 117: Upsize the driver of the victim net
Downsize the driver of the aggressor net
Increase the spacing between the aggressor and victim nets
Add buffers on long nets
Q 118: How to fix a clock gating check
Answer 118: A clock gating check is done between the enable signal and the clock signal.
There are two types of clock gating checks inferred:
• Active-high clock gating check: occurs when the gating cell has an AND or a NAND function.
• Active-low clock gating check: occurs when the gating cell has an OR or a NOR function. (For more information refer to the J. Bhasker book.)
Q 119: How to fix an antenna violation
Answer 119: There are two ways to fix an antenna violation:
Add an antenna diode in reverse bias
Layer hopping (layer jumping)
Q 120: What are positive and negative crosstalk
Answer 120: Positive crosstalk: the aggressor net has a rising transition at the same time the victim net has a falling transition. The aggressor net switching in the opposite direction increases the delay of the victim. Positive crosstalk impacts the driving cell as well as the net interconnect; the delay of both increases because more charge is required for the coupling capacitance. For more understanding, refer to image 1 below.
Negative crosstalk: the aggressor net has a rising transition at the same time as the victim net. The aggressor net switching in the same direction decreases the delay of the victim. Negative crosstalk impacts the driving cell as well as the net interconnect; the delay of both decreases because less charge is required for the coupling capacitance. For more understanding, refer to image 2 below.
Q 121: What is LVS check
Answer 121: Layout versus schematic (LVS) checking compares the netlist extracted from the layout to the original schematic netlist to determine whether they match. The comparison is considered clean if all the devices and nets of the schematic match the devices and nets of the layout. Optionally, the device properties can also be compared to determine whether they match within a certain tolerance; when properties are compared, all the properties must match as well to achieve a clean comparison. For more understanding, refer to image 3 below.
The LVS check reports:
Extraction errors:
Text shorts and opens
Device extraction errors
Missing device terminals
Extra device terminals
Unused text
Duplicate structure placement
Compare errors:
Unmatched nets in the layout/schematic
Unmatched devices in the layout/schematic
Property errors
Port swap errors
Q 122: What is ERC check
Answer 122: ERC (electrical rule check) verifies the electrical properties of the layout. The usual violations include:
Floating devices, gates, pins, and nets
High voltages connected to thin-oxide gates
Floating wells
More series pass gates than allowed
Minimum n-well width violations
Q 123: What is LEC check
Answer 123: A logical equivalence check is done between a golden and a revised netlist. LEC can be run at multiple stages:
Synthesis stage → golden: RTL, revised: gate-level netlist
PNR stage → golden: synthesis netlist, revised: post-route netlist
ECO stage → golden: synthesis netlist, revised: ECO netlist
Tape-out stage → golden: synthesis netlist, revised: tape-out netlist
Note: if a functional change happens at any stage, the synthesis netlist is no longer golden; the ECO netlist becomes the golden netlist.
Q 124: What is BEC check
Answer 124: Boolean equivalence check; it is another name for the LEC check
Q 125: How to overcome shorts and opens in your design
Answer 125: In ICC2 there are a few shortcuts for fixing shorts and opens:
Shift+L → split the net
Shift+W → stretch and connect
Shift+R → custom router
S → stretch
M → move
Shift+C → automatically insert a via
Q 126: What is sequential merging
Answer 126: Sequential merging is when the synthesis tool merges two or more flops into one because they have exactly the same function. A sequential constant is when the synthesis tool optimizes away flops that are always tied to 1'b1 or 1'b0.
Q 127: What is combinational merging
Answer 127: Combinational merging is when the synthesis tool merges two or more combinational cells into one because they have exactly the same function. For example, if an AND gate and a NOT gate are connected back to back, those cells can be replaced with a single NAND gate; this is called combinational merging.
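In Boolean terms the merged cell implements exactly the same function:
Y = NOT(A AND B) = NAND(A, B)
so replacing the AND/NOT pair with a single NAND cell changes the cell count, not the logic.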
Q 128: What is the SVF file and where do we use it
Answer 128: The SVF is an automated setup file. This file helps Formality process design changes caused by other tools used in the design flow. Formality uses this file to assist the compare-point matching and verification process; this information facilitates alignment of compare points in the designs that you are verifying.
Q 129: If LVS is clean, can we guarantee our design functionality
Answer 129: No. LVS only compares the layout against the given netlist; if a user removes a net from that netlist, the layout and netlist can still match each other. That is why, even if LVS passes, design functionality is not guaranteed.
Q 130: How to fix IR drop
Answer 130: Use the proper metal width according to the current density
Use more parallel metal strips
Spread the logic if hotspots are in congested areas
Add more vias where appropriate
Use a proper CTS structure
Add buffers if the run length of a wire is too long
Avoid jogs in metals
Use clock gating techniques
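The first few items follow from the static IR-drop relation (a simplified view that ignores dynamic and transient effects):
IR drop = I × R, with wire resistance R = sheet resistance × length / width
Wider straps, more parallel straps, and more vias all lower R, while spreading the logic and clock gating lower the local current I.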
Q 131: How to fix EM
Answer 131: Increase the metal width to reduce the current density
Reduce the frequency
Lower the supply voltage
Keep the wire length short
Reduce the buffer sizes in clock lines
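As a quick check on the first item, electromigration limits are specified as a maximum current density:
J = I / (W × t), where W is the wire width and t is the metal thickness
For the same current, a wider wire directly lowers J; reducing the frequency, the supply voltage, or the clock buffer sizes lowers the current I itself.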
Q 132: How does LEC work
Answer 132: LEC works in two stages: mapping and comparing. First it maps the compare points (names) of the golden and revised netlists; later it compares the logic in the fan-in cone of each flop D pin for the golden and revised netlists. If this passes, we can guarantee that the functionality is equivalent.
Q 133: What are the ways to reduce the dynamic power consumption
Answer 133: Clock gating
Multi-voltage design
Switching power domains ON/OFF
Q 134: What are isolation cells and what are the types of isolation cells
Answer 134: Isolation cells are used in a design when a signal passes from a switchable (ON/OFF) domain to an always-on domain. When the switchable domain is OFF, no valid signal is driven into the always-on domain, so there is a chance of data corruption and the receiving cell can go into a metastable state. To overcome this condition we use isolation cells. Isolation cells are of three types:
Constant 1 (OR-based isolation cells)
Constant 0 (AND-based isolation cells)
Most recent value (latch-based isolation cells)
Q 135: Isolation cell information is present in which file
Answer 135: Isolation cell information is present in the UPF or CPF file
Q 136: What is a power switch, why do we use power switches, and what are the types of power switches
Answer 136: Power switches are used to turn the power supply ON or OFF for a switchable (ON/OFF) domain. We use power switches in low-power designs. There are two types of power switches: header switches and footer switches
Q 137:What is Level Shifter why we used it
Answer 137: Level Shifters (LS) are special
standard cells used in Multi Voltage designs to convert one voltage level to
another. As Multi Voltage designs have more than one voltage domain, level
shifters are used for all the signals crossing from one voltage domain to
another voltage domain.
Q 138: What is an Enable Level Shifter
Answer 138: It is a combination of a Level Shifter and an Isolation cell
Q 139: What information does the UPF/CPF file contain
Answer 139: The UPF/CPF file contains (a minimal UPF sketch is given after this list):
Power domains and their definitions
Power nets and their definitions
Supply nets associated with the ON/OFF and always-on power domains
Power switch information
Isolation cell information
Level shifter information
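A minimal UPF-style sketch of these pieces for a hypothetical switchable domain PD_SW around instance u_blk (command names follow IEEE 1801, but the exact options and the net/signal names used here are illustrative and vary with the UPF version and tool):
create_power_domain PD_SW -elements {u_blk}
create_supply_net VDD    -domain PD_SW        ;# always-on supply
create_supply_net VDD_SW -domain PD_SW        ;# switchable supply
create_power_switch sw_blk -domain PD_SW \
    -input_supply_port  {vin  VDD} \
    -output_supply_port {vout VDD_SW} \
    -control_port       {sleep SLEEP} \
    -on_state           {on_state vin {!sleep}}
set_isolation iso_out -domain PD_SW -applies_to outputs -clamp_value 0
set_isolation_control iso_out -domain PD_SW -isolation_signal iso_en -isolation_sense high
set_level_shifter ls_out -domain PD_SW -applies_to outputs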
Q 140: What is an always-ON buffer and where do we use it
Answer 140: When an always-on signal passes through a switchable domain, the signal may get weak over the long net and its transition will degrade. To overcome this we use always-ON buffers in the switchable domain; they have two power supplies.
Q 141: What is the purpose of double patterning
Answer 141: To increase the routing resources in the lower metal layers.
Two masks are used so that the metal can be patterned (etched) with better resolution.
Q 142: What is a min-Vt violation and how to overcome it
Answer 142: In lower nodes such as 7 nm or 5 nm technologies, cells of the same Vt type must meet a minimum width/spacing requirement. When cells are swapped from LVT to SVT or vice versa, LVT or SVT min-Vt violations can appear. To overcome this, legalize the placement again in the lower-node technology (the tool will add filler cells to resolve that min-Vt violation).
Q 143: How can we analyze the clock tree and skew in the placement stage itself
Answer 143: update_io_latency is the command in ICC2. By using this command we can estimate in the placement stage how our clock tree is going to be built.
Q 144: What do we do in the physical verification stage
Answer 144: In the physical verification stage we check whether our design has DRC, LVS, antenna, and ERC related violations. If violations are present, then we try to fix those violations.
Q 145: How do we get the full GDS
Answer 145: The PNR GDS (which contains only the metal and standard-cell placement information) does not have the base-layer information. We get the standard-cell GDS and macro GDS from the foundry; merging these with the PNR GDS generates the merged GDS. To this merged GDS we add the fill-only GDS, which contains the dummy metal fill, and then our full GDS is produced.
Q 146: What is power gating and where do we use it
Answer 146: Power gating uses a cell (power switch) to shut down the power of a block; it is used to reduce the static power.
Q 147: How to reduce static power in a design
Answer 147: Multi-voltage design
Minimize the usage of LVT cells
Power gating
Increase the metal width
Q 148: What is the difference between DRC and DRV
Answer 148: DRCs are physical violations such as minimum spacing, width, pitch, and cut rules, plus shorts and opens; all of these come under DRC.
DRVs are timing-related violations such as max transition and max capacitance.
Q 149: Can you please explain the timing ECO flow
Answer 149: For the answer, refer to the image below
Q 150: What is an empty module and why do we use it
Answer 150: If an RTL module definition does not have any logical content, such as wires, inputs, or outputs, then it is called an empty module. It is used to add logic in future stages.
Q 151: Why does CMOS technology not allow floating inputs and multi-driven inputs
Answer 151: Floating inputs are not allowed in CMOS technology because a floating input can settle at an intermediate value that is neither zero nor one; then the PMOS and NMOS are both ON at the same time, and the result can be chip failure.
In CMOS technology, multi-driven inputs are not allowed because if a net is driven by two drivers that may generate different values, such as 0 and 1, the cell does not know which value to take; because of this the cell goes into a metastable state, the PMOS and NMOS are both ON at the same time, and the result can be chip failure.