DFT: 'Shift' Left...Less...All...None
'Shift Left': Though I first heard the phrase from Tessent, it appears in other domains too, especially software. Tessent promotes RTL-level integration of DFT hardware; in other DFT tools (Synopsys/Cadence) it is gate-level integration. RTL-level integration does not really dispense with the equivalence check (against RTL) after scan insertion. To quote, 'Logic is tested by configuring sequential elements in the design into many shift registers called scan chains that a tester then loads and unloads. This lets ATPG (automatic test pattern generation) efficiently and automatically test any type of design' (emphasis mine). From a design-cycle-time estimation, if not reduction, standpoint, there is only one question - is the design a derivative (of a previous one) or 'brand new'? If it is a 'next version' design, fewer surprises are expected; scripts and 'flow' can mostly be reused and, in most cases, just re-run with 'revised' inputs. 'Shift Left' cannot be just RTL integration of DFT hardware; it has to be coupled with tools that give a preview that correlates well with the later stages. The RTL analysis tool Spyglass (earlier Atrenta) is from Synopsys, where insertion of compression hardware, OCC etc. is done directly at gate level. Maybe both Siemens and Synopsys got wind that customers may not foot the bill for a tool that does not give a 'whimper, forget bang, for the buck' - so pre-scan DRC was mooted in Tetramax (it was first, imo) and now in Tessent also. When I used Genus (Cadence) for scan insertion and Tessent for ATPG, pre-scan DRC did not match post-scan DRC (slides in User2User 2019). In the case of Synopsys it looked OK, but project managers showed no interest in pre-scan DRC, forget a new RTL lint/analysis tool - which means 'training', 'resources' etc.!
'Shift Less': Recently I had the chance to explain DFT to a PD manager, and she was shocked by my assertion that, for all the 'talk of test time reduction', 99.5% of test time is meaningless - it is just shifting in/out of scan chains. Capture is the phase that really tests the design, and it is less than 0.5% of test time! So if test time reduction is important, we have to look at reducing the number of shifts and also increasing the shift speed. ATPG tools have now progressed enough that we can actually look at 'partial scan'. I did some experiments using ISCAS designs 10 years ago and could see that 10-40% of flops can easily be left out of the scan chain without affecting coverage. As mentioned in an earlier post, non-scan flops are mostly flops whose data input (D) cannot be directly initialized through shift, but whose clocks come from the same source. ATPG tools are now able to initialize non-scan flops whose clocks can be controlled, and increasing sequential depth can ensure there is no fallout in coverage. Sure, flops outside the scan chain can affect debug, but if tools can generate patterns with non-scan flops, debug should not be difficult, even if it is WIP. A partial-scan approach to reducing flops in the scan chain (and thereby test time) may work best going 'bottom up', starting from the smaller blocks in the hierarchy. This may be counter-intuitive, especially when 'adding more flops' to the scan chain in the name of 'test points' is mooted. Not surprisingly, none of the project managers I worked with wanted me to 'rock the boat'.
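The shift-versus-capture arithmetic above can be sketched in a few lines. The chain length and cycle counts below are my own illustrative assumptions, not figures from any particular design: with roughly one capture cycle per pattern against a full chain-load of shift cycles, a 200-flop chain already pushes capture under 0.5% of tester cycles.

```python
# Back-of-the-envelope sketch of why shift dominates test time.
# Chain length and capture cycles are assumed, illustrative numbers.

def capture_fraction(chain_length: int, capture_cycles: int = 1) -> float:
    """Fraction of tester cycles spent in capture for one pattern.

    Each pattern shifts `chain_length` cycles to load the chain
    (unload overlaps with the next pattern's load), then captures once.
    """
    return capture_cycles / (chain_length + capture_cycles)

# A modest 200-flop chain puts capture at ~0.5% of total cycles.
frac = capture_fraction(200)
print(f"capture fraction: {frac:.3%}")
```

Longer chains only make the ratio worse, which is why cutting shift count (partial scan) or raising shift speed moves the needle far more than anything done to the capture phase.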
'Shift All': Concurrency can reduce test time, but IO sharing - an increasingly common feature, and not just in pin-limited designs - promotes 'mode-ing', which hampers concurrency. Testing homogeneous and heterogeneous blocks concurrently over shared IOs is a feature whose time has come. DFT architectures have to be intelligently created so that pattern retargeting/translation to higher hierarchies does not compromise debug, and so that alternatives exist in case of test power issues. Programmable delays are used not just in FPGA designs but also in DFT, so that shift power can be mitigated while still reaching higher shift speeds (>100 MHz) for concurrently testing 'as many blocks as possible'. Since DFT is agnostic, if not blind, to physical design, timing, power and more, DFT architectures and strategies need not just a 'go/no-go' from physical design but also an alternative (backup) plan in case things do not work out in silicon ('we can run all sims - SDF and otherwise - but silicon is silicon').
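The cost of 'mode-ing' versus concurrency can be made concrete with a toy schedule. The block names and per-block test times below are hypothetical: serialized modes add block times up, while ideal shared-IO concurrency (ignoring IO bandwidth and test power limits, which the paragraph above rightly flags) is bounded only by the slowest block.

```python
# Toy comparison of mode-ed (serial) vs concurrent block testing.
# Block names and times (ms) are assumed for illustration only.

def serial_test_time(block_times) -> float:
    """Mode-ing: one block tested per mode, so times accumulate."""
    return sum(block_times)

def concurrent_test_time(block_times) -> float:
    """Ideal concurrency over shared IOs: limited by the slowest block."""
    return max(block_times)

blocks = {"cpu": 120.0, "gpu": 95.0, "ddr": 40.0, "pcie": 25.0}
print("serial    :", serial_test_time(blocks.values()), "ms")
print("concurrent:", concurrent_test_time(blocks.values()), "ms")
```

Real schedules sit between these bounds once shared-IO conflicts and peak test power cap how many blocks can run together, which is exactly why the architecture needs a backup plan.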
'Shift None': It may seem ideal to have zero shifts, so that all test is just exercising functionality, and I have seen some interesting results here. FPGAs have not moved into DFT at all and still depend on functional patterns and test suites. I initially started with one LUT and could see that Tetramax could test it 100% (yes, no shifts!). Similar results followed up to 13x13, but beyond that the tool conked out. I also presented a poster at ITC 2014 with similar exercises for a block (using Tessent) and again saw good results. Scalability of the tool is of course the issue, but I tend to think this could be the way to go. Until recently I have been part of non-DFT discussions, especially on RTL/verification, where I could see ways DFT can actually leverage the functionality of the design. Leveraging functionality and using, if not exploiting, the same in DFT could be the real 'Shift Left'.
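A minimal sketch of the 'no shifts' idea on a single LUT: walking all 2**k input vectors of a k-input LUT exercises every truth-table entry, so any single wrong/stuck entry is detected purely through functional patterns. The LUT model and fault below are my own illustration, not any tool's netlist or fault model.

```python
# Exhaustive functional test of a k-input LUT - no scan shifts needed.
# LUT model and injected fault are illustrative assumptions.
from itertools import product

def lut(truth_table, bits):
    """Evaluate a k-input LUT: truth_table holds the 2**k outputs."""
    index = int("".join(str(b) for b in bits), 2)
    return truth_table[index]

def exhaustive_test(golden, dut, k):
    """Apply all 2**k vectors; return the input vectors that mismatch."""
    return [bits for bits in product((0, 1), repeat=k)
            if lut(golden, bits) != lut(dut, bits)]

k = 4
golden = [i % 2 for i in range(2**k)]   # some reference function
faulty = list(golden)
faulty[5] ^= 1                          # flip one truth-table entry
print(len(exhaustive_test(golden, golden, k)))  # 0 - fault-free passes
print(len(exhaustive_test(golden, faulty, k)))  # 1 - fault detected
```

The catch is the exponential pattern count: 2**k vectors per LUT is trivial for one LUT but explodes across a netlist, which matches the scalability wall described above.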