Hybrid Code-Based Test Data Compression and Decompression for VLSI Circuits

Doctoral Thesis / Dissertation, 2018, 211 pages

Computer Science - Applied







1.5.1 VLSI Development Process
Design verification
Yield and reject rate
1.5.2 Electronic System Manufacturing Process
System-level operation
1.6.1 Test Generation
1.6.2 Fault Models
1.7.1 Automatic Test Equipment
1.7.2 Automatic Test Pattern Generation
1.7.3 Fault Simulation
1.7.4 Design For Testability
Stand-alone BIST
Hybrid BIST
1.8.1 Automatic Test Equipment
1.8.2 Test Stimulus Compression
Code-based schemes
Dictionary code (fixed-to-fixed)
Huffman code (fixed-to-variable)
Run length code (variable-to-fixed)
Golomb code (variable-to-variable)
Linear-decompression-based schemes
Sequential linear decompressors
Broadcast-scan-based schemes
1.8.3 Test Response Compaction
Space compaction
Time compaction
Mixed time and space compaction

2.2.1 Block to Block Codes
2.2.2 Block-to-Variable Codes
2.2.3 Variable-to-Block Codes
2.2.4 Variable-to-Variable Codes
2.3.1 Combinational Linear Decompressors
2.3.2 Sequential Linear Decompressors
2.3.3 Combined Linear and Nonlinear Decompressors
2.4.1 Static Reconfiguration
2.4.2 Dynamic Reconfiguration
2.5.1 The Dictionary Based Compression Techniques
2.5.2 Huffman Coding Based Compression Techniques
2.5.3 Run Length Based Compression Techniques

3.2.1 Huffman Code Properties
3.2.2 Prefix Property
3.4.1 Compression Algorithm
3.4.2 Decompression Algorithm

4.3.1 Compatible Block Coding
4.3.2 Run Length Coding
4.3.3 CCBRLC Compression Algorithm
4.3.4 CCBRLC Decompression Algorithm

5.2.1 Modified Run Length Coding
5.2.2 Multilevel Selective Huffman Coding Algorithm
5.4.1 Decoder architecture
5.5.1 Compression Algorithms
5.5.2 Decompression Logic
FPGA design simulation
FPGA design synthesis
ASIC design using Cadence

6.4.1 BDPRLC Compression Algorithm
6.4.2 BDPRLC Decompression Algorithm
6.5.1 BDPRLC Compression algorithm
6.5.2 BDPRLC Decompression Logic
FPGA design simulation
FPGA design synthesis
ASIC design using Cadence





Higher circuit densities in system-on-chip (SOC) designs have led to a drastic increase in the volume of test data needed to test Intellectual Property (IP) cores. Larger test data sets demand more tester memory. Test data compression addresses this problem by reducing the test data volume without disturbing overall system performance during testing in a BIST environment. It involves adding some on-chip hardware before and after the scan chains: this hardware decompresses the test stimulus coming from the tester, and it compacts the response leaving the scan chains before it returns to the tester. The test data can therefore be stored on the tester in compressed form. The approach is also easy to adopt in industry because it is compatible with conventional design rules and test generation flows for scan testing.

Test data compression provides two benefits. First, it reduces the amount of data stored on the tester, which can extend the life of older testers that have limited memory. The second, more important benefit applies even to testers with ample memory: for a given test data bandwidth, compression reduces test time. Test vectors are highly compressible because typically only 1% to 5% of their bits are specified (care) bits.

Test compression is an effective method for reducing test data volume and memory requirements at relatively small cost. Code-based schemes are well suited to encoding test cubes. These techniques partition the original data into symbols and then replace each symbol with a code word to form the compressed data. Decompression reverses the process using a decoder that simply converts each code word back into the corresponding symbol.
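For illustration, a minimal fixed-to-fixed (dictionary-style) code-based scheme might look as follows. This Python sketch is purely hypothetical (the thesis's algorithms are implemented in MATLAB and VHDL); it shows only the partition-encode-decode cycle described above.

```python
# Minimal sketch of a fixed-to-fixed (dictionary) code-based scheme:
# cut the test data into fixed-size symbols, replace each symbol by a
# short index into a dictionary, and reverse the mapping to decompress.
def compress(bits, symbol_len=4):
    symbols = [bits[i:i + symbol_len] for i in range(0, len(bits), symbol_len)]
    dictionary = sorted(set(symbols))          # one entry per distinct symbol
    index_bits = max(1, (len(dictionary) - 1).bit_length())
    encoded = "".join(format(dictionary.index(s), f"0{index_bits}b") for s in symbols)
    return encoded, dictionary, index_bits

def decompress(encoded, dictionary, index_bits):
    return "".join(dictionary[int(encoded[i:i + index_bits], 2)]
                   for i in range(0, len(encoded), index_bits))

data = "0000111100001111"
enc, dic, w = compress(data)
assert decompress(enc, dic, w) == data   # lossless, as required for test data
```

Compression pays off only when the number of distinct symbols is small, which is exactly the situation for test cubes with few care bits.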

The proposed algorithms improve the compression ratio by combining various code-based schemes. The research concentrates on reducing test data volume through the following schemes:

- A Mixed Selected Selective Huffman and Run Length Coding Algorithm (SSHRLC)
- A Combined Compatible Block and Run Length Coding Algorithm (CCBRLC)
- A Modified Run Length Coding Technique Based on Multi-Level Selective Huffman Coding Algorithm (MRLMHC)
- A Hybrid of Bitmask Dictionary and 2^n Pattern Run Length Coding Algorithm (BDPRLC)

A Mixed Selected Selective Huffman and Run Length Coding technique (SSHRLC) is proposed to improve the compression ratio. The technique combines selected selective Huffman coding and run length coding, exploiting the don't-care bits and redundant bits of the test set to enhance the compression ratio. Both the compression and decompression algorithms are implemented in MATLAB and evaluated on ISCAS benchmark circuits. The time complexity of the algorithm is derived and validated on the same circuits. The compression ratio is evaluated for each circuit and compared with existing coding methods to assess the performance of the proposed algorithm: most circuits achieve a better compression ratio, and the average compression ratio across the benchmark circuits is 58%. The decompression logic is also implemented and validated on the same circuits, and the results show that the decompressed vectors match the original test set. Hence the proposed method is a lossless compression method.
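The run-length side of such a scheme can be sketched briefly. The thesis implements SSHRLC in MATLAB; the following Python fragment is only a hypothetical illustration of how don't-care ('X') bits can be filled to extend runs before run-length encoding, not the actual SSHRLC procedure.

```python
# Illustrative sketch (not the thesis's SSHRLC): don't-care bits ('X') in a
# test cube are filled to extend runs, then the cube is run-length encoded
# as (bit, count) pairs.
def fill_dont_cares(cube):
    out, last = [], "0"          # leading X bits default to '0'
    for b in cube:
        last = b if b != "X" else last   # copy the previous care bit into each X
        out.append(last)
    return "".join(out)

def run_length_encode(bits):
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))    # (run value, run length)
        i = j
    return runs

cube = "0XX01XXX10"
filled = fill_dont_cares(cube)       # "0000111110"
runs = run_length_encode(filled)     # [('0', 4), ('1', 5), ('0', 1)]
```

Filling the don't-care bits this way merges adjacent runs, which is why test cubes with few care bits compress so well.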

The Combined Compatible Block and Run Length Coding (CCBRLC) compression algorithm is proposed as the second method to reduce the large test data volume with a better compression ratio. The method identifies compatible and incompatible blocks and replaces repeated sequences by a symbol and its occurrence frequency. The compression and decompression algorithms are implemented in MATLAB and evaluated on ISCAS benchmark circuits. Compression results for the benchmark circuits are compared with existing methods and with the first proposed method (SSHRLC) to demonstrate the effectiveness of the algorithm. Most circuits obtain a better compression ratio, and the average compression ratio improves over both the existing methods and SSHRLC, reaching 71%. The decompression results show that the original test vectors are recovered without any loss of bits, which ensures that test quality is not degraded by the proposed compression technique.

A Modified Run Length Coding technique is combined with Multi-Level Selective Huffman Coding in the next proposed compression algorithm (MRLMHC) to further improve the compression ratio. This method compresses the data using the most repeated sequences in the modified RLC and a limited number of selective code words in the Huffman coding. A decompression algorithm is also developed for the combined scheme. The algorithm is implemented in MATLAB and evaluated on ISCAS benchmark circuits. The results show that the test data volume is reduced significantly and a better compression ratio is achieved. The results are compared with existing coding schemes and with proposed methods I and II; the average compression ratio of 77% exceeds those of SSHRLC and CCBRLC. The decompression algorithm is implemented in VHDL and simulated and synthesized for circuit s208. Design parameters such as power, area and propagation delay are obtained for 180 nm technology using the Cadence EDA tool, showing that the proposed decompression logic is feasible to implement in the BIST environment of an advanced VLSI testing system.

An efficient code compression technique combining a Bitmask Dictionary with 2^n Pattern Run Length Coding (BDPRLC) is proposed to further enhance compression performance. The compression algorithm is implemented in both MATLAB and VHDL to compare its performance in the two tools, with the same benchmark circuits used to validate the results. The bitmask dictionary and 2^n pattern RLC algorithms are also implemented separately in MATLAB, and the comparison shows that the hybrid concept works well: the average compression ratio increases by up to 12% for the proposed method. The results also show that MATLAB outperforms VHDL in compression ratio, as the former provides versatile and efficient built-in functions. Comparison with existing methods and with proposed methods I, II and III shows that the fourth proposed method, BDPRLC, improves the compression ratio for the largest number of circuits, achieving an average compression ratio of 86%, the highest among all the proposed methods. The decompression logic for s208 is implemented in VHDL and synthesized for 180 nm technology with the Cadence EDA tool. The VLSI design parameters, compared with those of MRLMHC, show that area and propagation delay are minimized at a negligible cost in power. This confirms that the proposed compression algorithm is also well suited to a BIST testing environment.
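The bitmask-dictionary idea underlying BDPRLC can be sketched as follows. This hypothetical Python fragment (the dictionary contents and the max_flips limit are made up, and it is not the thesis's implementation) shows how a word is encoded as a direct dictionary match, as a dictionary index plus flipped bit positions, or left uncompressed.

```python
# Hedged sketch of the bitmask-dictionary idea: a word that matches a
# dictionary entry exactly is encoded as its index; a word that differs in
# at most `max_flips` bit positions is encoded as (index, flipped positions);
# anything else is stored raw.
def encode_word(word, dictionary, max_flips=2):
    for idx, entry in enumerate(dictionary):
        diff = [i for i, (a, b) in enumerate(zip(word, entry)) if a != b]
        if not diff:
            return ("match", idx)
        if len(diff) <= max_flips:
            return ("bitmask", idx, diff)
    return ("raw", word)

dictionary = ["00001111", "10101010"]        # made-up dictionary contents
assert encode_word("00001111", dictionary) == ("match", 0)
assert encode_word("00011111", dictionary) == ("bitmask", 0, [3])
assert encode_word("11110000", dictionary) == ("raw", "11110000")
```

A match or bitmask record is far shorter than the raw word, so compression improves with how well the dictionary covers the test data.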

After analysing all the proposed methods, it is found that BDPRLC outperforms the others and is the most suitable for testing in the latest SOC environments. As technology drives the growth of every nation, VLSI ICs incorporating the proposed test data compression techniques also contribute to national development through the innovations of the electronics industry.


First and foremost, I would like to thank God, the Almighty for having made everything possible by rendering me the strength and courage to do this work.

I would like to thank my parents for their love and encouragement at each moment of my life. I am very grateful to my husband Mr.V.Sundaraj and sons P.S.SelvaGanesh and P.S.Selvakarthi for their relentless support and endless patience.



Table 1.1 Comparisons of Functional Testing and Structural Testing

Table 1.2 Worst Case Time Complexity

Table 1.3 Time Complexities for Popular Sorting Algorithms

Table 3.1 Test Set Data Partitioning and Occurrence Frequency

Table 3.2 Test Set Profile for ISCAS’89 Benchmark Circuits using Mintest

Table 3.3 Performance Characteristics of SSHRLC

Table 3.4 Comparison of Compression Ratios Obtained by SSHRLC with Existing Methods

Table 4.1 Coding Scheme of Compatible Data Block

Table 4.2 Performance Characteristics of CCBRLC and Compatible Block Coding.

Table 4.3 Comparison of Compression Ratios of Combined Compatible Block and Run Length Coding and Various Existing Methods

Table 5.1 Compression Results for Different Block Sizes in a MRLMHC Algorithm

Table 5.2 Compression Results for Different Block Sizes in a MRLMHC Algorithm.

Table 5.3 Performance Results of MRLMHC Algorithm.

Table 5.4 Comparison of Compression Ratios of the MRLMHC Coding Method with Other Compression Methods

Table 5.5 Complete details of the synthesis report of MRLMHC in FPGA- XC2S600E-6FG676

Table 5.6 Key Parameters of the Synthesis Report of MRLMHC in FPGA- XC2S600E-6FG676

Table 5.7 Decompression Results of MRLMHC

Table 6.1 Performance Results of the BDPRLC Algorithm, 2^n PRL and Bitmask Dictionary

Table 6.2 Comparative Analysis of BDPRLC Algorithm with Existing Methods

Table 6.3 Complete Details of the Synthesis Report of BDPRLC in FPGA- XC2S600E-6FG676

Table 6.4 Key Parameters of the Synthesis Report of BDPRLC in FPGA- XC2S600E-6FG676

Table 6.5 Decompression Results of a Hybrid of Bitmask Dictionary and 2^n Pattern Run Length Coding Methods for Circuit s208


Figure 1.1 Basic Testing Method

Figure 1.2 VLSI Development Process

Figure 1.3 Design Hierarchies

Figure 1.4 Manufacturing Process

Figure 1.5 Block Diagram of Test Data Compression and Decompression for CUT

Figure 1.6 Adhoc DFT Test Points using Multiplexers

Figure 1.7 Transforming a Circuit for Scan Design

Figure 1.8 Block Diagram of BIST

Figure 1.9 Block Diagram of a Dictionary Code

Figure 1.10 Example of Huffman Coding

Figure 1.11 Block Diagram of Run-Length Code

Figure 1.12 Block Diagram of Fixed-Length Sequential Linear Decompressors

Figure 1.13 Block Diagram of Variable-Length Sequential Linear Decompressors

Figure 1.14 Block Diagram of Combined Linear and Nonlinear Decompressors

Figure 3.1 Test Data Compression and Decompression Methods of Combined Selected Selective Huffman and Run Length Coding

Figure 3.2 Comparisons of Compression Ratios Obtained by SSHRLC with Existing Methods for Benchmark Circuits

Figure 3.3 Average Compression Ratios of SSHRLC and Existing Methods

Figure 4.1 Test Data Compression/Decompression Method of Combined Compatible Block and Run Length Coding

Figure 4.2 Comparison of Compression Ratio of CCBRLC with Existing Methods

Figure 4.3 Average Compression Ratios of SSHRLC, CCBRLC and Existing Methods

Figure 5.1 Test Data Compression/Decompression Method of Combined Modified Run Length Coding Technique and Multi Level Selective Huffman Coding

Figure 5.2 Block Diagram of Decoder Architecture of a MRLMHC

Figure 5.3 State Diagrams for a Decoder of MRLMHC

Figure 5.4 Comparison of Compression Ratio of the MRLMHC with Other Methods

Figure 5.5 Average Compression Ratios of MRLMHC and Existing Methods

Figure 5.6 Simulation of Waveform Window for Decompression Logic of MRLMHC

Figure 5.7 RTL Schematic of MRLMHC

Figure 5.8 Synthesis Report for Decompression Logic of MRLMHC in FPGA- XC2S600E-6FG676

Figure 6.1 Encoding Format for Bit Masking Compression Technique

Figure 6.2 Bitmask Based Code Compression Technique

Figure 6.3 Compression using Bitmask and Dictionary

Figure 6.4 Code Word Formats for 2^n PRL and Its Example

Figure 6.5 Test Data Compression/Decompression of BDPRLC Algorithm

Figure 6.6 Block Diagram of Decompression Architecture

Figure 6.7 Comparison of Compression Ratio of Proposed BDPRLC with Existing Methods

Figure 6.8 Average in Percentage of Compression Ratios of Proposed BDPRLC and Existing Methods

Figure 6.9 Simulation of Waveform Window for Decompression Logic of BDPRLC

Figure 6.10 RTL Schematic of BDPRLC

Figure 6.11 Synthesis Report for Decompression Logic of BDPRLC in FPGA- XC2S600E-6FG676






The role of testing is to detect problems in a circuit, and the role of diagnosis is to determine where the problem has occurred. Correctness and effectiveness of testing are the most important characteristics of quality products. According to Moore's law, the number of transistors integrated per square inch on a die has doubled roughly every 18 months since the invention of the integrated circuit. The growing scale and complexity of circuits pose many new challenges that keep the testing of Very-Large-Scale Integration (VLSI) circuits relevant. As these trends continue with the development of semiconductor manufacturing technology, the requirements of digital VLSI circuits have also brought many challenges to manufacturing test.

This is because large, complex chips require an enormous amount of test data and dissipate a significant amount of power during test, resulting in a considerable increase in test cost. This chapter introduces important concepts in the testing of digital VLSI circuits and highlights the significance of minimizing test data volume during test.


Test data compression is an effective method for reducing test data volume and memory requirement with relatively small cost. An effective test structure for embedded hard cores is easy to implement and it is also capable of producing high-quality tests as part of the design flow.

Test data compression aims to reduce test data volume through test stimulus compression, using approaches such as code-based schemes, linear-decompression-based schemes and broadcast-scan-based schemes.


This research addresses the problem of test data volume and memory requirements. The primary objective of the study is to introduce novel techniques that improve the compression ratio by reducing test data volume during at-speed test in scan designs. This in turn diminishes the tester memory requirement and hence reduces the chip area needed in a Built-In Self-Test (BIST) environment.

The aim of this research is to introduce compression algorithms formed by combining existing data compression techniques. The algorithms are designed to reduce the volume of input test patterns while guaranteeing an acceptable level of fault coverage, a key parameter for evaluating the quality of testing.


In the 1980s, the term "VLSI" was applied to chips with more than 100,000 transistors and remained in use until chips with hundreds of millions of transistors were introduced. In 1986, the first megabit Random Access Memory (RAM) contained more than 1 million transistors, and microprocessors produced in 1994 contained 3 million transistors, as reported by Arthistory (2005). It is now common for the VLSI devices in a computer, an everyday electronic appliance, to contain several million transistors.

This is a direct result of the steadily decreasing dimensions, referred to as the feature size, of transistors and interconnecting wires, which have shrunk from tens of microns to tens of nanometers; current submicron technologies are based on feature sizes below 100 nanometers (100 nm). Smaller feature sizes have also increased operating frequencies and clock speeds. However, the reduction in feature size increases the probability that a manufacturing defect in the IC will result in a faulty chip: when the feature size is less than 100 nm, even a very small defect can produce a faulty transistor or interconnecting wire. Furthermore, it takes only one faulty transistor or wire to make the entire chip fail to function properly or at the required operating frequency. Yet defects created during the manufacturing process are unavoidable, and in reality some number of ICs is expected to be faulty.

Hence, testing is essential to guarantee fault-free products, whether the product is a VLSI device or an electronic system composed of many VLSI devices. It is also necessary to test components at various stages of the manufacturing process. For instance, to produce an electronic system, ICs are fabricated and used to assemble printed circuit boards (PCBs), and the PCBs are then used to assemble the system. There is general agreement with the rule of ten, which predicts that the cost of detecting a faulty IC increases by an order of magnitude at each stage of manufacturing: from device level to board level, from board level to system level, and from system level to system operation in the field. Electronic testing therefore includes IC testing, PCB testing, and system testing at the various manufacturing stages and during system operation.

Testing is used not only to find fault-free devices, PCBs and systems but also to improve production yield at the various stages of manufacturing by analysing the causes of defects. Periodic testing is performed to ensure fault-free system operation and to initiate repair procedures when faults are detected. Hence VLSI testing is important to designers, product engineers, test engineers, managers, manufacturers and end-users, as discussed by Jha et al. (2003).


Testing typically consists of applying a set of test stimuli to the inputs of the Circuit Under Test (CUT) and analysing the output responses, as illustrated in Figure 1.1. Circuits that produce correct output responses for all input stimuli pass the test and are considered fault-free.

[Figure not included in this excerpt]

Figure 1.1 Basic Testing Method

Those circuits that fail to produce a correct response at any point during the test sequence are considered to be faulty.

1.5.1 VLSI Development Process

The VLSI development process is shown in Figure 1.2. The customer's project requirements determine the VLSI device to be developed, and these requirements are framed as a design specification.

[Figure not included in this excerpt]

Figure 1.2 VLSI Development Process

Designers synthesize a design and verify that the circuit fulfils the design specification. Design verification is a predictive analysis that ensures the synthesized design will accomplish the required functions after manufacturing. When a design error is found, the design must be adjusted and the verification repeated; in this sense, design verification can be regarded as a form of testing. Once verified, the VLSI design goes to fabrication. At the same time, test engineers develop a test technique based on the design specification and on fault models related to the implementation technology. A defect is a flaw or physical imperfection that may lead to a fault.

Due to inevitable statistical flaws in the materials and masks used to fabricate ICs, it is impossible for any particular kind of IC to be completely defect-free. Thus, the first testing performed during the manufacturing process tests the ICs fabricated on the wafer in order to identify defective devices. The chips that pass the wafer-level test are extracted and packaged. The packaged devices are retested to reject pieces damaged during the packaging process or placed into defective packages. Additional testing guarantees the final quality before entry to market; this final testing measures parameters such as input/output timing specifications, voltage and current. In addition, burn-in or stress testing is often performed, in which chips are exposed to high temperatures and elevated supply voltages.

The purpose of burn-in testing is to accelerate the effect of defects that may lead to failures in the early stages of operation of the IC. Failure Mode Analysis (FMA) is typically used at all stages of IC manufacturing testing to identify process improvements that will increase the production of defect-free devices.

Design verification

The different levels of abstraction in the VLSI design flow are shown in Figure 1.3. The design process essentially transforms a higher-level description of a design into a lower-level description. Starting from a design specification, a behavioural (architecture) level description is developed in Very High Speed Integrated Circuit Hardware Description Language (VHDL), Verilog or a C program. It is simulated to determine whether it is functionally equivalent to the specification.

[Figure not included in this excerpt]

Figure 1.3 Design Hierarchies

The design is then defined at the Register-Transfer Level (RTL), which contains more structural information in terms of the sequential and combinational logic functions to be performed in the data paths and control circuits. The RTL description must be verified against the functionality of the behavioural description before proceeding with synthesis to the logic level. A logic-level implementation is automatically synthesized from the RTL description to produce the gate-level design of the circuit. The logic-level implementation should be verified as thoroughly as possible to guarantee the correct functionality of the final design. In the final step, the logic-level description is transformed into a physical-level description that specifies the physical placement and interconnection of the transistors in the VLSI device prior to fabrication.

Yield and reject rate

A certain percentage of manufactured ICs is expected to be faulty due to manufacturing defects. The yield of a manufacturing process is defined as the percentage of acceptable parts among all parts fabricated, as given in Equation (1.1).

Yield = (number of acceptable parts / total number of parts fabricated) × 100% (1.1)

There are two types of yield loss: catastrophic and parametric. Catastrophic yield loss is due to random defects, while parametric yield loss is due to process variations. As a VLSI fabrication process line matures, the density of the particles that produce random defects drops drastically over time. Consequently, parametric variations caused by process fluctuations become the dominant reason for yield loss.

When ICs are tested, the following two undesirable situations may occur:

1. A faulty device passes the test and appears to be a good part.
2. A good device fails the test and appears to be faulty.

These two outcomes often result from a poorly designed test or from a lack of Design For Testability (DFT). As a result of the first case, even if all products pass the acceptance test, some faulty devices will still reach the manufactured electronic system. When these faulty devices are returned to the IC manufacturer, they undergo FMA, leading to improvements in the VLSI development and manufacturing processes. The ratio of field-rejected parts to all parts passing quality assurance testing is referred to as the reject rate, also called the defect level, as given in Equation (1.2).

Reject rate = (number of parts rejected in the field / total number of parts passing quality assurance testing) × 100% (1.2)

The reject rate provides an indication of the overall quality of the VLSI testing Process (Bushnell et al. 2000).
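As a numeric illustration of Equations (1.1) and (1.2), the following short Python sketch uses made-up counts:

```python
# Numeric illustration of Equations (1.1) and (1.2); all counts are made up.
fabricated = 10000          # parts fabricated
acceptable = 9200           # parts passing manufacturing test
yield_pct = 100.0 * acceptable / fabricated            # Equation (1.1)

passed_qa = 9200            # parts shipped after quality-assurance testing
field_rejects = 46          # parts later rejected in the field
reject_rate = 100.0 * field_rejects / passed_qa        # Equation (1.2)

print(f"yield = {yield_pct:.1f}%, reject rate = {reject_rate:.1f}%")
```

With these illustrative numbers the yield is 92% and the reject rate is 0.5%, i.e. 5000 defective parts per million shipped.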

Testing is also required to ensure system availability. It may take the form of online testing, offline testing or a combination of both. Online testing is performed concurrently with normal system operation in order to detect failures as quickly as possible. Offline testing is done when the system, or a portion of it, is taken out of service to perform the test; consequently, offline testing is performed periodically and during low-demand periods of system operation. In many cases, when online testing detects a failure, offline test techniques are used to diagnose (locate and identify) the replaceable component and so reduce the repair time.

When the system is repaired, the system, or a portion of it, is retested using offline techniques to verify that the repair was successful before the system is placed back in service. The faulty components (PCBs, in most cases) replaced during system repair are sometimes sent to the manufacturing facility or a repair facility for further testing, typically board-level tests whose objective is to locate the faulty VLSI devices on the PCB for replacement or repair. The PCB is then retested to verify complete repair prior to shipment, after which it can serve as a replacement component for future system repairs. This PCB test, diagnosis and repair scenario is viable only when it is cost-effective, as in the case of expensive PCBs. The important point is that testing continues long after the VLSI development process and is performed throughout the life cycle of many VLSI devices.

1.5.2 Electronic System Manufacturing Process

An electronic system consists of one or more units, each comprising PCBs on which one or more ICs are mounted. The steps involved in manufacturing an electronic system, illustrated in Figure 1.4, are all susceptible to defects. As a result, testing is essential at various stages to verify that the final product is fault-free. The PCB fabrication process is a photolithographic process similar in certain ways to the VLSI fabrication process.

[Figure not included in this excerpt]

Figure 1.4 Manufacturing Process

Bare PCBs are tested in order to discard defective boards before expensive VLSI components are assembled onto them. After assembly, which includes the placement of components and wave soldering, the PCB is tested again; this test covers the various components mounted on the PCB, such as the VLSI devices, to verify that they are properly mounted and were not damaged during the PCB assembly process. Tested PCBs are assembled into units and systems that are themselves tested before shipment for field operation, although unit-level and system-level testing may not use the same tests as those used for the PCBs and VLSI devices.

System-level operation

When a manufactured electronic system is shipped to the field, it may undergo testing as part of the installation process to ensure that it is fault-free before being placed into operation. A number of events can cause a system failure during operation, including single-bit upsets, electromigration and material aging.

Reliability, defined as the probability that a system will operate normally for a duration Tn, is given in Equation (1.3).

R(Tn) = e^(-λTn) (1.3)

Here λ is the failure rate and Tn is the duration of normal system operation. Because a system is composed of a number of components, the overall failure rate of the system is the sum of the individual failure rates λi of each of its k components: λ = λ1 + λ2 + … + λk.
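Assuming the usual exponential failure model behind Equation (1.3), the computation can be sketched as follows; the failure rates used here are illustrative only:

```python
# Sketch of Equation (1.3) under the exponential-failure assumption:
# R(Tn) = exp(-lambda * Tn), with the system failure rate taken as the
# sum of the per-component rates. All numbers are illustrative.
import math

component_rates = [2e-6, 5e-6, 3e-6]     # failures per hour, k = 3 components
lam = sum(component_rates)               # overall system failure rate
Tn = 10000                               # hours of normal operation
reliability = math.exp(-lam * Tn)
print(f"R({Tn} h) = {reliability:.4f}")
```

With these rates, λTn = 0.1, so the system survives 10,000 hours of operation with probability of about 0.905.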


The physical implementation of a VLSI device is very complicated, and any small piece of dust or geometric abnormality can result in a defect. Defects are caused by process variations or by random localized manufacturing imperfections. Process variations affecting transistor channel length, transistor threshold voltage, metal interconnect width and thickness, and inter-metal-layer dielectric thickness impact logical and timing performance. Random localized imperfections can result in resistive bridging between metal lines, resistive opens in metal lines, improper via formation, and so on. Recent advances in physics, chemistry and materials science have allowed the production of nanometer-scale structures using sophisticated fabrication techniques. It is widely recognized that nanometer-scale devices will have much higher manufacturing defect rates than conventional Complementary Metal Oxide Semiconductor (CMOS) devices; they will have much lower current drive capability and be more sensitive to noise-induced errors such as crosstalk.

They will also be more susceptible to failures of transistors and wires due to soft (cosmic) errors, process variations, electromigration and material aging. As the integration scale increases, more transistors can be fabricated on a single chip, reducing the cost per transistor. However, the difficulty of testing each transistor increases with the complexity of the VLSI device, the increased potential for defects, and the difficulty of detecting the faults produced by those defects. This trend is further emphasized by the competitive price pressure of the high-volume consumer market, as well as by the emergence of System-On-Chip (SOC) implementations and mixed-signal circuits and systems, including Radio Frequency (RF) and Micro-Electro-Mechanical Systems (MEMS).

1.6.1 Test Generation

A defect is an unintended difference between the implemented hardware and its intended design. Defects arise either during manufacturing or during the use of the devices. Various types of defects are given below.

Extra and missing material - It is primarily caused by dust particles on the mask or wafer surface or in the processing chemicals.

Oxide breakdown - It is primarily caused by insufficient oxygen at the interface of silicon (Si) and silicon dioxide (SiO2), chemical contamination and crystal defects.

Electromigration - It is primarily caused by the transport of metal atoms when a current flows through the wire.

Aluminum's low melting point gives it a high self-diffusion rate, which increases its susceptibility to electromigration.

A fault is a representation of a defect at the abstracted functional level. An error is a wrong output signal produced by a defective system; it is caused by a fault or a design error and could result in a system failure.

To test a circuit with n inputs and m outputs, a set of input patterns is applied to the Circuit Under Test (CUT), and its responses are compared with the known good responses of a fault-free circuit. Each input pattern is called a test vector.

In order to test a circuit completely, many test patterns are required. However, it is difficult to know how many test vectors are needed to guarantee a satisfactory reject rate. If the CUT is an n-input combinational logic circuit, all 2^n possible input patterns can be applied for testing stuck-at faults. This approach is called exhaustive testing. If a circuit passes exhaustive testing, it may be assumed that the circuit does not contain functional faults, regardless of its internal structure. Unfortunately, exhaustive testing is not practical when 'n' is large.
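Exhaustive testing amounts to comparing the CUT against a fault-free reference over all 2^n input patterns. A minimal sketch follows; the two-input circuit and the assumed defect (OR instead of XOR) are purely illustrative, not from the thesis.

```python
from itertools import product

def cut_response(a, b):
    # Circuit under test: intended function is a XOR b,
    # but a hypothetical defect makes it compute a OR b.
    return a | b

def golden_response(a, b):
    # Fault-free reference model.
    return a ^ b

# Exhaustive testing: apply all 2**n input patterns (n = 2 here)
# and record the patterns on which the CUT deviates.
failing = [(a, b) for a, b in product([0, 1], repeat=2)
           if cut_response(a, b) != golden_response(a, b)]
print(failing)  # only pattern (1, 1) exposes this defect
```

The same comparison loop becomes hopeless for large n, which is exactly why structural testing with fault models is preferred.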

Furthermore, even applying all 2^n possible input patterns to an n-input sequential logic circuit gives no assurance that all possible states have been visited. Nevertheless, the example of applying all possible input patterns to an n-input combinational logic circuit illustrates the basic idea of functional testing: every entry in the truth table of the combinational logic circuit is checked to determine whether it produces the correct response. In practice, many designers and test engineers consider functional testing useful only for exercising the CUT as thoroughly as possible in a system-like mode of operation. In either case, one problem is the absence of a quantitative measure of the defects that will be detected by a set of functional test vectors.

A more practical approach is to select specific test patterns based on circuit structural information and a set of fault models. This approach is called structural testing. Structural testing saves time and improves test efficiency, as the total number of test patterns is decreased because the test vectors target specific faults that would result from defects in the manufactured circuit. Structural testing cannot guarantee the detection of all possible manufacturing defects, as the test vectors are generated based on specific fault models; however, the use of fault models provides a quantitative measure of the fault-detection ability of a given set of test vectors for a targeted fault model. This measure is called fault coverage. A comparison of functional testing and structural testing is given in Table 1.1.

Table 1.1 Comparison of Functional Testing and Structural Testing

[Table not included in this excerpt]

Fault coverage (FC) measures the ability of a test (a collection of test patterns) to detect the faults that may occur on the device under test, as given in Equation (1.4).

FC = Number of detected faults / Total number of faults (1.4)

Defect level (DL) is the ratio of faulty chips among the chips that pass the tests, as stated in Equation (1.5).

- DL is measured as defects per million (DPM).
- DL is a measure of the effectiveness of tests.
- DL is a quantitative measure of the manufactured product quality. For commercial VLSI chips, a DL greater than 500 DPM is considered unacceptable.

DL = 1 - Y^(1-FC) and 0 < DL <= 1 - Y (1.5)

Quality level (QL) is the fraction of good parts among the parts that pass all the tests and are shipped, as per Equation (1.6).

QL = 1 - DL = Y^(1-FC) and 0 < QL < 1 (1.6)

As a result, fault coverage directly affects the quality level.
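Equations (1.5) and (1.6) can be evaluated directly with the usual yield-based defect-level model; the yield and fault-coverage figures below are assumed values for illustration only.

```python
def defect_level(yield_Y, fault_coverage):
    # Equation (1.5): DL = 1 - Y**(1 - FC)
    return 1 - yield_Y ** (1 - fault_coverage)

def quality_level(yield_Y, fault_coverage):
    # Equation (1.6): QL = 1 - DL = Y**(1 - FC)
    return yield_Y ** (1 - fault_coverage)

Y, FC = 0.9, 0.995          # assumed 90% yield, 99.5% fault coverage
dl = defect_level(Y, FC)
print(f"DL = {dl * 1e6:.0f} DPM")   # DPM = defects per million shipped parts
print(f"QL = {quality_level(Y, FC):.6f}")
```

With these assumed numbers DL comes out just above the 500 DPM threshold quoted above, showing how sensitive shipped quality is to the last fraction of a percent of fault coverage.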

1.6.2 Fault Models

It is challenging to generate tests for real defects because of their diversity. Fault models are essential for generating and evaluating a set of test vectors. Generally, a good fault model should satisfy two criteria:

1. It should accurately reveal the behavior of defects.

2. It should be computationally effective in terms of fault simulation and test pattern generation.

Many fault models have been discussed by Abramovici et al. (1994). However, no single fault model accurately reveals the behaviour of all possible defects that can occur. As a consequence, a combination of different fault models is often used in the generation of test vectors and the evaluation of testing methodologies for VLSI devices. For a given fault model, there are k different types of faults that can take place at each potential fault site (k = 2 for most fault models), and a given circuit contains n possible fault sites, depending on the fault model. Under the single-fault model, which assumes that there can be only one fault in the circuit, the total number of possible single faults is given in Equation (1.7).

Number of single faults = k×n (1.7)

If multiple faults can occur in the circuit, the total number of possible combinations of faults, referred to as the multiple-fault model, is given in Equation (1.8).

Number of multiple faults = (k+1)^n - 1 (1.8)

In the multiple-fault model, each fault site can have one of the k possible faults or be fault-free, hence the (k+1) term; the "-1" in Equation (1.8) excludes the fault-free circuit, in which all n fault sites are fault-free. While the multiple-fault model is more accurate than the single-fault assumption, the number of possible faults becomes impractically large except for small numbers of fault types and fault sites. Fortunately, it has been shown that high fault coverage obtained under the single-fault assumption results in high fault coverage for the multiple-fault model (Bushnell et al. 2000). Therefore, the single-fault assumption is typically used for test generation and evaluation. Under the single-fault assumption, two or more faults may produce identical faulty behaviour for all possible input patterns. Such faults are called equivalent faults and can be represented by any single fault from the set of equivalent faults. As a result, the number of single faults considered for test generation for a given circuit is usually much less than k×n. This reduction of the set of single faults by removing equivalent faults is referred to as fault collapsing, which reduces both test generation and fault simulation times.
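Equations (1.7) and (1.8) are easy to check numerically; the values k = 2 and n = 10 below are illustrative and show how quickly the multiple-fault count explodes.

```python
def single_faults(k, n):
    # Equation (1.7): one of k fault types at exactly one of n sites.
    return k * n

def multiple_faults(k, n):
    # Equation (1.8): each site is fault-free or has one of k faults,
    # minus the single all-fault-free combination.
    return (k + 1) ** n - 1

# Stuck-at model (k = 2) on a small circuit with 10 fault sites.
print(single_faults(2, 10))    # 20 single faults
print(multiple_faults(2, 10))  # 3**10 - 1 = 59048 multiple-fault combinations
```

Even at 10 fault sites the multiple-fault model yields nearly 60,000 combinations versus 20 single faults, which is why the single-fault assumption dominates in practice.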

Why Model Faults

- I/O function tests are inadequate for manufacturing (functionality versus component and interconnection testing).
- Real defects (often mechanical) are too numerous and often not analyzable.
- A fault model identifies targets for testing.
- A fault model makes analysis possible.
- Effectiveness is measurable by experiments.

Types of Fault Models

The stuck-at fault model ties a fixed value (0 or 1) to a net: stuck-at-0 or stuck-at-1.

Two types in this category are:

- Single stuck-at fault model.
- Multiple stuck-at fault model.

Single Stuck-At Fault

A given line has a constant value (0 or 1) independent of other signal values in the circuit. Properties:

- Only one line is faulty.
- The faulty line is permanently set to 0 or 1.
- The fault can be at an input or output of a gate.
- Simple logical model is independent of technology details.
- It reduces the complexity of fault-detection algorithms.

One stuck-at fault can model more than one kind of defect.
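A single stuck-at fault can be modelled in software by forcing one net to a constant and comparing against the fault-free response; the two-gate circuit and the net name "w" below are hypothetical.

```python
def circuit(a, b, c, fault=None):
    # y = (a AND b) OR c, with an optional single stuck-at fault
    # injected on the internal net 'w'.
    w = a & b
    if fault == "w/0":       # net w stuck-at-0
        w = 0
    elif fault == "w/1":     # net w stuck-at-1
        w = 1
    return w | c

# Test vector (a, b, c) = (1, 1, 0) activates w/0 (drives w to the
# opposite value) and propagates the difference to the output y.
good = circuit(1, 1, 0)              # fault-free response: 1
bad = circuit(1, 1, 0, fault="w/0")  # faulty response: 0
print(good, bad)  # differing responses mean the fault is detected
```

A vector detects a stuck-at fault exactly when it both activates the fault and propagates its effect to an observable output, as this example shows.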

Transistor Faults

A MOS transistor is considered as an ideal switch, for which two types of faults are modelled: stuck-open, where a single transistor is permanently stuck in the open state, and stuck-on, where a single transistor is permanently shorted irrespective of its gate voltage. Detection of a stuck-open fault requires two vectors, while detection of a stuck-on fault requires the measurement of quiescent current (IDDQ).

Open and Short Faults

Defects in VLSI devices can include opens and shorts in the wires that interconnect the transistors forming the circuits. Opens in wires tend to behave like transistor stuck-open faults when the faulty wire segment interconnects transistors to form gates. A bridging fault is a short between a group of nets.

Delay Faults and Crosstalk

The delay fault model increases the input-to-output delay of one logic gate at a time. A delay fault causes excessive delay along a path such that the total propagation delay falls outside the specified limit. Delay faults have become more prevalent with decreasing feature sizes. The use of nanometer technologies increases cross-coupling capacitance and inductance between interconnects, leading to severe crosstalk effects that may result in improper functioning of a chip. Crosstalk effects can be separated into two categories:

- Crosstalk glitches.
- Crosstalk delays.

A crosstalk glitch is a pulse that is provoked by coupling effects among the interconnecting lines. The magnitude of the glitch depends on the ratio of the coupling capacitance to the line-to-ground capacitance.

Crosstalk delay is a signal delay provoked by the same coupling effects among interconnect lines, but it may be produced even if line drivers are balanced and have large loads. Because crosstalk delays add to the normal gate and interconnect delays, it is difficult to estimate the true circuit delay, which may lead to severe signal delay problems.


VLSI testing includes two processes: test generation and test application. The objective of test generation is to produce test patterns for efficient testing (Laung-Terng et al. 2006). Test application is the process of applying those test patterns to the CUT and analysing the output responses; it is performed either by Automatic Test Equipment (ATE) or by test facilities on the chip itself. Deploying reliable integrated circuits depends strongly on testing to eliminate circuits made defective by the manufacturing process. Manufacturing test is performed after a circuit comes out of the manufacturing line in order to screen defective parts.

Manufacturing testing has three basic components: the circuit under test (CUT), the automatic test equipment (ATE) and the ATE memory, which stores the test patterns (or test vectors) and the expected responses obtained from Automatic Test Pattern Generation (ATPG) tools. To test a digital circuit, test vectors are applied to its inputs and the CUT responses are analysed. If the CUT responses match the fault-free responses, the circuit is considered to be functioning correctly. The input test vectors and their responses are stored in the ATE, which applies the tests to the CUT and analyses its responses.

A larger test data size requires more memory and takes more testing time. Test data compression resolves this problem by decreasing the test data volume without upsetting overall system performance; it is an effective method for reducing test data volume and test application time at relatively small cost. The processes of test data compression and decompression for a CUT are shown in Figure 1.5. The original test data is compressed and stored in the memory, so the required memory size is significantly reduced. An on-chip decoder decodes the compressed test data from the memory and delivers the original uncompressed set of test vectors.
Test data compression consists of adding some additional on-chip hardware before and after the scan chains. This additional hardware decompresses the test stimulus coming from the tester; it also compacts the response after the scan chains and before it goes to the tester. This permits storing the test data in a compressed form on the tester.

[Figure not included in this excerpt]

Figure 1.5 Block Diagram of Test Data Compression and Decompression for CUT
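As a first taste of code-based test stimulus compression, the variable-to-fixed run-length idea can be sketched in software; the 4-bit codeword width and the bit stream below are illustrative assumptions, not the scheme developed in this thesis.

```python
def rl_compress(bits, width=4):
    # Variable-to-fixed run-length code: each fixed-width codeword gives
    # the length of a run of 0s terminated by a single 1.  The all-ones
    # codeword encodes (2**width - 1) zeros with no terminating 1, so
    # longer runs are split.  Assumes the bit stream ends with a '1'.
    cap = (1 << width) - 1
    out, run = [], 0
    for b in bits:
        if b == "1":
            out.append(format(run, f"0{width}b"))
            run = 0
        elif run + 1 == cap:
            out.append(format(cap, f"0{width}b"))
            run = 0
        else:
            run += 1
    return "".join(out)

def rl_decompress(code, width=4):
    # The on-chip decoder's job: expand each codeword back into a run.
    cap = (1 << width) - 1
    out = []
    for i in range(0, len(code), width):
        n = int(code[i:i + width], 2)
        out.append("0" * n if n == cap else "0" * n + "1")
    return "".join(out)

stimulus = "0" * 7 + "1" + "0" * 20 + "1"   # a test-cube-like bit stream
packed = rl_compress(stimulus)
assert rl_decompress(packed) == stimulus    # lossless round trip
print(len(stimulus), len(packed))           # 29 bits reduced to 12
```

The decompressor is deliberately simple, which is the point: only the small `rl_decompress` logic needs to live on chip, while the ATE stores just the packed codewords.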

The testing principle thus involves three components: input patterns, the CUT and the stored correct responses. When the chip is digital, the stimuli are called test patterns or test vectors, and ATE carries out this process.


1.7.1 Automatic Test Equipment

Automatic test equipment (ATE) is computer-controlled equipment used in the production testing of ICs (both at the wafer level and in packaged devices) and PCBs. Test patterns are applied to the CUT and the output responses are compared to stored responses for the fault-free circuit. In the 1960s, when ICs were first introduced, it was predicted that testing would become a bottleneck to high-volume production of ICs unless the tasks normally performed by technicians and laboratory instruments could be automated. An IC tester controlled by a minicomputer was developed in the mid-1960s, and the ATE industry was established. Since then, with advances in VLSI and computer technology, the ATE industry has developed test systems for electronic subassemblies (PCBs and backplanes), digital IC testers, analog testers and SOC testers. A custom tester is often developed for testing a particular product, but a general-purpose ATE is often more flexible, as it enhances the productivity of high-volume manufacturing. Generally, ATE consists of the following parts:

1. Computer: a powerful computer is the heart of any ATE, providing central control and making the test and measurement flexible for different products and test purposes.

2. Pin electronics: ATE architectures can be divided into two major subcomponents, the data generator and the pin electronics. The data generator supplies the input test vectors for the CUT, while the pin electronics are responsible for formatting these vectors to produce waveforms of the desired shape and timing. The pin electronics are also responsible for sampling the CUT output responses at the desired time. In order to actually touch the pads of an IC on a wafer or the pins of a packaged chip during testing, it is necessary to have a fixture with probes for each pin of the IC under test. Current VLSI devices may have over 1000 pins and require a tester with as many as 1024 pin channels. As a result, the pin electronics and fixtures constitute the most expensive part of the ATE.

3. Timing and formatting: in conjunction with the pin electronics, ATE contains waveform generators designed to change logic values at the setup and hold times associated with a given input pin. A test pattern containing logic 1s and 0s must be translated to these various timing formats. ATE also captures primary output responses, which are then translated to output vectors for comparison with the fault-free responses. These translations and some environment settings are controlled by the central computer; therefore, a test program, usually written in a high-level language, becomes an important ingredient for controlling these translations and environment settings. Algorithmically generated test patterns may consist of subroutines, pattern and test routine calls, or sequenced events. The test program also specifies the timing format in terms of the tester edge sets. An edge set is a data format with timing information for applying new data to a chip input pin, and it includes the input setup time, hold time and waveform type.

4. Digital signal processing (DSP): powerful 32-bit DSP techniques have been widely applied to analog testing for capturing analog characteristics at high frequencies. Digital signals are converted to analog signals and applied to the analog circuit inputs, while the analog output signals are converted to digital signals for response analysis by the DSP.

5. Precision: ATE precision is a performance metric specifying the smallest measurement that the tester can resolve in a very low-noise environment, which is especially important for analog and mixed-signal testing. For example, a clock jitter (phase noise) of no more than 10 ps is required to properly test ICs that realize data rates above 100 Mb/s, and even higher precision is required for today's high-performance ICs. The application of vectors to a circuit with the intent of verifying timing compliance depends on the operational frequency of the ATE. Ideally, the ATE operational frequency should be much higher than that of the ICs under test. Unfortunately, this is difficult because the ATE itself is constructed from ICs and is limited by their maximum operating frequency.

Automatic test equipment can be very expensive. To satisfy the needs of advanced VLSI testing, the following features form the basis for keeping ATE costs under control:

1. Modularity: modular systems provide users the flexibility to purchase and use only those options that suit the products under test.
2. Configurability: test system configurability is essential for many test platforms. As testing needs change, users can reconfigure the test resources for specific products and continue to use the same basic framework.
3. Parallel testing: testing multiple devices in parallel improves the throughput and productivity of the ATE. Higher throughput means lower overall test cost.
4. Third-party support: the use of third-party hardware and software permits adopting the best available equipment and approaches, giving rise to competition that lowers test cost over time.

From a test economics point of view, there has been a systematic decrease in the capital cost of manufacturing a transistor over the past several decades, even as the industry continues to deliver more complex devices; however, testing capital cost per transistor has remained relatively constant. As a result, test costs are becoming an increasing portion of the overall industry capital requirement per transistor, to the extent that it currently costs almost as much to test a transistor as to manufacture it.

On the other hand, from a test technology point of view, ATE in the early 1980s had resolution capabilities well in excess of the expected component requirements. In 1985, for example, when testing a fast 8-MHz 286 microprocessor, 1 ns edge-placement accuracy was available in ATE, with very low yield loss due to tester tolerances. Later, while testing 700-MHz Pentium III microprocessors, only 100-ps edge-placement accuracy was available. Thus, the hundredfold increase in CUT speed was accompanied by only a tenfold increase in tester accuracy (Gelsinger 2000).

1.7.2 Automatic Test Pattern Generation

In the early 1960s, structural testing was introduced and the stuck-at fault model was employed. A complete ATPG algorithm, called the D-algorithm, was first published by Roth (1966). The D-algorithm uses a logical value to represent both the "good" and the "faulty" circuit values simultaneously and can generate a test for any stuck-at fault, as long as a test for that fault exists. Although the computational complexity of the D-algorithm is high, its theoretical significance is widely recognized. The next landmark in ATPG was the PODEM algorithm by Goel (1981), which searches the circuit primary input space based on simulation to enhance computational efficiency. Since then, ATPG algorithms have become an important topic for research and development; many improvements have been proposed and many commercial ATPG tools have appeared on the market. For example, FAN by Fujiwara et al. (1983) and SOCRATES by Schulz et al. (1988) are remarkable contributions to accelerating the ATPG process. A common approach underlying many current ATPG tools is to start from a random set of test patterns. Fault simulation then determines the number of potential faults that are detected. With the fault simulation results used as guidance, additional vectors are generated for hard-to-detect faults until the desired or a reasonable fault coverage is obtained. The International Symposium on Circuits and Systems (ISCAS) announced combinational logic benchmark circuits (Brglez et al. 1985) and sequential logic benchmark circuits (Brglez et al. 1989) to assist ATPG research and development in the international test community.

A major problem in large combinational logic circuits with thousands of gates is the identification of undetectable faults. In the 1990s, very fast ATPG systems were developed on advanced high-performance computers, providing a speed-up of five orders of magnitude over the D-algorithm with 100% fault-detection efficiency. As a result, ATPG for combinational logic is no longer a problem; however, ATPG for sequential logic is still difficult because the effect of a fault must be propagated to a primary output so that it can be observed and detected, which requires traversing a state sequence with the fault present. For large sequential circuits, it is difficult to reach 100% fault coverage in reasonable computational time, and it may cost more unless DFT techniques are adopted (Breuer et al. 1987).

1.7.3 Fault Simulation

A fault simulator emulates the target faults in a circuit in order to determine which faults are detected by a given set of test vectors. Because many faults must be emulated for fault-detection analysis, fault simulation takes much longer than design verification. Improved approaches have therefore been developed, in the following order, to accelerate the fault simulation process. Parallel fault simulation uses the bit-parallelism of logical operations in a digital computer; thus, on a 32-bit machine, 31 faults are simulated simultaneously. Deductive fault simulation deduces all signal values in each faulty circuit from the fault-free circuit values and the circuit structure in a single pass of true-value simulation augmented with the deductive procedure.
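The bit-parallel idea can be sketched as follows: bit 0 of every word carries the fault-free machine, and each remaining bit carries one faulty machine, so a single pass of word-wide AND/OR operations simulates up to 31 faults at once. The two-gate circuit, net names and fault list below are hypothetical.

```python
MACHINES = 32
ONES = (1 << MACHINES) - 1

def spread(bit):
    # Replicate a scalar input value across all simulated machines.
    return ONES if bit else 0

def inject(word, faults, net):
    # Machine m carries faults[m-1]; force its bit if the fault sits on net.
    for m, (f_net, f_val) in enumerate(faults, start=1):
        if f_net == net:
            word = word | (1 << m) if f_val else word & ~(1 << m)
    return word

def simulate(a, b, c, faults):
    # Circuit y = (a AND b) OR c, evaluated for all machines at once.
    A = inject(spread(a), faults, "a")
    B = inject(spread(b), faults, "b")
    W = inject(A & B, faults, "w")
    C = inject(spread(c), faults, "c")
    return inject(W | C, faults, "y")

faults = [("w", 0), ("w", 1), ("c", 1)]   # three single stuck-at faults
Y = simulate(1, 1, 0, faults)
good = Y & 1                               # fault-free response in bit 0
detected = [faults[m - 1] for m in range(1, len(faults) + 1)
            if ((Y >> m) & 1) != good]
print(detected)  # faults whose machine disagrees with the fault-free bit
```

For vector (1, 1, 0) only w stuck-at-0 is detected; w stuck-at-1 and c stuck-at-1 produce the same output as the good machine, so other vectors would be needed for them.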

Concurrent fault simulation is essentially an event-driven simulation that emulates the faults of a circuit in the most efficient way. Hardware fault simulation accelerators based on parallel processing are also available to provide a substantial speed-up over purely software-based fault simulators. For analog and mixed-signal circuits, fault simulation is traditionally performed at the transistor level using circuit simulators such as HSPICE. Unfortunately, analog fault simulation is a very time-consuming task, and even for simple circuits, a comprehensive fault simulation is normally not feasible.

This problem is further complicated by the fact that acceptable component variations must be simulated along with the faults to be emulated, which requires many Monte Carlo simulations to determine whether the fault will be detected. Macro models of circuit components are used to decrease long computation time. Fault simulation approaches that use high-level simulators can simulate analog circuit characteristics based on differential equations but it is usually avoided due to lack of adequate fault models.

1.7.4 Design For Testability

Test engineers usually have to construct test vectors after completing a design. This consistently requires a substantial amount of time and effort that could be avoided if testing were considered early in the design flow to make the design more testable. As a result, the integration of design and test, referred to as Design For Testability (DFT), was proposed in the 1970s.

To structurally test circuits, it is essential to control and observe the logic values of internal lines. Unfortunately, some nodes in sequential circuits can be very difficult to control and observe; for example, activity on the most significant bit of an n-bit counter can only be observed after 2^(n-1) clock cycles. Testability measures of controllability and observability were first defined by Goldstein (1979) to help identify those parts of a digital circuit that will be most difficult to test and to assist test pattern generation for fault detection; many DFT techniques have since been proposed by McCluskey (1985 & 1986). DFT techniques generally fall into one of the following three categories:

1. Ad hoc techniques.
2. Scan design, such as Level-Sensitive Scan Design (LSSD).
3. Built-In Self-Test (BIST).

Ad hoc methods were the first DFT techniques, introduced in the 1970s. The goal was to target only those portions of the circuit that would be difficult to test and to add circuitry to improve their controllability or observability.

Ad hoc techniques typically use test point insertion to access internal nodes directly. An example of a test point is a multiplexer inserted to control or observe an internal node, as illustrated in Figure 1.6.

[Figure not included in this excerpt]

Figure 1.6 Ad hoc DFT Test Points using Multiplexers

Level-sensitive scan design, also referred to as scan design, is the most important DFT technique; it was proposed by Eichelberger et al. (1977) and is latch based. In a flip-flop-based scan design, testability is improved by adding extra logic to each flip-flop in the circuit to form a shift register, or scan chain, as illustrated in Figure 1.7. During the scan mode, the scan chain is used to shift in (or scan in) a test vector to be applied to the combinational logic. During one clock cycle in the system mode of operation, the test vector is applied to the combinational logic and the output responses are clocked into the flip-flops. The scan chain is then used in the scan mode to shift out (or scan out) the combinational logic's response to the test vector while shifting in the next test vector to be applied. This reduces the problem of testing sequential logic to that of testing combinational logic, thus facilitating the use of ATPG developed for combinational logic.
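The scan-mode/system-mode sequence described above can be sketched behaviourally; the 3-bit scan chain and its next-state function below are hypothetical, and the bit-by-bit shifting is abstracted to a whole-vector load.

```python
def comb_logic(state):
    # Next-state function of an assumed 3-bit combinational block.
    s0, s1, s2 = state
    return [s1 ^ s2, s0 & s2, s0 | s1]

def scan_test(vectors):
    responses = []
    for vec in vectors:
        chain = list(vec)             # scan mode: shift the test vector in
        captured = comb_logic(chain)  # system mode: one capture clock
        responses.append(captured)    # scan mode: shift the response out
        # (in hardware, shifting out overlaps with shifting in the next vector)
    return responses

print(scan_test([[1, 0, 1], [0, 1, 1]]))
```

Because the chain makes every flip-flop directly controllable and observable, the tester only ever reasons about the combinational block, which is exactly the reduction the text describes.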



Institution / College
Anna University
Test data compression hybrid coding scheme



