Abstract-
Software fault localization, which identifies the locations of
faults in a program, is tiresome, costly, and yet crucial for program
debugging. As the scale and complexity of software increase, locating faults
manually becomes infeasible, and hence the demand for automatic fault
localization techniques grows. In this paper, we present an analysis of
spectrum-based fault localization and the DStar (D*) technique. Both are
effective fault localization techniques that automatically indicate suspicious
locations without requiring any prior information about the structure and
semantics of the program.



Successful debugging is important for producing good-quality
software. Since the number of bugs to be fixed often outruns the capacity of
the development team, manual debugging becomes error-prone. Therefore, to make
software more reliable, automatic debugging processes are in great demand.
Among the wide variety of debugging activities, software fault localization is
the most expensive and important.

Fault localization techniques include the large family of
spectrum-based fault localization (SBFL) techniques. SBFL techniques examine
program spectra, which can be defined as collections of program traces gathered
during execution to show an association between failures and the program
elements liable for them. SBFL techniques allocate suspiciousness scores to
program elements, which are then ranked on the basis of these scores. Finally,
the resulting ranked list is handed over to developers to guide them to the
root cause of the failures.

Another effective fault localization technique is the DStar
method [2], which has its origins in binary similarity coefficient-based
analysis and is a modified version of the Kulczynski coefficient.

Spectrum-Based Techniques-

Various spectrum-based techniques are motivated by
probabilistic and statistical causality models. The spectrum of a program
captures its execution information from various perspectives, including
execution information for conditional branches or loop-free intra-procedural
paths. Such spectra can be used for tracking program behaviour and for software
fault localization. When a program does not execute as expected, the related
information can be used to locate the suspicious code that is liable for the
error.


N_CF    number of failed test cases that cover a statement

N_UF    number of failed test cases that do not cover a statement

N_CS    number of successful test cases that cover a statement

N_US    number of successful test cases that do not cover a statement

N_C     total number of test cases that cover a statement

N_U     total number of test cases that do not cover a statement

N_S     total number of successful test cases

N_F     total number of failed test cases

t_i     the ith test
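These counts can be computed directly from per-test coverage data. The sketch below is illustrative, not from the papers discussed; the coverage matrix, outcome flags, and function name are hypothetical.

```python
# Minimal sketch: deriving the spectrum counts for one statement from a
# hypothetical coverage matrix. coverage[i][s] == 1 means test i executed
# statement s; passed[i] records whether test i succeeded.

def spectrum_counts(coverage, passed, stmt):
    """Return (N_CF, N_UF, N_CS, N_US) for the given statement."""
    n_cf = n_uf = n_cs = n_us = 0
    for cov, ok in zip(coverage, passed):
        if cov[stmt]:
            if ok:
                n_cs += 1  # successful test covering the statement
            else:
                n_cf += 1  # failed test covering the statement
        else:
            if ok:
                n_us += 1  # successful test not covering it
            else:
                n_uf += 1  # failed test not covering it
    return n_cf, n_uf, n_cs, n_us
```

From these four counts, N_C = N_CF + N_CS, N_U = N_UF + N_US, N_F = N_CF + N_UF, and N_S = N_CS + N_US follow directly.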



Code coverage / Executable Statement Hit Spectrum
(ESHS)- indicates the parts of a program covered during an execution; using
this information, the components involved in a failure can be identified. This
narrows the search for the faulty component that led the program into a failed
state. SBFL works on a set of test cases in which at least one test case fails.
Better results can be achieved by using both the successful and failed test
cases and focusing on the contrast between them. The set union technique
emphasizes the source code that is executed by the failed test but not by any
of the successful tests; such code is more suspicious than the rest. The set
intersection technique keeps the code executed by the failed test and discards
the code executed by all the successful test cases.
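The two set heuristics can be sketched over per-test sets of covered statements. This follows the description above (definitions in the literature vary slightly); the function names and statement identifiers are hypothetical.

```python
# Sketch of the set-union and set-intersection heuristics as described above.
# failed_cov: set of statements covered by the failed test.
# passed_covs: list of sets, one per successful test.

def set_union_suspects(failed_cov, passed_covs):
    """Statements executed by the failed test but by no successful test."""
    covered_by_any_pass = set().union(*passed_covs) if passed_covs else set()
    return failed_cov - covered_by_any_pass

def set_intersection_suspects(failed_cov, passed_covs):
    """Statements executed by the failed test, minus those executed by
    every successful test."""
    common_to_passes = set.intersection(*passed_covs) if passed_covs else set()
    return failed_cov - common_to_passes
```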

Nearest neighbour is an ESHS-based
technique that contrasts a failed test with the successful test that most
resembles it, based on the distance between them. If the bug appears in the
difference set, it is analysed; otherwise the method proceeds by first
constructing a program dependence graph (PDG). Adjacent unchecked nodes in the
graph are then added and analysed step by step until all nodes in the graph
have been checked.

ESHS-based similarity coefficient-based
measures compute the closeness of the execution pattern of a statement to the
failure pattern of all test cases; this degree of closeness depicts the
suspiciousness of the statement. The closer the execution pattern of a
statement is to the failure pattern, the more suspicious the statement appears,
and vice versa. A popular similarity coefficient-based technique is Tarantula,
which computes the suspiciousness of each statement using coverage and
execution results. Its formula is as follows:

Suspiciousness(s) = (N_CF / N_F) / (N_CF / N_F + N_CS / N_S)
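The standard Tarantula formula, (N_CF/N_F) / (N_CF/N_F + N_CS/N_S), can be sketched as follows; the guards against division by zero are a common convention, not part of the original definition.

```python
# Sketch of the Tarantula suspiciousness formula over the spectrum counts.

def tarantula(n_cf, n_cs, n_f, n_s):
    """(N_CF/N_F) / (N_CF/N_F + N_CS/N_S); 0 for never-covered statements."""
    fail_ratio = n_cf / n_f if n_f else 0.0
    pass_ratio = n_cs / n_s if n_s else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0
```

A statement covered only by failed tests scores 1.0, the maximum; one covered equally often (proportionally) by failed and successful tests scores 0.5.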
On the basis of the suspiciousness computed by
Tarantula, studies such as [5], [6] use colours to provide a visual mapping of
the involvement of each program statement as the test suite executes: the more
failed test cases execute a statement, the brighter the colour assigned to it.
Debroy et al. [7] revised the Tarantula technique by first grouping together
the statements executed by the same number of failed test cases and then
ranking these groups in descending order of that number. Within each group,
the statements are ranked using Tarantula.

Compared to the previously discussed
techniques such as set union, set intersection, and nearest neighbour,
Tarantula is a more effective fault localization technique because it
investigates less code before the first faulty statement is identified.

The Ochiai similarity coefficient-based
technique is even more effective than Tarantula.

Its formula is as follows:

Suspiciousness(s) = N_CF / sqrt(N_F × (N_CF + N_CS))


Ochiai differs from the nearest neighbour
model in that Ochiai utilizes multiple failed test cases while nearest
neighbour uses a single failed test case. Also, Ochiai includes all successful
test cases, while the nearest neighbour model only considers the successful
test case that most resembles the failed one. Ochiai was extended to Ochiai2,
whose formula is as follows:

Suspiciousness(s) = (N_CF × N_US) / sqrt((N_CF + N_CS) × (N_US + N_UF) × (N_CF + N_UF) × (N_CS + N_US))
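The standard Ochiai coefficient, N_CF / sqrt(N_F × (N_CF + N_CS)), and its Ochiai2 extension can be sketched as follows; the zero-denominator guards are a convention added here.

```python
import math

# Sketch of the Ochiai and Ochiai2 coefficients over the spectrum counts.

def ochiai(n_cf, n_cs, n_f):
    """N_CF / sqrt(N_F * (N_CF + N_CS))."""
    denom = math.sqrt(n_f * (n_cf + n_cs))
    return n_cf / denom if denom else 0.0

def ochiai2(n_cf, n_cs, n_uf, n_us):
    """(N_CF * N_US) / sqrt((N_CF+N_CS)(N_US+N_UF)(N_CF+N_UF)(N_CS+N_US))."""
    denom = math.sqrt((n_cf + n_cs) * (n_us + n_uf)
                      * (n_cf + n_uf) * (n_cs + n_us))
    return (n_cf * n_us) / denom if denom else 0.0
```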
Program Invariants Hit Spectrum (PIHS)- records
the coverage of program invariants, which can be defined as properties of a
program that do not change as the program executes. To locate bugs, these
techniques focus on finding violations of program properties in failed
executions. Invariants can be classified as likely and unlikely invariants:
likely invariants are properties observed to hold over a set of successful
executions, while unlikely invariants may not hold for all possible executions.
The major obstacle in using these techniques is the automatic identification
of the program properties required for fault localization; to overcome it, the
invariant spectrum of successful executions is taken as the set of program
properties.


Predicate Count Spectrum (PRCS)- records the execution
of predicates and tracks program behaviours that are more likely to create
failures. Since the PRCS information is evaluated using statistical methods,
these techniques are called statistical debugging techniques.


Method Calls Sequence Hit Spectrum (MCSHS)-
collects information about the sequence of method calls (how an object is
used) encountered during program execution. Equal consideration is given to
both incoming and outgoing method calls.


Time Spectrum- records the execution time of methods
in successful and failed executions; the times collected from successful
executions alone are used to build models of observed behaviour. Deviations
from these models in failed executions are rated as possible causes of
failures.


Xie et al. [3] compared 30 different SBFL
formulas and proved that two families of formulas among them outperform the
others. They referred to these families as ER1 and ER5. ER1 has two members,
ER1a and ER1b; ER5 has three members, ER5a, ER5b and ER5c. The formulas for
these members are as follows:

ER1a(e) = −1 if N_CF < N_F; N_S − N_CS if N_CF = N_F

ER1b(e) = N_CF − N_CS / (N_CS + N_US + 1)

ER5a(e) = N_CF

ER5b(e) = N_CF / (N_CF + N_UF + N_CS + N_US)

ER5c(e) = 0 if N_CF < N_F; 1 if N_CF = N_F
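These five formulas (in the literature, ER1a/ER1b correspond to Naish1/Naish2 and ER5a/ER5b/ER5c to Wong1, Russell & Rao, and Binary) can be sketched directly; the function names and zero-denominator guard are conventions adopted here.

```python
# Sketch of the ER1 and ER5 risk-evaluation formulas over the spectrum counts.

def er1a(n_cf, n_cs, n_f, n_s):
    """-1 if N_CF < N_F; N_S - N_CS if the statement covers every failure."""
    return (n_s - n_cs) if n_cf == n_f else -1.0

def er1b(n_cf, n_cs, n_us):
    """N_CF - N_CS / (N_CS + N_US + 1)."""
    return n_cf - n_cs / (n_cs + n_us + 1)

def er5a(n_cf):
    """Simply N_CF."""
    return n_cf

def er5b(n_cf, n_uf, n_cs, n_us):
    """N_CF divided by the total number of test cases."""
    total = n_cf + n_uf + n_cs + n_us
    return n_cf / total if total else 0.0

def er5c(n_cf, n_f):
    """Binary: 1 only if the statement is covered by all failed tests."""
    return 1.0 if n_cf == n_f else 0.0
```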


The effectiveness of an SBFL formula is evaluated
on the basis of the EXAM score; a lower EXAM score denotes better performance.
The formula for this score is as follows:

EXAM score = (number of statements examined before the first faulty statement is found / total number of statements) × 100%

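The EXAM score, i.e. the percentage of statements a developer would examine (following the ranked list downward) before reaching the first faulty one, can be sketched as follows; the names and the 100% fallback for unlocated faults are conventions adopted here.

```python
# Sketch of the EXAM score over a ranked list of statements.

def exam_score(ranked_statements, faulty):
    """ranked_statements: all statements, sorted by decreasing suspiciousness.
    faulty: the set of actually faulty statements."""
    for examined, stmt in enumerate(ranked_statements, start=1):
        if stmt in faulty:
            return 100.0 * examined / len(ranked_statements)
    return 100.0  # no faulty statement was located at all
```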

Tien-Duy B. Le et al. [1] compared the EXAM
scores of Tarantula, Ochiai, and the ER1 and ER5 families of formulas, and
observed that Ochiai has the lowest EXAM score.



W. Eric Wong et al. [2] proposed the DStar method
for effective fault localization, which realises the following intuitions
about the suspiciousness of a statement. First, the suspiciousness of a
statement increases if it is covered by more failed tests. Second, its
suspiciousness decreases if it is covered by more successful tests. Third, a
statement is considered less suspicious if test cases fail without covering it.
Lastly, more importance is given to statements covered by failed tests than to
those covered by successful tests or those not covered by failed tests.

As stated earlier, the DStar method is an
extended version of the Kulczynski coefficient: the latter satisfies only the
first three intuitions, while the former realises all four and is also capable
of handling multiple bugs. The formulas for both methods are as follows:

Suspiciousness(Kulczynski) = N_CF / (N_UF + N_CS)

Suspiciousness(DStar) = N_CF^* / (N_UF + N_CS), where * is greater than
or equal to 1.
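Both coefficients, Kulczynski = N_CF / (N_UF + N_CS) and DStar = N_CF^* / (N_UF + N_CS), can be sketched as below; treating a zero denominator as maximal suspiciousness and defaulting the exponent to 2 are conventions adopted here, not mandated by the source.

```python
# Sketch of the Kulczynski coefficient and the DStar generalization.

def kulczynski(n_cf, n_uf, n_cs):
    """N_CF / (N_UF + N_CS)."""
    denom = n_uf + n_cs
    return n_cf / denom if denom else float("inf")

def dstar(n_cf, n_uf, n_cs, star=2):
    """N_CF ** star / (N_UF + N_CS), for any exponent star >= 1."""
    denom = n_uf + n_cs
    return n_cf ** star / denom if denom else float("inf")
```

With star = 1 DStar reduces to Kulczynski; raising the exponent amplifies the weight of failed tests covering the statement, matching the fourth intuition.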

W. Eric Wong et al. [2] evaluated D* across nine
different sets of programs (Siemens suite, Unix suite, gzip, grep, make, Ant,
space, flex, sed), corresponding to 24 subject programs with multiple faulty
versions. They further compared D* with 38 other techniques, including 31
similarity coefficient-based techniques and 7 other contemporary techniques.
On the basis of the EXAM score, the D* method is superior to the other
techniques. The effectiveness of D* increases with the value of * and then
levels off on reaching a critical value.

However, no technique among the wide
range of fault localization techniques can claim to outperform all others
under every circumstance. Hence, an optimal spectrum-based technique for
locating faults does not currently exist.




References-

[1] Tien-Duy B. Le, Ferdian Thung, and David Lo, "Theory and Practice, Do They
Match? A Case With Spectrum-Based Fault Localization," in Proc. 2013 IEEE
International Conference on Software Maintenance.

[2] W. Eric Wong, Vidroha Debroy, Ruizhi Gao, and Yihao Li, "The DStar Method
for Effective Fault Localization," IEEE Transactions on Reliability, vol. 63,
no. 1, March 2014.

[3] X. Xie, T. Chen, F.-C. Kuo, and B. Xu, "A theoretical analysis of the risk
evaluation formulas for spectrum-based fault localization," ACM TOSEM, 2013.

[4] W. Eric Wong, Ruizhi Gao, Yihao Li, Rui Abreu, and Franz Wotawa, "A Survey
on Software Fault Localization," IEEE Transactions on Software Engineering,
vol. 42, no. 8, August 2016.

[5] J. A. Jones, M. J. Harrold, and J. Stasko, "Visualization for fault
localization," in Proc. Workshop Softw. Vis., 23rd Int. Conf. Softw. Eng.,
Toronto, Canada, May 2001, pp. 71-75.

[6] J. A. Jones, M. J. Harrold, and J. Stasko, "Visualization of test
information to assist fault localization," in Proc. Int. Conf. Softw. Eng.,
Orlando, FL, USA, May 2002, pp. 467-477.

[7] V. Debroy, W. E. Wong, X. Xu, and B. Choi, "A grouping-based strategy to
improve the effectiveness of fault localization techniques," in Proc. Int.
Conf. Softw., Zhangjiajie, China, Jul. 2010, pp.

