Re-engineering vs. Refactoring in software development


Enterprise legacy software systems tend to be large and complex, which makes analysis of the system architecture a difficult task. It is therefore better if the legacy software architecture can be decomposed, to reduce the complexity of analyzing large-scale architecture artifacts. Architecture decomposition is an efficient way to limit the complexity and risk associated with re-engineering activities on a large legacy system. It divides the system into a collection of meaningful modular parts with low coupling, high cohesion, and minimal interfaces, and thus facilitates an incremental approach to a progressive software re-engineering process.

Legacy code can be considered to be any code that was written before today. The traditional approach is to make changes in the most guarded way possible, because it is unclear what will really happen when a data structure is changed or a variable is updated: developers add a wrapper on top of the existing code, or copy code from another place that already works. In such cases the code bloats, and maintainability, testability, and understandability become big problems in the future. For the people who deal with it day in and day out, "legacy code" is a Pandora’s box: sleepless nights and anxious days poring through bad structure and code that works in some incomprehensible way. Martin Fowler defines refactoring as “a change made to the internal structure of the software to make it easier to understand and cheaper to modify without changing its observable behavior.”

Most of the tasks in the evolution and servicing phases require program comprehension, understanding how and why a software program functions in order to work with it effectively. Effective comprehension requires viewing a legacy program not simply as a product of inefficiency or stupidity, but instead as an artifact of the circumstances in which it was developed. This information can be an important factor in determining appropriate strategies for the software program's transition from the evolution stage to the servicing or phase-out stage.

This article discusses the definitions of re-engineering and refactoring, and presents the situations in which each process can be used effectively.

Refactoring is the process of changing a software system in such a way that the external behavior of the code is unchanged but its internal structure and architecture are improved. It is a behavior-preserving source code transformation.

Programmers hold onto software designs long after they have become unwieldy. Legacy code stays alive only as long as the product is running at a customer site. We reuse code that is no longer maintainable because it still works in some way, and we are a bit scared to modify it. But is it really cost-effective to do so? When we remove redundancy, eliminate unused functionality, and rejuvenate obsolete designs, we are refactoring code that is no longer maintainable.
Refactoring throughout the entire project life cycle saves time and improves quality. During this phase, a series of questions will arise for programmers, such as:
·         “Changing the design/code might break the system!”
o        Solution: Use tests to prove behavior preservation (see the test sketch just after this list).
·         “I don't understand how it works now!”
o        Solution: Learn through the process, and build documentation as you refactor and simplify.
·         “I don't have the time to refactor!”
o        Solution: Refactoring will pay for itself later.
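To make the first solution concrete, here is a minimal sketch of a characterization test in JUnit 3 style; the TaxCalculator class and its expected result are invented for illustration:

import junit.framework.TestCase;

// A characterization test records what the legacy code does today, so any
// refactoring that changes the observable result will fail the test.
public class TaxCalculatorTest extends TestCase {
    public void testTotalIsPreservedAcrossRefactoring() {
        TaxCalculator calc = new TaxCalculator();   // hypothetical legacy class
        // The expected value is simply whatever the legacy code produces today.
        assertEquals(107.0, calc.totalWithTax(100.0), 0.001);
    }
}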
A graph of cumulative cost over a project's life cycle illustrates the saving when continuous refactoring is followed throughout the life of a project/product [Reference: http://www.jacoozi.com/blog/?p=11]:

[Figure: project cost over time with and without continuous refactoring; image not reproduced]

It is important for programmers to understand that the refactoring process helps to improve readability, flexibility, extensibility, understandability, and performance. Refactoring can be applied during application development, maintenance, testing, coding, and framework development. The section below explains the refactoring cycle that can be used for code under maintenance:
·         Program source code should go through expansion and contraction phases.
o        Expansion phase: code is added to meet functional requirements
o        Contraction phase: code is removed and refactored to better address those requirements and plan for the future.
·         This cycle will be repeated many times during a program's lifetime.
The objective of refactoring is to keep the design simple as time goes on and to avoid clutter and complexity in the legacy code. Refactoring cleans up the code so that it is easier to understand, modify, and extend. In the longer run, it grooms a system that is well defined and more maintainable.
There is a certain amount of Zen to refactoring. It is hard at first, because the design that was envisioned and is working has to be set aside in favor of the design that is serendipitously identified while refactoring. It is important to accept that the envisioned design was competent but is now obsolete. Before starting this process, it is better to let go of notions about what the system should or should not be, and to watch the new design emerge as the code changes take place.
The number of refactorings that could benefit any code base is practically infinite. Some of the refactoring techniques used in Java development are:
·         Organize imports
·         Rename {field, method, class, package}
·         Move {field, method, class}
·         Extract {method, local variable, interface}
·         Inline {method, local variable}
·         Reorder method parameters
·         Push members down
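As a minimal sketch of one of these, the Extract Method refactoring below pulls a block of printing logic out into its own named method; the class and method names are invented for the example:

class InvoicePrinter {
    // Before: the banner formatting is buried inside the calling method.
    void printInvoice(String customer, double amount) {
        System.out.println("*****************");
        System.out.println("Customer: " + customer);
        System.out.println("Amount:   " + amount);
    }
}

class InvoicePrinterRefactored {
    // After Extract Method: the banner has a name and can be reused and changed in one place.
    void printInvoice(String customer, double amount) {
        printBanner();
        System.out.println("Customer: " + customer);
        System.out.println("Amount:   " + amount);
    }

    private void printBanner() {
        System.out.println("*****************");
    }
}

The observable output is identical before and after, which is exactly the behavior-preservation property refactoring demands.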
Legacy systems may have been written under different architectures, which in turn were written in different computer languages. The key issues are the maintenance and integration of these systems. Companies that optimize business processes must often change legacy information systems to support the new processes. The required changes can involve new features, porting, performance optimization, or bug fixes. Changes to legacy systems often require replacement not only of the existing code but also of supporting tools (e.g., compilers and editors) and development processes (testing and version control).
This change requires discarding part or all of the existing system, modifying existing parts, writing new parts, and purchasing new or improved parts from external vendors. Based on these criteria, the change can take two different forms, as described below:
·         If the change is accomplished primarily by discarding the existing system and buying or building new parts, the project is termed a rewrite or redevelopment.
·         If the change is accomplished primarily by modifying the existing system, the project is termed a reengineering project.
Rewriting and reengineering are the extremes along a spectrum of strategies for change but in reality most major upgrades are accomplished by some combination of the two.
Reengineering and refactoring might look quite similar at first; however, reengineering deals with the examination and alteration of a system to reconstitute it in a new form, and the subsequent implementation of that new form.
The primary benefits of reengineering include:
·         Reduced operating and maintenance costs caused by overheads of older applications.
·         Improved application maintainability, even if there is limited application knowledge, high staff turnover, lack of qualified resources, outdated documentation, or obsolete application platform support.
·         Improved access to legacy applications in case of a merger or organizational change.
Based on the process followed in a reengineering project, the lifecycle involves two major steps, described below:

Forward reengineering: Forward engineering starts with the system specification and involves the design and implementation of a new system.

[Figure: forward engineering: System specification → New system]

Reverse reengineering: Reverse engineering is the process of analyzing a subject system to identify the system's components and their interrelationships, and to create representations of the system in another form or at a higher level of abstraction.

[Figure: reverse engineering: Existing legacy system → Re-engineered system]

The two approaches compare as follows:
·         Scope: Reengineering always affects the entire system or a large part of it (in which case a hybrid approach is taken), whereas refactoring typically has (many) local effects.
·         Process: Reengineering follows a disassembly/reassembly approach in the technical domain, whereas refactoring is a behavior-preserving, structure-transforming process.
·         Result: Reengineering can create a whole new system, with a different structure and possibly different behavior, whereas refactoring improves the structure of an existing system while leaving its behavior unchanged.
·         Cost: Reengineering costs more than refactoring; continuous refactoring decreases the total cost of ownership.

Below are some of the scenarios in which reengineering is suitable:
·         System’s documentation is missing or obsolete.
·         Team has only limited understanding of the system, its architecture and implementation.
·         Bug fix in one place causes bugs in other places.
·         New system level requirements and functions cannot be addressed or integrated appropriately.
·         Code is becoming ‘brittle’ and difficult to update.


Legacy software systems are an ongoing challenge for software developers. Refactoring, according to Martin Fowler, is a controlled technique for improving the design of an existing code base. It is important to maintain the health of the code through refactoring for better maintainability.

Developing a custom-built system requires a lot of effort and cost. Hence, organizations need to maintain their old systems in order to reduce cost and increase the lifetime of the old system. For this purpose, re-engineering becomes a useful way to convert old, obsolete systems into efficient and streamlined ones. The intent of reengineering is to create versions of existing programs that are of higher quality and easier to maintain.

StAX and XML Accelerators tutorial


XML technology is gradually becoming the standard for data interchange. Most organizations in the world use XML in some form or the other. XML forms the basis of many future inventions in the field of information technology.
In spite of all these lucrative advantages, the very basis of the technology is under threat due to the performance penalties that solutions have to live with, owing to the very nature of the parsing and processing technologies.
In the world of Java, there are primarily three options for parsing XML structures: DOM (Document Object Model), SAX (Simple API for XML), and StAX (Streaming API for XML).
DOM and SAX have traditionally been used for parsing XML structures. StAX is a relatively newer member of the XML parsing technology in the Java world.
StAX is built upon the concept of a pull model, in which the application queries the parser for the next parsing event but never surrenders control to the parser during the process. Stated differently, StAX essentially turns the SAX processing model upside down: instead of the parser controlling the application's flow, with the application reacting to parsing events, it is the application that controls the flow by pulling events from the parser.

The pull parsing model in StAX allows for:
a)      Control over the parsing engine
b)     Greater programmatic control over the XML data structure
c)      A reduced memory footprint compared with the heavy footprint that DOM parsing techniques require
d)     A simple processing model, such as that used with SAX
e)      Event-based processing control (this is called pipelining) on XML documents
f)       An efficient cursor model: the StAX cursor model is the most efficient way to parse XML, since it provides a natural interface by which the parser can compute values lazily
g)      More optimization for speed and performance in comparison to DOM and SAX
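As a minimal sketch of the cursor model described above, the fragment below pulls events from an XMLStreamReader over a small in-memory document; the sample XML is invented for illustration:

import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxCursorDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<order id=\"42\"><item>widget</item></order>";
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(new StringReader(xml));
        // The application drives the loop, pulling one event at a time from the parser.
        while (reader.hasNext()) {
            int event = reader.next();
            if (event == XMLStreamConstants.START_ELEMENT) {
                System.out.println("start element: " + reader.getLocalName());
            } else if (event == XMLStreamConstants.CHARACTERS && !reader.isWhiteSpace()) {
                System.out.println("text: " + reader.getText());
            }
        }
        reader.close();
    }
}

Contrast this with SAX, where the same logic would live in callback methods invoked by the parser.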

In spite of the advent of StAX as a member of the Java technology family, a lot of debate still exists around the adoption of XML technologies, mainly due to performance overheads.

XML accelerators are the newest mechanism appearing in the industry. Currently there are primarily three options available for improving XML performance:
a)      Microprocessor-based acceleration: This option relies on the fact that faster microprocessors will process XML data faster than slower ones.
b)     Standalone XML accelerator engines: These devices hook into the individual applications and reduce the XML data being transmitted across applications. What they do not attempt to do is improve the performance of XML processing within an individual application.
c)      PCI hardware boards for XML acceleration: These hardware boards actually offload XML processing from the application, thereby improving performance.

StAX is definitely a much better implementation option compared with DOM and SAX. However, the use of XML accelerator solutions to boost XML performance is still evolving. Meanwhile, choosing PCI hardware-based XML accelerators today may be a good option to enhance XML processing and implement the much-needed SOA solutions.

Configuring JMS Server on Websphere 6.1 Server using Service Integration Bus (SIB)


Abstract
This document describes a methodology for configuring a JMS server in Websphere 6.1, which does not come with an inbuilt JMS server. It is an extract from migrations done for a client for a set of applications.
Introduction
This document is a case study of an application named Middle Tier Application (MTA) which uses MQ queues. This application was migrated to Websphere 6.1 in RAD 7.0. Websphere 6.1 does not have an inbuilt JMS server; to overcome this, we configured JMS using a Service Integration Bus (SIB).
Problem
Configure a JMS server on Websphere 6.1, which does not come with an inbuilt JMS server.
Approach
We will configure the JMS server using a Service Integration Bus (SIB). Before configuring the SIB, we should first develop an understanding of what a SIB is.
Service Integration Bus
A service integration bus is a group of one or more application servers or server clusters in a WebSphere Application Server cell that cooperate to provide asynchronous messaging services. The application servers or server clusters in a bus are known as bus members.
A service integration bus provides the following capabilities:
  • Any application can exchange messages with any other application by using a destination to which one application sends, and from which the other application receives.
  • A message-producing application, that is, a producer, can produce messages for a destination regardless of which messaging engine the producer uses to connect to the bus.
  • A message-consuming application, that is, a consumer, can consume messages from a destination (whenever that destination is available) regardless of which messaging engine the consumer uses to connect to the bus.
A service integration bus comprises a SIB Service, which is available on each application server in the WebSphere Application Server environment. By default, the SIB Service is disabled. This means that when a server starts it does not have any messaging capability. The SIB Service is automatically enabled when we add the server to a service integration bus. We can choose to disable the service again by configuring the server.
A service integration bus supports asynchronous messaging; that is, sending messages asynchronously. Asynchronous messaging is possible regardless of whether the consuming application is running or not, or if the destination is available or not. Point-to-point and publish/subscribe messaging are also supported.
After an application has connected to the bus, the bus behaves as a single logical entity and the connected application does not need to be aware of the bus topology. In many cases, connecting to the bus and defining bus resources is handled by an application programming interface (API) abstraction, for example the administered JMS connection factory and JMS destination objects.
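As a minimal sketch of that abstraction, the code below sends a message through administered objects looked up from JNDI. The JNDI names jms/MTAConnectionFactory and jms/MTAQueue are placeholders for whatever was configured in the admin console, and the client is assumed to run where the default InitialContext can resolve them:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class SibSender {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Administered objects created by the administrator; the names are placeholders.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MTAConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MTAQueue");
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // The bus topology behind the destination stays invisible to the client.
            producer.send(session.createTextMessage("hello from MTA"));
        } finally {
            connection.close();
        }
    }
}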
The service integration bus is sometimes referred to as the messaging bus if it is used to provide the messaging system for JMS applications using the default messaging provider.
Many scenarios require a simple bus topology; perhaps, for example, a single server. By adding multiple servers to a single bus, we can increase the number of connection points for applications to use. By adding server clusters as members of a bus, we can increase scalability and achieve high availability. Servers, however, do not have to be bus members to connect to a bus. In more complex bus topologies, multiple buses are configured, and can be interconnected to form complex networks. An enterprise might deploy multiple interconnected buses for organizational reasons. For example, an enterprise with several autonomous departments might want to have separately administered buses in each location.
Bus Members
The members of a service integration bus can be application servers or server clusters. Bus members that are application servers or server clusters contain messaging engines, which are the application server components that provide asynchronous messaging services.
To use a service integration bus, we must add at least one member that is an application server or server cluster.
Adding a bus member automatically creates a messaging engine for that bus member. Each messaging engine has its own data store, used for example to store persistent messages and maintain durable subscriptions. By default a messaging engine associated with a server is configured with an in-process, Cloudscape-based data store. In other cases, we are asked to provide the Java Naming and Directory Interface (JNDI) name of a Java Database Connectivity (JDBC) data source for use by the messaging engine.
When the bus member is an application server, it can have only one messaging engine. If the bus member is a server cluster, it can have additional messaging engines to provide high availability or workload sharing characteristics.
To host queue-type destinations, the messaging engine can hold messages until consuming applications are ready to receive them. Each messaging engine also has a data store where it can hold messages so that if the messaging engine fails, messages are not lost.
When we define a queue-type destination, we assign it to a bus member. When that bus member is an application server or a server cluster, the messaging engine (or engines) in that bus member holds the messages.
If required, we can remove members from a bus. However, this action deletes any messaging engines that are associated with a bus member, including knowledge of any messages held by the data store for those messaging engines. Therefore, we must plan this action carefully.
When a bus member is deleted, the data source associated with this bus member is not automatically deleted, because users often associate their own data source with a bus member. This also applies to bus members created using the default data source: the data source is not automatically deleted and you must remove it manually.
If we do not delete the data source manually and another messaging bus member is created, the messaging engine will fail to start.
Bus Destinations
A bus destination is a virtual location within a service integration bus, to which applications attach as producers, consumers, or both to exchange messages.
Bus destinations can be either "permanent" or "temporary":
  • A permanent destination is defined by an administrator for use by one or more applications over an indeterminate period of time. Such destinations remain until explicitly deleted by the administrator or by some administrative command or script.
  • A temporary destination is created and deleted by an application, or the messaging provider, for use by that application during a session with a service integration bus. The destination is assigned a unique name.
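As a minimal sketch of a temporary destination, a JMS client can ask the session for one that lives no longer than its connection; the connection factory name is again a placeholder:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.naming.InitialContext;

public class TemporaryDestinationDemo {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MTAConnectionFactory"); // placeholder
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // The provider assigns the unique name; the queue is deleted with the connection.
            TemporaryQueue replyQueue = session.createTemporaryQueue();
            System.out.println("temporary destination: " + replyQueue.getQueueName());
        } finally {
            connection.close();
        }
    }
}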
The following are the main types of destination:
1. Queue
A destination for point-to-point messaging.
2. Topic space
A destination for publish/subscribe messaging.
3. Alias
An alias destination makes a destination available by another name and, optionally, overrides the parameters of the destination. Applications can use an alias destination to route messages to another destination in the same bus or in another (foreign) bus.
4. Foreign
A foreign destination provides a mapping to a destination of the same name on a different bus and enables applications on one bus to directly access the destination on another bus. We can set its own destination properties, which will override the destination defaults.
We can configure queue, topic space, and alias destinations with one or more mediations that refine how messages are handled by the destination.
Case Study
We have configured a SIB in an application for a client; the steps are provided in the attached document. The application provides batch as well as real-time support through the messaging server, i.e. JMS.


References:
1.           Websphere 5.1 to 6.1 migration.doc by Kapil Naudiyal
2.           Web Links
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=/com.ibm.websphere.pmc.nd.doc/concepts/cjj0000_.html

RAPID Implementation in ERP implementation methodology


Rapid Implementation

Earlier, ERP implementations used to frighten clients because of the time and cost involved…

When ERP first came on the scene, most implementations were complex affairs, with consulting costs that often ran three to five times the cost of the applications. Scope creep was extensive before any benefit could be measured. As ERP evolved, consulting costs began to fall more into line and implementation times were reduced. These accelerations (rapid implementations), when accomplished with the right strategy and tools, can be of tremendous benefit, including a reduction of costs and reduced time-to-benefit. However, without the right strategy and tools, implementation acceleration carries the risk of abbreviated end-user training and change management, a lack of post-implementation planning, over-engineering of business processes, and other problems that in fact lead to a higher overall cost of ownership and an erosion of business benefit.


 Earlier ERP Implementation Methodologies

Prior to 1997, methodologies relied heavily on the As-Is and To-Be phases. In the As-Is phase, a company’s current business processes were inventoried, charted, and scripted. In the To-Be phase, a company’s future business processes were designed, charted, and scripted. Ideally, these steps went as follows:

As-Is described the status quo of business processes
To-Be described the direct transfer of the as-is process into a to-be process that eliminates the weak points and achieves the intended benefit.

The key weakness of these methodologies was the attention paid to the As-Is phase, in which lower-level business processes were charted and scripted at an inflated cost to clients and with little or no benefit to the To-Be phase. This aspect was one of the key drivers of the highly publicized cost over-runs of the mid-1990s.

From 1997 onwards, new methodologies started emerging that more directly addressed enterprise software implementations and all stressed speed through a more direct approach, the use of conference room pilots, the deployment of templates, and greater leverage of best practices (i.e. the re-use of business processes that had demonstrably done the job).
In order to address client concerns about the high cost of implementations, many of these methodologies were branded as rapid. For example, Deloitte’s “Fast Track”, Oracle’s “Fast Forward”, and KPMG’s “Rapid Return on Investment” (also labeled R2i).

Fundamentals of Rapid Implementation

The most crucial element of acceleration is the re-use of existing and proven assets. As the business flows, or processes, of firms within an industry are nearly identical, pre-configured processes can be implemented easily. For example, an order-to-cash business process that has already proven viable for hundreds of consumer packaged goods firms will probably be a good fit for another consumer packaged goods firm. In similar fashion, how much will sales order entry differ between a firm that sells automotive parts and one that sells aircraft parts?
Re-usability depends upon a client's willingness to adapt itself to new business processes rather than bending the software to fit custom processes. This is one of the key reasons why almost all consulting firms are focusing more and more on knowledge management initiatives. These assets play a very important part in rapid implementation. The closer a client adheres to this principle, the faster the implementation, due to:

  1. A major reduction in the business process design and software configuration phases, which normally comprise more than half of the consulting effort expended
  2. Higher level of re-usability of scripts, templates, set-up tools, reports, and user documentation
  3. A reduction in scope management.

The rise of industry-focused solutions has resulted from the thousands of ERP implementations that have occurred over the past fifteen years and is a major step in the evolution of enterprise applications.

Fig 1: Elements of Acceleration


  1. Industry-specific processes – For example, an order-to-cash cycle will be almost the same across all manufacturing industries. Industry-specific processes for industries like pharmaceuticals, textiles, manufacturing, automobile, aerospace, and industrial can be a very powerful asset for any consulting firm.
  2. Proven methodology – Another major asset that plays a crucial role in rapid implementation. A well-proven methodology used in one implementation can definitely benefit future implementations.
  3. Test scripts, user training/documentation – Training documentation and user manuals are an example of a re-usable component that is essential to every implementation.
  4. Re-usable tools, reports and templates – Industry-specific templates can be utilized across the same industries. Certain re-usable reports and tools can certainly speed up implementations.
  5. Best practices – Best practices are the most efficient and effective ways of accomplishing a task, based on repeatable procedures that have proven themselves over time in a large number of similar implementations.

The Benefits of Rapid Implementation

Having looked at the concept of rapid implementation, what it is all about, and the basic elements that play an important role in acceleration, let us look at why enterprises should go for a rapid implementation.

Key benefits that can be derived from a rapid implementation:
  1. Reduced time and cost
  2. Minimal interference to customer’s existing operations
  3. Reduced probability of over-engineering
  4. Accelerated time to benefit


Key Decision Factors

Is rapid implementation the right choice? The key question for almost everyone is “to go” or “not to go”.

Here are five factors to consider when deciding which approach to take:

  1. Necessity: Companies with an immediate need threatening their viability or an issue that relates to customer responsiveness and competitive pressures should consider rapid ERP.
  2. Cost: Fast implementations by definition should cost less. The time needed to gain benefits is also reduced and the resulting efficiencies mean lower cost.
  3. Scope: The best candidates for an enterprise keeping an implementation well within the scope of the project are willing to align their expectations with industry best practices, are not expecting to fix everything at once, and are looking for flexibility for future expansion. Such enterprises know exactly what issues they are seeking to address to drive their business forward.
  4. Internal Readiness: Enterprises must be well aware of how much training will be needed for the implementation. They must be willing to commit high-quality internal resources to the project and should be aiming at not interrupting operational resources.
  5. Expertise of Consulting Firm: Enterprises should be looking for vendors and partners with deep industry segment and geographic knowledge, as well as expertise with mature and proven tools and methodologies.


How does Rapid Implementation work for JD Edwards?
After getting a feel for what rapid implementation is, its benefits, and the key decision factors, the questions that come to mind are: Does it work for JD Edwards? Are there business accelerators for rapidly implementing JD Edwards? Are there success stories?
The answer to all of the above questions is “Yes”. Rapid implementation does work well for JD Edwards. Many consulting firms, including Deloitte, Oracle, and KPMG, have come up with rapid implementation methodologies and business accelerators.
Business Accelerators for JD Edwards EnterpriseOne
Oracle Business Accelerator solutions are available for five major modules: customer relationship management, distribution, financials, human capital management, and manufacturing.
Oracle Business Accelerators for JD Edwards EnterpriseOne include:
  1. Configured JD Edwards EnterpriseOne application software, including business processes, user roles, technical set-up, and a rapid installation.
  2. Questionnaire wizards that capture your process requirements and configure the JD Edwards EnterpriseOne environment to your business needs.
  3. Engineered hardware configurations.
  4. A complete package of open-standards infrastructure software, including application server, portal, database, and security and technology tools.
  5. Implementation services from Oracle Consulting or an authorized Oracle partner.
  6. Training to get users up to speed and productive as quickly as possible.
Success stories…
At Levy/Latham Global, J.D. Edwards OneWorld financials and distribution was implemented in forty-five days, on time and on budget. According to the implementers, some of the key success factors include:
1.      Correct selection of hardware and infrastructure – Is the hardware and infrastructure up to the job? If you are re-using existing hardware, is it sufficient for the task at hand? Did you buy enough horsepower at both the client and server ends?
2.      Strong leadership – You should have very strong leadership and a capable project manager with a strong vision of the goal. The leader and the team must be very single-minded during the project. Everyone on the team should know exactly what the goal is and that the deadline is not optional.
3.      Application training – How much is enough? Or, how much should you spend? According to the implementers, one should budget somewhere between $1 and $2 on training for every $1 spent on user licenses. And this has to happen during your implementation, particularly on a rapid deployment.



RUN AN IBM DB2 QUERY THROUGH JCL


DB2 QUERY RUN THROUGH JOB



//XITSUIDB JOB 'DSNTEP2 ',CLASS=A,MSGCLASS=X,
// NOTIFY=&SYSUID,MSGLEVEL=(1,0),REGION=4096K
// SET DB2SS=YDB2
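//* IKJEFT01 IS THE TSO BATCH TERMINAL MONITOR; THE SYSTSIN COMMANDS
//* BELOW ASK IT TO RUN THE DB2 DYNAMIC SQL SAMPLE PROGRAM DSNTEP2,
//* WHICH EXECUTES THE SQL READ FROM SYSIN.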
//STEP010 EXEC PGM=IKJEFT01
//STEPLIB DD DISP=SHR,DSN=&DB2SS..DSNLOAD
// DD DISP=SHR,DSN=SYS2.CEE.SCEERUN
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD DSN=XITSUID.QUERY.OUTPUT,DISP=SHR
//* DISP=(NEW,CATLG,DELETE),
//* SPACE=(CYL,(10,5),RLSE),
//* DCB=(LRECL=80,BLKSIZE=800,RECFM=FB)
//*SYSPRINT DD SYSOUT=*,OUTLIM=100000
//SYSOUT DD *
//SYSTSIN DD *
DSN SYSTEM(YDB2)
RUN PROGRAM(DSNTEP2) PLAN(DSNTEP2) LIB('YDB2.RUNLIB.LOAD')
END
/*
//SYSIN DD *
SELECT RET_UNIT_CDE, PARTITION_ID ------------------> Query Starts (Comment - Please delete in Job)
FROM DBKN01.VBKPT001 WHERE
RET_UNIT_CDE = 100009 OR
RET_UNIT_CDE = 200007
WITH UR -------------------------------> Query Ends (Comment - Please delete in Job)
/*
//* UR MEANS UNCOMMITTED READ.
Note: Change YDB2 to JDB2 for SY2

Junit tutorial | Junit concept | Junit tutorial step by step

Junit tutorial

  • The testing problems
  • The framework of JUnit
  • A case study
  • JUnit tool
  • Practices



class Money {
    private int fAmount;
    private String fCurrency;

    public Money(int amount, String currency) {
        fAmount = amount;
        fCurrency = currency;
    }

    public int amount() {
        return fAmount;
    }

    public String currency() {
        return fCurrency;
    }

    public Money add(Money m) {
        return new Money(amount() + m.amount(), currency());
    }
}




public class MoneyTest extends TestCase {
    // ...
    public void testSimpleAdd() {
        Money m12CHF = new Money(12, "CHF");        // (1)
        Money m14CHF = new Money(14, "CHF");
        Money expected = new Money(26, "CHF");
        Money result = m12CHF.add(m14CHF);          // (2)
        Assert.assertTrue(expected.equals(result)); // (3)
    }
}
     (1) Creates the objects we will interact with during the test. This testing context is commonly referred to as a test's fixture. All we need for the testSimpleAdd test are some Money objects.
     (2) Exercises the objects in the fixture.
     (3) Verifies the result.



assertEquals(expected, actual)
assertEquals(message, expected, actual)
assertEquals(expected, actual, delta)
assertEquals(message, expected, actual, delta)
assertFalse(condition)
assertFalse(message, condition)
Assert(Not)Null(object)
Assert(Not)Null(message, object)
Assert(Not)Same(expected, actual)
Assert(Not)Same(message, expected, actual)
assertTrue(condition)
assertTrue(message, condition)
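
As a minimal sketch of a few of these flavors in use (JUnit 3 style; the values are invented for illustration):

import junit.framework.TestCase;

// Exercises several assertion flavors from the list above.
public class AssertionDemoTest extends TestCase {
    public void testAssertionFlavors() {
        assertEquals("integers compare exactly", 4, 2 + 2);
        assertEquals(0.5, 1.0 / 2.0, 0.001);       // doubles are compared with a delta
        String s = "abc";
        assertNotNull(s);
        assertSame("same object reference expected", s, s);
        assertFalse("must not be empty", s.length() == 0);
        assertTrue(s.startsWith("a"));
    }
}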

setUp()
       Store the fixture's objects in instance variables of your TestCase subclass and initialize them by overriding the setUp method.

tearDown()
       Release the fixture's resources by overriding the tearDown method.

run()
       Defines how to run an individual test case.
       Defines how to run a test suite.

testCase()

public class MoneyTest extends TestCase {
    private Money f12CHF;
    private Money f14CHF;

    // JUnit 3 runs tests by name, so the constructor passes the name up.
    public MoneyTest(String name) {
        super(name);
    }

    protected void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
    }

    public void testSimpleAdd() {
        Money expected = new Money(26, "CHF");
        Money result = f12CHF.add(f14CHF);
        Assert.assertTrue(expected.equals(result));
    }
}
Real-world scenarios should exercise the number boundaries:
  • values smaller than 0, such as -1, -2, ..., -100, ...
  • 0
  • values bigger than 0, such as 1, 2, ..., 100, ...
class Money {
    private int fAmount;
    private String fCurrency;

    public Money(int amount, String currency) {
        fAmount = amount;
        fCurrency = currency;
    }

    public int amount() {
        return fAmount;
    }

    public String currency() {
        return fCurrency;
    }

    public Money add(Money m) {
        // Guard the boundary: reject non-positive amounts before adding.
        if (m.amount() <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        return new Money(amount() + m.amount(), currency());
    }
}

public static Test suite() {    
    TestSuite suite= new TestSuite();    
    suite.addTest(new MoneyTest("testEquals"));    
    suite.addTest(new MoneyTest("testSimpleAdd"));    
    return suite;
}



JUnit supports two ways of running single tests:
static
dynamic

TestCase test = new MoneyTest("simple add") {
    public void runTest() {
        testSimpleAdd();
    }
};


TestCase test= new MoneyTest("testSimpleAdd");

Since JUnit 2.0 there is an even simpler dynamic way: you only pass the class with the tests to a TestSuite, and it extracts the test methods automatically.

public static Test suite() {  return new TestSuite(MoneyTest.class); }
1. Download the latest version of JUnit from http://download.sourceforge.net/junit/
2. Installation
unzip the junit.zip file
add junit.jar to the CLASSPATH. For example: set classpath=%classpath%;INSTALL_DIR\junit3\junit.jar
3. Testing
           Test the installation by using either the batch or the graphical TestRunner tool to run the tests that come with this release. All the tests should pass OK.
for the batch TestRunner type:     java junit.textui.TestRunner junit.samples.AllTests
for the graphical TestRunner type:     java junit.awtui.TestRunner junit.samples.AllTests
for the Swing based graphical TestRunner type:     java junit.swingui.TestRunner junit.samples.AllTests

Notice: The tests are not contained in junit.jar but directly in the installation directory. Therefore make sure that the installation directory is on the class path.

Important: Don't install the junit.jar into the extension directory of your JDK installation. If you do so, the test classes on the file system will not be found.

JUnit plug-in for Eclipse