StAX and XML Accelerators tutorial


XML is gradually becoming the standard for data interchange. Most organizations use XML in some form or other, and it forms the basis of many emerging technologies in the field of information technology.
In spite of these very real advantages, adoption is held back by the performance cost that solutions have to live with, a cost that stems from the very nature of XML parsing and processing.
In the world of Java, there are primarily three options for parsing XML structures: DOM (Document Object Model), SAX (Simple API for XML) and StAX (Streaming API for XML).
DOM and SAX have traditionally been used for parsing XML structures. StAX is a relatively new member of the XML parsing family in the Java world.
StAX is built upon a pull model, in which the application queries the parser for the next parsing event but never surrenders control to the parser during the process. Stated differently, StAX essentially turns the SAX processing model upside down: instead of the parser controlling the application's flow while the application reacts to parsing events, the application controls the flow by pulling events from the parser.

The pull parsing model in StAX allows for:
a)      control over the parsing engine;
b)     greater programmatic control over the XML data structure;
c)      a reduced memory footprint compared with the heavy footprint that DOM parsing requires;
d)     a simple processing model, like the one used with SAX;
e)      event-based processing control over XML documents (this is called pipelining);
f)       a cursor model that parses XML very efficiently, since it provides a natural interface by which the parser can compute values lazily (see the sketch after this list);
g)      better speed and performance in comparison to DOM and SAX.
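As a small illustration of the cursor-style pull model (a sketch only; the file name books.xml is an assumption), the standard javax.xml.stream API lets the application drive the parser event by event:

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxCursorDemo {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        // The application pulls events; the parser never calls back into application code.
        XMLStreamReader reader =
                factory.createXMLStreamReader(new FileInputStream("books.xml"));
        while (reader.hasNext()) {
            int event = reader.next();
            if (event == XMLStreamConstants.START_ELEMENT) {
                System.out.println("Start element: " + reader.getLocalName());
            } else if (event == XMLStreamConstants.CHARACTERS && !reader.isWhiteSpace()) {
                System.out.println("Text: " + reader.getText());
            }
        }
        reader.close();
    }
}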

In spite of the arrival of StAX on the Java platform, a lot of debate still exists around the adoption of XML technologies, mainly because of the performance overheads involved.

XML accelerators are the newest mechanism appearing in the industry. Currently there are three main options for improving XML performance:
a)      Microprocessor-based acceleration: this option simply relies on the fact that a faster microprocessor will process XML data faster than a slower one.
b)     Standalone XML accelerator engines: these devices hook into individual applications and reduce the amount of XML data being transmitted across applications. What they do not attempt to do is improve the performance of XML processing within an individual application.
c)      PCI hardware boards for XML acceleration: these boards offload XML processing from the application itself, thereby improving performance. The figure below gives an example of a PCI-based hardware board processing mechanism.

StAX is definitely a much better implementation option than DOM or SAX. However, the use of XML accelerator solutions to boost XML performance further is still evolving. For now, PCI hardware-based XML accelerators can be a good option to enhance XML processing and to implement the much-needed SOA solutions.

Configuring a JMS Server on WebSphere 6.1 using the Service Integration Bus (SIB)


Abstract
This document describes the methodology for configuring a JMS server on WebSphere 6.1, which does not come with an inbuilt JMS server. It is an extract from migrations done for a client for a set of applications.
Introduction
This document is a case study of an application named Middle Tier Application (MTA), which uses MQ queues. The application was migrated to WebSphere 6.1 using RAD 7.0. WebSphere 6.1 does not have an inbuilt JMS server; to overcome this, we configured JMS using the Service Integration Bus (SIB).
Problem
Configure a JMS server on WebSphere 6.1, which does not come with an inbuilt JMS server.
Approach
We will configure the JMS server using the Service Integration Bus (SIB). Before configuring it, let us first develop an understanding of what an SIB is.
Service Integration Bus
A service integration bus is a group of one or more application servers or server clusters in a WebSphere Application Server cell that cooperate to provide asynchronous messaging services. The application servers or server clusters in a bus are known as bus members.
A service integration bus provides the following capabilities:
  • Any application can exchange messages with any other application by using a destination to which one application sends, and from which the other application receives.
  • A message-producing application, that is, a producer, can produce messages for a destination regardless of which messaging engine the producer uses to connect to the bus.
  • A message-consuming application, that is, a consumer, can consume messages from a destination (whenever that destination is available) regardless of which messaging engine the consumer uses to connect to the bus.
A service integration bus comprises a SIB Service, which is available on each application server in the WebSphere Application Server environment. By default, the SIB Service is disabled. This means that when a server starts it does not have any messaging capability. The SIB Service is automatically enabled when we add the server to a service integration bus. We can choose to disable the service again by configuring the server.
A service integration bus supports asynchronous messaging; that is, sending messages asynchronously. Asynchronous messaging is possible regardless of whether the consuming application is running or not, or if the destination is available or not. Point-to-point and publish/subscribe messaging are also supported.
After an application has connected to the bus, the bus behaves as a single logical entity and the connected application does not need to be aware of the bus topology. In many cases, connecting to the bus and defining bus resources is handled by an application programming interface (API) abstraction, for example the administered JMS connection factory and JMS destination objects.
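As an illustration of these administered objects in use (a minimal sketch, not taken from the MTA application; the JNDI names jms/MyQCF and jms/MyQueue are assumptions and must match whatever is configured for the default messaging provider), a JMS client simply looks up the connection factory and destination and sends a message:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class SibSender {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();

        // Administered objects defined in the WebSphere admin console for the
        // default messaging provider (the JNDI names below are assumptions).
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyQCF");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("Hello from the SIB");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}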
The service integration bus is sometimes referred to as the messaging bus if it is used to provide the messaging system for JMS applications using the default messaging provider.
Many scenarios require a simple bus topology; perhaps, for example, a single server. By adding multiple servers to a single bus, we can increase the number of connection points for applications to use. By adding server clusters as members of a bus, we can increase scalability and achieve high availability. Servers, however, do not have to be bus members to connect to a bus. In more complex bus topologies, multiple buses are configured, and can be interconnected to form complex networks. An enterprise might deploy multiple interconnected buses for organizational reasons. For example, an enterprise with several autonomous departments might want to have separately administered buses in each location.
Bus Members
The members of a service integration bus can be application servers or server clusters. Bus members that are application servers or server clusters contain messaging engines, which are the application server components that provide asynchronous messaging services.
To use a service integration bus, we must add at least one member that is an application server or server cluster.
Adding a bus member automatically creates a messaging engine for that bus member. Each messaging engine has its own data store, used for example to store persistent messages and maintain durable subscriptions. By default a messaging engine associated with a server is configured with an in-process, Cloudscape-based data store. In other cases, we are asked to provide the Java Naming and Directory Interface (JNDI) name of a Java Database Connectivity (JDBC) data source for use by the messaging engine.
When the bus member is an application server, it can have only one messaging engine. If the bus member is a server cluster, it can have additional messaging engines to provide high availability or workload sharing characteristics.
To host queue-type destinations, the messaging engine can hold messages until consuming applications are ready to receive them. Each messaging engine also has a data store where it can hold messages so that if the messaging engine fails, messages are not lost.
When we define a queue-type destination, we assign it to a bus member. When that bus member is an application server or a server cluster, the messaging engine (or engines) in that bus member holds the messages.
If required, we can remove members from a bus. However, this action deletes any messaging engines that are associated with a bus member, including knowledge of any messages held by the data store for those messaging engines. Therefore, we must plan this action carefully.
When a bus member is deleted, the data source associated with this bus member is not automatically deleted, because users often associate their own data source with a bus member. This also applies to bus members created using the default data source: the data source is not automatically deleted and you must remove it manually.
If we do not delete the data source manually and another messaging bus member is created, the messaging engine will fail to start.
Bus Destinations
A bus destination is a virtual location within a service integration bus, to which applications attach as producers, consumers, or both to exchange messages.
Bus destinations can be either "permanent" or "temporary":
  • A permanent destination is defined by an administrator for use by one or more applications over an indeterminate period of time. Such destinations remain until explicitly deleted by the administrator or by some administrative command or script.
  • A temporary destination is created and deleted by an application, or the messaging provider, for use by that application during a session with a service integration bus. The destination is assigned a unique name.
The following are the main types of destination:
1. Queue
A destination for point-to-point messaging.
2. Topic space
A destination for publish/subscribe messaging.
3. Alias
An alias destination makes a destination available by another name and, optionally, overrides the parameters of the destination. Applications can use an alias destination to route messages to another destination in the same bus or in another (foreign) bus.
4. Foreign
A foreign destination provides a mapping to a destination of the same name on a different bus and enables applications on one bus to access that destination directly. We can set destination properties on the foreign destination that override the defaults of the target destination.
We can configure queue, topic space, and alias destinations with one or more mediations that refine how messages are handled by the destination.
Case Study
We have configured an SIB in an application for a client; the steps are provided in the attached document. The application provides batch as well as real-time support through the messaging server, i.e. JMS.


References:
1. WebSphere 5.1 to 6.1 migration.doc by Kapil Naudiyal
2. Web links:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=/com.ibm.websphere.pmc.nd.doc/concepts/cjj0000_.html

Rapid Implementation in ERP Implementation Methodology


Rapid Implementation

Earlier, ERP implementations used to frighten clients because of the time and cost involved…

When ERP first came on the scene, most implementations were complex affairs with consulting costs that often ran three to five times the cost of the applications, and scope creep was extensive before any benefit could be measured. As ERP evolved, consulting costs began to fall more into line and implementation times were reduced. These accelerations (rapid implementations), when accomplished with the right strategy and tools, can be of tremendous benefit, including reduced cost and reduced time-to-benefit. However, without the right strategy and tools, implementation acceleration carries the risk of abbreviated end-user training and change management, a lack of post-implementation planning, over-engineering of business processes, and other problems that in fact lead to a higher overall cost of ownership and an erosion of business benefit.


 Earlier ERP Implementation Methodologies

Prior to 1997, methodologies relied heavily on the As-Is and To-Be phases. In the As-Is phase, a company’s current business processes were inventoried, charted, and scripted. In the To-Be phase, a company’s future business processes were designed, charted, and scripted. Ideally, these steps went as follows:

As-Is described the status quo of business processes
To-Be described the direct transfer of the as-is process into a to-be process that eliminates the weak points and achieves the intended benefit.

The key weakness of these methodologies was the attention given to the As-Is phase, in which lower-level business processes were charted and scripted at an inflated cost to clients and with little or no benefit for the To-Be phase. This was one of the key drivers of the highly publicized cost over-runs of the mid-1990s.

From 1997 onwards, new methodologies started emerging that more directly addressed enterprise software implementations and all stressed speed through a more direct approach, the use of conference room pilots, the deployment of templates, and greater leverage of best practices (i.e. the re-use of business processes that had demonstrably done the job).
In order to address client concerns about the high cost of implementations, many of these methodologies were branded as rapid. For example, Deloitte’s “Fast Track”, Oracle’s “Fast Forward”, and KPMG’s “Rapid Return on Investment” (also labeled R2i).

Fundamentals of Rapid Implementation

The most crucial element of acceleration is the re-use of existing and proven assets. As the business flows, or processes, of firms within an industry are nearly identical, pre-configured processes can be easily implemented. For example, an order to cash business process that has already proven viable for hundreds of consumer packaged goods firms will probably be a good fit for another consumer packaged goods firm. In similar fashion, how much will sales order entry differ for a firm that sells automotive parts from a firm that sells aircraft parts?
Re-usability depends upon a client's willingness to adapt itself to new business processes rather than bending the software to fit custom processes. This is one of the key reasons why almost all consulting firms are focusing more and more on knowledge management initiatives; these assets play a very important part in rapid implementation. The closer a client adheres to this principle, the faster the implementation, due to:

  1. A major reduction in the business process design and software configuration phases, which normally comprise more than half of the consulting effort expended
  2. Higher level of re-usability of scripts, templates, set-up tools, reports, and user documentation
  3. A reduction in scope management.

The rise of industry-focused solutions has resulted from the thousands of ERP implementations that have occurred over the past fifteen years and is a major step in the evolution of enterprise applications.

Fig 1: Elements of Acceleration


  1. Industry-specific processes - For example, an order-to-cash cycle will be nearly identical across manufacturing industries. Industry-specific processes for industries such as pharmaceuticals, textiles, manufacturing, automobile, aerospace and industrial products can be a very powerful asset for any consulting firm
  2. Proven methodology - Another major asset that plays a crucial role in rapid implementation. A well-proven methodology used in one implementation can definitely benefit future implementations
  3. Test scripts, user training/documentation - Training documentation and user manuals are another re-usable component, and an essential part of every implementation
  4. Re-usable tools, reports and templates - Industry-specific templates can be re-used within the same industry, and certain re-usable reports and tools can certainly speed up implementations
  5. Best practices - Best practices are the most efficient and effective way of accomplishing a task, based on repeatable procedures that have proven themselves over time across a large number of similar implementations

The Benefits of Rapid Implementation

Having looked at the concept of rapid implementation and the basic elements that play a role in acceleration, let us now look at why enterprises should go for it.

Key benefits that can be derived from a rapid implementation:
  1. Reduced time and cost
  2. Minimal interference to customer’s existing operations
  3. Reduced probability of over-engineering
  4. Accelerated time to benefit


Key Decision Factors

Is rapid implementation the right choice? The key question for almost everyone is "to go" or "not to go".

Here are five factors to consider when deciding which approach to take:

  1. Necessity: Companies with an immediate need threatening their viability or an issue that relates to customer responsiveness and competitive pressures should consider rapid ERP.
  2. Cost: Fast implementations by definition should cost less. The time needed to gain benefits is also reduced and the resulting efficiencies mean lower cost.
  3. Scope: The best candidates for an enterprise keeping an implementation well within the scope of the project are willing to align their expectations with industry best practices, are not expecting to fix everything at once, and are looking for flexibility for future expansion. Such enterprises know exactly what issues they are seeking to address to drive their business forward.
  4. Internal Readiness: Enterprises must be well aware of how much training will be needed by the implementation. They must be willing to commit high-quality internal resources to the project and should be aiming at not interrupting operational resources.
  5. Expertise of Consulting Firm: Enterprises should be looking for vendors and partners with deep industry segment and geographic knowledge, as well as expertise with mature and proven tools and methodologies.


How does Rapid Implementation work for JD Edwards?
After getting a feel for what rapid implementation is, its benefits and the key decision factors, the questions that come to mind are: does it work for JD Edwards? Do we have business accelerators for rapidly implementing JD Edwards? Do we have success stories?
The answer to all of these questions is "yes". Rapid implementation does work well for JD Edwards, and many consulting firms, including Deloitte, Oracle and KPMG, have come up with rapid implementation methodologies and business accelerators.
Business Accelerators for JD Edwards EnterpriseOne
Oracle Business Accelerator solutions are available for five major modules: customer relationship management, distribution, financials, human capital management, and manufacturing.
Oracle Business Accelerators for JD Edwards EnterpriseOne include:
  1. Configured JD Edwards EnterpriseOne application software, including business processes, user roles, technical set-up, and a rapid installation.
  2. Questionnaire wizards that capture your process requirements and configure the JD Edwards EnterpriseOne environment to your business needs.
  3. Engineered hardware configurations.
  4. A complete package of open-standards infrastructure software, including application server, portal, database, and security and technology tools.
  5. Implementation services from Oracle Consulting or an authorized Oracle partner.
  6. Training to get users up to speed and productive as quickly as possible.
Success stories…
At Levy/Latham Global, J.D. Edwards OneWorld financials and distribution was implemented in forty-five days, on time and on budget. According to the implementers, some of the key success factors included:
  1. Correct selection of hardware and infrastructure - Is the hardware and infrastructure up to the job? If you are re-using existing hardware, is it sufficient for the task at hand? Did you buy enough horsepower at both the client and server ends?
  2. Strong leadership - You need very strong leadership and a capable project manager with a clear vision of the goal. The leader and the team must be single-minded during the project; everyone on the team should know exactly what the goal is and that the deadline is not optional.
  3. Application training - How much is enough? Or, how much should you spend? According to the implementers, one should budget somewhere between $1 and $2 of training for every $1 spent on user licenses, and this has to happen during the implementation, particularly on a rapid deployment.



RUN AN IBM DB2 QUERY THROUGH JCL


DB2 QUERY RUN THROUGH JOB



//XITSUIDB JOB 'DSNTEP2 ',CLASS=A,MSGCLASS=X,
// NOTIFY=&SYSUID,MSGLEVEL=(1,0),REGION=4096K
// SET DB2SS=YDB2
//STEP010 EXEC PGM=IKJEFT01
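//* IKJEFT01 IS THE TSO TERMINAL MONITOR PROGRAM RUN IN BATCH.
//* THE DSN COMMANDS IN SYSTSIN INVOKE THE DB2 SAMPLE PROGRAM DSNTEP2,
//* WHICH READS THE SQL STATEMENTS FROM THE SYSIN DD.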
//STEPLIB DD DISP=SHR,DSN=&DB2SS..DSNLOAD
// DD DISP=SHR,DSN=SYS2.CEE.SCEERUN
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD DSN=XITSUID.QUERY.OUTPUT,DISP=SHR
//* DISP=(NEW,CATLG,DELETE),
//* SPACE=(CYL,(10,5),RLSE),
//* DCB=(LRECL=80,BLKSIZE=800,RECFM=FB)
//*SYSPRINT DD SYSOUT=*,OUTLIM=100000
//SYSOUT DD *
//SYSTSIN DD *
DSN SYSTEM(YDB2)
RUN PROGRAM(DSNTEP2) PLAN(DSNTEP2) LIB('YDB2.RUNLIB.LOAD')
END
/*
//SYSIN DD *
SELECT RET_UNIT_CDE, PARTITION_ID ------------------> Query Starts (Comment - Please delete in Job)
FROM DBKN01.VBKPT001 WHERE
RET_UNIT_CDE = 100009 OR
RET_UNIT_CDE = 200007
WITH UR -------------------------------> Query Ends (Comment - Please delete in Job)
/*
//* UR MEANS UNCOMMITTED READ.
Note: Change YDB2 to JDB2 for SY2

JUnit tutorial | JUnit concepts | JUnit tutorial step by step

JUnit tutorial

  • The testing problems
  • The framework of JUnit
  • A case study
  • JUnit tool
  • Practices



class Money {

    private int fAmount;
    private String fCurrency;

    public Money(int amount, String currency) {
        fAmount = amount;
        fCurrency = currency;
    }

    public int amount() {
        return fAmount;
    }

    public String currency() {
        return fCurrency;
    }

    public Money add(Money m) {
        return new Money(amount() + m.amount(), currency());
    }
}




public class MoneyTest extends TestCase {
    //…
    public void testSimpleAdd() {

        Money m12CHF = new Money(12, "CHF");      // (1)

        Money m14CHF = new Money(14, "CHF");

        Money expected = new Money(26, "CHF");

        Money result = m12CHF.add(m14CHF);        // (2)

        Assert.assertTrue(expected.equals(result));   // (3)
    }
}
      (1) Creates the objects we will interact with during the test. This testing context is commonly referred to as the test's fixture. All we need for the testSimpleAdd test are some Money objects.
      (2) Exercises the objects in the fixture.
      (3) Verifies the result.



assertEquals(expected, actual)
assertEquals(message, expected, actual)
assertEquals(expected, actual, delta)
assertEquals(message, expected, actual, delta)
assertFalse(condition)
assertFalse(message, condition)
assertNull(object) / assertNotNull(object)
assertNull(message, object) / assertNotNull(message, object)
assertSame(expected, actual) / assertNotSame(expected, actual)
assertSame(message, expected, actual) / assertNotSame(message, expected, actual)
assertTrue(condition)
assertTrue(message, condition)
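For example (a small illustrative fragment reusing the Money objects from the test above; computedPi is a hypothetical double value):

assertEquals("wrong amount", 26, result.amount());
assertEquals("pi is off", 3.14159, computedPi, 0.0001);  // delta variant for floating-point values
assertNotNull("result should not be null", result);
assertSame("expected the very same instance", result, result);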

setUp()
       Stores the fixture's objects in instance variables of your TestCase subclass; override setUp to initialize them.

tearDown()
       Releases the fixture's objects when the test is finished.

run()
       Defines how to run an individual test case.
       Defines how to run a test suite.

TestCase(String name)
       The constructor; the name identifies the test method to run.

public class MoneyTest extends TestCase {
    private Money f12CHF;
    private Money f14CHF;

    public MoneyTest(String name) {
        super(name);   // the test name passed here selects the test method to run
    }

    protected void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
    }

    public void testSimpleAdd() {
        Money expected = new Money(26, "CHF");
        Money result = f12CHF.add(f14CHF);
        Assert.assertTrue(expected.equals(result));
    }
}
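tearDown is not needed for the in-memory Money objects above, but as a sketch of the idea (the file name test-data.txt is hypothetical), a fixture that opens an external resource would release it like this:

import java.io.BufferedReader;
import java.io.FileReader;
import junit.framework.TestCase;

public class FileFixtureTest extends TestCase {

    private BufferedReader reader;   // the external resource that makes up the fixture

    protected void setUp() throws Exception {
        // "test-data.txt" is a hypothetical input file used only for this sketch.
        reader = new BufferedReader(new FileReader("test-data.txt"));
    }

    protected void tearDown() throws Exception {
        reader.close();   // release the fixture so every test starts from a clean state
    }

    public void testFirstLineIsNotEmpty() throws Exception {
        String line = reader.readLine();
        assertNotNull("the test file should have at least one line", line);
    }
}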
Real-world scenarios: test the number boundaries:
  • values smaller than 0, such as -1, -2, ..., -100, ...
  • 0
  • values bigger than 0, such as 1, 2, ..., 100, ...
class Money {

    private int fAmount;
    private String fCurrency;

    public Money(int amount, String currency) {
        fAmount = amount;
        fCurrency = currency;
    }

    public int amount() {
        return fAmount;
    }

    public String currency() {
        return fCurrency;
    }

    public Money add(Money m) {
        if (m.amount() <= 0) {
            // a concrete exception type chosen for the example
            throw new IllegalArgumentException("amount must be positive");
        }
        return new Money(amount() + m.amount(), currency());
    }
}
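A test for the new boundary check might look like the following sketch, added to MoneyTest (a JUnit 3 idiom: call fail() if no exception is thrown; IllegalArgumentException matches the exception chosen in the add() method above):

public void testAddRejectsNonPositiveAmount() {
    Money m12CHF = new Money(12, "CHF");
    Money zero = new Money(0, "CHF");
    try {
        m12CHF.add(zero);
        fail("add() should reject an amount of 0 or less");
    } catch (IllegalArgumentException expected) {
        // expected: the boundary value 0 must not be accepted
    }
}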

Individual tests can be combined into a test suite:

public static Test suite() {
    TestSuite suite = new TestSuite();
    suite.addTest(new MoneyTest("testEquals"));
    suite.addTest(new MoneyTest("testSimpleAdd"));
    return suite;
}



JUnit supports two ways of running single tests:
  • static
  • dynamic

The static way overrides runTest() in an anonymous subclass and gives the test a name:

TestCase test = new MoneyTest("simple add") {
    public void runTest() {
        testSimpleAdd();
    }
};


The dynamic way uses reflection to find and run the test method whose name is passed to the constructor:

TestCase test = new MoneyTest("testSimpleAdd");

Since JUnit 2.0 there is an even simpler dynamic way. You only pass the class with the tests to a TestSuite and it extracts the test methods automatically.

public static Test suite() {  return new TestSuite(MoneyTest.class); }
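To run a suite from the command line, a main method can hand it to JUnit's text-based runner (a small sketch; junit.textui.TestRunner is part of JUnit itself):

public static void main(String[] args) {
    junit.textui.TestRunner.run(suite());
}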
1. Download the latest version of JUnit from http://download.sourceforge.net/junit/
2. Installation
unzip the junit.zip file
add junit.jar to the CLASSPATH. For example: set classpath=%classpath%;INSTALL_DIR\junit3\junit.jar
3. Testing
           Test the installation by using either the batch or the graphical TestRunner tool to run the tests that come with this release. All the tests should pass OK.
for the batch TestRunner type:     java junit.textui.TestRunner junit.samples.AllTests
for the graphical TestRunner type:     java junit.awtui.TestRunner junit.samples.AllTests
for the Swing based graphical TestRunner type:     java junit.swingui.TestRunner junit.samples.AllTests

Notice: The tests are not contained in junit.jar but directly in the installation directory. Therefore, make sure that the installation directory is on the class path.

Important: Don't install junit.jar into the extension directory of your JDK installation. If you do, the test classes on the file system will not be found.

JUnit plug-in for Eclipse

How would EJB 3.0 simplify your Java development compared to EJB 1.x, 2.x?

EJB 3.0 takes ease of development very seriously and has adjusted its model to offer POJO (Plain Old Java Object) persistence and a new O/R mapping model inspired by and based on Hibernate (a less intrusive model). In EJB 3.0, all kinds of enterprise beans are just POJOs. EJB 3.0 makes extensive use of Java annotations, which replace excessive XML-based configuration files and eliminate the need for the rigid component model used in EJB 1.x/2.x. Annotations can be used to define a bean's business interface, O/R mapping information, resource references and so on.
  • In EJB 1.x/2.x the container manages the behaviour and internal state of the bean instances at runtime, and all EJB 1.x/2.x beans must adhere to a rigid specification. In EJB 3.0, all container services can be configured and delivered to any POJO in the application via annotations. You can build complex object structures with POJOs, and Java objects can inherit from each other. EJB 3.0 components are coupled only via their published business interfaces, hence the implementation classes can be changed without affecting the rest of the application. This makes the application more robust, easier to test and more portable, and makes it easier to build loosely coupled business components as POJOs.
  • EJB 3.0, unlike EJB 1.x/2.x, does not have a home interface. The bean class may or may not implement a business interface. If the bean class does not implement any business interface, a business interface will be generated from the public methods. If only certain methods should be exposed in the business interface, those methods can be marked with the @BusinessMethod annotation.
  • EJB 3.0 defines smart default values. For example, by default all generated interfaces are local, but the @Remote annotation can be used to indicate that a remote interface should be generated.
  • EJB 3.0 supports both unidirectional and bidirectional relationships between entities.
  • EJB 3.0 makes use of dependency injection to make decoupled service objects and resources such as queue connection factories and queues available to any POJO. Using the @EJB annotation, you can inject an EJB stub into any POJO managed by the EJB 3.0 container, and using the @Resource annotation you can inject any resource from JNDI (see the sketch after the transaction example below).
  • EJB 3.0 wires runtime services such as transaction management, security, logging and profiling to applications at runtime. Since those services are not directly related to the application's business logic, they are not managed by the application itself; instead, the services are transparently applied by the container utilizing AOP (Aspect Oriented Programming). To apply a transaction attribute to a POJO method using an annotation:
public class Account {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public AccountDetail getAccountDetails() {   // AccountDetail is a placeholder return type
        // ... load and return the account details ...
        return null;   // placeholder body
    }
}
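As an illustration of the dependency-injection point above (a sketch only; the bean, the business interfaces and the JNDI names are assumptions, not taken from the original text):

import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;

// Hypothetical business interfaces used only for this sketch.
interface OrderProcessor {
    void process(String orderId);
}

interface InventoryService {
    void reserve(String orderId);
}

@Stateless
public class OrderProcessorBean implements OrderProcessor {

    @EJB
    private InventoryService inventoryService;             // injected stub of another bean (hypothetical)

    @Resource(mappedName = "jms/OrderConnectionFactory")    // JNDI name is an assumption
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/OrderQueue")                // JNDI name is an assumption
    private Queue orderQueue;

    public void process(String orderId) {
        inventoryService.reserve(orderId);
        // the injected JMS objects can be used here without any JNDI lookup code
    }
}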
EJB QL queries can be defined through the @NamedQuery annotation. You can also create regular JDBC-style queries using the EntityManager. POJOs are not persistent by birth; they become persistent once they are associated with an EntityManager. A small sketch of both styles follows.
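A minimal sketch of both query styles (the entity, the query names and the field names are assumptions chosen for illustration, not taken from the original text):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.PersistenceContext;

@Entity
@NamedQuery(name = "Account.findAll", query = "SELECT a FROM Account a")
class Account {
    @Id
    private Long id;
    private double balance;
    // getters and setters omitted for brevity
}

class AccountQueries {

    @PersistenceContext
    private EntityManager em;   // injected by the container

    // Run the named query defined on the entity.
    public List findAll() {
        return em.createNamedQuery("Account.findAll").getResultList();
    }

    // Build an ad hoc EJB QL query at runtime.
    public List findLargeAccounts(double minimumBalance) {
        return em.createQuery("SELECT a FROM Account a WHERE a.balance > :min")
                 .setParameter("min", minimumBalance)
                 .getResultList();
    }
}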