The writeReplace method concept during serialization in Java


The writeReplace method is called by the serialization machinery (ObjectOutputStream) to determine which object should actually be written to the stream in place of the object being serialized.

The program below illustrates this.
package serialization;

import java.io.Serializable;

class Employee1 implements Serializable {
    public String sname = "king";
    int a;

    Employee1() {
        System.out.println("Employee1 constructor runs");
    }

    public void getNameEmployee1() {
        System.out.println("Inside Employee1's getNameEmployee1 method");
    }

    /* public Object readResolve() {
        System.out.println("readResolve method called");
        throw new RuntimeException();
    } */
}

public class Employee implements Serializable {
    Employee1 e = new Employee1();
    String name = "kumudg";
    int a;
    int c;
    transient int d;
    transient String password = "password";
    static String age = "raju";

    // Called by the serialization machinery; the object returned here is
    // written to the stream instead of this Employee instance.
    private Object writeReplace() {
        // System.out.println("Control is in the writeReplace method");
        return e;
        // return new SerializationProxy(this);
    }

    /* public Object readResolve() {
        throw new RuntimeException();
    } */

    public Employee() {
        System.out.println("Inside the Employee constructor");
    }

    public void getName() {
        System.out.println("Inside the getName method: " + name);
    }
}


Serialization test class
package serialization;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerializationTest {

    public static void main(String[] args) {
        SerializationTest st = new SerializationTest();
        Employee e = new Employee();
        st.serializedObject(e);
        // Employee de = st.deserializedObject();
        // System.out.println("The value is " + de.name + de.age);
        // System.out.println("The transient value is " + de.password);
    }

    public void serializedObject(Employee e) {
        try {
            FileOutputStream out = new FileOutputStream("c:/ser/ser.txt");
            ObjectOutputStream outs = new ObjectOutputStream(out);
            outs.writeObject(e);
            outs.close();
        } catch (FileNotFoundException e1) {
            e1.printStackTrace();
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    }
}
When the programmer serializes an Employee object, an Employee1 object is what actually gets written to the stream, because the writeReplace method in the Employee class returns its Employee1 instance.
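A minimal sketch to verify the substitution, assuming the classes above and the same file path (this deserialization check is illustrative, not part of the original example):

ObjectInputStream in = new ObjectInputStream(new FileInputStream("c:/ser/ser.txt"));
Object obj = in.readObject();
in.close();
// Prints serialization.Employee1, not serialization.Employee, because
// writeReplace substituted the Employee1 instance in the stream.
System.out.println(obj.getClass().getName());

Note that casting the result to Employee would throw a ClassCastException for the same reason.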

Static class loading and dynamic class loading in Java

Static class loading

Classes are statically loaded with Java's new operator.

class MyClass {
    public static void main(String[] args) {
        Car c = new Car();
    }
}



Dynamic class loading

Dynamic loading is a technique for programmatically invoking the functions of a
class loader at run time. Let us look at how to load classes dynamically.

Class.forName(String className); // static method which returns a Class


The above static method returns the Class object associated with the given class
name. The string className can be supplied dynamically at run time. Unlike
static loading, dynamic loading can decide whether to load the class Car or
the class Jeep at runtime based on a properties file and/or other runtime
conditions. Once the class is dynamically loaded, the following method returns an
instance of the loaded class. It's just like creating a class object with no
arguments.

vehicleClass.newInstance(); // a non-static method on the Class object, which creates
                            // an instance of that class (i.e., creates an object)

Jeep myJeep = null;

// myClassName should be read from a .properties file or a Constants class;
// stay away from hard-coding values in your program.
String myClassName = "au.com.Jeep";
Class vehicleClass = Class.forName(myClassName);
myJeep = (Jeep) vehicleClass.newInstance();
myJeep.setFuelCapacity(50);
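A self-contained sketch with the checked exceptions handled (the class name au.com.Jeep is carried over from the fragment above, and the vehicle.class system property is invented for this example):

public class DynamicLoadingDemo {
    public static void main(String[] args) {
        // The class name could equally be read from a .properties file.
        String myClassName = System.getProperty("vehicle.class", "au.com.Jeep");
        try {
            Class vehicleClass = Class.forName(myClassName); // load the class at runtime
            Object vehicle = vehicleClass.newInstance();     // no-arg construction
            System.out.println("Loaded " + vehicle.getClass().getName());
        } catch (ClassNotFoundException e) {
            System.err.println("Class not found on the classpath: " + myClassName);
        } catch (InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        }
    }
}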

 

How do you prevent a subclass from being serialized if its superclass is serializable in Java?

We know that during serialization, if a superclass is serializable then its subclasses automatically become serializable as well. To prevent a subclass from being serialized, you can override the following private methods in the subclass so that they simply throw NotSerializableException:

private void writeObject(ObjectOutputStream out) throws IOException {
    throw new NotSerializableException("Not today!");
}

private void readObject(ObjectInputStream in) throws IOException {
    throw new NotSerializableException("Not today!");
}
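A minimal, self-contained sketch putting this together (the class names Parent and Child are illustrative):

import java.io.*;

class Parent implements Serializable {
    int x = 1;
}

class Child extends Parent {
    // Child inherits serializability from Parent, so block it explicitly.
    private void writeObject(ObjectOutputStream out) throws IOException {
        throw new NotSerializableException("Child must not be serialized");
    }

    private void readObject(ObjectInputStream in) throws IOException {
        throw new NotSerializableException("Child must not be deserialized");
    }
}

With this in place, calling writeObject(new Child()) on an ObjectOutputStream fails with NotSerializableException, while Parent instances still serialize normally.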

How do you prevent a class property from being persisted during serialization in Java?

Hi

During Java serialization we persist the object's state, i.e., we convert the object's state into a byte stream and save it to a file for future use.
We achieve this by having the class implement the Serializable interface.

If you want to prevent an object property from being persisted, declare it with the transient or static keyword:
Code:

import java.io.Serializable;

public class TestClass implements Serializable {
    public transient int num = 5;
    public static int num2 = 8;
}

In the above code, when you serialize the object, the properties num and num2 will not be serialized.
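A minimal sketch of the effect, assuming the TestClass above is written to and read back from a file (the file name test.ser is illustrative):

TestClass t = new TestClass();
ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("test.ser"));
out.writeObject(t);
out.close();

ObjectInputStream in = new ObjectInputStream(new FileInputStream("test.ser"));
TestClass back = (TestClass) in.readObject();
in.close();

System.out.println(back.num);       // 0: transient fields come back with default values
System.out.println(TestClass.num2); // 8: static fields belong to the class, not to the stream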

java.sql.SQLException: [Microsoft][ODBC Driver Manager] Invalid cursor state when using MS SQL Server 2008

Hi,
I am using a JDBC/ODBC connection and my database code is:
StringBuffer stringbufferdet = new StringBuffer("select phasevalue from T_PHASE where projectid = '" + projectId + "' and phaseid=");
stringbufferdet.append("'");
//stringbufferdet.append(vmetricresult2.elementAt(i));
stringbufferdet.append(vmetricresult1.elementAt(i));
stringbufferdet.append("'");
String sdet = stringbufferdet.toString();
System.out.println("----------------------2" + sdet);
ps = con.prepareStatement(sdet);
rs = ps.executeQuery();
rs.next();

phase2 = rs.getInt(1);

closeAll(con, ps, null, rs);


When I run the code, the following exception is thrown:

java.sql.SQLException: [Microsoft][ODBC Driver Manager] Invalid cursor state
 at sun.jdbc.odbc.JdbcOdbc.createSQLException(JdbcOdbc.java:6958)
 at sun.jdbc.odbc.JdbcOdbc.standardError(JdbcOdbc.java:7115)
 at sun.jdbc.odbc.JdbcOdbc.SQLGetDataInteger(JdbcOdbc.java:3812)
 at sun.jdbc.odbc.JdbcOdbcResultSet.getDataInteger(JdbcOdbcResultSet.java:5639)
 at sun.jdbc.odbc.JdbcOdbcResultSet.getInt(JdbcOdbcResultSet.java:582)
 at com.hps.dao.SearchReportDAO.getMetricResults(SearchReportDAO.java:1353)
 at com.hps.client.MetricsServlet.doPost(MetricsServlet.java:102)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
 at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
 at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
 at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
 at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
 at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
 at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879)
 at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
 at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
 at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
 at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
 at java.lang.Thread.run(Thread.java:595)

Solution

The problem is caused by incorrect use of the ResultSet: getInt is called without checking that rs.next() actually moved the cursor onto a row. Use the following to retrieve data from the result set:

while (rs.next()) {
    phase2 = rs.getInt(1);
}
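As a side note, since the code already uses prepareStatement, the values could be bound as parameters instead of concatenated into the SQL string, which avoids quoting problems and SQL injection. A sketch using the same variable names as above:

String sdet = "select phasevalue from T_PHASE where projectid = ? and phaseid = ?";
ps = con.prepareStatement(sdet);
ps.setString(1, projectId);
ps.setString(2, String.valueOf(vmetricresult1.elementAt(i)));
rs = ps.executeQuery();
if (rs.next()) {
    phase2 = rs.getInt(1);
}
closeAll(con, ps, null, rs);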

Latest Microsoft SSIS and SSRS interview questions asked by various IT companies

Here are some SSIS-related interview questions with answers. Hope they help.

1) What is a control flow?
2) What is a data flow?
3) How do you do error handling in SSIS?
4) How do you do logging in SSIS?
5) How do you deploy SSIS packages?
6) How do you schedule SSIS packages to run on the fly?
7) How do you run a stored procedure and get data?
8) A scenario: you want to insert a text file into a database table, but during the upload you want to change a column called months - January, Feb, etc. - to a code - 1, 2, 3, etc. This code can be read from another database table called months. After the conversion of the data, upload the file. If there are any errors, write them to an error table. Then, for all errors, read the errors from the database, create a file, and mail it to the supervisor. How would you accomplish this task in SSIS?
9) What are variables and what is variable scope?
Answers

For Q1 and Q2:
In SSIS a workflow is called a control-flow. A control-flow links together our modular data-flows as a series of operations in order to achieve a desired result.

A control flow consists of one or more tasks and containers that execute when the package runs. To control order or define the conditions for running the next task or container in the package control flow, you use precedence constraints to connect the tasks and containers in a package. A subset of tasks and containers can also be grouped and run repeatedly as a unit within the package control flow.

SQL Server 2005 Integration Services (SSIS) provides three different types of control flow elements: containers that provide structures in packages, tasks that provide functionality, and precedence constraints that connect the executables, containers, and tasks into an ordered control flow.

A data flow consists of the sources and destinations that extract and load data, the transformations that modify and extend data, and the paths that link sources, transformations, and destinations. Before you can add a data flow to a package, the package control flow must include a Data Flow task. The Data Flow task is the executable within the SSIS package that creates, orders, and runs the data flow. A separate instance of the data flow engine is opened for each Data Flow task in a package.

SQL Server 2005 Integration Services (SSIS) provides three different types of data flow components: sources, transformations, and destinations. Sources extract data from data stores such as tables and views in relational databases, files, and Analysis Services databases. Transformations modify, summarize, and clean data. Destinations load data into data stores or create in-memory datasets.
Q3:
When a data flow component applies a transformation to column data, extracts data from sources, or loads data into destinations, errors can occur. Errors frequently occur because of unexpected data values.

For example, a data conversion fails because a column contains a string instead of a number, an insertion into a database column fails because the data is a date and the column has a numeric data type, or an expression fails to evaluate because a column value is zero, resulting in a mathematical operation that is not valid.

Errors typically fall into one of the following categories:

- Data conversion errors, which occur if a conversion results in the loss of significant digits, the loss of insignificant digits, or the truncation of strings. Data conversion errors also occur if the requested conversion is not supported.
- Expression evaluation errors, which occur if expressions that are evaluated at run time perform invalid operations or become syntactically incorrect because of missing or incorrect data values.
- Lookup errors, which occur if a lookup operation fails to locate a match in the lookup table.

Many data flow components support error outputs, which let you control how the component handles row-level errors in both incoming and outgoing data. You specify how the component behaves when truncation or an error occurs by setting options on individual columns in the input or output.

For example, you can specify that the component should fail if customer name data is truncated, but ignore errors on another column that contains less important data.

Q4:
SSIS includes logging features that write log entries when run-time events occur and can also write custom messages.

Integration Services supports a diverse set of log providers, and gives you the ability to create custom log providers. The Integration Services log providers can write log entries to text files, SQL Server Profiler, SQL Server, Windows Event Log, or XML files.

Logs are associated with packages and are configured at the package level. Each task or container in a package can log information to any package log. The tasks and containers in a package can be enabled for logging even if the package itself is not.

To customize the logging of an event or custom message, Integration Services provides a schema of commonly logged information to include in log entries. The Integration Services log schema defines the information that you can log. You can select elements from the log schema for each log entry.

To enable logging in a package:
1. In Business Intelligence Development Studio, open the Integration Services project that contains the package you want.
2. On the SSIS menu, click Logging.
3. Select a log provider in the Provider type list, and then click Add.
Q5:

SQL Server 2005 Integration Services (SSIS) makes it simple to deploy packages to any computer.
There are two steps in the package deployment process:
-The first step is to build the Integration Services project to create a package deployment utility.
-The second step is to copy the deployment folder that was created when you built the Integration Services project to the target computer, and then run the Package Installation Wizard to install the packages.
Q9:

Variables store values that a SSIS package and its containers, tasks, and event handlers can use at run time. The scripts in the Script task and the Script component can also use variables. The precedence constraints that sequence tasks and containers into a workflow can use variables when their constraint definitions include expressions.

Integration Services supports two types of variables: user-defined variables and system variables. User-defined variables are defined by package developers, and system variables are defined by Integration Services. You can create as many user-defined variables as a package requires, but you cannot create additional system variables.

Scope :

A variable is created within the scope of a package or within the scope of a container, task, or event handler in the package. Because the package container is at the top of the container hierarchy, variables with package scope function like global variables and can be used by all containers in the package. Similarly, variables defined within the scope of a container such as a For Loop container can be used by all tasks or containers within the For Loop container.


Question 1 - True or False - Using a checkpoint file in SSIS is just like issuing the CHECKPOINT command against the relational engine. It commits all of the data to the database.
False. SSIS provides a Checkpoint capability which allows a package to restart at the point of failure.

Question 2 - Can you explain what the Import\Export tool does and the basic steps in the wizard?
The Import\Export tool is accessible via BIDS or by executing the dtswizard command.
The tool identifies a data source and a destination to move data either within one database, between instances, or even from a database to a file (or vice versa).


Question 3 - What are the command line tools to execute SQL Server Integration Services packages?
DTSEXECUI - When this command line tool is run a user interface is loaded in order to configure each of the applicable parameters to execute an SSIS package.
DTEXEC - This is a pure command line tool where all of the needed switches must be passed into the command for successful execution of the SSIS package.


Question 4 - Can you explain the SQL Server Integration Services functionality in Management Studio?
You have the ability to do the following:
Login to the SQL Server Integration Services instance
View the SSIS log
View the packages that are currently running on that instance
Browse the packages stored in MSDB or the file system
Import or export packages
Delete packages
Run packages

Question 5 - Can you name some of the core SSIS components in the Business Intelligence Development Studio you work with on a regular basis when building an SSIS package?
Connection Managers
Control Flow
Data Flow
Event Handlers
Variables window
Toolbox window
Output window
Logging
Package Configurations

Question Difficulty = Moderate

Question 1 - True or False: SSIS has a default means to log all records updated, deleted or inserted on a per table basis.
False, but a custom solution can be built to meet these needs.

Question 2 - What is a breakpoint in SSIS? How is it setup? How do you disable it?
A breakpoint is a stopping point in the code. The breakpoint can give the Developer\DBA an opportunity to review the status of the data, variables and the overall status of the SSIS package.
10 unique conditions exist for each breakpoint.
Breakpoints are set up in BIDS. In BIDS, navigate to the control flow interface, right-click on the object where you want to set the breakpoint, and select the 'Edit Breakpoints...' option.


Question 3 - Can you name 5 or more of the native SSIS connection managers?
OLEDB connection - Used to connect to any data source requiring an OLEDB connection (i.e., SQL Server 2000)
Flat file connection - Used to make a connection to a single file in the File System. Required for reading information from a File System flat file
ADO.Net connection - Uses the .Net Provider to make a connection to SQL Server 2005 or other connection exposed through managed code (like C#) in a custom task
Analysis Services connection - Used to make a connection to an Analysis Services database or project. Required for the Analysis Services DDL Task and Analysis Services Processing Task
File connection - Used to reference a file or folder. The options are to either use or create a file or folder
Excel
FTP
HTTP
MSMQ
SMO
SMTP
SQLMobile
WMI


Question 4 - How do you eliminate quotes from being uploaded from a flat file to SQL Server?
In the SSIS package on the Flat File Connection Manager Editor, enter quotes into the Text qualifier field then preview the data to ensure the quotes are not included.
Additional information: How to strip out double quotes from an import file in SQL Server Integration Services
Question 5 - Can you name 5 or more of the main SSIS tool box widgets and their functionality?
For Loop Container
Foreach Loop Container
Sequence Container
ActiveX Script Task
Analysis Services Execute DDL Task
Analysis Services Processing Task
Bulk Insert Task
Data Flow Task
Data Mining Query Task
Execute DTS 2000 Package Task
Execute Package Task
Execute Process Task
Execute SQL Task
etc.

Question Difficulty = Difficult

Question 1 - Can you explain one approach to deploy an SSIS package?
One option is to build a deployment manifest file in BIDS, copy the directory to the applicable SQL Server, and then work through the steps of the Package Installation Wizard.
A second option is to use the dtutil utility to copy, paste, rename, or delete an SSIS package.
A third option is to log in to SQL Server Integration Services via SQL Server Management Studio, navigate to the 'Stored Packages' folder, and right-click on one of the child folders or an SSIS package to access the 'Import Packages...' or 'Export Packages...' option.
A fourth option in BIDS is to navigate to File | Save Copy of Package and complete the interface.



Question 2 - Can you explain how to setup a checkpoint file in SSIS?
The following items need to be configured on the properties tab for SSIS package:
CheckpointFileName - Specify the full path to the Checkpoint file that the package uses to save the value of package variables and log completed tasks. Rather than using a hard-coded path, it's a good idea to use an expression that concatenates a path defined in a package variable and the package name.
CheckpointUsage - Determines if/how checkpoints are used. Choose from these options: Never (default), IfExists, or Always. Never indicates that you are not using Checkpoints. IfExists is the typical setting and implements the restart at the point of failure behavior. If a Checkpoint file is found it is used to restore package variable values and restart at the point of failure. If a Checkpoint file is not found the package starts execution with the first task. The Always choice raises an error if the Checkpoint file does not exist.
SaveCheckpoints - Choose from these options: True or False (default). You must select True to implement the Checkpoint behavior.

Question 3 - Can you explain different options for dynamic configurations in SSIS?
Use an XML file
Use custom variables
Use a database per environment with the variables
Use a centralized database with all variables

Question 4 - How do you upgrade an SSIS Package?
Depending on the complexity of the package, one or two techniques are typically used:
Recode the package based on the functionality in SQL Server DTS
Use the Migrate DTS 2000 Package wizard in BIDS then recode any portion of the package that is not accurate


Question 5 - Can you name five of the Perfmon counters for SSIS and the value they provide?
SQLServer:SSIS Service
SSIS Package Instances - Total number of simultaneous SSIS Packages running
SQLServer:SSIS Pipeline
BLOB bytes read - Total bytes read from binary large objects during the monitoring period.
BLOB bytes written - Total bytes written to binary large objects during the monitoring period.
BLOB files in use - Number of binary large objects files used during the data flow task during the monitoring period.
Buffer memory - The amount of physical or virtual memory used by the data flow task during the monitoring period.
Buffers in use - The number of buffers in use during the data flow task during the monitoring period.
Buffers spooled - The number of buffers written to disk during the data flow task during the monitoring period.
Flat buffer memory - The total number of blocks of memory in use by the data flow task during the monitoring period.
Flat buffers in use - The number of blocks of memory in use by the data flow task at a point in time.
Private buffer memory - The total amount of physical or virtual memory used by data transformation tasks in the data flow engine during the monitoring period.
Private buffers in use - The number of blocks of memory in use by the transformations in the data flow task at a point in time.
Rows read - Total number of input rows in use by the data flow task at a point in time.
Rows written - Total number of output rows in use by the data flow task at a point in time.

JBoss Cache – Hibernate second-level cache


Hibernate is an open source O-R mapping tool which is used extensively for domain modeling as well as a bridge between the object-oriented and relational design paradigms. It is basically used to map object-oriented domain entities to a corresponding relational container.
Hibernate's architecture offers caching of data at two levels:
1.    Within a session, i.e., a transaction
2.    Outside the session
Caching within the session is provided by the "Session" API in Hibernate, which maintains a transaction-scoped cache of persistent data abstracted within itself.
Hibernate provides a plug-and-play architecture for integrating a second-level cache that provides caching across transactions, i.e., across sessions.
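Before moving to the second level, a small illustration of the session-scoped (first-level) cache described above; the Item entity and its identifier are hypothetical:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
Item first = (Item) session.get(Item.class, new Long(1));  // hits the database
Item second = (Item) session.get(Item.class, new Long(1)); // served from the session cache
System.out.println(first == second); // true: same instance within one session
tx.commit();
session.close();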
In this case study we will use a scenario to configure JBoss TreeCache as the Hibernate second-level cache.
JBoss Cache depends on the following libraries, which should be on the classpath of the runtime environment where the cache is being configured, e.g., "..\jboss-4.2.3.GA\server\default\lib" for JBoss 4.2:
1.      jbosscache-core.jar
2.      commons-logging.jar
3.      jboss-common-core.jar
4.      jgroups.jar
5.      Hibernate-Jbosscache2.jar

JBoss Cache also depends on the javax.transaction package, which will be available either through the JEE jars in the application server classpath or through the Hibernate jars.

One instance of the cache is identified in the configuration as a "cache region". Cache regions can be load balanced / clustered using the application server's load-balancing attributes. We can create more than one cache region for a given application, depending upon the characteristics of the data being cached, e.g., domain entities, files, demographic information, etc. The advantage of segregating the data amongst different cache regions is that we can register different initialization and invalidation policies for each of them.

Using JBoss Cache as a second-level cache with Hibernate, we can cache the following:
1.      Entities
2.      Collections
3.      Query results
We need to be careful with query results when configuring a cluster, as replicating query results across the cluster is very costly. Hence we should have good reasons before enabling replication of query results across the cluster.

Before proceeding further we need to be aware of the dependencies between Hibernate versions and JBoss Cache versions. For all Hibernate versions above 3.3.x, the recommended JBoss Cache versions are 3.x. JBoss AS 5.x and above supports Cache version 3.x and Hibernate version 3.3.x. In any other scenario, proper dependencies will have to be verified and the dependent jars copied to the AS classpath to make it work.

The best way to register JBoss Cache in the application server is via mbeans. We need to provide an XML file to register JBoss Cache within the application server as an mbean, e.g., in the directory "..\jboss-4.2.3.GA\server\default\deploy" for JBoss AS 4.2.x. This will also provide a name for the cache service. An example XML is provided below:

<server>

   <mbean code="org.jboss.cache.jmx.CacheJmxWrapper"
          name="jboss.cache:service=TreeCache">

      <depends>jboss:service=Naming</depends>
      <depends>jboss:service=TransactionManager</depends>
.
.
</server>

As you can see, a JMX wrapper is used to register the cache with the application server. In this case it is identified by the name "jboss.cache:service=TreeCache". The configuration file also communicates to the application server the dependencies of this service on other services running within the application server context. For example, JBoss Cache will use the transaction manager which is running within the application server and which is accessible through the naming service.

After the JBoss Cache service is registered with the application server, the next step is to make Hibernate aware of the running JBoss Cache service so that it can be used as Hibernate's second-level cache.

The Hibernate session factory needs to be provided with certain options to enable second-level caching using the already-started JBoss Cache within the application server context.

The following parameters need to be provided to the session factory:

hibernate.cache.use_second_level_cache = true
    Enables the session factory's second-level caching mechanism.

hibernate.cache.region.factory_class = org.hibernate.cache.jbc2.MultiplexedJBossCacheRegionFactory
    Initializes the cache region factory used to handle caching in Hibernate.

hibernate.treecache.mbean.object_name = jboss.cache:service=TreeCache
    Connects to the TreeCache instance configured through the mbean in the application server.

hibernate.cache.use_query_cache = true
    Enables caching of query results.

hibernate.cache.region.jbc2.query.localonly = true
    Stops replication of query results across cache clusters, i.e., query results are cached locally.
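As a sketch, these options can also be set programmatically through Hibernate's Configuration API before building the session factory (connection and mapping settings are assumed to come from hibernate.cfg.xml):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SecondLevelCacheBootstrap {
    public static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        cfg.setProperty("hibernate.cache.use_second_level_cache", "true");
        cfg.setProperty("hibernate.cache.region.factory_class",
                "org.hibernate.cache.jbc2.MultiplexedJBossCacheRegionFactory");
        // Connect to the TreeCache mbean registered with the application server.
        cfg.setProperty("hibernate.treecache.mbean.object_name",
                "jboss.cache:service=TreeCache");
        cfg.setProperty("hibernate.cache.use_query_cache", "true");
        // Keep query results local instead of replicating them across the cluster.
        cfg.setProperty("hibernate.cache.region.jbc2.query.localonly", "true");
        return cfg.buildSessionFactory();
    }
}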
Another, more object-oriented version of TreeCache has been released, named POJO Cache. This cache sits atop JBoss TreeCache and is mainly responsible for caching plain old Java objects. In this version POJOs are inserted into the cache, which in turn:
1.      Keeps track of modifications to the POJOs
2.      Replicates the POJOs across cache clusters
3.      Participates in transactions to persist data into the DB
4.      Allows AOP to be used for configuration of the POJOs and the cache





Sample TreeCache XML file configuration


<?xml version="1.0" encoding="UTF-8"?>
<server>
   <mbean code="org.jboss.cache.jmx.CacheJmxWrapper"  name="jboss.cache:service=TreeCache">
      <depends>jboss:service=Naming</depends>
      <depends>jboss:service=TransactionManager</depends>
      <!--
          Configure the TransactionManager
      -->
      <attribute name="TransactionManagerLookupClass">org.jboss.cache.transaction.GenericTransactionManagerLookup
      </attribute>
      <!--
          Isolation level : SERIALIZABLE
                            REPEATABLE_READ (default)
                            READ_COMMITTED
                            READ_UNCOMMITTED
                            NONE
      -->
      <attribute name="IsolationLevel">REPEATABLE_READ</attribute>
      <!--
           Valid modes are LOCAL
                           REPL_ASYNC
                           REPL_SYNC
                           INVALIDATION_ASYNC
                           INVALIDATION_SYNC
      -->
      <attribute name="CacheMode">REPL_SYNC</attribute>
      <!--
      Just used for async repl: use a replication queue
      -->
      <attribute name="UseReplQueue">false</attribute>
      <!--
          Replication interval for replication queue (in ms)
      -->
      <attribute name="ReplQueueInterval">0</attribute>
      <!--
          Max number of elements which trigger replication
      -->
      <attribute name="ReplQueueMaxElements">0</attribute>
      <!-- Name of cluster. Needs to be the same for all TreeCache nodes in a
           cluster in order to find each other.
      -->
      <attribute name="ClusterName">test-JBossCache-Cluster</attribute>

      <attribute name="ClusterConfig">
         <config>
            <UDP mcast_addr="228.10.10.10"
                 mcast_port="45588"
                 tos="8"
                 ucast_recv_buf_size="20000000"
                 ucast_send_buf_size="640000"
                 mcast_recv_buf_size="25000000"
                 mcast_send_buf_size="640000"
                 loopback="false"
                 discard_incompatible_packets="true"
                 max_bundle_size="64000"
                 max_bundle_timeout="30"
                 use_incoming_packet_handler="true"
                 ip_ttl="2"
                 enable_bundling="false"
                 enable_diagnostics="true"
                 use_concurrent_stack="true"
                 thread_naming_pattern="pl"
                 thread_pool.enabled="true"
                 thread_pool.min_threads="1"
                 thread_pool.max_threads="25"
                 thread_pool.keep_alive_time="30000"
                 thread_pool.queue_enabled="true"
                 thread_pool.queue_max_size="10"
                 thread_pool.rejection_policy="Run"
                 oob_thread_pool.enabled="true"
                 oob_thread_pool.min_threads="1"
                 oob_thread_pool.max_threads="4"
                 oob_thread_pool.keep_alive_time="10000"
                 oob_thread_pool.queue_enabled="true"
                 oob_thread_pool.queue_max_size="10"
                 oob_thread_pool.rejection_policy="Run"/>

            <PING timeout="2000" num_initial_members="3"/>
            <MERGE2 max_interval="30000" min_interval="10000"/>
            <FD_SOCK/>
            <FD timeout="10000" max_tries="5" shun="true"/>
            <VERIFY_SUSPECT timeout="1500"/>
            <pbcast.NAKACK use_mcast_xmit="false" gc_lag="0"
                           retransmit_timeout="300,600,1200,2400,4800"
                           discard_delivered_msgs="true"/>
            <UNICAST timeout="300,600,1200,2400,3600"/>
            <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                           max_bytes="400000"/>
            <pbcast.GMS print_local_addr="true" join_timeout="5000"
                        join_retry_timeout="2000" shun="false"
                        view_bundling="true" view_ack_collection_timeout="5000"/>
            <FRAG2 frag_size="60000"/>
            <pbcast.STREAMING_STATE_TRANSFER use_reading_thread="true"/>
            <!-- <pbcast.STATE_TRANSFER/> -->
            <pbcast.FLUSH timeout="0"/>
         </config>
      </attribute>
      <!--
       Whether or not to fetch state on joining a cluster
       NOTE this used to be called FetchStateOnStartup and has been renamed to be more descriptive.
      -->
      <attribute name="FetchInMemoryState">true</attribute>
      <!--
          The max amount of time (in milliseconds) we wait until the
          state (ie. the contents of the cache) are retrieved from
          existing members in a clustered environment
      -->
      <attribute name="StateRetrievalTimeout">15000</attribute>
      <!--
          Number of milliseconds to wait until all responses for a
          synchronous call have been received.
      -->
      <attribute name="SyncReplTimeout">15000</attribute>
      <!-- Max number of milliseconds to wait for a lock acquisition -->
      <attribute name="LockAcquisitionTimeout">10000</attribute>
      <!--
         Indicate whether to use region based marshalling or not. Set this to true if you are running under a scoped
         class loader, e.g., inside an application server. Default is "false".
      -->
      <attribute name="UseRegionBasedMarshalling">true</attribute>
   </mbean>
</server>