HashSet Interview Questions Related to How HashSet Works in Java

HashSet in Java is a collection that implements the Set interface and is backed by a HashMap. Since HashSet uses a HashMap internally, it provides constant-time performance for operations like add, remove, contains and size, provided the hash function distributes elements properly among the buckets. Java HashSet does not guarantee any insertion order, but it does allow a null element. HashSet can be used in place of ArrayList to store objects when you need no duplicates and don't care about insertion order. The Iterator of HashSet is fail-fast and throws ConcurrentModificationException if the HashSet is modified during iteration by any means other than the iterator's own remove() method. If you want to keep insertion order with a HashSet, consider using LinkedHashSet instead. HashSet is also a very important part of any Java collection interview; in short, a correct understanding of HashSet is a must for any Java developer.




Java HashSet Examples Tutorial

In this Java tutorial we will learn various examples of HashSet in Java and how to perform different operations on a HashSet with simple examples. There is also a very famous interview question based on HashSet, the difference between HashMap and HashSet in Java, which we discussed in an earlier post. You can also look at that to learn more about HashSet in Java.


How to create HashSet object in Java
Creating a HashSet is no different from creating any other Collection class. HashSet provides multiple constructors, which give you the flexibility to create a HashSet by copying objects from another collection, a standard way to convert an ArrayList to a HashSet. You can also specify the initial capacity and load factor to prevent unnecessary resizing of the HashSet.


Set<String> assetSet = new HashSet<String>(); // HashSet instance without any element
Set<String> fromArrayList = new HashSet<String>(Arrays.asList("Java", "C++")); // copying contents of another collection
Set<String> properSet = new HashSet<String>(50); // HashSet with initial capacity




How to store Object into HashSet
Storing objects, also called elements, into a HashSet is similar to other implementations of Set: the add() method of the Set interface is used to store an object into the HashSet. Since Set doesn't allow duplicates, if the HashSet already contains that object, the HashSet will not change and add() will return false in that case.


assetSet.add("I am first object in HashSet"); // add will return true
assetSet.add("I am first object in HashSet"); // add will return false as Set already contains it




How to check HashSet is empty
There are multiple ways to check if a HashSet is empty. A HashSet is called empty if it does not contain any element, i.e. its size is zero. You can get the size of a HashSet as shown in a further example and then see whether it is zero or not. Another way is to use the isEmpty() method, which returns true if the underlying Collection or HashSet is empty.


boolean isEmpty = assetSet.isEmpty(); //isEmpty() will return true if HashSet is empty


if(assetSet.size() == 0){
    System.out.println("HashSet is empty, does not contain any element");
}

How to remove objects from HashSet in Java
HashSet in Java has a nice little utility method called remove() to remove an object from the HashSet. remove() deletes the specified element from this Set and returns true if the Set contained the element, or false if it did not. You can also use the Iterator's remove() method for deleting objects while iterating over the Set.


assetSet.remove("I am first object in HashSet"); // remove will return true
assetSet.remove("I am first object in HashSet"); // remove will return false now


Iterator<String> setIterator = assetSet.iterator();
while (setIterator.hasNext()) {
    String item = setIterator.next();
    setIterator.remove(); // safely removes the current element during iteration
}




How to clear HashSet in Java
HashSet in Java has a clear() method which removes all elements from the HashSet, and by clearing the HashSet you can reuse it. The only caveat is that in a multi-threaded environment you need to be extra careful, because while one thread is clearing objects from the HashSet, another thread can be iterating over it.


assetSet.clear(); //clear Set, size of Set will be zero now


How to find size of HashSet in Java
The size of a HashSet is the number of objects stored in the collection. You can find it by calling the size() method of HashSet in Java. For an empty HashSet, size() returns zero.


int size = assetSet.size(); // count of object stored in HashSet




How to check if HashSet contains an object
Checking the existence of an object inside a HashSet in Java is not difficult; HashSet provides a utility method contains(Object o) for this very purpose. contains() returns true if the object exists in the collection, otherwise it returns false. By the way, contains() uses the equals() method (after locating the bucket via hashCode()) to compare objects in the HashSet. That's why it's important to override hashCode() and equals() in Java.


assetSet.contains("Does this object exists in HashSet"); //contains() will return false
assetSet.add("Does this object exists in HashSet"); //add will return true as its new object
assetSet.contains("Does this object exists in HashSet"); // now contains will return true
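To see contains() relying on equals() and hashCode() in action, here is a minimal sketch with a hypothetical Ticker class (the class name and field are invented for illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class ContainsDemo {
    // Hypothetical value class; equals() and hashCode() both use the id field
    static final class Ticker {
        final String id;
        Ticker(String id) { this.id = id; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Ticker && ((Ticker) o).id.equals(this.id);
        }

        @Override
        public int hashCode() {
            return id.hashCode();
        }
    }

    public static void main(String[] args) {
        Set<Ticker> tickers = new HashSet<>();
        tickers.add(new Ticker("GOOG"));

        // contains() finds a *different* instance because equals/hashCode match
        System.out.println(tickers.contains(new Ticker("GOOG"))); // prints true
        System.out.println(tickers.contains(new Ticker("MSFT"))); // prints false
    }
}
```

If Ticker did not override equals() and hashCode(), the second lookup with a new instance would return false even for the same id, because the default Object implementations compare identity.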

How to convert HashSet into array in Java
HashSet has a utility method called toArray(), defined in the Collection interface, which is used to convert a HashSet into an array in Java; see the following example. The no-argument toArray() returns an Object array.


Object[] hashsetArray = assetSet.toArray();
Set<String> stringSet = new HashSet<String>();
String[] strArray = stringSet.toArray(new String[0]); // typed variant needs an array argument


Since Java 1.5 the typed toArray(T[] a) method accepts a generic array parameter and returns the same type of elements stored in the HashSet. If the size of the supplied array is not sufficient, a new array with the runtime type of the HashSet's elements is created. If you want to convert a HashSet into an ArrayList, search on Javarevisited.




That's all on this Java HashSet tutorial. HashSet in Java is one of the frequently used Collection classes and can be very useful in scenarios where you need to store unique elements with quick retrieval. An important point to note about Java HashSet is that add(), remove(), contains() and size() are constant-time operations.

JUnit Concepts | JUnit tutorial with example code

JUnit 4 annotations are the single biggest change from JUnit 3 to JUnit 4, enabled by the annotations feature introduced in Java 5. With annotations in JUnit 4, creating and running a JUnit test becomes easier and more readable, but you can only take full advantage of JUnit 4 if you know the correct meaning of the JUnit 4 annotations and how to use them while writing JUnit tests. In this JUnit tutorial we will not only understand the meaning of those annotations but also see examples of JUnit 4 annotations. By the way, this is my first post on JUnit 4, but if you are new here you may also like the posts 10 tips to write better code comments and 10 Object oriented design principles for Programmer.

JUnit 4 Annotations : Overview
Following is a list of frequently used JUnit 4 annotations, which are available when you include junit4.jar in your classpath:

@Before
@BeforeClass
@After
@AfterClass
@Test
@Ignore
@Test(timeout=500)
@Test(expected=IllegalArgumentException.class)


@Before and @After
In JUnit 4 there are no mandatory setUp() or tearDown() methods; instead we have the @Before and @After annotations.
By using @Before you can make any method act as setUp(), and by using @After you can make any method act as tearDown(). The most important point to remember is that methods annotated with @Before and @After will be invoked before and after each test case. So if you have five test cases in your JUnit test file, then just like setUp() and tearDown(), the methods annotated with @Before and @After will each be called five times. Here is an example of using the @Before and @After annotations in JUnit 4:

    @Before
    public void setUp() {
        System.out.println("@Before method will execute before every JUnit4 test");
    }

    @After
    public void tearDown() {
        System.out.println("@After method will execute after every JUnit4 test");
    }


@BeforeClass and @AfterClass
The @BeforeClass and @AfterClass JUnit 4 annotations are similar to @Before and @After, with the exception that they are called once per test class and not once per test. They can be used as one-time setup and tearDown methods and to initialize class-level resources. Note that methods annotated with @BeforeClass and @AfterClass must be static. Here is an example of using the @BeforeClass and @AfterClass annotations in JUnit 4:

    @BeforeClass
    public static void setUpClass() throws Exception {
        System.out.println("@BeforeClass method will be executed before JUnit tests for"
                + " a class start");
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
        System.out.println("@AfterClass method will be executed after JUnit tests for"
                + " a class complete");
    }


@Test
@Test is a replacement for both the TestCase class and the convention of prefixing "test" to every test method. For example, to test a method called calculateInterest() we used to create a method testCalculateInterest(), and our class needed to extend org.junit.TestCase. With the @Test annotation that is no longer required: you just annotate your test method with @Test and you are done; no need to extend TestCase and no need to prefix "test" to your method name. Here is an example of the JUnit 4 @Test annotation:

    @Test
    public void testCalculateInterest() {
        System.out.println("calculateInterest");
        fail("An Example of @Test JUnit4 annotation");
    }


@Ignore
Sometimes we add a test method to a JUnit test class but haven't implemented it yet; that causes your build to fail if JUnit test cases are integrated into the build process. You can avoid that problem by marking your test method with @Ignore in JUnit 4. JUnit 4 skips methods annotated with @Ignore and doesn't run them during the test. Here is an example of using the @Ignore annotation in JUnit 4 to exclude a particular test from running:


    @Ignore("Not yet implemented")
    @Test
    public void testGetAmount() {
        System.out.println("getAmount");
        fail("@Ignore method will not run by JUnit4");
    }


@Test(timeout=500)
With JUnit 4, writing test cases based on a timeout is extremely easy. You just need to pass a timeout parameter, with a value in milliseconds, to the @Test annotation. Remember that timeout values are specified in milliseconds, and your JUnit 4 timeout test case will fail if it doesn't complete within the timeout period. This works great if you have an SLA (Service Level Agreement) and an operation needs to complete before a predefined timeout.

    @Test(timeout = 500)
    public void testTimeout() {
        System.out.println("@Test(timeout) can be used to enforce timeout in JUnit4 test case");
        while (true) {
            // never terminates; the test fails once the timeout elapses
        }
    }

This JUnit4 test will fail after 500 millisecond.

@Test(expected=IllegalArgumentException.class)
Another useful enhancement is exception-handling test cases in JUnit 4. Testing for exceptions has become very easy: you just specify the exception class inside the @Test annotation to check whether a method throws that particular exception or not. Here is an example which verifies whether a method throws the expected exception when run with invalid input:

    @Test(expected = IllegalArgumentException.class)
    public void testException() {
        System.out.println("@Test(expected) will check for specified exception during its run");
        throw new IllegalArgumentException("invalid input");
    }


That was a list of frequently used JUnit 4 annotations and their meanings. Along the way we have also learned how to use @Before and @After in place of setUp() and tearDown(). Code review and unit testing are among the best development practices to follow, and we must try our best to incorporate them into our daily coding and development cycle.

How HashMap works in Java | HashMap internal working

How HashMap works in Java, or sometimes how the get() method works in HashMap, is a common interview question nowadays. Almost everybody who has worked in Java knows what a HashMap is, where to use a HashMap and the difference between Hashtable and HashMap, so why does this interview question become so special? Because of the breadth and depth this question offers. It has become a very popular Java interview question in almost any senior or mid-senior level Java interview.

Questions start with a simple statement:

"Have you used HashMap before?" or "What is HashMap? Why do we use it?"

Almost everybody answers this with yes, and then the interviewee keeps talking about common facts about HashMap: HashMap accepts null while Hashtable doesn't, HashMap is not synchronized, HashMap is fast, and so on, along with basics like it stores key-value pairs.
This shows that the person has used HashMap and is quite familiar with the functionality it offers, but the interview takes a sharp turn from here, and the next set of follow-up questions gets more detailed about the fundamentals involved. The interviewer comes back with questions like:

"Do you know how HashMap works in Java?" or
"How does the get() method of HashMap work in Java?"

And then you get answers like: I don't bother, it's a standard Java API, you'd better look at the code; I can find it on Google any time; etc.
But some interviewees definitely answer this and will say: "HashMap works on the principle of hashing; we have put() and get() methods for storing and retrieving data from the HashMap. When we pass an object to put() to store it in the HashMap, the HashMap implementation calls the hashCode() method on the key object, and by applying that hashcode to its own hashing function it identifies a bucket location for storing the value object." The important part here is that HashMap stores both key and value in the bucket, which is essential to understand the retrieval logic. If people fail to recognize this and say it only stores the value in the bucket, they will fail to explain the retrieval logic of any object stored in a HashMap. This answer is very much acceptable and shows that the interviewee has a fair bit of knowledge of how hashing works and how HashMap works in Java.
But this is just the start of the story. The depth increases a little when you put the interviewee in scenarios Java developers face on a day-to-day basis. So the next question would more likely be about collision detection and collision resolution in Java HashMap, e.g.

"What will happen if two different objects have the same hashcode?"

Now from here the confusion starts. Sometimes interviewees will say that since the hashcodes are equal the objects are equal, so HashMap will throw an exception or not store it again, etc. Then you might want to remind them about the equals() and hashCode() contract: two unequal objects in Java can very much have equal hashcodes. Some will give up at this point, and some will move ahead and say: "Since the hashcode is the same, the bucket location will be the same and a collision occurs in the HashMap. Since HashMap uses a linked list to store entries in a bucket, the value object will be stored in the next node of the linked list." Great, this answer makes sense; though other collision resolution methods are available, this is the simplest, and HashMap does follow it.
But the story does not end here, and finally the interviewer asks:

"How will you retrieve the value if two different keys have the same hashcode?"

The interviewee will say: we will call get(), and then HashMap uses the key's hashcode to find the bucket location and retrieve the object. But then you need to remind him that there are two objects stored in the same bucket, so they will talk about traversing the linked list until we find the value object; then you ask how you identify the value object, because you don't have the value object to compare against. So unless they know that HashMap stores both key and value in each linked list node, they won't be able to resolve this issue and will try and fail.

But the people who remember this key information will say that after finding the bucket location, we will call keys.equals() to identify the correct node in the linked list and return the associated value object for that key in the Java HashMap. Perfect, this is the correct answer.

In many cases interviewees fail at this stage because they get confused between hashCode() and equals(), and between key and value objects in the HashMap, which is pretty natural because they were dealing with hashCode() in all the previous questions, and equals() comes into the picture only when retrieving the value object from the HashMap.
Some good developers point out here that using immutable, final objects with proper equals() and hashCode() implementations would make perfect Java HashMap keys and improve the performance of the HashMap by reducing collisions. Immutability also allows caching of the keys' hashcodes, which makes the overall retrieval process very fast; this suggests that String and the various wrapper classes, e.g. Integer, provided by the Java API are very good HashMap keys.

Now if you clear all these Java HashMap interview questions, you will be surprised by this very interesting one: "What happens in a Java HashMap if the size of the map exceeds a given threshold defined by the load factor?" Until you know exactly how HashMap works, you won't be able to answer this question.
If the size of the map exceeds a given threshold defined by the load factor, e.g. if the load factor is 0.75, the map will be resized once it is 75% full. Java HashMap does that by creating a new bucket array twice the size of the previous one and then putting every old element into that new bucket array. This process is called rehashing because it applies the hash function again to find the new bucket location.
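As a rough sketch of why rehashing moves elements: for a power-of-two table, a bucket index is typically derived as hash & (capacity - 1), so doubling the capacity can change the index. The indexFor() helper below is a simplified illustration of the idea, not the actual JDK source:

```java
public class RehashSketch {
    // Bucket index for a power-of-two table: equivalent to hash % capacity,
    // but computed with a cheap bitwise AND
    static int indexFor(int hash, int capacity) {
        return hash & (capacity - 1);
    }

    public static void main(String[] args) {
        int hash = 21; // an example hashcode
        int oldIndex = indexFor(hash, 16); // bucket before resize
        int newIndex = indexFor(hash, 32); // bucket after the table doubles
        System.out.println("old=" + oldIndex + " new=" + newIndex);
    }
}
```

For hash 21, the index is 5 in a 16-bucket table but 21 in a 32-bucket table: the extra high bit of the mask decides whether an entry stays put or moves during rehashing.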

If you manage to answer this question on HashMap in Java, you will be greeted by "do you see any problem with resizing of HashMap in Java?". You might not be able to pick up the context, and then the interviewer will give you a hint about multiple threads accessing the Java HashMap and a potential race condition on HashMap in Java.

So the answer is yes, there is a potential race condition while resizing a HashMap in Java: if two threads find at the same time that the HashMap needs resizing, they both try to resize it. In the process of resizing, the elements stored in a bucket's linked list get reversed in order during their migration to the new bucket, because Java HashMap doesn't append new elements at the tail; instead it appends them at the head, to avoid tail traversal. If the race condition happens, you can end up with an infinite loop. At this point you could reasonably argue to the interviewer: what on earth makes you use a HashMap in a multi-threaded environment? :)

I like this question because of its depth and the number of concepts it touches indirectly. If you look at the questions asked during the interview, this HashMap question has verified:
Concept of hashing
Collision resolution in HashMap
Use of the equals() and hashCode() methods and their importance
Benefits of immutable objects
Race condition on HashMap in Java
Resizing of Java HashMap

Just to summarize, here are the answers which do make sense for the above questions.

How HashMap works in Java
HashMap works on the principle of hashing; we have put() and get() methods for storing and retrieving objects from the HashMap. When we pass both key and value to put(), HashMap uses the key object's hashCode() method to calculate a hashcode, and by applying hashing to that hashcode it identifies the bucket location for storing the value object.
While retrieving, it uses the key object's equals() method to find the correct key-value pair and returns the value object associated with that key. HashMap uses a linked list in case of collision, and the colliding entry will be stored in the next node of the linked list.
Also, HashMap stores the key+value tuple in every node of the linked list.

What will happen if two different HashMap key objects have the same hashcode?
They will be stored in the same bucket, but in the next node of the linked list, and the keys' equals() method will be used to identify the correct key-value pair in the HashMap.
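This can be demonstrated with a hypothetical BadKey class (invented for illustration) whose hashCode() is deliberately constant, so every instance collides, yet both values are still retrievable because equals() picks the right node:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key class: constant hashCode() forces every
    // instance into the same bucket
    static final class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }

        @Override
        public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(this.name);
        }

        @Override
        public int hashCode() {
            return 42; // every BadKey collides
        }
    }

    public static void main(String[] args) {
        Map<BadKey, String> map = new HashMap<>();
        map.put(new BadKey("one"), "first");
        map.put(new BadKey("two"), "second");

        // Both entries share one bucket; equals() picks the right node on get()
        System.out.println(map.get(new BadKey("one"))); // prints first
        System.out.println(map.get(new BadKey("two"))); // prints second
    }
}
```

Of course, a constant hashCode() turns the HashMap into a linked list lookup, which is exactly why good hash distribution matters for performance.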

In terms of usage, HashMap is very versatile, and I have mostly used HashMap as a cache in the electronic trading applications I have worked on. Since the finance domain uses Java heavily, and due to performance reasons we need caching a lot, HashMap comes in very handy there.






JDBC Database Connection Pool in Spring 2.5 Framework – Code setup in Spring for connection pooling

Setting up a JDBC database connection pool in the Spring framework is easy for any Java application; it's just a matter of changing a few configurations in the Spring configuration file. If you are writing a core Java application not running on any web or application server like Tomcat or WebLogic, managing the database connection pool using Apache Commons DBCP and Commons Pool along with the Spring framework is a nice choice. But if you have the luxury of a web server and a managed J2EE container, consider using a connection pool managed by the J2EE server; those are a better option in terms of maintenance and flexibility, and they also help prevent java.lang.OutOfMemoryError: PermGen space in Tomcat by avoiding loading the JDBC driver in the web-app class loader. Also, keeping the JDBC connection pool information in the server makes it easy to change or include settings for JDBC over SSL. In this article we will see how to set up a database connection pool in the Spring framework using Apache Commons DBCP and commons-pool.jar.

This article is in continuation of my tutorials on the Spring framework and databases, like LDAP Authentication in J2EE with Spring Security and manage session using Spring security. If you haven't read those articles, you may find them useful.

Spring Example JDBC Database Connection Pool
The Spring framework provides the convenient JdbcTemplate class for performing all database-related operations. If you are not using Hibernate, then Spring's JdbcTemplate is a good option. JdbcTemplate requires a DataSource, which is a javax.sql.DataSource implementation; you can configure it directly as a Spring bean or obtain it via JNDI if you are using a J2EE web server or application server for managing the connection pool. See How to setup JDBC connection Pool in tomcat and Spring for JNDI based connection pooling for more details. In order to set up the DataSource you will need the following configuration in your applicationContext.xml (Spring configuration) file:
 
<!-- DataSource connection settings in Spring -->
<bean id="springDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" >
   <property name="url" value="jdbc:oracle:thin:@localhost:1521:SPRING_TEST" />
   <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
   <property name="username" value="root" />
   <property name="password" value="root" />
   <property name="removeAbandoned" value="true"/>
   <property name="initialSize" value="20" />
   <property name="maxActive" value="30" />
</bean>

<!-- DAO class configuration in Spring -->
 <bean id="EmployeeDatabaseBean" class="com.test.EmployeeDAOImpl">
    <property name="dataSource" ref="springDataSource"/>
 </bean>


The above DBCP connection pool configuration will create 20 database connections, as initialSize is 20, and go up to 30 database connections if required, as maxActive is 30. You can customize your connection pool by using the different properties provided by the Apache DBCP library. The example creates a connection pool against an Oracle 11g database using oracle.jdbc.driver.OracleDriver, which comes with ojdbc6.jar or ojdbc6_g.jar. To learn more about how to connect to an Oracle database from a Java program, see the link.
Java Code for using Connection Pool in Spring
Below is a complete code example of a DAO class which uses Spring's JdbcTemplate to execute a SELECT query against the database using a connection from the connection pool. If you do not initialize the connection pool on start-up, the first query may take a while because the pool needs to create a certain number of SQL connections before executing it, but once the pool is created, subsequent queries execute faster.

//Code for DAO Class using Spring JdbcTemplate
package com.test;

import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.JdbcTemplate;

/**
 * Java Program example to use DBCP connection pool with Spring framework
 * @author Javin Paul
 */
public class EmployeeDAOImpl implements EmployeeDAO {

    private Logger logger = LoggerFactory.getLogger(EmployeeDAOImpl.class);
    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Override
    public boolean isEmployeeExists(String emp_id) {
        try {
            logger.debug("Checking Employee in EMP table using Spring Jdbc Template");
            int number = this.jdbcTemplate.queryForInt("select count(*) from EMP where emp_id=?", emp_id);
            if (number > 0) {
                return true;
            }
        } catch (Exception exception) {
            exception.printStackTrace();
        }
        return false;
    }
}


Dependencies:
1. You need to include the Oracle driver jar, e.g. ojdbc6.jar, in your classpath.
2. The Apache DBCP and commons-pool jars must be in the application classpath.

That's all on how to configure a JDBC database connection pool in the Spring framework.

Apply Auto-Refresh Functionality in JSP | Implement automatic refresh of a JSP page after a given time

Hi everyone, in my project there was a requirement to automatically refresh a JSP page every 5 seconds.
I have applied the below logic in the JSP page:

Code :

<%
 // Set refresh/auto-load time to 5 seconds
 response.setIntHeader("Refresh", 5);

 // Get the current time, to verify on screen that the page refreshes
 Calendar calendar = new GregorianCalendar();
 String am_pm;
 int hour = calendar.get(Calendar.HOUR);
 int minute = calendar.get(Calendar.MINUTE);
 int second = calendar.get(Calendar.SECOND);
 if (calendar.get(Calendar.AM_PM) == 0)
     am_pm = "AM";
 else
     am_pm = "PM";
 String currentTime = hour + ":" + minute + ":" + second + " " + am_pm;
 out.println("Current Time: " + currentTime + "\n");
%>

Difference between Application Server and Web Server

Application Server vs Web Server

1. An application server supports distributed transactions and EJB, while a web server only supports Servlets and JSP.

2. An application server can contain a web server; most app servers, e.g. JBoss or WAS, have a Servlet and JSP container.

3. Though it's not limited to application servers, they traditionally provide services like connection pooling, transaction management, messaging, clustering, load balancing and persistence. Nowadays Apache Tomcat also provides connection pooling.

4. In terms of the logical difference between a web server and an application server: a web server is supposed to provide HTTP protocol-level services, while an application server supports web services and exposes business-level services, e.g. EJB.

5. Application servers are heavier than web servers in terms of resource utilization.

Use ConcurrentHashMap instead of Hashtable & synchronizedMap | JDK 1.5 new feature: ConcurrentHashMap

Use ConcurrentHashMap instead of Hashtable & synchronizedMap
Some of the drawbacks of synchronized collections such as Hashtable and Collections.synchronizedMap are as follows.

Synchronized collection classes such as Hashtable, and the synchronized wrapper classes created by Collections.synchronizedMap, are thread-safe but have poor concurrency, performance and scalability.

1. Poor concurrency: When these collections are accessed by two or more threads, they achieve thread safety by making the collection's data private and synchronizing all public methods, so that only one thread at a time can access the collection's (Hashtable / synchronizedMap) data. This leads to poor concurrency: as a single lock is used for the whole collection, multiple threads contend for the collection-wide lock, which reduces performance.

2. ConcurrentModificationException:
When one thread is traversing the Hashtable / Collections.synchronizedMap through an Iterator while another thread changes it through mutative operations (put, remove, etc.), the iterator implemented in the java.util collection classes fails by throwing ConcurrentModificationException. The exception occurs when the hasNext() or next() method of the Iterator is called. The same error also occurs (see Code Part 1) when elements are added to the Hashtable or synchronizedMap after the iterator is constructed. While iterating the collection through the iterator, collection-wide locking is required; otherwise ConcurrentModificationException occurs.

3. Scalability issues:
Scalability is a major issue when we use synchronized collections. When the workload of the application increases, increasing resources like processors and memory should also increase the throughput of the application; unfortunately, here it does not. A scalable program can handle a proportionally larger workload with more resources, but as synchronized collections synchronize on a single common lock, access is restricted to a single thread at a time, and other threads are blocked from accessing the collection even if resources are available to schedule them.


4. Some common compound operations, such as put-if-absent (checking whether an element is in the collection before adding it) or iteration, require external synchronization, i.e. client-side locking (see Code Part 3), to avoid data races.
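To illustrate the put-if-absent point: with a synchronized wrapper, the check-then-act sequence needs client-side locking, whereas ConcurrentHashMap exposes the same compound action as a single atomic putIfAbsent() call. A minimal sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        // With Collections.synchronizedMap, put-if-absent needs client-side locking:
        //
        //   synchronized (syncMap) {                      // lock the whole map
        //       if (!syncMap.containsKey(key)) {          // check
        //           syncMap.put(key, value);              // then act
        //       }
        //   }
        //
        // Without the synchronized block, another thread can put between
        // the containsKey() check and the put() call.

        // ConcurrentHashMap makes the compound action atomic:
        Map<String, String> map = new ConcurrentHashMap<>();
        map.putIfAbsent("color", "Blue");
        map.putIfAbsent("color", "Green"); // no effect, key already present
        System.out.println(map.get("color")); // prints Blue
    }
}
```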


Code Part 1:

// Map hm = Collections.synchronizedMap(new HashMap());
Map hm = new Hashtable();
// ConcurrentHashMap hm = new ConcurrentHashMap();

hm.put(1, "Blue");
hm.put(2, "Green");
hm.put(3, "Yellow");

Iterator entries = hm.entrySet().iterator();

// adding elements after the iterator is constructed
hm.put(4, "Red");
hm.put(5, "Orange");

while (entries.hasNext()) {
    Map.Entry entry = (Map.Entry) entries.next();
    Integer key = (Integer) entry.getKey();
    String value = (String) entry.getValue();
    System.out.println("Key = " + key + ", Value = " + value); // throws ConcurrentModificationException
}
To overcome the above issues with the synchronized collections, a new version of HashMap with concurrent access has been designed: ConcurrentHashMap. This class is packaged in java.util.concurrent in JDK 1.5.

The main purposes of ConcurrentHashMap are to provide

1. better concurrency
2. high scalability
3. thread safety

and it supports

1. Full concurrency of retrievals: all readers can read the table concurrently; no lock is used for retrieval operations.

2. Concurrency for writes: a limited number of writers can update the table concurrently.

3. Full thread safety.

ConcurrentHashMap can be used where read operations dominate (i.e. traversal is the dominant operation).

How a ConcurrentHashMap is implemented ? or How it works?  or how concurrency is achieved?

          Volatile fields  and  lock striping  plays major role for  to achieve concurrency .
Lock striping :   Synchronizing  every method  on  a  single  lock,  restricts access  to  a  single  thread at a  time.   Instead of using single lock ,  ConcurrentHashMap  uses  different  locking mechanism  called lock  striping  to access the shared collection  concurrently which increases the scalabilty and performance .     Using different locks to allow different  threads to operate  on different portions of the same data structure  called lock striping. Splitting the lock into more  than one  improves the scalability .  For example two locks allow two threads to execute concurrently instead of one.

    Lock splitting can sometimes be extended to partition locking on a variablesized set of independent objects, in which case it is called lock striping.    


Now let us see how the lock striping mechanism is applied to ConcurrentHashMap. The strategy is to subdivide the collection (hashtable) into independent subsets called segments, each guarded by its own lock, so that each subset (itself a hashtable) can be accessed concurrently. By default it uses an array of 16 locks, each of which guards 1/16 of the hash buckets: for a hashtable with N hash buckets, bucket n is guarded by lock n mod 16. Sixteen locks allow a maximum of 16 threads to modify the hashtable at the same time. Mutative operations such as put() and remove() use locks, whereas read operations do not.


Note: the number of locks can be increased by passing a higher concurrencyLevel to the ConcurrentHashMap constructor.
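The idea can be sketched with a minimal striped table. This is only an illustration of the locking scheme under the assumptions above (the class and field names here are invented for the example; it is not the real ConcurrentHashMap source):

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of lock striping: a fixed-size table where bucket n is
// guarded by lock n mod 16, so writers to different stripes never block
// each other. Illustration only, not how java.util.concurrent is written.
public class StripedTable {
    private static final int STRIPES = 16;
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final Object[] buckets;

    public StripedTable(int capacity) {
        buckets = new Object[capacity];
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    private ReentrantLock lockFor(int bucket) {
        return locks[bucket % STRIPES];   // bucket n -> lock n mod 16
    }

    public void put(int hash, Object value) {
        int bucket = Math.abs(hash % buckets.length);
        ReentrantLock lock = lockFor(bucket);
        lock.lock();                      // blocks only this stripe, not the table
        try {
            buckets[bucket] = value;
        } finally {
            lock.unlock();
        }
    }

    public Object get(int hash) {
        int bucket = Math.abs(hash % buckets.length);
        return buckets[bucket];           // reads shown lock-free, for brevity
    }
}
```

With two stripes locked by two different threads, both writes proceed at the same time; a single table-wide lock would serialize them.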


Volatile fields: some of the volatile fields declared in ConcurrentHashMap are:


transient volatile int count;

static final class HashEntry<K,V> {
    final K key;
    final int hash;
    volatile V value;
    volatile HashEntry<K,V> next;

    HashEntry(K key, int hash, HashEntry<K,V> next, V value) {
        .....
    }
}

transient volatile HashEntry<K,V>[] table;

From the source of ConcurrentHashMap


As we know, a volatile field ensures visibility, i.e. one thread reads the most up-to-date value written by another thread. For example, count is a volatile field used to track the number of elements. When one thread adds an element to the table, count is increased by one; similarly, when one thread removes an element, count is decreased by one. Other threads performing read operations then see the most recently updated value of count.
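Volatile visibility can be demonstrated with a small sketch (the class, method, and field names are invented for this example): the reader thread is guaranteed to eventually observe the volatile write, and the happens-before rule makes the ordinary write to payload visible along with it.

```java
// Sketch of volatile visibility: the reader spins on a volatile flag and
// is guaranteed to eventually see the writer's update; the ordinary write
// to payload made before the volatile write is also guaranteed visible.
public class VolatileVisibility {
    static volatile boolean done = false;
    static int payload = 0;

    static int runDemo() throws InterruptedException {
        final int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!done) {
                // spin until the volatile write becomes visible
            }
            seen[0] = payload;   // happens-before: sees 42, not a stale 0
        });
        reader.start();

        payload = 42;            // ordinary write ...
        done = true;             // ... published by the volatile write to done
        reader.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader saw payload = " + runDemo());  // prints 42
    }
}
```

Without volatile on done, the reader could spin forever on a cached false, and even on exit could observe a stale payload.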


Similarly, the HashEntry<K,V>[] table, value, and next fields are declared volatile. This ensures that all threads see the most recently written value of those fields at all times.


When iterating the collection (hashtable) through an iterator, ConcurrentHashMap does not throw ConcurrentModificationException; elements added or removed after the iterator was constructed may or may not be reflected. No collection/table-wide locking is required while iterating.
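This weakly consistent behaviour can be sketched as follows (class and method names invented for the example):

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

// A ConcurrentHashMap iterator is weakly consistent: modifying the map
// after the iterator is created does not throw
// ConcurrentModificationException, unlike a plain HashMap iterator.
public class WeaklyConsistentDemo {

    static int iterateWhileModifying() {
        ConcurrentHashMap<Integer, String> map =
                new ConcurrentHashMap<Integer, String>();
        map.put(1, "Green");
        map.put(2, "Blue");

        Iterator<Integer> keys = map.keySet().iterator();
        map.put(3, "Yellow");   // added after the iterator was constructed
        int seen = 0;
        while (keys.hasNext()) {
            keys.next();        // no ConcurrentModificationException here
            seen++;             // key 3 may or may not be reflected
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println("saw " + iterateWhileModifying()
                + " keys without any exception");
    }
}
```

The same sequence on a HashMap would throw ConcurrentModificationException at the first call to next().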
Issue: how to protect/lock the entire collection? There is no support for locking the entire table in a way that prevents all access. One way is to acquire all of the segment locks recursively, which is costlier than using a single lock.
ConcurrentHashMap provides three atomic update methods:

 putIfAbsent(key, value) - add the key and associate it with the given value only if the key is not already present in the collection.
 replace(key, value) - replace the entry for the key only if the key is currently mapped to some value.
 remove(key, value) - remove the key only if the key is currently mapped to the given value.
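These three methods can be exercised with a short example:

```java
import java.util.concurrent.ConcurrentHashMap;

// Demonstrates ConcurrentHashMap's atomic check-then-act update methods,
// which replace the client-side locking needed with Hashtable.
public class AtomicUpdateDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map =
                new ConcurrentHashMap<String, String>();

        map.putIfAbsent("color", "green");             // key absent: inserted
        map.putIfAbsent("color", "blue");              // key present: no change
        System.out.println(map.get("color"));          // green

        map.replace("color", "red");                   // key mapped: replaced
        map.replace("shape", "circle");                // key absent: no-op
        System.out.println(map.get("color"));          // red
        System.out.println(map.containsKey("shape"));  // false

        map.remove("color", "blue");                   // value mismatch: kept
        map.remove("color", "red");                    // value matches: removed
        System.out.println(map.containsKey("color"));  // false
    }
}
```

Each call is a single atomic operation inside the map, so no external synchronized block is needed around the check and the update.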


The following program uses a ConcurrentHashMap to keep accessed files in a cache.

Code Part 2:



import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.io.*;

public class CacheUsingMap2 {

    ConcurrentHashMap<String, myFile> cache;

    public CacheUsingMap2() {
        cache = new ConcurrentHashMap<String, myFile>();
    }

    public String getFile2(String fname) {
        // Note: readFile() is evaluated before putIfAbsent() runs, so the
        // file is still read from disk even when the key is already cached.
        cache.putIfAbsent(fname, readFile(fname));
        return cache.get(fname).getFileData();
    }

    public myFile readFile(String name) {
        File file = new File(name);
        String fileData = "";
        try {
            Scanner scan = new Scanner(file);
            scan.useDelimiter("\\Z");   // read the whole file as one token
            fileData = scan.next();
            scan.close();
        } catch (FileNotFoundException e) {
            System.out.println(e);
        }
        return new myFile(fileData);
    }

    public static void main(String args[]) {
        CacheUsingMap2 cache = new CacheUsingMap2();

        String filePath = "D:/Files/";
        System.out.println(cache.getFile2(filePath + "k.txt"));
        System.out.println(cache.getFile2(filePath + "k1.txt"));
        System.out.println(cache.getFile2(filePath + "k.txt"));
        System.out.println(cache.getFile2(filePath + "k1.txt"));
    }
}

class myFile {

    String fileData;

    public myFile(String data) {
        fileData = data;
    }

    public String getFileData() {
        return fileData;
    }
}



Code Part 3:

Sample code to create a cache using Hashtable (implementing a put-if-absent operation), which requires client-side locking:




....
Hashtable cache = new Hashtable();
....

public String getFile(String fname) {
    // Client-side locking: the containsKey check and the put must form one
    // atomic unit, so both are guarded by the same lock (the cache itself).
    // Checking outside the synchronized block would reintroduce the race.
    synchronized (cache) {
        if (!cache.containsKey(fname)) {
            cache.put(fname, readFile(fname));
        }
    }
    return ((myFile) cache.get(fname)).getFileData();
}