Tuesday, November 29, 2011

Practical Guide to Last Minute Java and JEE Technology interview questions

This is an eBook covering interview questions and answers on various Java and JEE technology topics. The book is not an exhaustive reference but serves as a refresher on various topics before a Java interview. If you have more questions to share, please post them as comments and I will add them to the 2nd edition of the book.

The eBook is available at
Practical Guide to Last Minute Java and JEE Technology interview questions

Originally, I wanted the book to be available for free, but Kindle books cannot be offered for free, so I have kept the price very low.

Happy reading !!

Thanks
Tejas

Sunday, October 30, 2011

Practical Guide to Building Search Application Using Apache Solr : A short introduction

I have published my second book, on Apache Solr. The book is a short introduction with live examples on working with Apache Solr.

Apache Solr is an enterprise-grade, fast search engine with full text search capabilities. It can be integrated with a database, provides a faceted query interface, and supports clustering. In this tutorial, we build a fictitious search application to retrieve medical patient records. Solr itself is a very big topic, and not all Solr features are covered in this book.
The book covers the SolrJ library, Java code to interface with Solr, Solr configuration and installation on Tomcat, and building Java code to perform search operations using Solr.
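To give a flavor of the SolrJ material, here is a minimal sketch of querying Solr from Java. It is illustrative only: it assumes a Solr instance running on Tomcat at http://localhost:8080/solr, and the index fields used (id, name) are hypothetical names for this example.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class PatientSearch {
    public static void main(String[] args) throws Exception {
        // Assumes Solr is deployed on Tomcat at this URL (adjust as needed)
        CommonsHttpSolrServer solr =
                new CommonsHttpSolrServer("http://localhost:8080/solr");

        // "name" is a hypothetical field in the patient index
        SolrQuery query = new SolrQuery("name:john");
        query.setRows(10);

        QueryResponse response = solr.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("id") + " : "
                    + doc.getFieldValue("name"));
        }
    }
}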
The book is aimed at developers, architects and technical managers. It is a short, quick guide and provides a practical example of building a search interface.
Here's the link to EBook - http://www.amazon.com/Practical-Building-Search-Applications-ebook/dp/B0060U2QCY/ref=sr_1_5?s=books&ie=UTF8&qid=1319967851&sr=1-5

Thanks
Tejas

Tuesday, August 16, 2011

First book published

Hi

I have published my first book. The title of the book is Practical Guide to Java EE and Amazon EC2 Development. This is an eBook and is available for download from amazon.com.

The book covers the steps to deploy the Oracle Pet Store catalog web application on the Amazon AWS cloud platform.

I am planning to publish a second part of this book, which will cover more advanced topics on the Amazon AWS platform.

Thanks
Tejas

Monday, May 16, 2011

Introduction to Struts 2 configuration on Tomcat

Struts 2 is configured quite differently from Struts 1. This blog demonstrates the steps required to configure Struts 2 on Tomcat.
These are the steps:
(1)    Install Sun JDK 1.6
(2)    Install Apache Tomcat 6.0.32
(3)    Download struts-2.2.1.1-apps.zip from the Struts web site.
(4)    The archive contains the struts2-blank.war file, an empty WAR with the Struts 2 configuration already in place. We will use it to configure the Struts 2 web application.
(5)    Copy struts2-blank.war file to the webapps folder of the Tomcat server.
(6)    Start the Tomcat server.
(7)    Type the URL on the web browser - http://localhost:8080/struts2-blank/example/HelloWorld.jsp
(8)    You should see the welcome screen of the application, which means that the application is working fine.


Now, let’s look at the various configuration elements that go into configuring a Struts 2 application.



Web.xml
The web.xml file configures the StrutsPrepareAndExecuteFilter class (the successor to the older FilterDispatcher), which handles all requests to the Struts 2 framework. The class is configured as a servlet filter and maps all application requests to Struts 2.

<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_9" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

    <display-name>Struts Blank</display-name>

    <filter>
        <filter-name>struts2</filter-name>
        <filter-class>org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter</filter-class>
    </filter>

    <filter-mapping>
        <filter-name>struts2</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>

    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
    </welcome-file-list>

</web-app>



Struts.xml
The MVC mappings are declared in the struts.xml file. In the source tree this file lives under src/java; in the deployed application it must be on the classpath, i.e. under WEB-INF/classes.

    <package name="default" namespace="/" extends="struts-default">
        <default-action-ref name="index" />
        <action name="index">
            <result type="redirectAction">
                <param name="actionName">HelloWorld</param>
                <param name="namespace">/example</param>
            </result>
        </action>
    </package>

    <package name="example" namespace="/example" extends="default">
        <action name="HelloWorld" class="example.HelloWorld">
            <result>/example/HelloWorld.jsp</result>
        </action>
    </package>



HelloWorld.jsp
The following code snippet shows the use of the Struts tag library to create the view.

<%@ page contentType="text/html; charset=UTF-8" %>
<%@ taglib prefix="s" uri="/struts-tags" %>
<html>
<head>
    <title><s:text name="HelloWorld.message"/></title>
</head>

<body>
<h2><s:property value="message"/></h2>

<h3>Languages</h3>
<ul>
    <li>
        <s:url id="url" action="HelloWorld">
            <s:param name="request_locale">en</s:param>
        </s:url>
        <s:a href="%{url}">English</s:a>
    </li>
    <li>
        <s:url id="url" action="HelloWorld">
            <s:param name="request_locale">es</s:param>
        </s:url>
        <s:a href="%{url}">Espanol</s:a>
    </li>
</ul>

</body>
</html>



HelloWorld.java
The execute method is called to process the request. This method generally calls the business POJO classes and fetches/updates data in the database. The string returned by execute then determines which view is rendered.

public class HelloWorld extends ExampleSupport {

    public String execute() throws Exception {
        setMessage(getText(MESSAGE));
        return SUCCESS;
    }

    /**
     * Provide default value for Message property.
     */
    public static final String MESSAGE = "HelloWorld.message";

    /**
     * Field for Message property.
     */
    private String message;

    /**
     * Return Message property.
     *
     * @return Message property
     */
    public String getMessage() {
        return message;
    }

    /**
     * Set Message property.
     *
     * @param message Text to display on HelloWorld page.
     */
    public void setMessage(String message) {
        this.message = message;
    }
}
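
HelloWorld extends ExampleSupport, which is not shown above. In the blank application it is essentially a thin subclass of Struts 2's ActionSupport, which supplies getText() and the SUCCESS constant; a minimal sketch is:

package example;

import com.opensymphony.xwork2.ActionSupport;

/**
 * Base action class for the example package; inherits
 * i18n support (getText) and result constants (SUCCESS)
 * from ActionSupport.
 */
public class ExampleSupport extends ActionSupport {
}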


The view to be rendered is again the same page, HelloWorld.jsp, now called with a different message for the selected locale. The screenshot of the HelloWorld example shows the result.

The above walkthrough demonstrates how easy it is to set up the Struts 2 MVC web framework, and also that the setup is quite different from configuring Struts 1.

Wednesday, May 4, 2011

Setting up Amazon Elastic Load Balancer

Amazon ELB is a very convenient and easy-to-set-up component for Enterprise Web Applications: it balances incoming web request load across an available pool of instances. In addition, Amazon ELB detects the health of the instances in the pool and routes requests only to the healthy ones.
ELB also supports session stickiness, so that requests from the same client are served by the same instance in the pool. An additional use of ELB is to map requests from HTTP port 80 to HTTP port 8080 of the application or web servers running on the EC2 instances.

How to setup ELB?
Step 1: Click the Create New Load Balancer button.

Step 2: Select the application path on the application server to which the ELB will send ping requests to check the health of the instances.

Step 3: Select the instances to be added to the pool available for load balancing.

Step 4: Confirm the details and click the Create button.
This creates the ELB for the pool of EC2 instances. You can now send requests to the ELB on HTTP port 80 from your web browser's address bar; the ELB forwards each request to one of the EC2 instances, and the response is rendered in the browser.
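
The same setup can also be scripted with the AWS SDK for Java. The sketch below is illustrative only; the load balancer name, availability zone, health check target and instance id are placeholder values:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancingClient;
import com.amazonaws.services.elasticloadbalancing.model.*;

public class CreateElb {
    public static void main(String[] args) {
        AmazonElasticLoadBalancingClient elb = new AmazonElasticLoadBalancingClient(
                new BasicAWSCredentials("accessKey", "secretKey"));

        // Map HTTP port 80 on the ELB to port 8080 on the instances
        elb.createLoadBalancer(new CreateLoadBalancerRequest()
                .withLoadBalancerName("my-elb")
                .withListeners(new Listener("HTTP", 80, 8080))
                .withAvailabilityZones("us-east-1a"));

        // Health check: ping an application path every 30 seconds
        elb.configureHealthCheck(new ConfigureHealthCheckRequest()
                .withLoadBalancerName("my-elb")
                .withHealthCheck(new HealthCheck()
                        .withTarget("HTTP:8080/index.jsp")
                        .withInterval(30)
                        .withTimeout(5)
                        .withUnhealthyThreshold(2)
                        .withHealthyThreshold(3)));

        // Register EC2 instances into the pool
        elb.registerInstancesWithLoadBalancer(new RegisterInstancesWithLoadBalancerRequest()
                .withLoadBalancerName("my-elb")
                .withInstances(new Instance("i-12345678")));
    }
}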

For more details of Amazon Elastic Load Balancer, visit here - http://aws.amazon.com/elasticloadbalancing/

Tuesday, May 3, 2011

Using Amazon CloudWatch Webservice

Amazon AWS launched Amazon CloudWatch some time back, and recently I used it on one of the projects I am working on.
Amazon CloudWatch is a web service that provides monitoring and alerts for Amazon AWS resources. It also provides operational metrics on Amazon EC2 resources like elastic load balancers, instances, etc. The operational metrics include parameters like CPU utilization, disk reads/writes, and network traffic. Please note that there is a charge for using this service.

It is very easy to set up the Amazon CloudWatch web service. Primarily, I found two uses of CloudWatch:
·         Trigger email generation when a certain metric exceeds a certain value
·         Auto-scale the instances up or down when a certain metric exceeds a certain value
The metrics mentioned in the above 2 points can be any of the operational metrics.

How to set up?
Here’s how to set up email generation using Amazon CloudWatch:
Step 1: Select the metric; in this case I selected CPU utilization.
Step 2: Set up the alarm and the email address.
That’s it !!
The alarm is now set up using the Amazon CloudWatch web service. When the CPU utilization of the selected instance stays above the threshold for a 5-minute period, an email is sent to the configured email address.
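
The same alarm can also be created programmatically with the AWS SDK for Java. The sketch below is illustrative only; the alarm name, instance id, 80% threshold and SNS topic ARN are placeholder values:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;

public class CpuAlarm {
    public static void main(String[] args) {
        AmazonCloudWatchClient cloudWatch = new AmazonCloudWatchClient(
                new BasicAWSCredentials("accessKey", "secretKey"));

        // Alarm when average CPU utilization of the instance exceeds
        // 80% over one 5-minute (300 second) evaluation period
        cloudWatch.putMetricAlarm(new PutMetricAlarmRequest()
                .withAlarmName("high-cpu-alarm")
                .withNamespace("AWS/EC2")
                .withMetricName("CPUUtilization")
                .withDimensions(new Dimension()
                        .withName("InstanceId").withValue("i-12345678"))
                .withStatistic("Average")
                .withPeriod(300)
                .withEvaluationPeriods(1)
                .withThreshold(80.0)
                .withComparisonOperator("GreaterThanThreshold")
                // SNS topic whose subscribers receive the email
                .withAlarmActions("arn:aws:sns:us-east-1:123456789012:my-topic"));
    }
}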

For more information you can look at the link - http://aws.amazon.com/cloudwatch/

Thursday, March 31, 2011

Manage Local Maven Dependency

Recently, I came across an issue while running a Maven build. I am using the Hibernate Shards project in my product, and updated pom.xml with the following:


            <dependency>
                  <groupId>hibernate</groupId>
                  <artifactId>shards</artifactId>
                  <version>3.0.0B2</version>
            </dependency>


When I ran the build with the above XML snippet, Maven was not able to find the Hibernate Shards jar file in the Maven repository and could not download it.
I searched the Maven repository for the Hibernate Shards file, but could not find it. The Maven repository is available here - http://mvnrepository.com/

Solution
When a pom.xml file is built using the “mvn install” command, Maven fetches the required jar files from the Maven repository on the internet as part of the build. These jar files are stored in a folder called “.m2” under the user’s home folder.
In this case, since the Hibernate Shards jar is not available in the Maven repository, we have to run a Maven command manually to install the jar file into the “.m2” folder. Here’s the command:

mvn install:install-file -Dfile=c:\Downloads\hibernate-shards-3.0.0.beta2.jar -DgroupId=hibernate -DartifactId=shards -Dversion=3.0.0B2 -Dpackaging=jar -DgeneratePom=true


Now when the build is run with the “mvn install” command, it completes successfully, since the local “.m2” repository is checked for the Hibernate Shards jar file.

Tuesday, March 15, 2011

Hadoop Compute Cluster: Summary

What is Hadoop?
Hadoop is inspired by Google’s architecture – MapReduce and the Google File System. It is a top-level Apache project and is written entirely in Java.

Advantages of using Hadoop
Hadoop compute clusters are built on cheap commodity hardware, and Hadoop automatically handles node failures and data replication. It is a good framework for building batch data processing systems, and it provides an API and framework implementation for working with MapReduce; the MapReduce implementation sits on top of the Hadoop job infrastructure.
The Hadoop job infrastructure can manage and process HUGE amounts of data, in the range of petabytes.
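
To give a flavor of the MapReduce API mentioned above, here is a minimal word-count mapper. This is the standard illustrative example, not taken from any particular project, and it assumes the org.apache.hadoop.mapreduce API:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits (word, 1) for every word in each input line; the framework
// groups the pairs by word, and a reducer sums the counts.
public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}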

When is Hadoop a good choice?
Hadoop is a good choice for building batch processing systems that process huge amounts of unstructured data. To use Hadoop effectively, the system should be able to process the data in parallel. Hadoop is also a definite advantage when you want to use cheap hardware and scale the cluster horizontally.

When is Hadoop NOT a good choice?
Hadoop is not a good choice for systems that carry out intense calculations with little or no data, or for systems whose processing cannot easily be made parallel. Also, since Hadoop is a batch processing system, there is a lot of latency between request and response, so it is not suitable for interactive systems.

Hadoop Eco-system
Several projects are supporting Hadoop Eco-system:
·         Hadoop Common – The common utilities that support the other Hadoop sub-projects
·         HDFS – A distributed file system that provides high-throughput access to application data
·         MapReduce – A software framework for distributed processing of large data sets on compute clusters
·         Avro – A data serialization system
·         Chukwa – A data collection system for managing large distributed systems
·         HBase – A scalable, distributed database that supports structured data storage for large tables
·         Hive – A data warehouse infrastructure that provides data summarization and ad hoc querying
·         Mahout – A scalable machine learning and data mining library
·         Pig – A high-level data-flow language and execution framework for parallel computation
·         ZooKeeper – A high-performance coordination service for distributed applications

Hadoop Distributed File System (HDFS):
HDFS is the distributed file system used internally by the Hadoop framework for storing, transferring and replicating data across the nodes of a Hadoop cluster.

Hadoop copies each file’s data to multiple nodes, which allows node failures to occur without data loss.



Hadoop Architecture Overview

A Hadoop cluster consists of four kinds of daemons: one Name node, many Data nodes, one Job Tracker and many Task Trackers.
There is one Name node per cluster. This poses a high risk, since it is a single point of failure at the hardware level; the risk can be reduced by writing the Name node’s metadata to multiple file systems for redundancy. The Name node manages the file system namespace and metadata.
There are many Data nodes within a cluster. They manage the data blocks, which represent fragments of files on HDFS. Each block is replicated across data nodes (3 copies by default), which protects against failures.
There is exactly one Job Tracker per cluster. Clients submit job requests to the Job Tracker, which schedules and monitors MapReduce jobs on the Task Trackers.
There are typically many Task Trackers. They are responsible for executing map and reduce operations, and for reading and writing the input and output data of MapReduce jobs.

Hadoop modes of operation
Hadoop operates in three modes:
·         Stand-alone
·         Pseudo-distributed
·         Fully-distributed (cluster)

The stand-alone and pseudo-distributed modes are development modes, used by development teams to build and test MapReduce jobs.
Stand-alone mode does not use HDFS but the local operating system’s file system; Hadoop and all application code run inside a single Java process on a single machine.
Pseudo-distributed mode runs the Name node, Data node, Job Tracker and Task Tracker as separate processes, with the HDFS file system in use, again on a single machine.
The third mode is the fully-distributed cluster: HDFS is used over a cluster of machines, reads and writes are balanced across the cluster nodes, and the Name node, Data node, Task Tracker and Job Tracker daemons run across the cluster.

Conclusion
Hadoop is a leading open source framework for workloads that require distributed parallel processing. Many companies support Hadoop development, including Apache, Cloudera, etc.


Tuesday, February 22, 2011

Bean Definitions for Spring Framework

Introduction
This post discusses various ways of defining beans for the Spring framework: the approach, some tricks, and the importance of correctly defining beans.

Why are Beans important for applications using Spring framework?
The Spring framework’s core features are dependency injection and bean lifecycle management. This means that programmers don’t write code to wire up dependencies between beans; instead, the Spring framework injects the dependencies into the beans. The Spring container manages this with the help of XML configuration files and Java reflection. Developers write the Java beans themselves and create XML files that declare the dependencies to inject.

Constructor-based bean instantiation:
Two XML configuration files are shown below, beans1.xml and beans2.xml.

Beans1.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
    <!-- services -->
    <bean id="petStore"
          class="org.springframework.samples.jpetstore.services.PetStoreServiceImpl">
        <property name="accountDao" ref="accountDao"/>
        <property name="itemDao" ref="itemDao"/>
        <!-- additional collaborators and configuration for this bean go here -->
    </bean>
    <!-- more bean definitions for services go here -->
</beans>



Beans2.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
                <bean id="accountDao"
                                class="org.springframework.samples.jpetstore.dao.ibatis.SqlMapAccountDao">
                <!-- additional collaborators and configuration for this bean go here -->
                </bean>
                <bean id="itemDao"                                                      class="org.springframework.samples.jpetstore.dao.ibatis.SqlMapItemDao">
                                <!-- additional collaborators and configuration for this bean go here -->
                </bean>
                <!-- more bean definitions for data access objects go here -->
</beans>

These are simple XML files that the Spring IoC container uses to instantiate the beans they define. To instantiate the Spring container, use the following code:

ApplicationContext context =
        new ClassPathXmlApplicationContext(new String[] {"beans1.xml", "beans2.xml"});


Developers write the above Java code to instantiate the Spring IoC container, obtaining Spring’s ApplicationContext and passing the XML files as arguments.
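
Once the container is up, beans can be retrieved by name. A one-line sketch (the typed getBean overload assumes Spring 3.0 or later, matching the spring-beans-3.0.xsd schema used above):

PetStoreServiceImpl petStore =
        context.getBean("petStore", PetStoreServiceImpl.class);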

Instantiation with a static factory method:
Beans can be defined to be instantiated by a static factory method. This is done by providing a static method in the Java class and specifying it in the factory-method attribute in the Spring XML file. See the example below:

<bean id="clientService" class="examples.ClientService" factory-method="createInstance"/>


public class ClientService {
    private static ClientService clientService = new ClientService();

    private ClientService() {}

    public static ClientService createInstance() {
        return clientService;
    }
}




Instantiation using an instance factory method:
Beans can be defined to be instantiated by an instance factory method. This is done by providing an instance method in the factory Java class and specifying the factory-bean and factory-method attributes in the Spring XML file. See the example below:
<bean id="serviceLocator" class="examples.DefaultServiceLocator">
                <!-- inject any dependencies required by this locator bean -->
</bean>

<bean id="clientService"  factory-bean="serviceLocator"
                                                factory-method="createClientServiceInstance"/>

<bean id="accountService"
                                factory-bean="serviceLocator"
                                                factory-method="createAccountServiceInstance"/>


public class DefaultServiceLocator {

    private static ClientService clientService = new ClientServiceImpl();

    private static AccountService accountService = new AccountServiceImpl();

    private DefaultServiceLocator() {}

    public ClientService createClientServiceInstance() {
        return clientService;
    }

    public AccountService createAccountServiceInstance() {
        return accountService;
    }
}


The above approach shows that the lifecycle of the factory bean itself can be managed by the Spring IoC container.

Dependency Injection (DI):
DI is the process by which the Spring container automatically injects one object’s dependencies on other objects, without the object having to construct its dependencies or know where they are located.
There are two variants of DI, Constructor based DI and Setter based DI.

Constructor-based DI:
Constructor-based DI is accomplished by the container invoking a constructor with a number of arguments, each representing a dependency. See the example below:

package examples;

public class ExampleBean {

    // Number of years to calculate the Ultimate Answer
    private int years;

    // The Answer to Life, the Universe, and Everything
    private String ultimateAnswer;

    public ExampleBean(int years, String ultimateAnswer) {
        this.years = years;
        this.ultimateAnswer = ultimateAnswer;
    }
}


<bean id="exampleBean" class="examples.ExampleBean">
                <constructor-arg index="0" value="7500000"/>
                <constructor-arg index="1" value="42"/>
</bean>

The above XML uses the index attribute to match each value to the correct constructor parameter.

Setter based DI:
Setter-based DI is accomplished by the container calling setter methods on the bean after it has been instantiated via a no-argument constructor or a no-argument static factory method.
See the example below:
public class SimpleMovieLister {

    // the SimpleMovieLister has a dependency on the MovieFinder
    private MovieFinder movieFinder;

    // a setter method so that the Spring container can 'inject' a MovieFinder
    public void setMovieFinder(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }

    // business logic that actually 'uses' the injected MovieFinder is omitted...
}
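
For completeness, here is a sketch of the bean definitions that would drive this setter injection; examples.JdbcMovieFinder is a hypothetical class name used only for illustration:

<bean id="movieFinder" class="examples.JdbcMovieFinder"/>

<bean id="simpleMovieLister" class="examples.SimpleMovieLister">
    <!-- the container calls setMovieFinder() with the movieFinder bean -->
    <property name="movieFinder" ref="movieFinder"/>
</bean>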

There are many more forms of bean definitions, with beans accepting collection classes like Map, List, Set, etc.
For more information and reference, please visit http://www.springsource.org/. The web site has many tutorials and information on setting up the Spring framework.