Call JasperReports / iReport From a Java Application

JasperReports is an open source Java reporting tool that can write to the screen, to a printer, or into PDF, HTML, Microsoft Excel, RTF, ODT, CSV and XML files.

It can be used in Java-enabled applications, including Java EE or web applications, to generate dynamic content. It reads its instructions dynamically from an XML (JRXML) or compiled .jasper file.

We can generate reports in two ways:

1) From a “JRXML” (source) file
2) From a “Jasper” (compiled) file

The following JARs must be on the classpath:

  1. commons-beanutils-1.8.2.jar
  2. commons-collections-3.2.1.jar
  3. commons-digester-1.7.jar
  4. commons-logging-1.1.jar
  5. groovy-all-1.7.5.jar
  6. iText-2.1.7.jar
  7. jasperreports-4.1.1.jar

Note: iReport is the designer tool used to lay out the report.

Create PDF Report From JRXML File

A JRXML file is a JasperReports document. JasperReports report designs are defined in an XML format called JRXML, which can be hand-coded, generated, or designed using tools like iReport, JasperAssistant, etc.

Generating a report from a JRXML file is relatively slow, as the report needs to be compiled before execution.


import java.io.IOException;
import java.util.HashMap;

import net.sf.jasperreports.engine.JREmptyDataSource;
import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;

public class PdfFromXmlFile {
    public static void main(String[] args) throws JRException, IOException {
        // Compile the JRXML source, fill it with an empty data source, and export to PDF
        JasperReport jasperReport = JasperCompileManager.compileReport("report.jrxml");
        JasperPrint jasperPrint = JasperFillManager.fillReport(jasperReport,
                new HashMap<String, Object>(), new JREmptyDataSource());
        JasperExportManager.exportReportToPdfFile(jasperPrint, "sample.pdf");
    }
}

Create PDF Report From Jasper File

A Jasper file is the compiled form of a JasperReports document.

Generating a report from a Jasper file is fast, as the report is pre-compiled.

This approach is recommended for production environments.

import java.io.IOException;
import java.util.HashMap;

import net.sf.jasperreports.engine.JREmptyDataSource;
import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;

public class PdfFromJasperFile {
    public static void main(String[] args) throws JRException, IOException {
        // Fill the pre-compiled report and export it to PDF
        JasperPrint jasperPrint = JasperFillManager.fillReport("report.jasper",
                new HashMap<String, Object>(), new JREmptyDataSource());
        JasperExportManager.exportReportToPdfFile(jasperPrint, "sample.pdf");
    }
}

Hope you will like this. Cheers… 🙂

What is OpenERP? Open Source ERP Software Explained.

OpenERP (previously known as TinyERP) is an open source integrated enterprise resource planning (ERP) suite developed by the company of the same name.

Belgium-based OpenERP provides an open source suite of ERP business applications. It follows a typical commercial open source business model: the fully featured applications are available free through a community-supported version; however, no warranties or support for the software are available from the company.

If you choose the Enterprise or Online versions your business can obtain support for the software in addition to migration and maintenance, on a per-user, per-month subscription basis.

While not necessarily encouraged, the OpenERP Enterprise license does include a provision that allows customers to build private OpenERP modules when they need to.

OpenERP is, according to the author, an open source alternative to SAP ERP and Microsoft Dynamics.

What can I do with OpenERP?

OpenERP is a full suite of business software, including the following modules:

Accounting: Record your operations in a few clicks and manage all your financial activities in one place.

Application Builder:  The OpenERP application builder lets you customize every module of OpenERP directly from the web interface without any development required.

CRM: Track leads and opportunities, customize your sales cycle, control statistics and forecasts, and automate marketing campaigns to improve your sales performance.

Human Resources: The module covers personnel information management, leave, time tracking, attendance, expenses, payroll, periodic evaluations and recruitment.

Invoicing: Create and supervise all your supplier and customer invoices.

Manufacturing: Plan and control your supply chain through different applications in the Manufacturing module.

Marketing: Marketing campaigns can help you automate email sending, qualify leads and encourage customers to contact the right department.

Point of Sale: The OpenERP touchscreen point of sale allows you to manage your shop sales. It’s fully web-based so you don’t need to install or deploy any software.

Project Management: Keep track and manage your projects using tasks for short term project execution or plan phases for long term planning.

Purchase: Create and track your purchase orders, manage your suppliers’ info, control your products reception process and check suppliers’ invoices.

Warehouse Management: An inventory management system to easily manage complex needs: tracking stocks of suppliers/customers, full traceability, accounting links, and more. OpenERP supports multi-warehouse management based on hierarchical locational structure.

Because OpenERP is open source and backed by a large community, you can take advantage of more than 700 OpenERP modules on the OpenERP Apps website. These applications extend functionality of the ERP software and provide more business apps for things like manufacturing, localization, project management and more.

The other benefit to users is that you do not need to use all the business apps. You can choose only the modules that you need for your business from the suite (e.g. just CRM or CRM and invoicing). This keeps your OpenERP tidy and less overwhelming if you do not need all the business apps. You can add additional modules (at no cost) as you need them.

Architecture

Client-server Architecture : OpenERP has separate client and server components. The server runs separately from the client; it handles the business logic and communicates with the database application. The client presents information to users and allows them to interact with the server. Multiple client applications are available.

Server and Modules : The server part is written in the Python programming language. Clients communicate with the server using XML-RPC interfaces.

Business functionality is organised into modules. A module is a folder with a pre-defined structure containing Python code and XML files. A module defines data structures, forms, reports, menus, procedures, workflows, etc. Modules are defined using a client-independent syntax, so new objects, such as menus or forms, are automatically available to any client.

Client applications : The clients are thin clients as they contain no business logic. Two client applications are officially supported:

  • A web application, which is deployed as an HTTP server to allow users to connect using their Web browser.
  • A desktop application, written in Python and using the GTK+ toolkit.

Other alternative clients have also been developed by the community.

Database : OpenERP uses PostgreSQL as its database management system.

Reporting : OpenERP also provides a reporting system with OpenOffice.org integration allowing customization of reports.

Source code and contributions : The source code of OpenERP is hosted on Launchpad, using the Bazaar revision control system, and contributions are also handled through Launchpad. Documentation is managed with the same service, and a website dedicated to all publications was set up in 2009.

OpenERP Versions:  Free, Supported, Hosted or On-Premise Business Apps

OpenERP is made available in three different versions. The OpenERP software itself is free, but the Enterprise and hosted versions are fee-based. OpenERP Enterprise and OpenERP Community are exactly the same product; the user pays only for the additional services offered by the OpenERP team, not for the software.

OpenERP Community: (AGPL license) Open source OpenERP software (with all features) with no warranties. With this version you rely on community-based support only and migrations, bug fixes and private ERP modules are not allowed.  Price: Free

OpenERP Enterprise: (AGPL or AGPL plus Private Use) Open source OpenERP as production-ready management software. The Enterprise version is fully supported by the OpenERP Team and includes unlimited migrations, bug fixes, private modules and security alerts. OpenERP Enterprise version is on-premises software that you host yourself (Linux or Windows operating systems). Price: €165 per month for 1 to 10 users, up to €15,500 for 70 to 150 users.

OpenERP Online: Similar to the services offered in the enterprise version with the exception of no private modules or community modules, and it is hosted and maintained by OpenERP. Price: €39 per user per month. Free 30-day trial available.

Hope you will like this.  Cheers… 🙂

Spket: Setting up Eclipse IDE for Ext JS and JQuery development

Spket – Development tool for RIA

Eclipse is great, and what I like most about it is the autocomplete feature. Unfortunately, it is available only for Java (or any other supported programming language), and if you do web development, you probably also work with JavaScript. I missed the autocomplete feature for .js files.

But there is a solution: the Spket Eclipse plugin. I’ve been using it for a couple of months and it is very good; it saves me time when I am coding.

So I decided to write this tutorial. I hope it will be useful to you!

Spket IDE is a powerful toolkit for JavaScript and XML development.

It is a powerful editor for JavaScript, XUL/XBL and Yahoo! Widget development. The JavaScript editor provides features like code completion, syntax highlighting and a content outline that help developers productively create efficient JavaScript code.

Tutorial: Spket: Setting Up Eclipse IDE for Ext JS and JQuery Development

Hope you will like this. Cheers… 🙂


Shept – Data grid based Web Apps with Spring and Hibernate


Build complex web form data entry applications based on data grids rapidly with Spring and Hibernate. As a mainly server-side approach, it offers close integration and can easily be used to create new web apps or add administrative features to existing ones.

Shept is a library meant to make the development of data entry web applications easier by introducing data grids as the primary input element. It allows the concatenation of those elements in a simple fashion, inspired by what used to be standard in the 4GL tools for building client-server apps in the ’90s.

In technical terms, Shept is a thin layer closely integrated with the Spring Framework and Hibernate. As such, it is also a documentation project about building data-centric web applications with these major open source projects, and it provides templates and online demo applications to give you a quick start.

Shept is short for

  • Spring The core application building java toolset
  • Hibernate The core object relational layer and toolset
  • Eclipse as the development environment
  • Postgres as the industry strength open source database
  • Tomcat as the industry standard web application server

All of these are Open Source heavy weights with a long history and huge reputation in the world of Open Source Development frameworks and Tools. They are widely used and heavily documented and all have a large community of enthusiastic supporters worldwide.

Shept has been under development since 2007. It started as a proprietary library and has been used in a couple of custom web projects so far. Although it introduces new concepts, it is more a pragmatic than a generic solution, and its APIs are considered stable to a large extent.

Use cases

  • Migrate any legacy data driven project (e.g. ‘client server’) into the web
  • Migrate any kind of vendor specific data driven project to Open Source
  • Merge into your existing Spring-Hibernate project for RAD features
  • Building administrative frontends to community projects
  • Creating data entry wizards

Features

Shept picks up the tradition of 4GL frameworks which were commonly used for building client-server applications.
It offers decent capabilities for data handling in tables (data grids). Being a thin layer on top of, and a close integration with, today’s popular web application tools Spring and Hibernate, it is a concept as well as a toolset and a framework.

  • Data Grids
  • Layout composition
  • Business objects lifecycle support
  • Comprehensive Data Source Coverage
  • Segment chaining and reuse
  • Validation and Error Handling

Hope you will like this for small web apps. Cheers… 🙂

An Introduction to iBatis (MyBatis), An alternative to Hibernate and JDBC

Hello Friends,

In a recent interview, the interviewer asked me, “Do you have hands-on experience with iBatis?” I simply said “No”. Then she asked, “Give me a brief overview of it.” Again I was speechless. So I researched it, and here I share what I found with you.

For those who do not know iBatis/MyBatis yet, it is a persistence framework (an alternative to JDBC and Hibernate) available for the Java and .NET platforms. I’ve been working with it for almost two years, and I am enjoying it!

The first thing you may notice in this and the following articles about iBatis/MyBatis is that I use both the iBatis and MyBatis names. Why? Until June 2010, iBatis was an Apache project; then the framework founders decided to move it to Google Code and renamed it MyBatis. The framework is still the same though, it just has a different name now.

I gathered some resources, so I am just going to quote them:

What is MyBatis/iBatis?

The MyBatis data mapper framework makes it easier to use a relational database with object-oriented applications. MyBatis couples objects with stored procedures or SQL statements using an XML descriptor. Simplicity is the biggest advantage of the MyBatis data mapper over object relational mapping tools. To use the MyBatis data mapper, you rely on your own objects, XML, and SQL. There is little to learn that you don’t already know. With the MyBatis data mapper, you have the full power of both SQL and stored procedures at your fingertips.

iBATIS is based on the idea that there is value in relational databases and SQL, and that it is a good idea to embrace the industrywide investment in SQL. We have experiences whereby the database and even the SQL itself have outlived the application source code, and even multiple versions of the source code. In some cases we have seen that an application was rewritten in a different language, but the SQL and database remained largely unchanged.

It is for such reasons that iBATIS does not attempt to hide SQL or avoid SQL. It is a persistence layer framework that instead embraces SQL by making it easier to work with and easier to integrate into modern object-oriented software. These days, there are rumors that databases and SQL threaten our object models, but that does not have to be the case. iBATIS can help to ensure that it is not.

What is iBatis ?

  • A JDBC Framework
  • Developers write SQL, iBATIS executes it using JDBC.
  • No more try/catch/finally/try/catch.
  • An SQL Mapper
  • Automatically maps object properties to prepared statement parameters.
  • Automatically maps result sets to objects.
  • Support for getting rid of N+1 queries.
  • A Transaction Manager
  • iBATIS will provide transaction management for database operations if no other transaction manager is available.
  • iBATIS will use external transaction management (Spring, EJB CMT, etc.) if available.
  • Great integration with Spring, but can also be used without Spring (the Spring folks were early supporters of iBATIS).

What isn’t iBATIS ?

  • An ORM
  • Does not generate SQL
  • Does not have a proprietary query language
  • Does not know about object identity
  • Does not transparently persist objects
  • Does not build an object cache

Essentially, iBatis is a very lightweight persistence solution that gives you most of the semantics of an O/R mapping toolkit, without all the drama. In other words, iBATIS strives to ease the development of data-driven applications by abstracting the low-level details involved in database communication (loading a database driver, obtaining and managing connections, managing transaction semantics, etc.), as well as providing higher-level ORM capabilities (automated and configurable mapping of objects to SQL calls, data type conversion management, support for static queries as well as dynamic queries based upon an object’s state, mapping of complex joins to complex object graphs, etc.). iBATIS simply maps JavaBeans to SQL statements using a very simple XML descriptor. Simplicity is the key advantage of iBATIS over other frameworks and object relational mapping tools.
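To make the “SQL Mapper” idea concrete, a minimal iBATIS 2.x SQL map might look like the sketch below. The table, class and statement names (`User`, `users`, `getUser`) are invented for illustration; they do not come from the article.

```xml
<sqlMap namespace="User">

  <!-- Maps one SQL statement to a result class; iBATIS fills the
       prepared-statement parameter and maps the columns to properties. -->
  <select id="getUser" parameterClass="long" resultClass="com.example.User">
    SELECT id, name, email FROM users WHERE id = #value#
  </select>

</sqlMap>
```

The statement would then be invoked through the classic SqlMapClient API, e.g. `User u = (User) sqlMapClient.queryForObject("User.getUser", 42L);` (again a sketch, assuming a configured SqlMapClient).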

Who is using iBatis/MyBatis?

I think the biggest case is MySpace, with millions of users. Very nice!

Some more users here.

Hope you will like this. Cheers… 🙂

RPC in Javascript using JSON-RPC-Java

Remote procedure call (RPC) in JavaScript is a great way to build rich web applications. First, some background on RPC using JavaScript Object Notation (JSON).

See following quote from Wikipedia entry of JSON-RPC.

JSON-RPC is a remote procedure call protocol encoded in JSON. It is a very simple protocol (and very similar to XML-RPC), defining only a handful of data types and commands. In contrast to XML-RPC or SOAP, it allows for bidirectional communication between the service and the client, treating each more like peers and allowing peers to call one another or send notifications to one another. It also allows multiple calls to be sent to a peer which may be answered out of order.

A JSON invocation can be carried on an HTTP request where the content-type is application/json. Besides using HTTP for transport, one may use TCP/IP sockets. Using sockets, one can create much more responsive web applications with JSON-RPC, compared to polling data from a service with JSON-RPC over HTTP.
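Concretely, a JSON-RPC 1.0 exchange over HTTP is just two small JSON documents. The method and values below are illustrative, matching the sum example later in this article. The request:

```
{"id": 1, "method": "sumObject.sum", "params": [4, 5]}
```

And the response, which echoes the id so out-of-order replies can be matched to their calls:

```
{"id": 1, "result": 9, "error": null}
```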

JSON-RPC-Java is a key piece of Java web application middleware that allows JavaScript DHTML web applications to call remote methods on a Java application server without page reloading (now referred to as AJAX). It enables a new breed of fast and highly dynamic enterprise Web 2.0 applications.
To use JSON-RPC-Java in your code, download the json-rpc-java-1.0.1.zip file and unzip it:

http://oss.metaparadigm.com/jsonrpc/download.html

The zip contains required jsonrpc-1.0.jar and jsonrpc.js files that we will use in our project.

Once the JAR file is on your project’s classpath, modify your web.xml (deployment descriptor) file and make an entry for JSONRPCServlet as follows.


<servlet>
    <servlet-name>com.metaparadigm.jsonrpc.JSONRPCServlet</servlet-name>
    <servlet-class>com.metaparadigm.jsonrpc.JSONRPCServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>com.metaparadigm.jsonrpc.JSONRPCServlet</servlet-name>
    <url-pattern>/JSON-RPC</url-pattern>
</servlet-mapping>

Note that JSONRPCServlet is a servlet class inside the jsonrpc-1.0.jar file. We have also mapped the URL /JSON-RPC to the servlet, so the container will invoke the servlet whenever this URL is called by the client.

Also, do not forget to include the JavaScript file in your application.

<script src="js/jsonrpc.js" type="text/javascript"></script>

We will create a small web page with two textboxes to enter numbers and a button that calls the RPC (the .sum() method on an object residing on the server) and sends the two numbers to the server. The server will add the numbers and return the result, which is then displayed on the page.

Following is the content of the HTML page.
<html>
<head>
    <title>JSON-RPC-Java Demo</title>
    <script type="text/javascript" src="js/jsonrpc.js"></script>
    <script type="text/javascript">
    function fnSum(form) {
        try {
            // Create a jsonrpc object for doing RPC.
            jsonrpc = new JSONRpcClient("JSON-RPC");

            // Call a Java method on the server
            result = jsonrpc.sumObject.sum(form.a.value, form.b.value);

            // Display the result
            alert(result);
        } catch(e) {
            alert(e.description);
        }
    }
    </script>
</head>
<body>
    <h1>JSON-RPC-JAVA</h1>
    <form>
        <input type="text" name="a"/>
        <input type="text" name="b"/>
        <input type="button" onclick="fnSum(this.form)" value="Sum"/>
    </form>
</body>
</html>

Note that in order to use JSON-RPC-Java, you need to place an object of the com.metaparadigm.jsonrpc.JSONRPCBridge class in the session. All the objects that we will use in RPC from the client side need to be registered with the JSONRPCBridge object. The JSONRPCBridge object can be put in the session using the jsp:useBean tag.

   <jsp:useBean id="JSONRPCBridge" scope="session"
   class="com.metaparadigm.jsonrpc.JSONRPCBridge" />

We will use a business class called Sum which performs the sum operation on the input and returns the output. Following is the content of the Sum class.

package com.javamagic.jsonrpc;

public class Sum {
    public Integer sum(Integer a, Integer b) {
        return a + b;
    }
}

Thus, we need to register an object of the Sum class with the JSONRPCBridge object in order to use it from JavaScript.


Sum sumObject = new Sum();
JSONRPCBridge.registerObject("sumObject", sumObject);

Doing RPC from JavaScript is easy now. All we need is a jsonrpc object, which we get via jsonrpc = new JSONRpcClient("JSON-RPC");. Note that the argument passed to JSONRpcClient() is the URL we mapped to the JSONRPCServlet servlet.
Once we have the jsonrpc object, we can call remote methods:


result = jsonrpc.sumObject.sum(form.a.value, form.b.value);

where sumObject is the name that we provided while registering the Sum class with the JSONRPCBridge.


Hope you will like this. Cheers… 🙂

Creating & Parsing JSON data with Java Servlet/Struts/JSP

JSON (JavaScript Object Notation) is a lightweight computer data interchange format. It is a text-based, human-readable format for representing simple data structures and associative arrays (called objects). The JSON format is specified in RFC 4627 by Douglas Crockford. The official Internet media type for JSON is application/json.

The JSON format is often used for transmitting structured data over a network connection in a process called serialization. Its main application is in AJAX web application programming, where it serves as an alternative to the traditional use of the XML format.

Supported data types

  1. Number (integer, real, or floating point)
  2. String (double-quoted Unicode with backslash escapement)
  3. Boolean (true and false)
  4. Array (an ordered sequence of values, comma-separated and enclosed in square brackets)
  5. Object (collection of key/value pairs, comma-separated and enclosed in curly brackets)
  6. null

Syntax

The following example shows the JSON representation of an object that describes a person. The object has string fields for first name and last name, contains an object representing the person’s address, and contains a list of phone numbers (an array).

{
    "firstName": "John",
    "lastName": "Smith",
    "address": {
        "streetAddress": "21 2nd Street",
        "city": "New York",
        "state": "NY",
        "postalCode": 10021
    },
    "phoneNumbers": [
        "212 732-1234",
        "646 123-4567"
    ]
}
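As a quick sketch of reading such a document back in Java, here is how the org.json library (introduced below) can pull the nested city value out. The helper class name and the shortened JSON literal are for illustration only.

```java
import org.json.JSONObject;

public class PersonJsonDemo {

    // Parse the JSON text and walk into the nested "address" object
    public static String cityOf(String json) {
        JSONObject person = new JSONObject(json);
        return person.getJSONObject("address").getString("city");
    }

    public static void main(String[] args) {
        String json = "{\"firstName\":\"John\","
                + "\"address\":{\"city\":\"New York\",\"state\":\"NY\"}}";
        System.out.println(cityOf(json)); // prints New York
    }
}
```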

Creating JSON data in Java

JSON.org provides libraries to create and parse JSON data from Java code. These libraries can be used in any Java/J2EE project, including Servlet, Struts, JSF and JSP applications, to create JSON data.

Download JAR file json-rpc-1.0.jar (75 kb)

Use JSONObject class to create JSON data in Java. A JSONObject is an unordered collection of name/value pairs. Its external form is a string wrapped in curly braces with colons between the names and values, and commas between the values and names. The internal form is an object having get() and opt() methods for accessing the values by name, and put() methods for adding or replacing values by name. The values can be any of these types: Boolean, JSONArray, JSONObject, Number, and String, or the JSONObject.NULL object.

import org.json.JSONObject;

...
...

JSONObject json = new JSONObject();
json.put("city", "Mumbai");
json.put("country", "India");

...

String output = json.toString();

...

Thus, by using the toString() method, you can get the output in JSON format.

JSON Array in Java

A JSONArray is an ordered sequence of values. Its external text form is a string wrapped in square brackets with commas separating the values. The internal form is an object having get and opt methods for accessing the values by index, and put methods for adding or replacing values. The values can be any of these types: Boolean, JSONArray, JSONObject, Number, String, or the JSONObject.NULL object.

The constructor can convert a JSON text into a Java object. The toString method converts to JSON text.

JSONArray class can also be used to convert a collection of Java beans into JSON data. Similar to JSONObject, JSONArray has a put() method that can be used to put a collection into JSON object.

Thus, by using JSONArray you can handle any type of data and convert it to the corresponding JSON output.
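A minimal sketch of the JSONArray API described above (the values are illustrative):

```java
import org.json.JSONArray;

public class JsonArrayDemo {

    // Build an array mixing the supported value types
    public static String build() {
        JSONArray days = new JSONArray();
        days.put("Sunday"); // string value
        days.put(true);     // boolean value
        days.put(7);        // number value
        return days.toString();
    }

    public static void main(String[] args) {
        System.out.println(build()); // prints ["Sunday",true,7]
    }
}
```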

Using json-lib library

JSON-lib is a Java library for transforming beans, maps, collections, Java arrays and XML to JSON and back again to beans and DynaBeans.

Json-lib comes in two flavors, depending on JDK compatibility: json-lib-x.x-jdk13 is compatible with JDK 1.3.1 and upwards; json-lib-x.x-jdk15 is compatible with JDK 1.5 and includes support for Enums in JSONArray and JSONObject.

Download: json-lib.jar

Json-lib requires (at least) the following dependencies in your classpath:

  1. jakarta commons-lang 2.5
  2. jakarta commons-beanutils 1.8.0
  3. jakarta commons-collections 3.2.1
  4. jakarta commons-logging 1.1.1
  5. ezmorph 1.0.6

Example

package net.javamagic.java;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import net.sf.json.JSONObject;

public class JsonMain {
    public static void main(String[] args) {

        Map<String, Long> map = new HashMap<String, Long>();
        map.put("A", 10L);
        map.put("B", 20L);
        map.put("C", 30L);

        JSONObject json = new JSONObject();
        json.accumulateAll(map);

        System.out.println(json.toString());

        List<String> list = new ArrayList<String>();
        list.add("Sunday");
        list.add("Monday");
        list.add("Tuesday");

        json.accumulate("weekdays", list);
        System.out.println(json.toString());
    }
}

Output:

{"A":10,"B":20,"C":30}
{"A":10,"B":20,"C":30,"weekdays":["Sunday","Monday","Tuesday"]}

Using Google Gson library

Gson is a Java library that can be used to convert Java Objects into their JSON representation. It can also be used to convert a JSON string to an equivalent Java object. Gson can work with arbitrary Java objects including pre-existing objects that you do not have source-code of.

There are a few open-source projects that can convert Java objects to JSON. However, most of them require that you place Java annotations in your classes, something you cannot do if you do not have access to the source code. Most also do not fully support the use of Java Generics. Gson considers both of these very important design goals.

Gson Goals

  • Provide simple toJson() and fromJson() methods to convert Java objects to JSON and vice-versa
  • Allow pre-existing unmodifiable objects to be converted to and from JSON
  • Extensive support of Java Generics
  • Allow custom representations for objects
  • Support arbitrarily complex objects (with deep inheritance hierarchies and extensive use of generic types)
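The example below demonstrates fromJson(); for completeness, here is a hedged sketch of the opposite direction, toJson(), on a plain object with no annotations. The City class is made up for illustration.

```java
import com.google.gson.Gson;

public class GsonToJsonDemo {

    // A plain object with no annotations; Gson serializes its fields directly
    static class City {
        String name = "Mumbai";
        String country = "India";
    }

    public static String serialize() {
        return new Gson().toJson(new City());
    }

    public static void main(String[] args) {
        // Prints the City object serialized as a JSON string
        System.out.println(serialize());
    }
}
```

The reverse call, new Gson().fromJson(jsonText, City.class), rebuilds an equivalent object, which is what the Data example below relies on.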

Google Gson Example


import java.util.List;
import com.google.gson.Gson;

public class Test {

    public static void main(String... args) throws Exception {
        String json =
            "{"
                + "'title': 'Computing and Information systems',"
                + "'id' : 1,"
                + "'children' : 'true',"
                + "'groups' : [{"
                    + "'title' : 'Level one CIS',"
                    + "'id' : 2,"
                    + "'children' : 'true',"
                    + "'groups' : [{"
                        + "'title' : 'Intro To Computing and Internet',"
                        + "'id' : 3,"
                        + "'children': 'false',"
                        + "'groups':[]"
                    + "}]"
                + "}]"
            + "}";

        // Now do the magic.
        Data data = new Gson().fromJson(json, Data.class);

        // Show it.
        System.out.println(data);
    }
}

class Data {
    private String title;
    private Long id;
    private Boolean children;
    private List<Data> groups;

    public String getTitle() { return title; }
    public Long getId() { return id; }
    public Boolean getChildren() { return children; }
    public List<Data> getGroups() { return groups; }

    public void setTitle(String title) { this.title = title; }
    public void setId(Long id) { this.id = id; }
    public void setChildren(Boolean children) { this.children = children; }
    public void setGroups(List<Data> groups) { this.groups = groups; }

    public String toString() {
        return String.format("title:%s,id:%d,children:%s,groups:%s", title, id, children, groups);
    }
}

Hope this will help you.  Cheers… 🙂

Checkstyle in Eclipse


This article will describe the usage of the Checkstyle plugins for Eclipse.

1. Checkstyle

Checkstyle checks whether Java code conforms to certain standards, thereby helping to ensure the quality of the code.

2. Checkstyle

   2.1.  Overview

Checkstyle is a tool that helps ensure your Java code adheres to a set of coding standards.

   2.2.  Installation

http://eclipse-cs.sourceforge.net/update is the update site for the Eclipse Checkstyle Plugin.

If you are developing with Eclipse, make sure to select the Sun Conventions (Eclipse) under Window -> Preferences -> Checkstyle. Press “Set as Default” after selecting the right entry.

2.3. Configuration

You can turn off certain checks. If you change settings from the standard profile, you should always make a copy of the existing profile first.

To customize your checks, first make a copy of the standard checks.

Select your new configuration and press Configure. Deactivate, for example, the checks for Javadoc comments.

Make this new setting your default one.
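For reference, a Checkstyle configuration is itself an XML file; the sketch below shows what a minimal custom check set with the Javadoc checks left out might look like. The particular module selection here is illustrative, not a recommended rule set.

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
    "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <!-- Keep, for example, naming and whitespace checks... -->
    <module name="ConstantName"/>
    <module name="WhitespaceAround"/>
    <!-- ...but include no Javadoc modules here -->
  </module>
</module>
```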

2.4.  Using common Checkstyle rules for teams

For teams it is good to follow the same coding style rules.

The Eclipse Checkstyle plugin allows this by providing a remote site for the Checkstyle settings.

Press New in the settings, select “Remote Configuration”, give the rule set a description and then type in the URL you want to use for the settings.

Make this new setting your default one.

2.5.  Using Checkstyle in your projects

Make your new profile the default one.

You need to activate the Eclipse Checkstyle Plugin for your project. Right click on your project and search for Checkstyle. Select the checkbox “Checkstyle active for this project”.

You can use the checkstyle browser view to display the violations.

Hope it will help you. Cheers… 🙂

LDAP – Basic

LDAP – Lightweight Directory Access Protocol

LDAP is characterised as a ‘write-once-read-many-times’ service. That is to say, the type of data normally stored in an LDAP service would not be expected to change on every access. To illustrate: LDAP would NOT be suitable for maintaining banking transaction records, since by their nature they change on every access (transaction). LDAP would, however, be eminently suitable for maintaining details of the bank branches, hours of opening, employees, etc.

What is it?

Why is it used?

When is it used?

How is it used?

The nice thing about the Internet is that there’s so much information on it. The bad thing about the Internet is that there’s so much information on it.

This might seem a little cliched, but it’s true – the Web is rich in information, but poor in the tools needed to index and search it. Google’s (http://www.google.com/) doing a great job of fixing this problem, but it’s still limited largely to collating and indexing published content. If, for example, you’re looking for the email address of Sam Jones, who you know works somewhere in Long Beach with Rough Rubber Shoes, or the telephone number of your great-grand-uncle Josh, who moved to New York a few years back and was never heard from again, you’re outta luck – Google can’t help you, and neither can any of the other search engines out there.

What would be ideal in this situation is an Internet version of your local telephone directory, a public database of users and their affiliations, locations and contact information that you could query at the click of a button. Something that made it possible to easily search for resources (users, computers, businesses) by different attributes, that was universally accessible, and that was versatile enough to be used for different applications.

Something like LDAP. Let’s start with the basics: what the heck is LDAP anyhoo?

The acronym LDAP stands for Lightweight Directory Access Protocol, which, according to the official specification at http://www.ietf.org/rfc/rfc2251.txt, is a protocol “designed to provide access to the X.500 Directory while not incurring the resource requirements of the Directory Access Protocol (DAP) […] specifically targeted at simple management applications and browser applications that provide simple read/write interactive access to the X.500 Directory, and is intended to be a complement to the DAP itself”.

Yup, it didn’t make sense to me either.

Before you can understand LDAP, you need to first understand what a “directory service” is. A directory service is exactly what it sounds like – a publicly available database of structured information. The most common example of a directory service is your local Yellow Pages – it contains names, addresses and contact numbers of different businesses, structured by business category, all indexed in a manner that is easily browseable or searchable.

Like ice-cream, directory services come in many flavours. They may be local to a specific organization (the corporate phone book) or more global in scope (a countrywide Yellow Pages). They can contain different types of information, ranging from employee names, phone numbers and email addresses to domain names and their corresponding IP addresses. They can exist in different forms and at different locations, either as a single electronic database within an organization’s internal network or as a series of inter-connected databases existing at different geographical locations on a corporate extranet or the global Internet. Despite these differences, however, they all share certain common attributes: structured information, powerful browsing and search capabilities, and – in the case of distributed directories – inter-cooperation between the different pieces of the database.

Now, obviously, organizing information neatly in a directory is only part of the puzzle – in order for it to be useful, you need a way to get it out. If you’re using the local phone book, getting information out of it is as simple as flipping to the index, locating the category of interest, and opening it to the appropriate page. If you’re using an electronic, globally distributed directory service, however, you need something a little more sophisticated.

That’s where LDAP comes in. Put very simply, LDAP is a protocol designed to allow quick, efficient searches of directory services. Built around Internet technologies, LDAP makes it possible to easily update and query directory services over standard TCP/IP connections, and includes a host of powerful features, including security, access control, data replication and support for Unicode.

LDAP is based largely on DAP, the Directory Access Protocol, which was designed for communication between directory servers and clients compliant to the X.500 standard. DAP is, however, fairly complex to implement and use, and is not suitable for the Web; LDAP is a simpler, faster alternative offering much of the same basic functionality without the performance overhead and deployment difficulties of DAP.

Since LDAP is built for a networked world, it is based on a client-server model. The system consists of one (or more) LDAP servers, which host the public directory service, and multiple clients, which connect to the server to perform queries and retrieve results. LDAP clients are today built into most common address book applications, including email clients like Microsoft Outlook and Qualcomm Eudora; however, since LDAP-compliant directories can store a diverse range of data (not just names and phone numbers), LDAP clients are also increasingly making an appearance in other applications.

A corporate directory is a database of people, network resources, organizations, and so forth. The corporate database probably holds not just phone numbers, but also other information like email addresses, employee and department numbers, and application configuration data. The corporate directory is managed by a directory server, which takes requests from client applications and serves them back directory data from the database.

LDAP, Lightweight Directory Access Protocol, provides a standard language that directory client applications and directory servers use to communicate with one another about data in directories. LDAP applications can search, add, delete and modify directory data. LDAP is a lightweight version of the earlier DAP, Directory Access Protocol, used by the International Organization for Standardization X.500 standard. DAP gives any application access to the directory through an extensible and robust information framework, but at a high administrative cost. DAP does not use the Internet standard TCP/IP protocol, has complicated directory naming conventions, and generally requires a big investment. LDAP preserves most features of DAP at lower cost. LDAP uses an open directory access protocol running over TCP/IP and uses simplified encoding methods. LDAP retains the X.500 standard data model and can support millions of entries for a comparatively modest investment in hardware and network infrastructure.

LDAP directories differ from relational databases. In LDAP, you do not look data up in tables. Instead, you look data up in trees, similar to the tree you get if you diagram the contents of a file system. The data is not in rows and columns, but in what are called entries. These entries are much like entries in the phone book. Entries may even actually contain phone numbers. Here is a text representation of an LDAP entry.

dn: uid=bjensen, ou=People, dc=example,dc=com
cn: Barbara Jensen
cn: Babs Jensen
sn: Jensen
givenname: Barbara
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
ou: Product Development
ou: People
l: Cupertino
uid: bjensen
mail: bjensen@example.com
telephonenumber: +1 408 555 1862
facsimiletelephonenumber: +1 408 555 1992
roomnumber: 0209
userpassword: hifalutin

An LDAP entry is composed of attributes and their values. At the outset of the text representation you see the DN, Distinguished Name, uid=bjensen, ou=People, dc=example,dc=com. The DN is a distinguished name, because it distinguishes the entry from all others in the directory. You also see attributes like CN, Common Name, which takes values Barbara Jensen and Babs Jensen. You further see attributes like SN, surname, which takes the value Jensen, and mail, which takes the value bjensen@example.com.
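DNs can also be taken apart programmatically. As a minimal sketch using only the standard `javax.naming.ldap.LdapName` class (no LDAP server required), here is how the RDNs of the DN shown above break down:

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class DnDemo {
    public static void main(String[] args) throws InvalidNameException {
        // The DN from the entry shown above
        LdapName dn = new LdapName("uid=bjensen,ou=People,dc=example,dc=com");

        // Four RDNs make up this DN; LdapName indexes them
        // right to left, so index 0 is the root-most component
        System.out.println(dn.size());      // 4
        System.out.println(dn.getRdn(0));   // dc=com

        // The leftmost RDN distinguishes the entry itself
        Rdn leaf = dn.getRdn(dn.size() - 1);
        System.out.println(leaf.getType());  // uid
        System.out.println(leaf.getValue()); // bjensen
    }
}
```

Note that `LdapName` numbers RDNs from the right, which mirrors the way the tree is rooted at dc=example,dc=com.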

You also see some objectClass attribute values. The objectClass attribute tells you what other attribute types the entry can have. Object class definitions are found in directory schema. Schema specify all the known object classes and attribute types available for entries in the directory. You can add schema definitions to LDAP directories, making the LDAP entries easily extensible.

When you want to look up something in a directory, you typically know the values of one of the attributes. By analogy, if you want to look up a phone number, you already know the name of the person or organization whose telephone number you want. If you are looking up a phone number, you also probably have some idea where the person or organization is located. The same is the case for LDAP directories. You typically need to have some idea where the entry is located.

For example, assume you want to look up Barbara Jensen’s phone number in the LDAP directory holding the entry shown previously. You need to know one of the attributes. In this case, you know Barbara’s name. You also need to know approximately where her entry is located. If you know that she is in the directory at Example.com, and that the root of their tree starts at dc=example,dc=com, that is enough.

There are GUI tools out there for LDAP lookups, but many systems also have a command called ldapsearch. You guessed it, ldapsearch is for searching LDAP directories. Here is an ldapsearch command that searches the entries under dc=example,dc=com for entries having common name Barbara Jensen.

$ ldapsearch -b dc=example,dc=com "(cn=Barbara Jensen)"

The argument to the -b option is the base DN for the search. By default, the ldapsearch command searches through all the entries in the tree below the base DN. The "(cn=Barbara Jensen)" is called the filter, because it specifies the criteria for filtering through the entries found under the base DN. If you have set everything up correctly, your search returns something very much like the entry shown above, except that you almost surely will not see the user password attribute and its value. You can also narrow the search results to see only the DN of the entry and the telephone number. You do this by adding the attribute or attributes you want returned after the filter.

$ ldapsearch -b dc=example,dc=com "(cn=Barbara Jensen)" telephoneNumber

If everything works as expected, this search returns a partial entry.

dn: uid=bjensen, ou=People, dc=example,dc=com
telephonenumber: +1 408 555 1862

More Info : LDAP – An Introduction to LDAP

Technologies for JAVA Web Application

Hello Friends,

Recently, in one of my interviews, the interviewer asked me, “Have you ever worked hands-on with Mule, ActiveMQ, Vaadin, MongoDB, CouchDB, Neo4j, GWT?” I was speechless, because some of these were words I had heard for the first time in my life. I just left that interview and started googling all these technologies: why are they used? Where are they used? When are they used? And how are they used?

So, here I will share my experience with these technologies and give you a quick overview of each.

Neo4j: NOSQL for the Enterprise

Neo4j is an open-source graph database, implemented in Java. The developers describe Neo4j as an “embedded, disk-based, fully transactional Java persistence engine that stores data structured in graphs rather than in tables”.

Neo4j is a high-performance, NOSQL graph database with all the features of a mature and robust database. The programmer works with an object-oriented, flexible network structure rather than with strict and static tables — yet enjoys all the benefits of a fully transactional, enterprise-strength database. For many applications, Neo4j offers performance improvements on the order of 1000x or more compared to relational DBs.

Wait, what is Neo4j?

Neo4j is a graph database, that is, it stores data as nodes and relationships. Both nodes and relationships can hold properties in a key/value fashion.

You can navigate the structure either by following the relationships or use declarative traverser features to get to the data you want.
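To make the node/relationship/property model concrete, here is a plain-Java sketch of a property graph. This is not the Neo4j API, just a conceptual illustration of the structure you navigate by following relationships:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A plain-Java illustration of the property-graph model:
// nodes and relationships, both carrying key/value properties.
// NOT the Neo4j API, just a conceptual sketch.
public class GraphSketch {
    static class Node {
        final Map<String, Object> properties = new HashMap<>();
        final List<Relationship> relationships = new ArrayList<>();
    }

    static class Relationship {
        final String type; // e.g. "KNOWS"
        final Node end;
        final Map<String, Object> properties = new HashMap<>();
        Relationship(String type, Node end) { this.type = type; this.end = end; }
    }

    public static void main(String[] args) {
        Node alice = new Node();
        alice.properties.put("name", "Alice");
        Node bob = new Node();
        bob.properties.put("name", "Bob");

        // Navigate the structure by following relationships
        alice.relationships.add(new Relationship("KNOWS", bob));
        for (Relationship rel : alice.relationships) {
            System.out.println(alice.properties.get("name") + " "
                    + rel.type + " " + rel.end.properties.get("name"));
        }
    }
}
```

In real Neo4j the nodes and relationships are persisted and transactional, and traversals can be expressed declaratively instead of with hand-written loops.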

Handling complexity

Most applications will not only have to scale to huge data volumes, but also to the complexity of the domain at hand. Typically, there may be many interconnected entities and optional properties. Even simple domains can be complex to handle because of the queries you want to run on them, for example to find paths. Two coding examples are the social network example (partial Ruby implementation) and the Neo4j IMDB example (Ruby variation of the code). For more examples of different domains modeled in a graph database, visit the Domain Modeling Gallery.

Storing objects

The common domain implementation pattern when using Neo4j is to let the domain objects wrap a node, and store the state of the entity in the node properties. To relieve you from the boilerplate code needed for this, you can use a framework like jo4neo (intro, blog posts), where you use annotations to declare properties and relationships, but still have the full power of the graph database available for deep traversals and other graphy stuff. Here’s a code sample showing jo4neo in action:

public class Person {
  //used by jo4neo
  transient Nodeid node;
  //simple property
  @neo String firstName;
  //helps you store a java.util.Date to neo4j
  @neo Date date;
  // jo4neo will index for you
  @neo(index=true) String email;
  // many to many relation
  @neo Collection<Role> roles;

  /* normal class oriented
  * programming stuff goes here
  */
}

Another way to persist objects is by using the neo4j.rb Neo4j wrapper for Ruby. Time for a few lines of sample code again:

require "rubygems"
require "neo4j"

class Person
  include Neo4j::NodeMixin
  # define Neo4j properties
  property :name, :salary, :age, :country

  # define an one way relationship to any other node
  has_n :friends

  # adds a Lucene index on the following properties
  index :name, :salary, :age, :country
end

REST API

Of course you want a RESTful API in front of the graph database as well. There’s been plenty of work going on in that area and here are some options:

  • The neo4j.rb Ruby bindings comes with a REST extension.
  • The neo4jr-simple Ruby wrapper has the neo4jr-social example project, which exposes social network data over a REST API.
  • Similarly, the Scala bindings have a companion example project which will show you how to set up a project exposing your data over REST.
  • Last but not least, Jim Webber has joined up with the core Neo4j team to create a kick-ass REST API. The current code base is only in the laboratory but a lot of people are already kicking its tires.

Language bindings

The Neo4j graph engine is written in Java, so you can easily add the jar file and start using the simple and minimalistic API right away. Your first stop should be the Getting started guide, or if you want to add a package of useful add-on components to the mix, go for Getting started with Apoc. Other language bindings:

Frameworks

Work is being done on using Neo4j as backend of different frameworks. Follow the links to get more information!

Tools

  • Shell: a command-line shell for browsing and manipulating the graph.
  • Neoclipse: Eclipse plugin (and standalone application) for Neo4j. Visual interface to browse and edit the graph.
  • Batch inserter: tool to bulk upload big datasets quickly.
  • Online backup: performs backup of a running Neo4j instance.

Query languages

Beyond using Neo4j programmatically, you can also issue queries using a query language. These are the supported options at the moment:

  • SPARQL: Neo4j can be used as a triple- or quadstore, and has SAIL and SPARQL implementations. Go to the components site to find out more about the related components.
  • Gremlin: a graph-based programming-language with different backend implementations in the works as well as a supporting toolset.

CouchDB


Apache CouchDB, commonly referred to as CouchDB, is an open source document-oriented database written mostly in the Erlang programming language. It is part of the NoSQL group of data stores and is designed for local replication and to scale horizontally across a wide range of devices.

What CouchDB is

  • A document database server, accessible via a RESTful JSON API.
  • Ad-hoc and schema-free with a flat address space.
  • Distributed, featuring robust, incremental replication with bi-directional conflict detection and management.
  • Query-able and index-able, featuring a table oriented reporting engine that uses JavaScript as a query language.

What it is Not

  • A relational database.
  • A replacement for relational databases.
  • An object-oriented database. Or more specifically, meant to function as a seamless persistence layer for an OO programming language.

CouchDB is most similar to other document stores like Riak, MongoDB and Lotus Notes. It is not a relational database management system. Instead of storing data in rows and columns, the database manages a collection of JSON documents. The documents in a collection need not share a schema, but retain query abilities via views. Views are defined with aggregate functions and filters are computed in parallel, much like MapReduce.

Views are generally stored in the database and their indexes updated continuously, although queries may introduce temporary views. CouchDB supports a view system using external socket servers and a JSON-based protocol. As a consequence, view servers have been developed in a variety of languages.

Features

  • Document Storage

CouchDB stores documents in their entirety. You can think of a document as one or more field/value pairs expressed as JSON. Field values can be simple things like strings, numbers, or dates. But you can also use ordered lists and associative maps. Every document in a CouchDB database has a unique id and there is no required document schema.

  •  ACID Semantics

Like many relational database engines, CouchDB provides ACID semantics. It does this by implementing a form of Multi-Version Concurrency Control (MVCC) not unlike InnoDB or Oracle. That means CouchDB can handle a high volume of concurrent readers and writers without conflict.

  •  Map/Reduce Views and Indexes

To provide some structure to the data stored in CouchDB, you can develop views that are similar to their relational database counterparts. In CouchDB, each view is constructed by a JavaScript function (server-side JavaScript by using CommonJS and SpiderMonkey) that acts as the Map half of a map/reduce operation. The function takes a document and transforms it into a single value which it returns. The logic in your JavaScript functions can be arbitrarily complex. Since computing a view over a large database can be an expensive operation, CouchDB can index views and keep those indexes updated as documents are added, removed, or updated. This provides a very powerful indexing mechanism that grants unprecedented control compared to most databases.
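The map/reduce idea behind views can be sketched in a few lines. The following is a conceptual illustration only (CouchDB’s real views are JavaScript functions run by its view server, and their results are indexed and persisted): a map step emits a key per document and a reduce step aggregates per key, here counting documents by type.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Conceptual sketch of a map/reduce view (not CouchDB's actual
// JavaScript view server): map emits (doc.type, 1) per document,
// reduce sums the 1s per key.
public class ViewSketch {
    public static void main(String[] args) {
        List<Map<String, Object>> docs = Arrays.asList(
                Map.of("_id", "1", "type", "post", "title", "Hello"),
                Map.of("_id", "2", "type", "comment", "text", "Nice"),
                Map.of("_id", "3", "type", "post", "title", "LDAP"));

        Map<Object, Long> view = docs.stream()
                .collect(Collectors.groupingBy(d -> d.get("type"),
                        Collectors.counting()));

        System.out.println(view); // e.g. {post=2, comment=1}
    }
}
```

The important difference in CouchDB is that this grouping is not recomputed per query: the view index is maintained incrementally as documents change.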

  • Distributed Architecture with Replication

CouchDB was designed with bi-directional replication (or synchronization) and off-line operation in mind. That means multiple replicas can have their own copies of the same data, modify it, and then sync those changes at a later time. The biggest gotcha typically associated with this level of flexibility is conflicts.

  • REST API

CouchDB treats all stored items (there are others besides documents) as a resource. All items have a unique URI that gets exposed via HTTP. REST uses the HTTP methods POST, GET, PUT and DELETE for the four basic CRUD (Create, Read, Update, Delete) operations on all resources. HTTP is widely understood, interoperable, scalable and proven technology. A lot of tools, software and hardware, are available to do things with HTTP like caching, proxying and load balancing.

  • Eventual Consistency

According to the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. CouchDB guarantees eventual consistency to be able to provide both availability and partition tolerance.

MongoDB

MongoDB (from “humongous”) is an open source document-oriented NoSQL database system written in the C++ programming language. It manages collections of BSON documents.

MongoDB features:

  • Ad hoc queries

In MongoDB, any field can be queried at any time. MongoDB supports range queries, regular expression searches, and other special types of queries in addition to exactly matching fields. Queries can also include user-defined JavaScript functions (if the function returns true, the document matches).

Queries can return specific fields of documents (instead of the entire document), as well as sorting, skipping, and limiting results. Queries can “reach into” embedded objects and arrays.

  • Indexing

Indexes in MongoDB are conceptually similar to those in RDBMSes like MySQL. Any field in a MongoDB document can be indexed.

Secondary indexes are also available, including single-key, compound, unique, non-unique, and geospatial indexes. Nested fields (as described above in the ad hoc query section) can also be indexed and indexing an array type will index each element of the array.

MongoDB’s query optimizer will try a number of different query plans when a query is run and select the fastest, periodically resampling. Developers can see the index being used with the `explain` function and choose a different index with the `hint` function.

Indexes can be created or removed at any time.
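The intuition behind a secondary index can be sketched with a sorted map. This is a conceptual illustration only, not MongoDB’s internals (MongoDB uses B-tree indexes): the index maps an indexed field’s values to the documents holding them, so exact-match lookups and range scans avoid a full collection scan.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Conceptual sketch of a secondary index on an "age" field:
// field value -> ids of the documents holding that value.
public class IndexSketch {
    public static void main(String[] args) {
        TreeMap<Integer, List<String>> ageIndex = new TreeMap<>();
        ageIndex.computeIfAbsent(25, k -> new ArrayList<>()).add("doc1");
        ageIndex.computeIfAbsent(30, k -> new ArrayList<>()).add("doc2");
        ageIndex.computeIfAbsent(30, k -> new ArrayList<>()).add("doc3");

        // exact-match lookup, like find({age: 30})
        System.out.println(ageIndex.get(30)); // [doc2, doc3]

        // range query, like find({age: {$gte: 26}})
        SortedMap<Integer, List<String>> range = ageIndex.tailMap(26);
        System.out.println(range.keySet()); // [30]
    }
}
```

Because the map is sorted, range predicates become cheap sub-map views instead of scans, which is the same reason databases keep their indexes ordered.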

  • Aggregation

In addition to ad hoc queries, MapReduce can be used for batch processing of data and aggregation operations. In version 2.1, the current development release of MongoDB, a new aggregation framework is available. This framework enables users to obtain the kind of results SQL group-by is used for, without having to write custom JavaScript.

  • File storage

The software implements a protocol called GridFS that is used to store and retrieve files from the database. This file storage mechanism has been used in plugins for NGINX and lighttpd.

  • Server-side JavaScript execution

JavaScript is the lingua franca of MongoDB and can be used in queries, aggregation functions (such as MapReduce), and sent directly to the database to be executed.

Example of JavaScript in a query:

> db.foo.find({$where : function() { return this.x == this.y; }})

Example of code sent to the database to be executed:

> db.eval(function(name) { return "Hello, " + name; }, ["Joe"])

This returns “Hello, Joe”.

JavaScript variables can also be stored in the database and used by any other JavaScript as a global variable. Any legal JavaScript type, including functions and objects, can be stored in MongoDB so that JavaScript can be used to write “stored procedures.”

  • Capped collections

MongoDB supports fixed-size collections called capped collections. A capped collection is created with a set size and, optionally, number of elements. Capped collections are the only type of collection that maintains insertion order: once the specified size has been reached, a capped collection behaves like a circular queue.
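The circular-queue behaviour can be sketched in plain Java. This is a conceptual illustration only, not how MongoDB implements capped collections on disk: a fixed-size buffer keeps insertion order and, once full, drops the oldest entry for each new insert.

```java
import java.util.ArrayDeque;

// Conceptual sketch of a capped collection: a fixed-size buffer
// that preserves insertion order and, once the cap is reached,
// overwrites the oldest entries like a circular queue.
public class CappedSketch {
    public static void main(String[] args) {
        final int maxDocs = 3;
        ArrayDeque<String> capped = new ArrayDeque<>();

        for (String doc : new String[] {"a", "b", "c", "d", "e"}) {
            if (capped.size() == maxDocs) {
                capped.removeFirst(); // oldest document falls off
            }
            capped.addLast(doc);
        }
        System.out.println(capped); // [c, d, e]
    }
}
```

With a cap of three, inserting five documents leaves only the three most recent, in insertion order, which is what makes capped collections a natural fit for logs.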

A special type of cursor, called a tailable cursor, can be used with capped collections. This cursor was named after the `tail -f` command, and does not close when it finishes returning results but continues to wait for more to be returned, returning new results as they are inserted into the capped collection.

Vaadin

Vaadin is a Java framework for building modern web applications that look great, perform well and make you and your users happy.

Vaadin is an open source Web application framework for rich Internet applications. In contrast to JavaScript libraries and browser-plugin based solutions, it features a server-side architecture, which means that the majority of the logic runs on the servers. Ajax technology is used on the browser side to ensure a rich and interactive user experience. On the client side, Vaadin is built on top of Google Web Toolkit and can be extended with it.

Features

One of the most prominent features of Vaadin Framework is the ability to use Java (on a Java EE platform) as the programming language while creating content for the Web. The framework incorporates event-driven programming and widgets, which enables a programming model that is closer to GUI software development than to traditional Web development with HTML and JavaScript.

Vaadin Framework utilizes Google Web Toolkit for rendering the resulting Web page. While Google Web Toolkit operates only on client-side (i.e. a browser’s JavaScript engine) – which could lead to trust issues – Vaadin adds server-side validation to all actions. This means that if the client data is tampered with, the server notices this and doesn’t allow it.

Vaadin Framework’s default component set can be extended with custom GWT widgets and themed with CSS.

From the application developer’s point of view, Vaadin is just one JAR file that can be included in any kind of Java Web project developed with standard Java tools. In addition, there are Eclipse and NetBeans plugins for easing the development of Vaadin applications, as well as direct support of (and distribution through) Maven.

Vaadin applications can be deployed as Java Servlets to any Java server, including Google App Engine. Applications can also be deployed as Portlets to any Java portal, with deeper integration to Liferay Portal.

Apache ActiveMQ

Apache ActiveMQ is an open source (Apache 2.0 licensed) message broker which fully implements the Java Message Service 1.1 (JMS). It provides “Enterprise Features” like clustering, multiple message stores, and ability to use any database as a JMS persistence provider besides VM, cache, and journal persistency.

Apart from Java, ActiveMQ can also be used from .NET, C/C++ or Delphi, or from scripting languages like Perl, Python, PHP and Ruby via various “Cross Language Clients”, together with connecting to many protocols and platforms. These include several standard wire-level protocols, plus their own protocol called OpenWire.

ActiveMQ is used in enterprise service bus implementations such as Apache ServiceMix, Apache Camel, and Mule.

ActiveMQ is often used with Apache ServiceMix, Apache Camel and Apache CXF in SOA infrastructure projects.

Mule

Mule is a lightweight enterprise service bus (ESB) and integration framework. It can handle services and applications using disparate transport and messaging technologies. The platform is Java-based, but can broker interactions between other platforms such as .NET using web services or sockets.

The architecture is a scalable, highly distributable object broker that can seamlessly handle interactions across legacy systems, in-house applications and almost all modern transports and protocols.

Some of the key features of Mule are:

  • Pluggable connectivity, for around 50 protocols including JMS, JDBC, TCP, UDP, Multicast, HTTP, servlet, SMTP, POP3, file, XMPP.
  • Message routing capabilities
  • Deployment topologies including ESB, ESN, “hub and spoke” and client server
  • Web services and WS-* support using Apache CXF, Xfire, Axis and Glue
  • Integration with JBoss and other application servers
  • Spring integration
  • Transformation layer
  • Integrated security management