Monday, February 20, 2012

MongoDB - Replica Sets with Spring Data MongoDB (Part 1)

Introduction

In this tutorial, we will study how to set up MongoDB Replica Sets and use a Spring Data MongoDB-based application to test them. We will not create a new MongoDB application here; instead, we will reuse an existing one from the Spring MVC 3.1 - Implement CRUD with Spring Data MongoDB guide. For this study, we will use production-ready cloud servers to demonstrate the results in real time.


What is a Replica Set?

Replica sets are a form of asynchronous master/slave replication, adding automatic failover and automatic recovery of member nodes.

  • A replica set consists of two or more nodes that are copies of each other (i.e., replicas).
  • The replica set automatically elects a primary (master). No member is intrinsically primary; that is, this is a shared-nothing design.
  • Drivers (and mongos) can automatically detect when a replica set primary changes and will begin sending writes to the new primary.

Replica sets have several common uses:
  • Data Redundancy
  • Automated Failover / High Availability
  • Distributing read load
  • Simplified maintenance (compared to "normal" master/slave)
  • Disaster recovery

Source: MongoDB Replica Sets

Servers

We have four cloud servers (for security purposes, I have modified their actual IP addresses). All servers are running with the following configuration:
OS: CentOS release 5.2 (Final)
RAM: 512MB
HD: 30GB

Here are our servers:
Server #   IP address      MongoDB port   Comments
Server 1   123.456.78.90   27017          primary server
Server 2   123.456.78.91   27017          slave server
Server 3   123.456.78.92   27017          slave server
Server 4   123.456.78.93   27017          arbiter server (see note below)

Although Server 1 is the primary server, this is not permanent. Whenever the primary goes down, another server is elected as primary. Server 4 is an arbiter server, which helps in electing a primary in certain cases (see the note below).

Note: Our Spring Data MongoDB-based application is hosted on Server 1.

What is an Arbiter?
Arbiters are nodes in a replica set that only participate in elections: they don't have a copy of the data and will never become the primary node (or even a readable secondary). They are mainly useful for breaking ties during elections (e.g. if a set only has two members).

When to add an arbiter
  • Two members with data: add an arbiter to get three voters. 2 out of 3 votes for a member establishes it as primary.
  • Three members with data: no need to add an arbiter. In fact, having four voters is worse, as 3 of 4 votes are then needed to elect a primary instead of 2 of 3. In theory one might add two arbiters, making the number of votes five, so 3 of 5 would suffice; however, this is uncommon and generally not recommended.
  • Four members with data: add one arbiter.

Source: MongoDB - Adding an Arbiter

Do I need an Arbiter?
You need an arbiter if you have an even number of votes. As an extension of this, you should only ever have at most one arbiter. If you aren't sure how many votes you have, it's probably the same as the number of servers in the set (including slaves, hidden members, and arbiters).

Source: Does My MongoDB Replica Set Need An Arbiter?
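For reference, adding an arbiter to an existing set is a one-liner in the mongo shell. Here's a minimal sketch (the host address follows our server table above); rs.addArb() adds the member with arbiterOnly set:

> rs.addArb("123.456.78.93:27017")  // run this on the current primary
> rs.conf()                         // verify: the new member shows "arbiterOnly" : true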

Next

In the next section, we will install MongoDB on our servers and configure Replica Sets. Click here to proceed.

MongoDB - Replica Sets with Spring Data MongoDB (Part 2)

Review

In the previous section, we have introduced and discussed Replica Sets. In this section, we will install MongoDB and configure Replica Sets.


Installation

Our first step is to download and install MongoDB on each server. Since our servers are running CentOS, we could install the CentOS MongoDB package by following the instructions from this link. However, it is easier to simply install the prebuilt binaries as stated in the MongoDB downloads section, so we will follow this advice instead.

1. Open a browser and visit the MongoDB download section at http://www.mongodb.org/downloads

2. Under the Production Release, choose the file that matches your operating system. In our case, it's Linux 32-bit.

3. Once downloaded, transfer the compressed file to all servers. (On each server, we have created a directory named /home/mongo. This is where we will extract the contents of the compressed file.)

4. Extract the contents by running the following command (you might need to modify the directory). Remember to do this on all servers.
tar -C /home/mongo/ -zxvf /home/mongo/mongodb-linux-i686-2.0.2.tgz


5. Now we need to create the database directory for our MongoDB servers. By default, MongoDB uses /data/db, so we'll create that directory by running the following command (remember to do this on all servers):
mkdir -p /data/db


6. Next, we will run the MongoDB servers as a Replica Set using the following command (again, do this step on all servers):
/home/mongo/mongodb-linux-i686-2.0.2/bin/mongod --replSet cluster1


Server 1 should output the following log:
[Server #1] /home/mongo/mongodb-linux-i686-2.0.2/bin/./mongod --replSet cluster1
Sat Feb 18 11:14:36 
Sat Feb 18 11:14:36 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Sat Feb 18 11:14:36 
Sat Feb 18 11:14:36 [initandlisten] MongoDB starting : pid=5921 port=27017 dbpath=/data/db/ 32-bit host=28125_2_85413_357231
Sat Feb 18 11:14:36 [initandlisten] 
Sat Feb 18 11:14:36 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Sat Feb 18 11:14:36 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
Sat Feb 18 11:14:36 [initandlisten] **       with --journal, the limit is lower
Sat Feb 18 11:14:36 [initandlisten] 
Sat Feb 18 11:14:36 [initandlisten] db version v2.0.2, pdfile version 4.5
Sat Feb 18 11:14:36 [initandlisten] git version: 514b122d308928517f5841888ceaa4246a7f18e3
Sat Feb 18 11:14:36 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_41
Sat Feb 18 11:14:36 [initandlisten] options: { replSet: "cluster1" }
Sat Feb 18 11:14:36 [initandlisten] waiting for connections on port 27017
Sat Feb 18 11:14:36 [websvr] admin web console waiting for connections on port 28017
Sat Feb 18 11:14:36 [initandlisten] connection accepted from 127.0.0.1:39621 #1
Sat Feb 18 11:14:36 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sat Feb 18 11:14:36 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
Sat Feb 18 11:14:46 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sat Feb 18 11:14:56 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

Server 2 should output the following log:
[Server #2] /home/mongo/mongodb-linux-i686-2.0.2/bin/./mongod --replSet cluster1
Sat Feb 18 11:15:34 
Sat Feb 18 11:15:34 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Sat Feb 18 11:15:34 
Sat Feb 18 11:15:34 [initandlisten] MongoDB starting : pid=13534 port=27017 dbpath=/data/db/ 32-bit host=28125_2_82937_349828
Sat Feb 18 11:15:34 [initandlisten] 
Sat Feb 18 11:15:34 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Sat Feb 18 11:15:34 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
Sat Feb 18 11:15:34 [initandlisten] **       with --journal, the limit is lower
Sat Feb 18 11:15:34 [initandlisten] 
Sat Feb 18 11:15:34 [initandlisten] db version v2.0.2, pdfile version 4.5
Sat Feb 18 11:15:34 [initandlisten] git version: 514b122d308928517f5841888ceaa4246a7f18e3
Sat Feb 18 11:15:34 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_41
Sat Feb 18 11:15:34 [initandlisten] options: { replSet: "cluster1" }
Sat Feb 18 11:15:34 [initandlisten] waiting for connections on port 27017
Sat Feb 18 11:15:34 [websvr] admin web console waiting for connections on port 28017
Sat Feb 18 11:15:34 [initandlisten] connection accepted from 127.0.0.1:46620 #1
Sat Feb 18 11:15:34 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sat Feb 18 11:15:34 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
Sat Feb 18 11:15:44 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

Server 3 should output the following log:
[Server #3] /home/mongo/mongodb-linux-i686-2.0.2/bin/./mongod --replSet cluster1
Sat Feb 18 11:15:35 
Sat Feb 18 11:15:35 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Sat Feb 18 11:15:35 
Sat Feb 18 11:15:35 [initandlisten] MongoDB starting : pid=8902 port=27017 dbpath=/data/db/ 32-bit host=28125_2_85413_357219
Sat Feb 18 11:15:35 [initandlisten] 
Sat Feb 18 11:15:35 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Sat Feb 18 11:15:35 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
Sat Feb 18 11:15:35 [initandlisten] **       with --journal, the limit is lower
Sat Feb 18 11:15:35 [initandlisten] 
Sat Feb 18 11:15:35 [initandlisten] db version v2.0.2, pdfile version 4.5
Sat Feb 18 11:15:35 [initandlisten] git version: 514b122d308928517f5841888ceaa4246a7f18e3
Sat Feb 18 11:15:35 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_41
Sat Feb 18 11:15:35 [initandlisten] options: { replSet: "cluster1" }
Sat Feb 18 11:15:35 [initandlisten] waiting for connections on port 27017
Sat Feb 18 11:15:35 [websvr] admin web console waiting for connections on port 28017
Sat Feb 18 11:15:35 [initandlisten] connection accepted from 127.0.0.1:52051 #1
Sat Feb 18 11:15:35 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sat Feb 18 11:15:35 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
Sat Feb 18 11:15:45 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sat Feb 18 11:15:55 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

Server 4 should output the following log:
[Server #4] /home/mongo/mongodb-linux-i686-2.0.2/bin/./mongod --replSet cluster1
Sat Feb 18 11:15:37 
Sat Feb 18 11:15:37 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Sat Feb 18 11:15:37 
Sat Feb 18 11:15:37 [initandlisten] MongoDB starting : pid=11685 port=27017 dbpath=/data/db/ 32-bit host=28125_2_84690_354582
Sat Feb 18 11:15:37 [initandlisten] 
Sat Feb 18 11:15:37 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Sat Feb 18 11:15:37 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
Sat Feb 18 11:15:37 [initandlisten] **       with --journal, the limit is lower
Sat Feb 18 11:15:37 [initandlisten] 
Sat Feb 18 11:15:37 [initandlisten] db version v2.0.2, pdfile version 4.5
Sat Feb 18 11:15:37 [initandlisten] git version: 514b122d308928517f5841888ceaa4246a7f18e3
Sat Feb 18 11:15:37 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_41
Sat Feb 18 11:15:37 [initandlisten] options: { replSet: "cluster1" }
Sat Feb 18 11:15:37 [initandlisten] waiting for connections on port 27017
Sat Feb 18 11:15:37 [websvr] admin web console waiting for connections on port 28017
Sat Feb 18 11:15:37 [initandlisten] connection accepted from 127.0.0.1:51100 #1
Sat Feb 18 11:15:37 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sat Feb 18 11:15:37 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
Sat Feb 18 11:15:47 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sat Feb 18 11:15:57 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

7. The next step is to configure our MongoDB servers to act as a cluster. Follow the steps below:

  1. Log in to Server 1
  2. Run a MongoDB client using the following command:
    /home/mongo/mongodb-linux-i686-2.0.2/bin/mongo
    
    
  3. Initiate the MongoDB cluster:
    rs.initiate({_id: 'cluster1', members: [
     {_id: 0, host: '123.456.78.90:27017'},
     {_id: 1, host: '123.456.78.91:27017'},
     {_id: 2, host: '123.456.78.92:27017'},
     {_id: 3, host: '123.456.78.93:27017', arbiterOnly: true}]
    })
    
    

    If you have accidentally initiated an incorrect configuration, you can apply a corrected one with:
    rs.reconfig({_id: 'cluster1', members: [
     {_id: 0, host: '123.456.78.90:27017'},
     {_id: 1, host: '123.456.78.91:27017'},
     {_id: 2, host: '123.456.78.92:27017'},
     {_id: 3, host: '123.456.78.93:27017', arbiterOnly: true}]
    }, true)
    
    

    Notice that the last host is an arbiter-only server: it will participate in electing a primary but will receive no data.

    You should see the following output from MongoDB client:
    [Server #1] /home/mongo/mongodb-linux-i686-2.0.2/bin/./mongo
    MongoDB shell version: 2.0.2
    connecting to: test
    > rs.initiate({_id: 'cluster1', members: [
    ... {_id: 0, host: '123.456.78.90:27017'},
    ... {_id: 1, host: '123.456.78.91:27017'},
    ... {_id: 2, host: '123.456.78.92:27017'},
    ... {_id: 3, host: '123.456.78.93:27017', arbiterOnly: true}]
    ... })
    {
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
    }
    

    Examine the output from Server 1
    Sat Feb 18 11:16:22 [conn2] replSet replSetInitiate admin command received from client
    Sat Feb 18 11:16:22 [conn2] replSet replSetInitiate config object parses ok, 4 members specified
    Sat Feb 18 11:16:22 [conn2] replSet replSetInitiate all members seem up
    Sat Feb 18 11:16:22 [conn2] ******
    Sat Feb 18 11:16:22 [conn2] creating replication oplog of size: 47MB...
    Sat Feb 18 11:16:22 [FileAllocator] allocating new datafile /data/db/local.ns, filling with zeroes...
    Sat Feb 18 11:16:22 [FileAllocator] creating directory /data/db/_tmp
    Sat Feb 18 11:16:22 [FileAllocator] done allocating datafile /data/db/local.ns, size: 16MB,  took 0.053 secs
    Sat Feb 18 11:16:22 [FileAllocator] allocating new datafile /data/db/local.0, filling with zeroes...
    Sat Feb 18 11:16:22 [FileAllocator] done allocating datafile /data/db/local.0, size: 16MB,  took 0.106 secs
    Sat Feb 18 11:16:22 [FileAllocator] allocating new datafile /data/db/local.1, filling with zeroes...
    Sat Feb 18 11:16:23 [FileAllocator] done allocating datafile /data/db/local.1, size: 32MB,  took 1.124 secs
    Sat Feb 18 11:16:23 [FileAllocator] allocating new datafile /data/db/local.2, filling with zeroes...
    Sat Feb 18 11:16:26 [FileAllocator] done allocating datafile /data/db/local.2, size: 64MB,  took 3.228 secs
    Sat Feb 18 11:16:29 [conn2] ******
    Sat Feb 18 11:16:29 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
    Sat Feb 18 11:16:29 [conn2] replSet info saving a newer config version to local.system.replset
    Sat Feb 18 11:16:29 [conn2] replSet saveConfigLocally done
    Sat Feb 18 11:16:29 [conn2] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
    Sat Feb 18 11:16:29 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "cluster1", members: [ { _id: 0.0, host: "123.456.78.90:27017" }, { _id: 1.0, host: "123.456.78.91:27017" }, { _id: 2.0, host: "123.456.78.92:27017" }, { _id: 3.0, host: "123.456.78.93:27017", arbiterOnly: true } ] } } ntoreturn:1 reslen:112 7192ms
    Sat Feb 18 11:16:39 [rsStart] replSet STARTUP2
    Sat Feb 18 11:16:39 [rsHealthPoll] replSet member 123.456.78.91:27017 is up
    Sat Feb 18 11:16:39 [rsHealthPoll] replSet member 123.456.78.92:27017 is up
    Sat Feb 18 11:16:39 [rsHealthPoll] replSet member 123.456.78.93:27017 is up
    

Next

In the next section, we will configure our Spring application to support Replica Sets. Click here to proceed.

MongoDB - Replica Sets with Spring Data MongoDB (Part 3)

Review

In the previous section, we have installed and configured our servers for MongoDB replication. In this section, we will update our Spring application to support MongoDB Replica Sets.


Spring App Configuration

We have set up a MongoDB Replica Set using four servers, examined the log output, and verified that all instances are running. Our next step is to configure our Spring Data MongoDB-based application.

Since we're reusing our application from the tutorial Spring MVC 3.1 - Implement CRUD with Spring Data MongoDB, adding MongoDB Replica Set support in Spring is trivial. All we need to do is modify the spring.properties file (located under the WEB-INF directory) with the following contents:
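The exact contents aren't reproduced here, but they boil down to the database name and the list of replica set members. A sketch (the key names are assumptions carried over from the CRUD guide):

# spring.properties (sketch; key names are assumptions)
mongo.db.name=spring_mongodb_tutorial
# comma-delimited replica set members; the arbiter holds no data and is not listed
mongo.replicaSet=123.456.78.90:27017,123.456.78.91:27017,123.456.78.92:27017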



Then we also have to modify the spring-data.xml file with the following updates:
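Here's a minimal sketch of the relevant change, assuming the mongo XML namespace from Spring Data MongoDB 1.0 (its replica-set attribute accepts a comma-delimited list of host:port pairs):

<!-- seed the driver with all replica set members instead of a single host -->
<mongo:mongo id="mongo" replica-set="${mongo.replicaSet}" />

<mongo:db-factory id="mongoDbFactory" dbname="${mongo.db.name}" mongo-ref="mongo" />

<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg ref="mongoDbFactory" />
</bean>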



That's all we need to do for our Spring application!

Next

In the next section, we will run and test our servers to verify that our MongoDB cluster replicates correctly. Click here to proceed.

MongoDB - Replica Sets with Spring Data MongoDB (Part 4)

Review

In the previous section, we have introduced MongoDB Replica Sets, installed MongoDB, configured Replica Sets, and updated our Spring application. In this section, we will test our cluster to verify if we have achieved our goal.


Testing

Start Spring App

Kindly start our Spring application from Server 1. You may need to deploy the application first (it's up to you whether to deploy it in the cloud or on a local machine).

Notice the log output from Server 1. It acknowledges two connections from our Spring application, and it starts allocating space for the spring_mongodb_tutorial database:
Sat Feb 18 11:18:50 [conn14] getmore local.oplog.rs query: { ts: { $gte: new Date(5710309296642719745) } } cursorid:7697632718916849147 reslen:20 4588ms
Sat Feb 18 11:18:51 [initandlisten] connection accepted from 123.456.78.90:58023 #27
Sat Feb 18 11:18:53 [initandlisten] connection accepted from 123.456.78.90:58027 #28
Sat Feb 18 11:18:53 [conn28] CMD: drop spring_mongodb_tutorial.role
Sat Feb 18 11:18:53 [conn28] CMD: drop spring_mongodb_tutorial.user
Sat Feb 18 11:18:53 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.ns, filling with zeroes...
Sat Feb 18 11:18:54 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.ns, size: 16MB,  took 0.081 secs
Sat Feb 18 11:18:54 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.0, filling with zeroes...
Sat Feb 18 11:18:56 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.0, size: 16MB,  took 1.534 secs
Sat Feb 18 11:18:56 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.1, filling with zeroes...
Sat Feb 18 11:18:57 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.1, size: 32MB,  took 1.27 secs
Sat Feb 18 11:18:57 [conn28] build index spring_mongodb_tutorial.user { _id: 1 }
Sat Feb 18 11:18:57 [conn28] build index done 0 records 0.074 secs
Sat Feb 18 11:18:57 [conn28] insert spring_mongodb_tutorial.user 4131ms
Sat Feb 18 11:18:57 [conn28] build index spring_mongodb_tutorial.role { _id: 1 }
Sat Feb 18 11:18:57 [conn28] build index done 0 records 0 secs
Sat Feb 18 11:18:57 [conn14] getmore local.oplog.rs query: { ts: { $gte: new Date(5710309296642719745) } } cursorid:7697632718916849147 nreturned:4 reslen:1048 6948ms
Sat Feb 18 11:18:57 [conn13] getmore local.oplog.rs query: { ts: { $gte: new Date(5710309296642719745) } } cursorid:844650061822854635 nreturned:4 reslen:1048 5872ms

Watch how Server 2 synchronizes with Server 1
Sat Feb 18 11:18:51 [initandlisten] connection accepted from 123.456.78.90:43302 #20
Sat Feb 18 11:18:57 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.ns, filling with zeroes...
Sat Feb 18 11:18:57 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.ns, size: 16MB,  took 0.042 secs
Sat Feb 18 11:18:57 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.0, filling with zeroes...
Sat Feb 18 11:18:57 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.0, size: 16MB,  took 0.041 secs
Sat Feb 18 11:18:57 [rsSync] build index spring_mongodb_tutorial.user { _id: 1 }
Sat Feb 18 11:18:57 [rsSync] build index done 0 records 0 secs
Sat Feb 18 11:18:57 [rsSync] build index spring_mongodb_tutorial.role { _id: 1 }
Sat Feb 18 11:18:57 [rsSync] build index done 0 records 0 secs
Sat Feb 18 11:18:57 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.1, filling with zeroes...
Sat Feb 18 11:18:58 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.1, size: 32MB,  took 0.078 secs

Also, observe how Server 3 synchronizes with Server 1
Sat Feb 18 11:18:51 [initandlisten] connection accepted from 123.456.78.90:52987 #20
Sat Feb 18 11:18:57 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.ns, filling with zeroes...
Sat Feb 18 11:18:57 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.ns, size: 16MB,  took 0.043 secs
Sat Feb 18 11:18:57 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.0, filling with zeroes...
Sat Feb 18 11:18:57 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.0, size: 16MB,  took 0.044 secs
Sat Feb 18 11:18:57 [rsSync] build index spring_mongodb_tutorial.user { _id: 1 }
Sat Feb 18 11:18:57 [rsSync] build index done 0 records 0 secs
Sat Feb 18 11:18:57 [rsSync] build index spring_mongodb_tutorial.role { _id: 1 }
Sat Feb 18 11:18:57 [rsSync] build index done 0 records 0 secs
Sat Feb 18 11:18:57 [FileAllocator] allocating new datafile /data/db/spring_mongodb_tutorial.1, filling with zeroes...
Sat Feb 18 11:18:58 [FileAllocator] done allocating datafile /data/db/spring_mongodb_tutorial.1, size: 32MB,  took 0.091 secs

Server 4 does not synchronize because it's an arbiter-only server! Take note of that.

Start Killing

I don't mean killing people, but rather MongoDB servers. Let's add a new record first. I have chosen mary, Mary, Jane, zzzzzzz, Regular as the properties of the new record. Feel free to vary them.


Now, let's kill Server 1
Sat Feb 18 11:21:02 got kill or ctrl c or hup signal 2 (Interrupt), will terminate after current cmd ends
Sat Feb 18 11:21:02 [conn14] getmore local.oplog.rs query: { ts: { $gte: new Date(5710309296642719745) } } cursorid:7697632718916849147 exception: interrupted at shutdown code:11600 reslen:20 2565ms
Sat Feb 18 11:21:02 [conn13] getmore local.oplog.rs query: { ts: { $gte: new Date(5710309296642719745) } } cursorid:844650061822854635 exception: interrupted at shutdown code:11600 reslen:20 2565ms
Sat Feb 18 11:21:02 [interruptThread] now exiting
Sat Feb 18 11:21:02 Sat Feb 18 11:21:02 [conn14] got request after shutdown()
Sat Feb 18 11:21:02 [conn13] got request after shutdown()
dbexit: 
Sat Feb 18 11:21:02 [interruptThread] shutdown: going to close listening sockets...
Sat Feb 18 11:21:02 [interruptThread] closing listening socket: 5
Sat Feb 18 11:21:02 [interruptThread] closing listening socket: 6
Sat Feb 18 11:21:02 [interruptThread] closing listening socket: 8
Sat Feb 18 11:21:02 [interruptThread] removing socket file: /tmp/mongodb-27017.sock
Sat Feb 18 11:21:02 [interruptThread] shutdown: going to flush diaglog...
Sat Feb 18 11:21:02 [interruptThread] shutdown: going to close sockets...
Sat Feb 18 11:21:02 [interruptThread] shutdown: waiting for fs preallocator...
Sat Feb 18 11:21:02 [interruptThread] shutdown: closing all files...
Sat Feb 18 11:21:02 [interruptThread] closeAllFiles() finished
Sat Feb 18 11:21:02 [interruptThread] shutdown: removing fs lock...
Sat Feb 18 11:21:02 dbexit: really exiting now
Sat Feb 18 11:21:02 [conn1] end connection 127.0.0.1:39621
Logstream::get called in uninitialized state
Sat Feb 18 11:21:02 [conn38] end connection 173.204.91.83:37424

Reload our Spring application. Notice that we still have the same data; we lost nothing. The replicas are working!

Kill More

Let's kill more servers. Add a new record again. I have chosen anna, Anna, Williams, zzzzzzz, Admin as the properties of the new record.


Kill Server 2. This means only Server 3 and Server 4 are running: one slave and one arbiter, respectively. Refresh our Spring application. However, the application fails to load the data. Why?

The log explains why:
Sat Feb 18 11:23:46 [rsSync] replSet syncThread: 10278 dbclient error communicating with server: 173.204.91.84:27017
Sat Feb 18 11:23:46 [conn42] end connection 123.456.78.92:53291
Sat Feb 18 11:23:47 [rsHealthPoll] couldn't connect to 123.456.78.90:27017: couldn't connect to server 173.204.91.90:27017
Sat Feb 18 11:23:48 [rsHealthPoll] DBClientCursor::init call() failed
Sat Feb 18 11:23:48 [rsHealthPoll] replSet info 123.456.78.92:27017 is down (or slow to respond): DBClientBase::findN: transport error: 123.456.78.92:27017 query: { replSetHeartbeat: "cluster1", v: 1, pv: 1, checkEmpty: false, from: "123.456.78.93:27017" }
Sat Feb 18 11:23:48 [rsHealthPoll] replSet member 123.456.78.92:27017 is now in state DOWN
Sat Feb 18 11:23:48 [rsMgr] replSet can't see a majority, will not try to elect self

It's because our servers can't elect a primary. Electing a primary requires a majority of votes: with only one data node and one arbiter alive out of four members, we have 2 of 4 votes, which is not a majority. The arbiter is designed to break ties between equal votes, not to substitute for data-bearing members. Now let's restore the two dead servers: run mongod again on Server 1 and Server 2.
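You can confirm the election deadlock from the mongo shell on any surviving member; a quick sketch using the standard rs.status() helper:

> rs.status()  // on Server 3: surviving members show as SECONDARY/ARBITER,
               // dead members as unreachable, and no member is PRIMARY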

Refresh our Spring application again. Notice we have all four records, including Mary and Anna. Remember, when we added Anna, Server 1 was dead!

Let's kill Server 2 and see if Anna is still displayed. Yes, it's displayed!

Let's kill Server 3 and see if Anna is still displayed. Then run Server 3 again. Yes, it's still displayed!

Conclusion

That's it! We've successfully implemented a MongoDB Replica Set. We have shown step-by-step how to configure our servers and our Spring application. We've also made some quick tests to verify the replication feature of our servers.

Thursday, February 16, 2012

Spring Batch Tutorial (Part 1)

In this tutorial, we will create a simple Spring Batch application to demonstrate how to process a series of jobs whose primary purpose is to import lists of comma-delimited and fixed-length records. In addition, we will add a web interface using Spring MVC to show how to trigger jobs manually and to let us visually inspect the imported records. In the data layer, we will use JPA, Hibernate, and MySQL.


Dependencies

  • Spring core 3.1.0.RELEASE
  • Spring Batch 2.1.8.RELEASE
  • See pom.xml for details

Github

To access the source code, please visit the project's Github repository (click here)

Functional Specs

Before we start, let's define the application's specs as follows:
  • Import a list of comma-delimited records
  • Import a list of fixed-length records
  • Import a list of mixed-type records
  • Jobs must be triggered using a web interface
  • Display the imported records in a web interface
  • Each record represents a user and its associated access levels

Here's our Use Case diagram:
[User]-(Import job1)
[User]-(Import job2) 
[User]-(Import job3) 
[User]-(View records)

The CSV Files

To visualize what we want to do, let's examine first the files that we plan to import:

User Files

user1.csv
This file contains comma-separated value (CSV) records representing User records. Each line has the following tokens: username, first name, last name, password.
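For illustration, here are a couple of hypothetical lines in this format (not the actual file contents):

john,John,Smith,secret123
jane,Jane,Adams,password1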

user2.csv
This file contains fixed-length records representing User records. Each line has the following tokens: username (positions 1-5), first name (6-9), last name (10-16), password (17-25).
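Again, a hypothetical line, padded so each token exactly fills its positions: "johns" occupies 1-5, "John" 6-9, "Smith  " 10-16 (padded), and "secret123" 17-25:

johnsJohnSmith  secret123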

user3.csv
This file contains comma-separated value and fixed-length records representing User records. Each line has the following tokens: username, first name, last name, password.

This file contains two types of delimited records:
  • DELIMITED-RECORD-A: uses the standard comma delimiter
  • DELIMITED-RECORD-B: uses | delimiter

It also contains two types of fixed-length records:
  • FIXED-RECORD-A: username(16-20), first name(21-25), last name(26-31), password(32-40)
  • FIXED-RECORD-B: username(16-21), first name(22-27), last name(28-33), password(35-42)

Role Files

role1.csv
This file contains comma-separated value (CSV) records representing Role records. Each line has the following tokens: username and access level.

role2.csv
This file contains fixed-length records representing Role records. Each line has the following tokens: username and access level.

role3.csv
This file contains comma-separated value (CSV) records representing Role records. Each line has the following tokens: username and access level.

By now you should have a basic idea of the file formats that we will be importing. In short, all we want to do is import these files and display them on a web interface.

Diagrams

Here's the Class diagram:
# Cool UML Diagram
[User|id;firstName;lastName;username;password;role{bg:orange}]1--1> [Role|id;role{bg:green}]

Here's the Activity Diagram:

(start)->import->success->(Show Success Alert)->|a|->(end),
fail->(Show Fail Alert)->|a|,
view->(Show Records)->|a|->(end)

Screenshots

Let's preview how the application will look after it's finished. This is also a good way to clarify the application's specs further.

Entry page
The entry page is the primary page that users will see. It contains a table showing user records and four buttons for adding, editing, deleting, and reloading data. All interactions will happen on this page.

Entry page






Next

In the next section, we will write the Java classes. Click here to proceed.

Spring Batch Tutorial (Part 4)

Review

We have just completed our application! In the previous sections, we have discussed how to perform batch processing with Spring Batch. We have also created a Spring MVC application to act as a web interface. In this section, we will build and run the application using Maven, and demonstrate how to import the project in Eclipse.


Running the Application

Access the source code

To download the source code, please visit the project's Github repository (click here)

Preparing the data source

  1. Run MySQL (install one if you don't have one yet)
  2. Create a new database:
    spring_batch_tutorial
  3. Import the following file which is included in the source code under the src/main/resources folder:
    schema-mysql.sql
    This script contains Spring Batch infrastructure tables which can be found in the Spring Batch core library. I have copied it here separately for easy access.

Building with Maven

  1. Ensure Maven is installed
  2. Open a command window (Windows) or a terminal (Linux/Mac)
  3. Run the following command:
    mvn tomcat:run
  4. You should see the following output:
    [INFO] Scanning for projects...
    [INFO] Searching repository for plugin with prefix: 'tomcat'.
    [INFO] artifact org.codehaus.mojo:tomcat-maven-plugin: checking for updates from central
    [INFO] artifact org.codehaus.mojo:tomcat-maven-plugin: checking for updates from snapshots
    [INFO] ------------------------------------------
    [INFO] Building spring-batch-tutorial Maven Webapp
    [INFO]    task-segment: [tomcat:run]
    [INFO] ------------------------------------------
    [INFO] Preparing tomcat:run
    [INFO] [apt:process {execution: default}]
    [INFO] [resources:resources {execution: default-resources}]
    [INFO] [tomcat:run {execution: default-cli}]
    [INFO] Running war on http://localhost:8080/spring-batch-tutorial
    Feb 13, 2012 9:36:54 PM org.apache.catalina.startup.Embedded start
    INFO: Starting tomcat server
    Feb 13, 2012 9:36:55 PM org.apache.catalina.core.StandardEngine start
    INFO: Starting Servlet Engine: Apache Tomcat/6.0.29
    Feb 13, 2012 9:36:55 PM org.apache.catalina.core.ApplicationContext log
    INFO: Initializing Spring root WebApplicationContext
    Feb 13, 2012 9:37:01 PM org.apache.coyote.http11.Http11Protocol init
    INFO: Initializing Coyote HTTP/1.1 on http-8080
    Feb 13, 2012 9:37:01 PM org.apache.coyote.http11.Http11Protocol start
    INFO: Starting Coyote HTTP/1.1 on http-8080
    
  5. Note: If the project will not build due to missing repositories, please enable the repositories section in the pom.xml!

Access the Entry page

  1. Follow the steps in Building with Maven
  2. Open a browser
  3. Enter the following URL (8080 is the default port for Tomcat):
    http://localhost:8080/spring-batch-tutorial/

Import the project in Eclipse

  1. Ensure Maven is installed
  2. Open a command window (Windows) or a terminal (Linux/Mac)
  3. Run the following command:
    mvn eclipse:eclipse -Dwtpversion=2.0
  4. You should see the following output:
    [INFO] Scanning for projects...
    [INFO] Searching repository for plugin with prefix: 'eclipse'.
    [INFO] org.apache.maven.plugins: checking for updates from central
    [INFO] org.apache.maven.plugins: checking for updates from snapshots
    [INFO] org.codehaus.mojo: checking for updates from central
    [INFO] org.codehaus.mojo: checking for updates from snapshots
    [INFO] artifact org.apache.maven.plugins:maven-eclipse-plugin: checking for updates from central
    [INFO] artifact org.apache.maven.plugins:maven-eclipse-plugin: checking for updates from snapshots
    [INFO] -----------------------------------------
    [INFO] Building spring-batch-tutorial Maven Webapp
    [INFO]    task-segment: [eclipse:eclipse]
    [INFO] -----------------------------------------
    [INFO] Preparing eclipse:eclipse
    [INFO] No goals needed for project - skipping
    [INFO] [eclipse:eclipse {execution: default-cli}]
    [INFO] Adding support for WTP version 2.0.
    [INFO] -----------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] -----------------------------------------
    
    This command will add the following files to your project:
    .classpath
    .project
    .settings
    target
    You may have to enable "show hidden files" in your file explorer to view them.
  5. Open Eclipse and import the project

Conclusion

That's it! We've successfully completed our Spring Batch application and learned how to process jobs in batches. We've also added Spring MVC support to allow jobs to be controlled online.

I hope you've enjoyed this tutorial. Don't forget to check my other tutorials at the Tutorials section.

Revision History


Revision   Date          Description
1          Feb 16, 2012  Uploaded tutorial and Github repository


Spring Batch Tutorial (Part 2)

Review

In the previous section, we have laid down the functional specs of the application and examined the raw files that are to be imported. In this section, we will discuss the project's structure and write the Java classes.


Project Structure

Our application is a Maven project and therefore follows the standard Maven structure. As we create the classes, we organize them into logical layers: domain, repository, service, and controller.

Here's a preview of our project's structure:

The Layers

Disclaimer

I will only discuss the Spring Batch-related classes here. I've purposely left out the unrelated classes because I have already described them in detail in my previous tutorials (see the Tutorials section for the relevant guides).

Controller Layer

The BatchJobController handles batch requests. There are three job mappings:
  • /job1
  • /job2
  • /job3
Every time a job is run, a new JobParameters object is initialized as the job's parameters. We use the current date as the distinguishing parameter, which means every job trigger is considered a new job instance.

What is a JobParameter?

"how is one JobInstance distinguished from another?" The answer is: JobParameters. JobParameters is a set of parameters used to start a batch job. They can be used for identification or even as reference data during the run:

Source: Spring Batch - Chapter 3. The Domain Language of Batch

Notice we have injected a JobLauncher. Its primary purpose is to start our jobs. Each job will run asynchronously (this is declared in the XML configuration). A condensed sketch of the controller follows the JobLauncher note below.

What is a JobLauncher?

JobLauncher represents a simple interface for launching a Job with a given set of JobParameters:

Source: Spring Batch - Chapter 3. The Domain Language of Batch
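Here's the condensed sketch of the controller (the mapping names match the list above; the exact class shape is an assumption, not a copy from the repository):

import java.util.Date;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class BatchJobController {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    @Qualifier("job1")
    private Job job1;

    @RequestMapping("/job1")
    @ResponseBody
    public String runJob1() throws Exception {
        // a fresh date parameter makes every trigger a distinct JobInstance
        JobParameters params = new JobParametersBuilder()
                .addDate("date", new Date())
                .toJobParameters();
        jobLauncher.run(job1, params); // returns immediately when an async task executor is configured
        return "job1 triggered";
    }
}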



Batch Layer

This layer contains various helper classes to aid us in processing batch files.
  • UserFieldSetMapper - maps a FieldSet result to a User object
  • RoleFieldSetMapper - maps a FieldSet result to a Role object; to assign the user, an extra JDBC query is performed
  • MultiUserFieldSetMapper - maps a FieldSet result to a User object; it removes the semicolon from the first token
  • UserItemWriter - writes a User object to the database
  • RoleItemWriter - writes a Role object to the database; to assign the user, an extra JDBC query is performed
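To illustrate the mapper idea, here's a minimal sketch of UserFieldSetMapper (assuming a User POJO with matching setters; the actual class in the repository may differ slightly):

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

public class UserFieldSetMapper implements FieldSetMapper<User> {

    @Override
    public User mapFieldSet(FieldSet fieldSet) throws BindException {
        // copy each named token (declared on the tokenizer) into the domain object
        User user = new User();
        user.setUsername(fieldSet.readString("username"));
        user.setFirstName(fieldSet.readString("firstName"));
        user.setLastName(fieldSet.readString("lastName"));
        user.setPassword(fieldSet.readString("password"));
        return user;
    }
}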







Next

In the next section, we will focus on the configuration files. Click here to proceed.

Spring Batch Tutorial (Part 3)

Review

In the previous section, we have written and discussed the Spring Batch-related classes. In this section, we will write and declare the Spring Batch-related configuration files.


Configuration

Properties File

The spring.properties file contains the database name and the CSV files that we will import. A job.commit.interval property is also specified, which denotes how many records to commit per chunk. A sketch of the file follows.
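Here's the sketch (aside from job.commit.interval, which the text names, the keys are assumptions):

# spring.properties (sketch)
jdbc.db=spring_batch_tutorial
# input files to import
input.user1=classpath:csv/user1.csv
input.role1=classpath:csv/role1.csv
# number of records to commit per chunk
job.commit.interval=10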



Spring Batch

To configure a Spring Batch job, we have to declare the infrastructure-related beans first. Here are the beans that need to be declared (a sketch follows the list):

  • Declare a job launcher
  • Declare a task executor to run jobs asynchronously
  • Declare a job repository for persisting job status
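
A sketch of these declarations, assuming the Spring Batch XML namespace and an already-configured dataSource and transactionManager (bean names are conventional, not copied from the repository):

<!-- persists job and step execution status in the database -->
<batch:job-repository id="jobRepository"
    data-source="dataSource" transaction-manager="transactionManager" />

<!-- launches jobs; the task executor makes each run asynchronous -->
<bean id="jobLauncher"
      class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
    <property name="jobRepository" ref="jobRepository" />
    <property name="taskExecutor" ref="taskExecutor" />
</bean>

<bean id="taskExecutor" class="org.springframework.core.task.SimpleAsyncTaskExecutor" />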

What is Spring Batch?

Spring Batch is a lightweight, comprehensive batch framework designed to enable the development of robust batch applications vital for the daily operations of enterprise systems. Spring Batch builds upon the productivity, POJO-based development approach, and general ease of use capabilities people have come to know from the Spring Framework, while making it easy for developers to access and leverage more advanced enterprise services when necessary. Spring Batch is not a scheduling framework.

Source: Spring Batch Reference Documentation

What is a JobRepository?

JobRepository is the persistence mechanism for all of the Stereotypes mentioned above. It provides CRUD operations for JobLauncher, Job, and Step implementations.

Source: Spring Batch - Chapter 3. The Domain Language of Batch

What is a JobLauncher?

JobLauncher represents a simple interface for launching a Job with a given set of JobParameters

Source: Spring Batch - Chapter 3. The Domain Language of Batch

Here's our main configuration file:



Notice we've also declared the following beans:
  • Declare a JDBC template
  • User and Role ItemWriters

Job Anatomy

Before we start writing our jobs, let's examine first what constitutes a job.

What is a Job?

A Job is an entity that encapsulates an entire batch process. As is common with other Spring projects, a Job will be wired together via an XML configuration file

Source: Spring Batch: The Domain Language of Batch: Job

Each job contains a series of steps. Each step includes a reference to an ItemReader and an ItemWriter. The reader's purpose is to read records for further processing, while the writer's purpose is to write the records (possibly in a different format).

What is a Step?

A Step is a domain object that encapsulates an independent, sequential phase of a batch job. Therefore, every Job is composed entirely of one or more steps. A Step contains all of the information necessary to define and control the actual batch processing.

Source: Spring Batch: The Domain Language of Batch: Step

Each reader typically contains the following properties (a sketch follows the list):
  • resource - the location of the file to be imported
  • lineMapper - the mapper to be used for mapping each line of record
  • lineTokenizer - the type of tokenizer
  • fieldSetMapper - the mapper to be used for mapping each resulting token
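
As a sketch, here's how a reader for user1.csv might be wired together (bean ids and the mapper's package are assumptions):

<bean id="userReader1" class="org.springframework.batch.item.file.FlatFileItemReader">
    <!-- the file to import -->
    <property name="resource" value="classpath:csv/user1.csv" />
    <property name="lineMapper">
        <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
            <!-- how each line is split into tokens -->
            <property name="lineTokenizer">
                <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
                    <property name="names" value="username,firstName,lastName,password" />
                </bean>
            </property>
            <!-- how each set of tokens becomes a domain object -->
            <property name="fieldSetMapper">
                <bean class="tutorial.batch.UserFieldSetMapper" />
            </property>
        </bean>
    </property>
</bean>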

What is an ItemReader?

Although a simple concept, an ItemReader is the means for providing data from many different types of input. The most general examples include: Flat File, XML, Database

Source: Spring Batch: ItemReaders and ItemWriters

What is an ItemWriter?

ItemWriter is similar in functionality to an ItemReader, but with inverse operations. Resources still need to be located, opened and closed but they differ in that an ItemWriter writes out, rather than reading in.

Source: Spring Batch: ItemReaders and ItemWriters

The Jobs

As discussed in part 1, we have three jobs.

Job 1: Comma-delimited records

This job contains two steps:
  1. userLoad1 - reads user1.csv and writes the records to the database
  2. roleLoad1 - reads role1.csv and writes the records to the database
Notice that userLoad1 uses DelimitedLineTokenizer, and the properties to be matched are username, firstName, lastName, and password, whereas roleLoad1 uses the same tokenizer but matches only username and role.

Both steps are using their own respective FieldSetMapper: UserFieldSetMapper and RoleFieldSetMapper.

What is DelimitedLineTokenizer?

Used for files where fields in a record are separated by a delimiter. The most common delimiter is a comma, but pipes or semicolons are often used as well.

Source: Spring Batch: ItemReaders and ItemWriters


Job 2: Fixed-length records

This job contains two steps:
  1. userLoad2 - reads user2.csv and writes the records to the database
  2. roleLoad2 - reads role2.csv and writes the records to the database

Notice that userLoad2 uses FixedLengthTokenizer, and the properties to be matched are again username, firstName, lastName, and password. However, instead of matching tokens by a delimiter, each token is matched by a specified range of positions: 1-5, 6-9, 10-16, 17-25, where 1-5 represents the username and so forth. The same idea applies to roleLoad2. A sketch of the tokenizer follows the FixedLengthTokenizer note below.

What is FixedLengthTokenizer?

Used for files where fields in a record are each a 'fixed width'. The width of each field must be defined for each record type.

Source: Spring Batch: ItemReaders and ItemWriters
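
Here's the sketch of the tokenizer for userLoad2, using the ranges above (only the tokenizer bean is shown):

<bean class="org.springframework.batch.item.file.transform.FixedLengthTokenizer">
    <property name="names" value="username,firstName,lastName,password" />
    <!-- column ranges: 1-5 username, 6-9 firstName, 10-16 lastName, 17-25 password -->
    <property name="columns" value="1-5,6-9,10-16,17-25" />
</bean>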


Job 3: Mixed records

This job contains two steps:
  1. userLoad3 - reads user3.csv and writes the records to the database
  2. roleLoad3 - reads role3.csv and writes the records to the database

Job 3 is a mix of Job 1 and Job 2. To handle both record formats, we set our lineMapper to PatternMatchingCompositeLineMapper.

What is PatternMatchingCompositeLineMapper?

Determines which among a list of LineTokenizers should be used on a particular line by checking against a pattern.

Source: Spring Batch: ItemReaders and ItemWriters

For the FieldSetMapper, we are using a custom implementation MultiUserFieldSetMapper which removes a semicolon from the String. See Part 2 for the class declaration.
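
Here's a sketch of how the composite line mapper routes each record by its prefix (the patterns follow the record types from Part 1; the referenced tokenizer bean ids are assumptions):

<bean id="mixedLineMapper"
      class="org.springframework.batch.item.file.mapping.PatternMatchingCompositeLineMapper">
    <property name="tokenizers">
        <map>
            <entry key="DELIMITED-RECORD-A*" value-ref="commaTokenizer" />
            <entry key="DELIMITED-RECORD-B*" value-ref="pipeTokenizer" />
            <entry key="FIXED-RECORD-A*" value-ref="fixedTokenizerA" />
            <entry key="FIXED-RECORD-B*" value-ref="fixedTokenizerB" />
        </map>
    </property>
    <property name="fieldSetMappers">
        <map>
            <!-- every record type maps through the same custom mapper -->
            <entry key="*" value-ref="multiUserFieldSetMapper" />
        </map>
    </property>
</bean>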



Next

In the next section, we will run the application using Maven. Click here to proceed.

Wednesday, February 8, 2012

Spring MVC 3.1 - Implement CRUD with Spring Data Redis (Part 1)

In this tutorial, we will create a simple CRUD application using Spring 3.1 and Redis. We will base this tutorial on a previous guide for MongoDB: Spring MVC 3.1 - Implement CRUD with Spring Data MongoDB. This means we will reuse our existing design and implement only the data layer, using Redis as our data store.


Dependencies

  • Spring core 3.1.0.RELEASE
  • Spring Data Redis 1.0.0.RC1
  • Redis (server) 2.4.7
  • See pom.xml for details

Github

To access the source code, please visit the project's Github repository (click here)

Functional Specs

Before we start, let's define our application's specs as follows:
  • A CRUD page for managing users
  • Use AJAX to avoid page refresh
  • Users have roles: either admin or regular (default)
  • Everyone can create new users and edit existing ones
  • When editing, users can only edit first name, last name, and role fields
  • A username is assumed to be unique

Here's our Use Case diagram:
[User]-(View)
[User]-(Add) 
[User]-(Edit) 
[User]-(Delete) 

Database

If you're new to Redis and coming from a SQL background, please take some time to read the official Redis documentation. I would like to put emphasis on studying the Redis data types. See Data types and A fifteen minute introduction to Redis data types.

In its purest form, Redis is a key-value store that can support various data structures: Strings, Sets, Lists, Hashes, and Sorted Sets. Among these structures, we will pay extra attention to Hashes because we can use them to represent Java objects.

Hashes

Redis Hashes are maps between string fields and string values, so they are the perfect data type to represent objects (eg: A User with a number of fields like name, surname, age, and so forth):

Source: Data types

From Java to Redis

We have two Java classes representing our domain: User and Role. Here is the Class diagram:

# Cool UML Diagram
[User|id;firstName;lastName;username;password;role{bg:orange}]1--1> [Role|id;role{bg:green}]

However, Redis is a key-value store, and we can already see the model mismatch. How exactly do we map a Java class to a Redis structure? One way of dealing with this is to use the Hash structure.

Assume we have the following User object with the following properties:
User
----
id = 1
username = john
password = 12345678
role = 1

To map this object to Redis as a Hash structure via the command-line tool, we use the HMSET command:
redis> HMSET user:1 id 1 username john password 12345678 role 1
"OK"

redis> HGETALL user:1
{"id":"1","username":"john","password":"12345678","role":"1"}

In this example, user:1 becomes the key; it plays the role of both the table name and the row id, if we think of this in terms of a relational database. Notice we don't have to map a Role object because we've already set the role value along with the user key.

Try Redis

If you need to experiment with an actual Redis instance online, visit Try Redis.


Screenshots

Let's preview how the application will look after it's finished. This is also a good way to clarify the application's specs further. Note: These are the same screenshots you will see in the Spring MVC 3.1 - Implement CRUD with Spring Data MongoDB guide.

The Activity diagram:

http://yuml.me/diagram/activity/(start)-%3E%3Cd1%3Eview-%3E(Show%20Records)-%3E%7Ca%7C-%3E(end),%20%3Cd1%3Eadd-%3E(Show%20Form)-%3E%7Ca%7C,%20%3Cd1%3Eedit-%3E%3Cd2%3Ehas%20selected-%3E(Show%20Form)-%3E%7Ca%7C,%20%3Cd2%3Eno%20record%20selected-%3E(Popup%20Alert)-%3E%7Ca%7C,%20%3Cd1%3Edelete-%3E%3Cd3%3Ehas%20selected-%3E(Delete%20Record)-%3E%7Ca%7C,%20%3Cd3%3Eno%20record%20selected-%3E(Popup%20Alert)-%3E%7Ca%7C.

Entry page
The entry page is the primary page that users will see. It contains a table showing user records and four buttons for adding, editing, deleting, and reloading data. All interactions will happen on this page.

Entry page

Edit existing record
When a user clicks the Edit button, an Edit Record form shall appear after the table.

Edit record form

When a user submits the form, a success or failure alert should appear.

Success alert

When the operation is successful, the updated record should appear in the table.

Edited record appears on the table

Create new record
When a user clicks the New button, a Create New Record form shall appear after the table.

Create new record form

When a user submits the form, a success or failure alert should appear.

Success alert

When the operation is successful, the new record should appear on the table.

New record shows on the form

Delete record
When a user clicks the Delete button, a success or failure alert should appear.

Success alert

Reload record
When a user clicks the Reload button, the data on the table should be reloaded.

Errors
When a user clicks the Edit or Delete button without selecting a record first, a "Select a record first!" alert should appear.

Error alert

Next

In the next section, we will study how to set up a Redis server both on Windows and Ubuntu. Click here to proceed.

Spring MVC 3.1 - Implement CRUD with Spring Data Redis (Part 2)

Review

In the previous section, we laid down the functional specs of the application and studied how to map a Java object to a Redis data structure. In this section, we will study how to set up a Redis server both on Windows and Ubuntu.


Redis Setup

We will first demonstrate how to set up Redis on Windows 7, then on Ubuntu 10.04.

Windows 7

To set up a Redis server on Windows, follow these steps:

1. Open a browser and visit the Redis download section at http://redis.io/download

2. Choose the Win32/64 download. Notice its status is Unofficial because the Redis project does not directly support win32/win64.


3. Although Windows is not officially supported, there's a port available by Dušan Majkić. Under the Win32/64 section, click on the link "A Native win32/win64 port created by Dušan Majkić". It will bring you to a Github page.

4. Click on the Downloads section (upper-right) and you should see the following downloads.


5. Download the latest one (currently, it's 2.4.5).

6. Once the download is finished, extract the contents. Open the new folder and browse under the 32bit folder (choose 64bit if you have Windows 64bit version).




7. To run a Redis server, double-click the redis-server.exe


You should see the following console stating that Redis is now running:


To run a client interface, double-click the redis-cli.exe


And you should see the following console, waiting for your command:


Ubuntu 10.04

To set up a Redis server in Ubuntu, you will need to build it from source. There are two ways:
  • Manual download
  • Terminal-based

Manual download

1. Open a browser and visit the Redis download section at http://redis.io/download

2. Download the latest stable version (currently 2.4.7).

3. Once the download is finished, extract the contents.

4. Now, let's build the source. Open a terminal, change into the extracted directory (replace REDIS-DOWNLOAD-PATH with the actual path), and run make:
cd /REDIS-DOWNLOAD-PATH
make


After building Redis, test it from the same directory using the following command:
make test


5. The binaries that are now compiled are available in the src directory. Run Redis with:
/REDIS-DOWNLOAD-PATH/src/redis-server


Terminal-based

1. Download, extract and compile Redis with:
$ wget http://redis.googlecode.com/files/redis-2.4.7.tar.gz
$ tar xzf redis-2.4.7.tar.gz
$ cd redis-2.4.7
$ make


2. The binaries that are now compiled are available in the src directory. Run Redis with:
$ src/redis-server


Note: These are the same steps you will see under the Download section at http://redis.io/download

Next

In the next section, we will discuss the project's structure and start writing the Java classes. Click here to proceed.

Spring MVC 3.1 - Implement CRUD with Spring Data Redis (Part 3)

Review

In the previous section, we learned how to set up a Redis server on Windows and Ubuntu. In this section, we will discuss the project's structure and write the Java classes.


Project Structure

Our application is a Maven project and therefore follows the standard Maven structure. As we create the classes, we organize them into logical layers: domain, repository, service, and controller.

Here's a preview of our project's structure:

Note: You might have noticed an error icon on the jQuery file. This is an Eclipse validation issue; you can safely ignore it.

The Layers

Domain Layer

This layer contains two POJOs, User and Role.




Controller Layer

This layer contains two controllers, MediatorController and UserController
  • MediatorController is responsible for redirecting requests to appropriate pages. This isn't really required but it's here for organizational purposes.
  • UserController is responsible for handling user-related requests such as adding and deleting of records



Service Layer

This layer contains two services, UserService and InitRedisService
  • UserService is our CRUD service for managing users
  • InitRedisService is used for initializing our database with sample data using the RedisTemplate



As mentioned in Part 1, we shall use Hashes to store Java objects in Redis. With the help of Spring Data for Redis, in particular the RedisTemplate, we're able to perform various Redis operations.

To access Hash operations using RedisTemplate, we use the following syntax:
template.opsForHash()
template.opsForHash().put
template.opsForHash().delete


To keep track of our users, we will use the Redis Set data structure:
template.opsForSet()
template.opsForSet().add
template.opsForSet().remove
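
Putting the two structures together, here's a minimal sketch of how saving and deleting a user might look with RedisTemplate (the class shape and key names are assumptions, not the tutorial's actual UserService):

import org.springframework.data.redis.core.RedisTemplate;

public class RedisUserStore {

    private final RedisTemplate<String, String> template;

    public RedisUserStore(RedisTemplate<String, String> template) {
        this.template = template;
    }

    public void save(String id, String username, String password, String role) {
        String key = "user:" + id;
        // store each field in a Redis Hash (the HMSET idea from Part 1)
        template.opsForHash().put(key, "username", username);
        template.opsForHash().put(key, "password", password);
        template.opsForHash().put(key, "role", role);
        // index the key in a Set so we can enumerate users later
        template.opsForSet().add("users", key);
    }

    public void delete(String id) {
        String key = "user:" + id;
        template.delete(key);                      // drop the hash
        template.opsForSet().remove("users", key); // drop the index entry
    }
}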


What is Spring Data Redis?

Spring Data for Redis is part of the umbrella Spring Data project which provides support for writing Redis applications. The Spring framework has always promoted a POJO programming model with a strong emphasis on portability and productivity. These values are carried over into Spring Data for Redis.

Source: http://www.springsource.org/spring-data/redis

Utility classes

The TraceInterceptor class is an AOP-based utility class that helps us debug our application. It is a subclass of CustomizableTraceInterceptor (see Spring Data JPA FAQ).



Next

In the next section, we will focus on the configuration files for enabling Spring MVC. Click here to proceed.