M101J MongoDB for Java Week 1

This post covers the first week of 10gen's M101J: MongoDB for Java.

[Installing and Using Maven](https://www.youtube.com/watch?feature=player_embedded&v=72vejAmaypM)

Download the Maven Zip and add its bin directory to your path.

[The MongoDB Java Driver](https://www.youtube.com/watch?feature=player_embedded&v=FtyaK3pMHxw)

Add the following to the pom.xml file to install the MongoDB Java Driver:

    <dependency>  
        <groupId>org.mongodb</groupId>  
        <artifactId>mongo-java-driver</artifactId>  
        <version>2.11.2</version>  
    </dependency>

But check the Maven Central repository to see the latest version of the MongoDB Java Driver.
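If you are starting from an empty project, the snippet goes inside the dependencies element of the pom. A minimal pom.xml skeleton might look like this (the groupId and artifactId here are placeholders of my own, not from the course):

    <project xmlns="http://maven.apache.org/POM/4.0.0">
        <modelVersion>4.0.0</modelVersion>
        <groupId>com.example</groupId>
        <artifactId>m101j-week1</artifactId>
        <version>1.0-SNAPSHOT</version>
        <dependencies>
            <!-- the mongo-java-driver dependency from above goes here -->
        </dependencies>
    </project>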

Here is the general pattern for using the MongoDB Java Driver:

    import com.mongodb.*;
    import java.net.UnknownHostException;

    public class HelloWorldMongoDBStyle {
        public static void main(String[] args) throws UnknownHostException {
            MongoClient client = new MongoClient(new ServerAddress("localhost", 27017));
            DB database = client.getDB("course");
            DBCollection collection = database.getCollection("hello");
            DBObject document = collection.findOne();
            System.out.println(document);

            DBCursor documents = collection.find();
            // Iterate through the cursor
            for (DBObject d : documents) {
                System.out.println(d);
            }
            // Or, equivalently (use one style or the other; the for-each
            // loop above already exhausts this cursor):
            while (documents.hasNext()) {
                System.out.println(documents.next());
            }

            // Close the cursor
            documents.close();
        }
    }
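The driver can write as well as read. Here is a sketch of my own (field names are illustrative, not from the lecture) that inserts a document and queries it back:

    import com.mongodb.*;
    import java.net.UnknownHostException;

    public class InsertAndFindExample {
        public static void main(String[] args) throws UnknownHostException {
            MongoClient client = new MongoClient(new ServerAddress("localhost", 27017));
            DBCollection collection = client.getDB("course").getCollection("hello");

            // Insert a document, then query it back by a field value
            collection.insert(new BasicDBObject("name", "MongoDB").append("type", "database"));

            DBCursor results = collection.find(new BasicDBObject("name", "MongoDB"));
            try {
                while (results.hasNext()) {
                    System.out.println(results.next());
                }
            } finally {
                results.close();  // always release the cursor
            }
            client.close();
        }
    }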

[Intro to the Spark Web Application Framework](https://www.youtube.com/watch?feature=player_embedded&v=UH-VD_ypal8)

While trying to get Spark to work as per the instructions in the 10gen lecture, I got the following error when running the HelloWorldSparkStyle file:

    Exception in thread "main" java.lang.UnsupportedClassVersionError: spark/Route : Unsupported major.minor version 51.0

(Major.minor version 51.0 means the spark/Route class was compiled for Java 7, so my runtime JVM must have been older.)

On the discussion forum, a person with the handle "vimal_krishna" found the solution.

Replace the following dependency in the pom.xml file:

    <dependency>  
        <groupId>com.sparkjava</groupId>  
        <artifactId>spark-core</artifactId>  
        <version>1.0</version>  
    </dependency>

With:

    <dependency>  
        <groupId>spark</groupId>  
        <artifactId>spark</artifactId>  
        <version>0.9.9.4-SNAPSHOT</version>  
    </dependency>

And it worked.

[Intro to the Freemarker Templating Engine](https://www.youtube.com/watch?feature=player_embedded&v=_8-3K2Ds-Ok)

Go to Freemarker's download section and look for the latest Maven dependency snippet to add to your pom.xml:

    <dependency>  
        <groupId>org.freemarker</groupId>  
        <artifactId>freemarker</artifactId>  
        <version>2.3.20</version>  
    </dependency>

Here is a class to get started with:

    import freemarker.template.*;
    import java.io.*;
    import java.util.*;

    public class HelloWorldFreemarkerStyle {
        public static void main(String[] args) {
            Configuration configuration = new Configuration();
            configuration.setClassForTemplateLoading(HelloWorldFreemarkerStyle.class, "/");

            try {
                Template helloTemplate = configuration.getTemplate("hello.ftl");
                StringWriter writer = new StringWriter();
                Map<String, Object> helloMap = new HashMap<String, Object>();
                helloMap.put("name", "FreeMarker");
                helloTemplate.process(helloMap, writer);
                System.out.println(writer);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
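For this class to find anything, a hello.ftl file has to sit at the classpath root (src/main/resources in a standard Maven layout). I don't have the exact course template in front of me, but something as small as this works:

    Hello, ${name}!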

[Spark and Freemarker Together](https://www.youtube.com/watch?feature=player_embedded&v=7fdtf9aLc2w)

Here is the code:

    import freemarker.template.*;
    import spark.*;
    import java.io.*;
    import java.util.*;

    public class HelloWorldSparkFreemarkerStyle {
        public static void main(String[] args) {
            final Configuration configuration = new Configuration();
            configuration.setClassForTemplateLoading(HelloWorldSparkFreemarkerStyle.class, "/");
            Spark.get(new Route("/") {
                @Override
                public Object handle(final Request request, final Response response) {
                    StringWriter writer = new StringWriter();
                    try {
                        Template helloTemplate = configuration.getTemplate("hello.ftl");
                        Map<String, Object> helloMap = new HashMap<String, Object>();
                        helloMap.put("name", "FreeMarker");
                        helloTemplate.process(helloMap, writer);
                        System.out.println(writer);
                    } catch (Exception e) {
                        e.printStackTrace();
                        halt(500);  // halt() throws, so log the exception first
                    }
                    return writer;
                }
            });
        }
    }

Now when I ran this code I got the following error:

    == Spark has ignited ...
    >> Listening on 0.0.0.0:4567
    java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind

This happened because I hadn't stopped the still-running HelloWorldSparkStyle, which was holding port 4567. After stopping it, the new code worked.
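If you do want two Spark apps running at once, older Spark versions let you move one of them to a different port, if I remember right, with a call like this before any route is declared:

    Spark.setPort(8080);  // must come before the first Spark.get/post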

[All together now: MongoDB, Spark, and Freemarker](https://www.youtube.com/watch?feature=player_embedded&v=8S5tvJAOYzg)

Here is the main method that combines everything:

    public static void main(String[] args) throws UnknownHostException {
        final Configuration configuration = new Configuration();
        configuration.setClassForTemplateLoading(HelloWorldSparkFreemarkerStyle.class, "/");

        MongoClient client = new MongoClient(new ServerAddress("localhost", 27017));
        DB database = client.getDB("course");
        final DBCollection collection = database.getCollection("hello");

        Spark.get(new Route("/") {
            @Override
            public Object handle(final Request request, final Response response) {
                StringWriter writer = new StringWriter();
                try {
                    Template helloTemplate = configuration.getTemplate("hello.ftl");
                    DBObject document = collection.findOne();

                    helloTemplate.process(document, writer);
                    System.out.println(writer);
                } catch (Exception e) {
                    e.printStackTrace();
                    halt(500);  // halt() throws, so log the exception first
                }
                return writer;
            }
        });
    }

Note that the process() method accepts any object that implements Map, and the BasicDBObject returned by findOne() does (by way of BasicBSONObject).
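One thing to watch: the template's ${name} placeholder only renders if the fetched document actually has a name field. A throwaway seeding class along these lines does the trick (the value is arbitrary):

    import com.mongodb.*;
    import java.net.UnknownHostException;

    public class SeedHelloCollection {
        public static void main(String[] args) throws UnknownHostException {
            MongoClient client = new MongoClient(new ServerAddress("localhost", 27017));
            DBCollection collection = client.getDB("course").getCollection("hello");
            // Give the template's ${name} something to interpolate
            collection.insert(new BasicDBObject("name", "MongoDB"));
            client.close();
        }
    }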

[Spark framework: handling GET requests](https://www.youtube.com/watch?feature=player_embedded&v=7t1IafamuVs)

Spark has an embedded Jetty server, and when you create a route, Jetty starts automatically. Inside Jetty, a Spark handler holds one or more routes.

Imagine a route like the following:

    import spark.*;

    public class SparkRoutes {
        public static void main(String[] args) {
            Spark.get(new Route("/echo/:thing") {
                @Override
                public Object handle(final Request request, final Response response) {
                    // :thing is a named path parameter
                    return request.params(":thing");
                }
            });
        }
    }

Going to localhost:4567/echo/cat will print cat to the screen.
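Query-string parameters are read with request.queryParams rather than request.params. Here is a sketch of my own (the route and parameter names are made up) mixing the two:

    import spark.*;

    public class SparkQueryParams {
        public static void main(String[] args) {
            // e.g. GET /greet/cat?greeting=Hi returns "Hi, cat"
            Spark.get(new Route("/greet/:thing") {
                @Override
                public Object handle(final Request request, final Response response) {
                    String greeting = request.queryParams("greeting");  // null when absent
                    if (greeting == null) {
                        greeting = "Hello";
                    }
                    return greeting + ", " + request.params(":thing");
                }
            });
        }
    }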

[Spark framework: handling POST requests](https://www.youtube.com/watch?feature=player_embedded&v=jZDuxesy5cc)

Form submissions should be handled with POST requests.

Here is a template called fruitPicker.ftl:

    <html><head><title>Fruit Picker</title></head>
    <body>
    <form action="/favorite_fruit" method="POST">
        <p>What is your favorite fruit?</p>
        <#list fruits as fruit>
            <p>
                <input type="radio" name="fruit" value="${fruit}">${fruit}
            </p>
        </#list>
        <!-- a submit button is needed so the form can actually POST -->
        <input type="submit">
    </form>
    </body>
    </html>

The <#list fruits as fruit> directive iterates over a sequence named fruits that is passed to the template.

Here is the class:

    import freemarker.template.*;
    import spark.*;
    import java.io.*;
    import java.util.*;

    public class SparkFormHandling {
        public static void main(String[] args) {
            // Configure Freemarker
            final Configuration configuration = new Configuration();
            configuration.setClassForTemplateLoading(SparkFormHandling.class, "/");

            // Configure routes
            Spark.get(new Route("/") {
                @Override
                public Object handle(final Request request, final Response response) {
                    try {
                        Map<String, Object> fruitMap = new HashMap<String, Object>();
                        fruitMap.put("fruits", Arrays.asList("apple", "orange", "banana", "peach"));

                        Template fruitPickerTemplate = configuration.getTemplate("fruitPicker.ftl");
                        StringWriter writer = new StringWriter();
                        fruitPickerTemplate.process(fruitMap, writer);
                        return writer;
                    } catch (Exception e) {
                        e.printStackTrace();
                        halt(500);  // halt() throws, so log the exception first
                        return null;
                    }
                }
            });

            Spark.post(new Route("/favorite_fruit") {
                @Override
                public Object handle(final Request request, final Response response) {
                    // Form fields arrive as query parameters
                    final String fruit = request.queryParams("fruit");
                    if (fruit == null) {
                        return "Why don't you pick one?";
                    } else {
                        return "Your favorite fruit is " + fruit;
                    }
                }
            });
        }
    }

[Intro to Schema Design](https://www.youtube.com/watch?feature=player_embedded&v=6XE3wZCPiZ8)

The biggest question is "To Embed or Not to Embed; That is the Question."

The example given relates to blogs. In this schema design, the authors embed tags and comments as arrays of subdocuments in the posts collection. A tag, for example, may be updated, and embedding it means the change must be applied across all documents that contain it, which sounds bad. However, because tags and comments are typically accessed at the same time as the post itself, embedding is still the better choice here: access patterns are the biggest consideration in the question of whether to embed. The authors argue that because changes to tags would be rare, it would not hurt to embed them; it would be unwise to embed data that is updated all the time. In addition, if embedding would push a document past the 16MB BSON document limit, the data must go in a separate collection.
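To make the trade-off concrete, here is my own sketch (field names are illustrative, not from the lecture) of a post with tags and comments embedded, built with the same driver as above:

    import com.mongodb.*;
    import java.net.UnknownHostException;
    import java.util.Arrays;

    public class EmbeddedPostExample {
        public static void main(String[] args) throws UnknownHostException {
            MongoClient client = new MongoClient(new ServerAddress("localhost", 27017));
            DBCollection posts = client.getDB("blog").getCollection("posts");

            // Tags and comments live inside the post document itself,
            // so a single read fetches everything shown on a post page.
            DBObject post = new BasicDBObject("title", "To Embed or Not to Embed")
                    .append("tags", Arrays.asList("mongodb", "schema"))
                    .append("comments", Arrays.asList(
                            new BasicDBObject("name", "Naveen").append("comment", "Nice post"),
                            new BasicDBObject("name", "Alex").append("comment", "Thanks")));
            posts.insert(post);
            client.close();
        }
    }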


Comments



Name: Naveen

Creation Date: 2015-08-18

Intro to the Spark Web Application Framework: I am facing the same issue: "Exception in thread "main" java.lang.UnsupportedClassVersionError: spark/Route : Unsupported major.minor version 51.0". I tried the solution provided above, but unfortunately it is not working for me. I placed the dependency below in my project's pom.xml file (as per the solution shared above), but I am getting the error: Dependency spark:spark:0.9.9.4-SNAPSHOT not found.


Name: Naveen

Creation Date: 2015-08-18

Okay, able to resolve the above issue now. The reason for the failure is that 0.9.9.4-SNAPSHOT is no longer available. You can use one of the versions below:

    0.9.8-SNAPSHOT/   Sat May 21 21:54:23 UTC 2011
    0.9.9-SNAPSHOT/   Mon May 23 10:57:38 UTC 2011
    0.9.9.1-SNAPSHOT/ Thu May 26 09:47:03 UTC 2011
    0.9.9.3-SNAPSHOT/ Thu Sep 01 07:53:59 UTC 2011


Name: Alex

Creation Date: 2016-03-19

Hello, Matthew. I faced the same issue while trying to add Spark into pom.xml: 'dependency (com.sparkjava)....... not found'. I use IntelliJ 15. Could you please advise what is wrong in my environment? Thanks, Alex