marianong
Coding in flip flops
21 posts
This blog is intended for sharing ideas, opinions, knowledge, and any technology experiments I make. It's very much a personal notebook in which I want to take advantage of technology to make it available to me at any moment and (at the same time) to make it available to everyone interested in it. Mariano Navas
marianong · 12 years ago
Text
Javascript at server side (part II)
Today we are going to talk about objects in Javascript.
Object creation and attribute access
In Javascript we define and create objects at the same time. In other object-oriented languages (such as Java) we create classes as templates for objects, and then instantiate objects from those classes (a class acts as the definition of an object of a given type). Objects are constructs with state (attributes) and behaviour (methods).
The recommended and most common way of creating an object is the object literal syntax:
var message = { from: "Me", to: "You", subject: "Let's date" }
In the example above we have defined and instantiated an object at the same time. This literal notation inspired JSON (JavaScript Object Notation), although strictly speaking JSON is a data interchange format with extra rules (for instance, keys must be double-quoted). Our object has three attributes (from, to and subject) and no methods.
Another way of creating an object is through the generic constructor. We'll talk more about constructors soon.
var message = new Object(); //A generic object has been created
message.from = "Me";        //Created and assigned the from attribute of the message object
message.to = "You";         //Idem for the to attribute
//And so on and so forth…
message.from; //Evaluates to "Me", as with the literal syntax
To access attributes we have two ways:
message.from;    //Evaluates to "Me". This is called dot notation
message['from']; //Also evaluates to "Me". This is called bracket notation
You'll usually see the dot notation out there. Bracket notation is mandatory when the name of the attribute is specified dynamically:
var attName = "from";
message[attName]; //Evaluates to "Me". No way to use dot notation here
Method definition
A method is just an attribute whose value is a function.
var boss = {
    name: "John Doe",
    work: function() {
        return "I do nothing";
    }
};
boss.work(); //Invokes the work() method on the boss object;
             //returns the string "I do nothing" (no offence intended)
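Where a method needs the object's own state, it reaches it through this (covered in more depth in the earlier post on functions referenced below). A minimal sketch, extending the boss-style object with a made-up introduce() method:

```javascript
var boss = {
    name: "John Doe",
    //Inside a method, 'this' refers to the object the method is invoked on
    introduce: function() {
        return "My name is " + this.name + " and I do nothing";
    }
};
boss.introduce(); //Evaluates to "My name is John Doe and I do nothing"
```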
No dumb questions for Java developers about Javascript objects
The rest of what we are going to cover about methods in this post is presented as questions that any Java developer getting into the server-side Javascript world might ask.
Do we have classes in Javascript? Do we have inheritance?
As we said earlier, there are no classes: an object is a definition and an instance at the same time. If we want to reuse the same definition for different instances, without copying and pasting the same piece of code into different variable assignments, we have two alternatives:
1- Using a constructor function; that is, a function that initialises an object. A Javascript object is a set of key/value pairs whose keys are always strings (names) and whose values are either primitives, other objects, or functions (methods); functions themselves are objects too.
//Let's declare a constructor function.
//Convention says its name should start with a capital letter.
function Foo(_name) {
    this.name = _name;
}
var foo1 = new Foo('Mariano'); //Creates an instance of a Foo object
var foo2 = new Foo('Alberto'); //And here we have another instance
foo1.name; //Evaluates to 'Mariano'
foo2.name; //Evaluates to 'Alberto'
2- Using a prototype. A prototype is a special attribute of any object that can point to any other object. Once a prototype is assigned, instances will "inherit" its attributes and values.
//Constructor function declaration
function Bar(_age) {
    this.age = _age;
}
//Prototype assignment
Bar.prototype = new Foo('Alejandro');
var bar = new Bar(12);
bar;      //Evaluates to {age: 12}
bar.age;  //Evaluates to 12
bar.name; //Evaluates to 'Alejandro', found through the prototype chain
Object.getPrototypeOf(bar); //Evaluates to {name: 'Alejandro'};
                            //that is the Foo object assigned to Bar.prototype
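A consequence worth knowing: attributes added to the prototype after instantiation become visible through already existing instances, and assigning directly on an instance shadows the prototype's value. A sketch with a made-up Animal constructor:

```javascript
function Animal(name) {
    this.name = name;
}
var dog = new Animal('Rex');
//Adding to the prototype after the fact: existing instances see it
Animal.prototype.legs = 4;
dog.legs; //Evaluates to 4, found through the prototype chain
//Assigning on the instance shadows the prototype's value
dog.legs = 3;
dog.legs;              //Evaluates to 3 (own attribute)
Animal.prototype.legs; //Still 4; the prototype is untouched
```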
You can dig deeper into prototypes in some of the links listed in the resources section of this post.
Can we add attributes and methods to objects after they have been instantiated?
YES! Just assign values or functions to new attribute names to add attributes and methods respectively. No inheritance, no polymorphism involved; it's all dynamic.
//Create an empty object
var my_object = {};
my_object.att1 = "Hi"; //Added an attribute named att1 to my_object
my_object.met1 = function() {
    return "Hi";
}; //Added a method met1 to my_object
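Symmetrically, attributes and methods can also be removed at runtime with the delete operator; afterwards the attribute simply evaluates to undefined. A quick sketch:

```javascript
var my_object = {};
my_object.att1 = "Hi";
my_object.att1;        //Evaluates to "Hi"
delete my_object.att1; //Removes the attribute from the object
my_object.att1;        //Evaluates to undefined
'att1' in my_object;   //Evaluates to false; the attribute is gone entirely
```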
How the hell do I know what methods and attributes my object has?
Three ways:
1- Look at documentation. The preferred one.
2- Look at source code. The straight one.
3- As we can consider an object as a set of key/value pairs, we can iterate over its attributes:
var it = function(obj) {
    for (var prop in obj) {
        console.log(prop);
    }
};
//Let's reuse the bar object from the example above
it(bar);
/*
 * Prints out the following:
 * age
 * name
 */
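Keep in mind that for...in also walks the prototype chain. To keep only the object's own attributes we can filter with hasOwnProperty; a sketch that recreates the Foo and Bar constructors from the earlier example so it stands on its own:

```javascript
function Foo(_name) { this.name = _name; }
function Bar(_age) { this.age = _age; }
Bar.prototype = new Foo('Alejandro');

var own = function(obj) {
    var props = [];
    for (var prop in obj) {
        //Keep only attributes defined directly on obj,
        //skipping the ones inherited through the prototype
        if (obj.hasOwnProperty(prop)) {
            props.push(prop);
        }
    }
    return props;
};
own(new Bar(12)); //Evaluates to ['age']; 'name' lives on the prototype
```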
What is the default value for every variable in Javascript?
undefined, both at (local) function level and at global level. This is a possible value that is new to Java developers, distinct from null.
What's the difference between null and undefined?
Try this piece of code out (taken from here; follow the link for further information):
typeof null;      //Evaluates to 'object' (a well-known historical quirk of the language)
typeof undefined; //Evaluates to 'undefined'
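The two values also compare in a characteristic way: loosely equal to each other, but strictly different. A small sketch:

```javascript
null == undefined;  //true: loose equality treats them as equivalent
null === undefined; //false: they are distinct values of distinct types
var x;              //Declared but never assigned
x === undefined;    //true: undefined is the default value
var y = null;       //null has to be assigned explicitly, meaning "no value"
y === null;         //true
```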
The this keyword
Please have a look at this previous post. This series of posts can be considered a continuation of those. It's confusing that they don't share a name, but a lot of things have happened between the originally intended series and this one; I find "Javascript at server side" a better title for the whole thing.
What's next?
That's all I have time for so far. We now know about functions in Javascript, we have the functional programming foundations of the language, and we've had a glimpse of the object-oriented use of Javascript. Next we need to learn about design patterns, modularity and best practices to build large, maintainable, cleanly coded server side Javascript applications. That will be the topic of a future third part of this series. See you then (and please, feel free to share your thoughts about what the content of such a third post could be).
Resources
Understanding prototypes in Javascript.
How does JavaScript .prototype work?
Javascript garden.
Javascript undefined Vs. null.
What about Googling?
Javascript at server side (part I)
Many of us have in some way used Javascript inside a browser. We know what the DOM is, how to use CSS selectors to operate on DOM elements using jQuery, and how to manage Ajax requests. That was the standard stack on the client side of web applications throughout the first decade of the 21st century. But with the rise of cloud computing, SaaS and big data challenges, we are rethinking the way we code on both the client and server side. On the server side, asynchronous architectures driven by the Reactor pattern have become a must-have in order to build systems able to scale out and remain responsive to global audiences.
We have several alternatives on the server side to achieve that goal, and it's a non-trivial issue to decide on one of them. At the time of this writing a very popular one is node.js. Node.js is a [V8](http://en.wikipedia.org/wiki/V8_(JavaScript_engine)) Javascript runtime that implements the reactor pattern. Today we are going to start a series of posts about how to adapt our (client-targeted) Javascript knowledge to the server side in order to write maintainable, clean code for node.js. I assume you have already used Javascript on the client side. If you have done so, but you are not a Javascript Jedi, nor is it crystal clear to you how to use it on the server side, this series of posts is for you.
Introduction to functions
Javascript is a dynamic language with first-class functional features, so functions are a core part of it. Our first stop will be trying to gain a deep understanding of what a function is in Javascript. Good resources on this topic are listed at the end of this post.
Functions as first class citizens
Functions are a core construct in Javascript. Like values of any other type, they can be defined and invoked anywhere, and passed around as parameters to other functions. We can also have functions inside other functions:
function foo() {
    function sqr(n) {
        return n * n;
    }
    return sqr(9);
}
foo(); //81
Function declarations
We declare functions with a name:
//A function called foo
function foo() {
    //Some code here
}
And we can invoke them by writing () after their name, or by invoking call() on them:
//Both execute the foo function
foo();
foo.call();
The scope of a function declaration is its own and its parent's (the scope where the function itself is defined).
Function expressions
Function expressions are functions that appear as part of a larger expression. This can be, for example, a variable assignment or a parameter passed to another function.
//A function inside a variable declaration; the variable refers to the function itself
var square = function(x) {
    return x * x;
};
//A function as a parameter to the foo function
foo(function() {
    /* Some code here */
});
The functions shown above are called anonymous functions because they don't have a name. A function expression can carry a named (non-anonymous) function as well.
var foo = function bar() {
    //Code here
};
When a named function in an expression is assigned to a variable, the function will only be referable through the variable. Its name attribute will be the name of the function.
var foo = function bar() {
    return 10;
};
foo();    //Returns 10
bar();    //ReferenceError: bar is not defined
foo.name; //Returns 'bar'
The name attribute of an anonymous function is an empty string.
var foo2 = function() {
    return 100;
};
foo2.name; //Returns ''
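One practical use of named function expressions is recursion: the name is in scope inside the function body even though it isn't visible in the enclosing scope. A sketch:

```javascript
var factorial = function fact(n) {
    //'fact' is only in scope inside the function itself,
    //which makes self-reference safe even if 'factorial' is reassigned
    return n <= 1 ? 1 : n * fact(n - 1);
};
factorial(5); //Returns 120
typeof fact;  //'undefined' outside the expression
```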
The scope of function expressions is the same as variables. Take a look at the next section for details.
Declaration Vs. evaluation
This is an important concept in functional programming. When a function expression is declared (or defined) it is not evaluated yet; it only gets evaluated when invoked.
//Declaration of a function expression
var sqr = function(x) {
    return x * x;
};
//The function defined above has not been executed yet
//Evaluation of the expression
sqr(2); //Returns 4
Functions can be used either as input or output in other functions
Yes; we can pass a function as a parameter to another function (as we have seen before), or even return a function from another function:
var sqr_function = function() {
    var actual_function = function(x) {
        return x * x;
    };
    return actual_function;
};
sqr_function()(3); //Returns 9
Ok, this is a pointless example, but you get the idea, right?
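For a slightly less pointless variant, the returned function can close over a parameter of the outer one (a closure), acting as a factory of configured functions:

```javascript
var make_power = function(exponent) {
    //The returned function remembers 'exponent' through a closure
    return function(x) {
        return Math.pow(x, exponent);
    };
};
var sqr = make_power(2);
var cube = make_power(3);
sqr(4);  //Returns 16
cube(2); //Returns 8
```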
Scoping of variables in Javascript
An important feature of Javascript functions is that they are the only construct in the language that delimits variable scopes. In Java and C-family languages any code block creates a scope; in Javascript only functions do. Also, contrary to Java for example, Javascript allows us to declare the same variable in the same scope or a nested one without error. See these examples, taken from here:
var x = 1;
console.log(x); // 1
if (true) {
    var x = 2;
    console.log(x); // 2
}
console.log(x); // 2
function foo() {
    var x = 1;
    if (x) {
        (function() {
            var x = 2;
            // some other code
        }()); //This is a nested self-invoking function
    }
    // x is still 1.
}
As you can see, a workaround to create a scope inside a function is a nested self-invoking function.
This kind of variable overriding is possible thanks to the var reserved word. It's the way to define a local variable in Javascript (local to the current scope which, as we have mentioned, can only be created by functions). If we want to define global variables, we just declare them without the var keyword.
var local = "I'm local";
global = "And I'm global";
Let's see some examples of local and global use of variables in different scopes:
var foo = function() { //Outer scope
    var x = 5;
    var bar = function() { //Inner scope
        var x = 10;
        return x;
    };
    bar();
    return x; //Returns 5; the second x variable lives in another scope
};
var foo = function() { //Outer scope
    var x = 5;
    var bar = function() { //Inner scope
        var x = 10;
        return x;
    };
    return bar(); //Returns 10
};
var foo = function() { //Outer scope
    x = 5;
    var bar = function() { //Inner scope
        x = 10;
        return x;
    };
    bar();
    return x; //Returns 10; x is global, so bar() changes its value
};
var foo = function() { //Outer scope
    var x = 5;
    var bar = function() { //Inner scope
        var y = 10;
        return x;
    };
    bar();
    return y; //ReferenceError: y is not defined. Local variables declared with var
              //are not visible outside their scope (here, the function expression
              //assigned to the bar variable)
};
What's next
In the next post of this series I'll get into objects in Javascript and how to adopt an object oriented paradigm.
Resources
Function declarations vs. function expressions.
Javascript scoping and hoisting.
Javascript basics.
Why the quality and productivity of your work increase when using git as your VCS (Part II)
Here I come with some more practical evidence. In order to release these posts more quickly, this one (and the following ones on the same topic) will be short.
What if you need to switch contexts in the middle of a new feature?
Surely this has happened to you: you're working on a new feature or a fancy new unit test, and then a very important bug (needing an immediate hot fix) shows up. You have to put aside your current task, perhaps even change your current branch, and start working on that hot fix. Once the bug is fixed, you come back and try to pick up where you left off. Well, how do you do that with Subversion? There are several possible workarounds (taken from Stack Overflow):
Create a new (temporary) branch and commit your current changes there. Four svn commands are needed to accomplish this, plus a merge when we want to catch up with those changes later. This approach works with Git as well, although doing it in Git is simpler (just two straightforward commands are needed).
Create a patch from our working copy and revert the current branch. Applying the patch later recovers the work.
We can automatise these two approaches with scripts if we use them frequently. No brainer. However, I find git's solution much better, simpler and more straightforward: git stash.
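As a rough sketch of the stash workflow (run here inside a throw-away repository so it's self-contained; the file name and messages are made up):

```shell
# Throw-away repository standing in for a real project
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo "stable code" > app.txt
git add app.txt && git commit -qm "start feature"

echo "half-done change" >> app.txt   # work in progress, not committed

git stash                 # Park the dirty working copy away
git diff --quiet          # Clean now: free to switch branches for the hot fix
# ...checkout the hotfix branch, fix, commit, come back...
git stash pop             # Recover the parked work in progress
grep "half-done change" app.txt
```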
Easier to get back to a previous commit, both keeping and discarding current working copy changes
This is a relatively frequent scenario: After spending some time crafting some piece of code you realise that you're in the wrong path; or perhaps you need to do something on a previous baseline before completing the task at hand. You have to checkout a previous commit and keep (or discard) the current working copy.
Here comes trouble. With SVN the way to recover a previous commit is performing an svn merge, passing the current revision and the revision you want to go back to, in order to apply that diff. That forces you to dig through the log for both the revision you are currently at and the one you want to go to. This action restores the given revision's state in your current working copy, and the next commit will be placed on top of the commit history as the next revision. If you want to keep the last commit's state you have to svn copy that revision onto a new branch. A lot of things to do and to think about (I'm getting too lazy to do all that, man).
With git it's all just a matter of checking out the commit you want to go to. Depending on whether you want to keep the current working copy or not, we might branch or stash those changes before checking the target commit out. We could even merge the current dirty working copy state with the target commit. We have several checkout options, merging or discarding our working copy changes.
Creating a new directory or repository from already existing code
Something that should be trivial might not be, depending on what VCS you're using. Suppose you have written some piece of code, or have some text files, that you decide to start versioning from now on. SVN doesn't allow you to create a new repository from an existing non-empty directory. To create a repo from your existing files and then commit it to the central server you have to follow these steps (in order):
Create a folder or repository on your central server (i.e. svn mkdir [pathToNewDirInCentralServer]).
Checkout this new directory into a temporary location (svn co [pathToNewDirInCentralServer] [localPath]).
As a sanity check, it can be useful to verify that our new locally tracked directory points to the right remote one (run svn info in localPath).
Once checked, move the new .svn folder created in the previous step to the root location of the files we want to start versioning.
Add the files we want to keep track of to SVN (svn add [filesToTrack]) and ignore the ones we don't want to keep track of (set the ignore svn property to the appropriate value in the root directory).
Commit files (svn ci).
Because there's no way to create a new repository from a non-empty directory, we have to resort to this ugly workaround. With git this more-than-common scenario is as easy as typing git init in the root directory whose files we want to track. That's all. Next we just add, commit and push whatever we want.
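The whole git counterpart of the SVN step list above, as a sketch in a throw-away directory (file names are made up):

```shell
# Throw-away directory standing in for existing, unversioned code
cd "$(mktemp -d)"
git config --global user.email "you@example.com" 2>/dev/null || true
git config --global user.name "You" 2>/dev/null || true
echo "existing code" > main.txt

git init -q                    # Turn the directory into a repo, in place
git config user.email "you@example.com"
git config user.name "You"
echo "*.log" > .gitignore      # Ignore what we don't want to track
git add .                      # Track everything else
git commit -qm "Import existing code"
git log --oneline              # The project is now under version control
```

Pushing to a central server afterwards is just a matter of adding a remote (git remote add) and running git push.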
Why the quality and productivity of your work increase when using git as your VCS (Part I)
I assume that you probably have a Subversion background, as I do. After one year of intensive and somewhat advanced use of Git, I think I can give you some points where I find Git has notably increased the quality of my code and my overall productivity at work.
This is the first in a series of posts with the same title. This first post is about some reasons I've just thought about today; more will come in near future (I suppose).
Commits are much more logical, cohesive and small
I've found that git makes it a lot easier to improve the granularity and meaning of your commits. With Subversion, once you track a file you commit by default all changes made to that file, regardless of whether they make sense in a particular commit or not. When coding we change a lot of things, but not all of them are meant to be part of a specific feature or unit of work. Git allows us to explicitly indicate which changes we want to commit through the staging area, which gives us total control over the project history no matter the order in which we make modifications.
Another git feature that improves commit quality is its local nature. We can commit offline, so not being connected is not an obstacle to cohesive, granular commits. We can commit whenever we need to, grouping the changes that belong together. Neither is easy or straightforward to accomplish with Subversion, especially if you're not connected to the central repository.
We've defeated laziness in branching
Needless to say, branching and merging in Subversion is painful. If you have worked with it for a while you already know that: tree conflicts difficult to diagnose and sometimes cumbersome to resolve; cleanups needed now and then to remove inconsistencies; an update command needed even to stay in sync with your own commits; log lookups necessary to keep track of merges if they're not obvious… All these difficulties drive us to a hostile attitude towards branching. It ends up mixing new features with hot fixes (sometimes even in the same commit!) and injecting bugs into production code.
With Git's natural branching model, an inherent part of the system, we are encouraged to adopt best practices that Subversion discourages.
We've reduced (almost eliminated) erroneous commits
Many times we forget a file in a commit, or realise that our last commit should have carried more changes. In SVN all our commits are public, and we cannot (we should not!) modify them. As a result, all those mistakes are kept in our project history and make it dirty.
In Git, before we publish (push) a change to any other repo, we commit it locally. Therefore we have a chance to correct those errors, and a complete set of tools to do so: amending a commit, rebasing to make a concurrent history linear, interactive rebasing to change commit granularity… We have time and tools to repair and improve commits even after making mistakes.
This also complements the commit quality improvements we've mentioned in the first point of this post.
Team member interference can plummet your productivity and lower the quality of the VCS history with SVN
Due to the previous issue we might find ourselves stuck. Imagine that for whatever reason someone made a "bad" commit and broke the build. In a continuous integration environment this is a very bad thing that fortunately can be detected quickly. But in the meantime (until the issue is fixed) you might be prevented from committing: if you update, the project won't compile; if you don't update, you cannot commit in case of conflicts, or you propagate the error to more than one commit if there are no conflicts. One possible way out is to solve the compilation error, commit it, and then commit your change. But you shouldn't do that, because SVN commits all local changes at once (it doesn't allow us to separate out the local changes we want to commit and commit only those, as Git does), and you don't want to mix the compile error fix with anything else. Moreover, you might not be the right person to fix the broken build, since it's probably about a feature you are not involved with. So either you postpone your commit (and lose granularity) or wait (and lose productivity). Neither is good.
This issue can become a common one when working with Subversion, especially if the team is large. With Git it's less likely to happen, thanks to the staging area and local commits. And if it does happen, no one has to stop their work or make tutti-frutti commits: they can branch from the previous (correct) commit and keep on working and committing as usual, or take any other approach until the problem is solved.
Decreased productivity when working in- and out-of-office through a VPN with SVN
This issue might not seem frequent, but I've suffered it recently. Imagine you're working on a project directly on your customer's infrastructure, connecting to their Subversion repository through a VPN. Sometimes you work at your customer's office (and connect directly to their network), and sometimes you work at your own office or at home. Suppose that, in order to avoid VPN configuration on every team member's machine, we configure a tunnel in our intranet that connects everyone to the VPN endpoint. The URL of the repository is therefore different at your office and at the customer's office.
In SVN you can point at only one central repository at a time. To change that, we have to relocate the repository root. This is inconvenient and unproductive. If (as is the case for us) we're managing several repositories and working copies of different branches, we have to remember to change all of them depending on where we are working, on nearly a daily basis. Needless to say, this is error prone and ends up causing problems that take time to solve.
With Git this somewhat weird situation is not an issue. We can configure several remotes, each with the URL for one of the environments we work in. Without making any change to the repo configuration, we can fetch from, pull from or push to any remote we want, right from the command line. Depending on our location we'll use one remote or the other.
No need to worry about other team members changes when committing if we don't have to
When working with SVN, before we can commit we have to update and merge conflicting changes (if any). In other words, our commits go straight to the central repository, and if anyone has made changes that conflict with ours we are forced to merge them on the spot. Fair enough. But perhaps we don't want to do it now: we'd rather ignore those conflicts for the moment and keep working on the feature we're really focused on right now. With Git you can do that. Your commits are local, and only when you're about to push your changes will Git force you to update your branch first. This can improve productivity, allowing you to stay focused on the task at hand without unnecessary distractions.
Do you agree with these thoughts? Would you add anything else to the list? Can you find a real situation where Subversion makes you more productive and better programmer than Git? Please share your thoughts!!! Post a comment!!
Patterns to implement undo/redo feature, Part 2
In the previous post we talked about how to implement the undo/redo feature using the command pattern. Now we are going to explain another approach: the memento pattern.
The memento pattern is based on the idea of taking snapshots of an object state. The object whose state is being tracked is called Originator. The one that manipulates the Originator's state is called Caretaker. And the snapshot of the state is called … guess what? … Memento :-).
Now let's give one possible Java implementation of this pattern. As with the command pattern shown in my last post, this implementation is my own and is not the only valid one. You can visit the undoredo repo on GitHub, package me.marianonavas.undoredo.memento, for the full example code.
In order to give the developer flexibility when implementing this pattern, let's define the Originator as an interface. This way any object whose state is meant to be tracked only has to implement the interface, which would look like this:
public interface Originator<T> {
    Memento<T> saveToMemento();
    void restoreFromMemento(Memento<T> memento);
    T getState();
}
The Memento type could be defined then like this:
public interface Memento<T> {
    T getState();
}
Just for convenience, in this example we'll use a minimalistic, generics-based implementation of the Originator interface, which we'll call GenericOriginator (for originality's sake).
public class GenericOriginator<T> implements Originator<T> {

    private T state;

    public GenericOriginator(T state) {
        super();
        this.state = state;
    }

    public void setState(T state) {
        this.state = state;
    }

    @Override
    public Memento<T> saveToMemento() {
        return new GenericMemento<T>(state);
    }

    @Override
    public void restoreFromMemento(Memento<T> memento) {
        state = memento.getState();
    }

    @Override
    public T getState() {
        return state;
    }
}
In real production code we won't use this class. Instead, we want to add state tracking capabilities to our real classes by implementing Originator. That's the real power of this pattern: we can apply it to any object we already have in our application without modifying its behaviour (just adding the methods declared in the interface).
The Memento object is only a holder for a state object, which can be of any type we need. Using this holder we get type safety checks at compile time.
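The GenericMemento class used by GenericOriginator's saveToMemento() is not listed above (it lives in the repo); a minimal sketch of how such a holder could look, repeating the Memento interface so the sketch compiles on its own (in the repo both types are public, in separate files):

```java
// Repeated from above so this sketch compiles standalone
interface Memento<T> {
    T getState();
}

// Immutable holder for a snapshot of the Originator's state
class GenericMemento<T> implements Memento<T> {

    private final T state;

    public GenericMemento(T state) {
        this.state = state;
    }

    @Override
    public T getState() {
        return state;
    }
}
```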
And this is all the infrastructure we need. Now let's create a Caretaker in the form of a JUnit test. This test will manipulate the state of an object and make assertions on the expected behaviour of the undo and redo actions. Here we don't invoke an undo() or redo() method as in the command pattern. Instead, we take a snapshot of the object's state at the points we want to be able to revert to, invoking the saveToMemento() method on the Originator object. After that, we call restoreFromMemento() to revert the Originator to any previous state:
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class Caretaker {

    @Test
    public void testGenericOriginator() {
        String expected = "State 1";
        State state = new State(expected);
        GenericOriginator<State> originator = new GenericOriginator<State>(state);
        // Let's create a restore point for originator
        Memento<State> memento1 = originator.saveToMemento();
        assertEquals(state, memento1.getState());
    }

    @Test
    public void testUndo() {
        State state1 = new State("State 1");
        State state2 = new State("State 2");
        GenericOriginator<State> originator = new GenericOriginator<State>(state1);
        Memento<State> memento1 = originator.saveToMemento();
        originator.setState(state2);
        assertEquals(state2, originator.getState());
        originator.restoreFromMemento(memento1);
        assertEquals(state1, originator.getState());
    }

    @Test
    public void testRedo() {
        State state1 = new State("State 1");
        State state2 = new State("State 2");
        GenericOriginator<State> originator = new GenericOriginator<State>(state1);
        Memento<State> memento1 = originator.saveToMemento();
        originator.setState(state2);
        Memento<State> memento2 = originator.saveToMemento();
        originator.restoreFromMemento(memento1);
        assertFalse(originator.getState().equals(state2));
        originator.restoreFromMemento(memento2);
        assertEquals(state2, originator.getState());
    }
}
Which pattern should we use? Well, it depends on our concrete needs. If we need to trace the execution of several actions, focusing on being able to revert them in order (LIFO order), then the command pattern gives us an easier foundation. If we need to revert or re-apply an arbitrary state in an object, the Memento pattern implementation shown here is convenient. But what if we want to execute arbitrary business logic (not organised into a stack) at any point? Well, we can do that with both patterns. That is a matter for a future post; for now I leave it to you as homework.
Hint: to do this with the Command pattern we can implement the executor as a key/value store of commands instead of a stack. That way we can access any command at random to do (or redo) it, or to undo it. With the Memento pattern we can create an auxiliary storage class to do the same: keep a key/value store of the Memento objects, and change the interface to define an execute() method (or similar) that will perform the action of doing, undoing or redoing any logic.
Patterns to implement undo/redo feature, Part 1
Today I'm going to go briefly into this topic: imagine you need undo/redo functionality for your application. What design patterns can you use to implement it in a scalable, maintainable manner? Well, we have two straightforward, ready-made options: the command pattern and the memento pattern. Today we're going to cover the command pattern approach.
Command pattern based implementation of undo/redo
This is probably the most popular one. The main idea behind it is to isolate an action from its actual implementation. This way we can have generic opposite actions, such as write and delete, or draw and erase, which we can execute in an implementation-agnostic manner.
In this pattern we usually need to implement several components:
One or more Command objects (well, in order to be useful we will surely need several :-)) that implement the different actions we want to perform (e.g. write and delete). They will be implementations of some kind of "Command" interface with an "execute()" method (or similar).
An invoker. This object is responsible for invoking the action, no matter what it is about. It will call the "execute()" method.
A receiver: this is the object that receives the action of the command (its target). We can have as many receivers as we need (at most one per command).
A client object that puts it all together.
Let's review it with an example coded in Java. You can download the whole example shown here from my GitHub account (look at the me.marianonavas.undoredo.command package). This example is entirely my own, and it is not the only solution to the problem.
We first need a command interface. It's convenient for each command to know exactly how to do its job and how to undo it, so our interface will look like this:
public interface Command {
    void doIt();
    void undoIt();
    String whoAmI();
}
Let's use a sample implementation that calls intuitive methods in a target object:
public class UndoableSampleCommand implements Command {

    private final DefaultCommandTarget target;

    public UndoableSampleCommand(DefaultCommandTarget target) {
        this.target = target;
    }

    @Override
    public void doIt() {
        target.doForward();
    }

    @Override
    public String whoAmI() {
        return "I'm a sample undoable command. Can you guess what I do?";
    }

    @Override
    public void undoIt() {
        target.doBackward();
    }
}
As we said before, we need a command object and a receiver. In this sample implementation our command is going to act on a DefaultCommandTarget object, which is a class created for our example. So far, we have a concrete command class that implements the Command interface and a command target class whose methods are called by the command. Next we need an invoker, and here things get interesting:
public class CommandExecutor {

    private final CommandStack commandStack;

    public CommandExecutor(int undoBufferSize) {
        super();
        commandStack = new CommandStack(undoBufferSize);
    }

    public void executeCommand(Command cmd) {
        commandStack.push(cmd);
        cmd.doIt();
    }

    public void undoLastCommand() {
        Command cmd = commandStack.getLastCommand();
        cmd.undoIt();
    }

    public void redoLastUndoedCommand() {
        Command cmd = commandStack.recoverLastGettedCommand();
        cmd.doIt();
    }
}
Our invoker uses a CommandStack object to hold an instance of every executed command. It acts like a cyclic stack with a fixed size: when it grows beyond that size, it discards elements in a FIFO fashion (to avoid unbounded growth of the stack and therefore a memory leak). As a stack it allows us to retrieve the last executed command (pop-like behaviour), or recover the last popped command for redo. You can download the code and unit tests for the CommandStack on GitHub.
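The actual CommandStack lives in the GitHub repo; purely as an illustration of the idea described above (names and details here are mine, not the repo's), a bounded undo/redo stack could be sketched like this:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A fixed-size "cyclic" stack: pushing beyond capacity silently
// discards the oldest element, so the undo history never grows unbounded.
class BoundedUndoStack<T> {
    private final Deque<T> done = new ArrayDeque<>();   // executed commands, newest first
    private final Deque<T> undone = new ArrayDeque<>(); // commands popped for undo
    private final int capacity;

    BoundedUndoStack(int capacity) {
        this.capacity = capacity;
    }

    void push(T cmd) {
        if (done.size() == capacity) {
            done.removeLast(); // drop the oldest entry (the FIFO side)
        }
        done.push(cmd);
        undone.clear(); // a new action invalidates the redo history
    }

    T popForUndo() {
        T cmd = done.pop();
        undone.push(cmd); // keep it around so it can be redone
        return cmd;
    }

    T popForRedo() {
        T cmd = undone.pop();
        done.push(cmd); // a redone command becomes the last executed one
        return cmd;
    }
}
```

Clearing the redo history on every new push is one common policy; the repo's implementation may behave differently.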
Now that we have all the pieces, let's put them together. The following main method creates an instance of the command and uses the executor to invoke it, then to call undo and finally to invoke a redo on it.
public static void main(String[] args) {
    // First let's migrate the account [email protected]
    String initialState = "First";
    String finalState = "Second";
    DefaultCommandTarget migration = new DefaultCommandTarget(initialState, finalState);
    Command cmd = new UndoableSampleCommand(migration);
    CommandExecutor exec = new CommandExecutor(5);
    System.out.print("Command execution: ");
    exec.executeCommand(cmd);
    System.out.print("Undo action: ");
    exec.undoLastCommand();
    System.out.print("Redo action: ");
    exec.redoLastUndoedCommand();
}
Along with this main method, which prints on standard output what is going on, we can use a unit test (mocking command objects with Mockito) to verify that the doIt() and undoIt() methods in our command interface are called properly from the invoker (source here).
And that's it. We've used the Command pattern to implement undo/redo in an action-agnostic way. Now all we have to do is implement the Command interface with the undoable actions we want to include in our system, and let the invoker call those actions and control the flow of their execution.
That's all we have time for today. Next time, we'll dig into another way to implement the undo/redo behaviour: the Memento pattern.
marianong · 12 years ago
The story of an heroic survival
Recently I've posted [an article in Paradigma Tecnológico's blog](http://www.paradigmatecnologico.com/en/blog-2/the-story-of-an-heroic-survival-the-java-platform/) about the history of, and trends around, the Java Platform. I just wanted to mention it here, in case it's of interest and you want to have a look at it (if you're a British English speaker, have a butchers :-) ). Thank you, and talk to you soon here (I hope).
marianong · 13 years ago
User-friendly JSON validation in Groovy
Today I'm going to explain a real solution we gave to a customer regarding editing a raw JSON plain-text file. It is not good practice to let end users edit config files directly in raw format, but in this case we are talking about a back office with only one administrator, well capable of such a task. In addition, we were running out of time and had to give a simple but useful solution to a problem consisting of allowing an end user to modify some configuration represented by a text file that is very flexible and unpredictable in terms of content.
So we ended up with a screen with a text area filled with a pretty-printed JSON file, and a submit link. The user is completely free to mess up the file, even though his or her real intentions are surely the best possible ones. We're talking about a configuration file on whose correctness a whole system depends, so (as we are dealing with an end user) no possible precaution will be enough.
However, if the content of the file is not correct the system won't behave as expected, and that would be the end user's fault. Since this configuration is his or her responsibility, we only have to worry about the syntax of the file being correct, so the machine can parse it and act according to the file's specifications. If the specifications are not right, the user will have to change them, not us. Nor is it our duty to detect inconsistencies in the file's content. So, since we are working with the JSON format, we mainly have to accomplish two tasks:
Make sure the submitted content after editing is well-formed JSON.
In case of formatting errors, give user-friendly feedback that allows the user to quickly detect and solve the problem.
Here are some brief explanations and code samples on how we did it:
We were dealing with a Groovy/Grails application, so first we created an action in our controller to serve the page with the textarea and the link:
def editOntologies = {
    String jsonStr = sentimentEditionService.readOntologiesConfiguration()
    [documentText: jsonStr]
}
Simple: we invoke a service to obtain a pretty-formatted JSON String, and pass it to the model. The action called after the user's submit looks like this:
def updateOntologies = {
    String jsonDocStr = params.documento
    try {
        String testOutput = sentimentEditionService.updateOntologies(jsonDocStr)
        flash.message = 'fichero de Ontologies modificado con éxito'
        //TODO: descomentar cuando haya back
        // redirect(action: index)
        render testOutput
    } catch (OntologiesJSonParsingException ex) {
        flash.message = ex.message
        flash.doc = ex.docPart
        redirect(action: editOntologies, params: [numChar: ex.numChar])
    }
}
The request parameter "documento" carries the edited JSON text. The updateOntologies() method in sentimentEditionService updates the actual configuration file and returns the same String (it could return nothing, but for debugging purposes it's useful to have it in the controller). In case of any JSON parsing error, the service throws an OntologiesJSonParsingException, which carries useful information about what is wrong with the file. The catch block sets some entries in the flash scope so the controller can use them after the subsequent redirect.
Let's dig a bit into what the sentimentEditionService service does:
String updateOntologies(String docStr) throws OntologiesJSonParsingException {
    def slurper = new JsonSlurper()
    try {
        def jsonTree = slurper.parseText(docStr)
        jsonTree.toString(4)
        dataService.restWithPost(ONTOLOGIES_UPDATE_PATH, jsonTree)
    } catch (JSONException ex) {
        String exMsg = ex.message
        int index = exMsg.indexOf('{"root"')
        String explanationPart = exMsg.substring(0, index)
        def matcher = explanationPart =~ /(.*character\s(.*\d))\sof/
        Integer numChar = matcher[0][2].toInteger()
        explanationPart = matcher[0][1]
        OntologiesJSonParsingException nestedEx = new OntologiesJSonParsingException(explanationPart, ex)
        nestedEx.numChar = numChar
        nestedEx.docPart = docStr
        throw nestedEx
    }
}
As you can see, first we parse the String. If everything in the parsing process is correct, we invoke the service responsible for the actual update of the file, which will be replaced with the newly edited content. If there's anything wrong with the format (and sooner or later there surely will be), a JSONException is thrown, and we will be there to catch and manage it. In that case, taking advantage of the standard format of this exception's message, we use regular expressions to extract:
A general description of the error (held in the variable "explanationPart").
The exact character position where the error was detected (held in the variable "numChar").
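To illustrate the extraction technique in plain Java (the message format and the class name JsonErrorInfo here are assumptions made for this sketch, not the actual service code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts a human-readable explanation and the character offset from a
// parser error message of the form "... at character N of ..."
class JsonErrorInfo {
    final String explanation;
    final int charPosition;

    JsonErrorInfo(String explanation, int charPosition) {
        this.explanation = explanation;
        this.charPosition = charPosition;
    }

    static JsonErrorInfo parse(String exceptionMessage) {
        // group(1): the explanation up to and including the position,
        // group(2): the numeric character offset itself
        Matcher m = Pattern.compile("(.*character\\s(\\d+))\\sof").matcher(exceptionMessage);
        if (!m.find()) {
            throw new IllegalArgumentException("Unrecognized error message format");
        }
        return new JsonErrorInfo(m.group(1), Integer.parseInt(m.group(2)));
    }
}
```

The fragile part of this approach is obvious: it depends on the exact wording of the parser's messages, so a library upgrade can break it.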
Then we create an OntologiesJSonParsingException object and populate it with this information. Following the execution flow, we now come back to the catch block in the controller we saw before:
catch (OntologiesJSonParsingException ex) {
    flash.message = ex.message
    flash.doc = ex.docPart
    redirect(action: editOntologies, params: [numChar: ex.numChar])
}
Now you can understand where those flash-scoped variables we're setting come from. We also pass the error position as a parameter to the action that shows the textarea and submit link (we are going to load the edited content in the text area and show a message to the user indicating where the error is located). We have to modify that action as follows to deal with this new redirect:
def editOntologies = {
    String docWithErrors = flash.doc
    String jsonStr
    if (docWithErrors) {
        jsonStr = docWithErrors
    } else {
        jsonStr = sentimentEditionService.readOntologiesConfiguration()
    }
    [documentText: jsonStr, numChar: params.numChar]
}
To keep this action working when we first access the page, we check for the "doc" variable in the flash scope. If we come here for the first time it'll be null, and the content of the textarea will be loaded from the actual config file by the readOntologiesConfiguration() method in sentimentEditionService. If we come from a malformed JSON input, this variable will bring us the edited content of the file, errors included. We don't want to spoil the work done by the user, who may have changed a lot of things correctly, just because of one silly error (you know what I mean ...). So we keep all the changes already made, and the model is populated with them. We also pass the error position in the model.
And how do we render it? With a custom tag we declare like this:
def renderErrorsJson = { attrs, body ->
    String jsonText = attrs.jsonText
    Integer errorCharPosition = attrs.errorCharPosition.toInteger()
    // default the margins to 100 characters when the attribute is absent
    Integer margins = attrs.margins?.toInteger() ?: 100
    String str1 = jsonText.substring(errorCharPosition - margins >= 0 ? errorCharPosition - margins : 0, errorCharPosition)
    String errorChar = "<span style='background-color: red'>${jsonText.charAt(errorCharPosition)}</span>"
    String str2 = jsonText.substring(errorCharPosition + 1, (errorCharPosition + margins >= jsonText.length() ? jsonText.length() : errorCharPosition + margins))
    out << ((str1 + errorChar + str2).replaceAll(System.properties['line.separator'], '<br/>'))
}
And then we call it in this fashion:
<g:if test="${numChar}">
    <hp:renderErrorsJson jsonText="${documentText}" errorCharPosition="${numChar}" margins="250"/>
</g:if>
To end with, we embed it all together in a message-styled div in the GSP:
<g:if test="${flash.message}">
    <div class="errors">
        Hay errores de formato en el documento<br/>
        A continuación se muestra un breve fragmento del mismo dentro del cual se ha localizado el error.<br/>
        Se señala con fondo rojo la posición en la que éste se ha detectado<br/>
        <br/>
        <span style="font-weight: bold;">${flash.message}</span>
        <br/>
        <br/>
        <g:if test="${numChar}">
            <nh:renderErrorsJson jsonText="${documentText}" errorCharPosition="${numChar}" margins="250"/>
        </g:if>
    </div>
</g:if>
This way we show the user, along with the JSONException message, a fragment of the file surrounding the error (the actual size of this fragment is defined by the margins attribute of the tag), with a red background marking the position where the error was detected, letting them realize what is wrong and make the proper corrections.
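The core of the tag's logic, stripped of the Grails specifics, can be sketched in plain Java (the class and method names here are mine, chosen for the sketch):

```java
// Builds an HTML snippet showing up to `margin` characters on each side of
// the error position, with the offending character wrapped in a red span.
class ErrorFragmentRenderer {
    static String render(String text, int errorPos, int margin) {
        // clamp the window to the bounds of the document
        int from = Math.max(0, errorPos - margin);
        int to = Math.min(text.length(), errorPos + margin);
        String before = text.substring(from, errorPos);
        String errorChar = "<span style='background-color: red'>"
                + text.charAt(errorPos) + "</span>";
        String after = text.substring(errorPos + 1, to);
        return before + errorChar + after;
    }
}
```

A production version would also HTML-escape the fragment so that characters like < in the JSON don't break the markup.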
It's nice to see how easy it is to do this with Groovy and Grails. As homework, to really appreciate it, I encourage you to try the same in Java using Spring MVC.
See you next time. Bye.
marianong · 13 years ago
Groovy goodies part 1
Redefining the plus (+) operator in Groovy
Hi everyone!
Today I'm going to say almost nothing. I'm just going to give you a self-explanatory example of how to define the behaviour of the plus (+) operator in any class. Just implement a plus() method. Here's the example (you can try it out in the Groovy console):
class MyClass {
    Integer i1
    Integer i2

    MyClass plus(MyClass other) {
        new MyClass(i1: i1 + other.i1, i2: i2 + other.i2)
    }

    String toString() {
        "i1=${i1}, i2=${i2}"
    }
}

MyClass m1 = new MyClass(i1: 10, i2: 20)
MyClass m2 = new MyClass(i1: 5, i2: 10)
MyClass result = m1 + m2
println(result)
The output of the script will be this:
i1=15, i2=30
No comments or explanations needed, I think.
Goodbye!
marianong · 13 years ago
Groovy gotchas part 2
In the last episode of this series we talked about those GDK methods that may show up as not available in our code due to the underlying implementation of the class. Today, in a few words, we are going to talk about a simple but tricky issue which can drive you mad if you don't get the point soon.

When you are parsing a JSON response in Groovy everything is soft and easy. We use the "dot" notation in a Javascript manner to "navigate" through our object hierarchy. But imagine we need some dynamic retrieval in the structure, and we decide to use the [] operator:

BigDecimal valGeneral = competitorsRestMap[it.id]?.general?.media

As it happens, it is a domain entity with a numeric id attribute. Although the JSON has an entry that corresponds to this number, I'm getting a null value. Can you guess why? I was steaming through my ears when I realized that the underlying implementation of the object representing the JSON structure is a Map. The subscript operator (I finally recalled its name) is translated by the compiler into a get() method invocation, which relies on the equals and hashCode implementations of the key being held. In the JSON structure that key happens to be a String, and I'm passing a Number into the underlying get() method, so the equals call doesn't match and I get a null value back. The solution is as simple as this:

BigDecimal valGeneral = competitorsRestMap[it.id as String]?.general?.media

Easy, but tricky if you are new to Groovy. Hope this post can help someone else before their brain starts steaming through their ears. See ya.
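The same gotcha is easy to reproduce in plain Java, since it comes from Map's equals/hashCode contract rather than from Groovy itself (the class name and sample data below are mine, for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// A parsed JSON object is (conceptually) a Map with String keys.
class MapKeyTypeGotcha {
    static final Map<String, Object> parsed = new HashMap<>();
    static {
        parsed.put("7", "value for id 7");
    }

    static Object lookup(Object key) {
        // Map.get relies on equals/hashCode of the key:
        // Long.valueOf(7).equals("7") is false, so a numeric key misses
        // even though an entry "looks" present in the map.
        return parsed.get(key);
    }
}
```

Converting the numeric id to a String before the lookup (the Java equivalent of Groovy's `as String`) makes equals/hashCode match again.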
marianong · 13 years ago
A briefing on some Grails 1.3 productivity issues
Hi everyone! This is meant to be a tiny post about a productivity issue I've found in Grails 1.3.7 when working with IntelliJ IDEA 10.5. It seems like the platform forces you to restart the server when you stop at a breakpoint in controller code before invoking a service. I don't know what the cause of this annoying issue is, nor whether it is related to my very concrete development environment or is a more general problem related to the Grails version, the IDE integration, or both. I'm pretty sure this is solved in Grails 2.0, and has more to do with Grails than with the IDE. Anyway, any comments or ideas are welcome (if there is anybody out there, which I guess not :)). The error I get is the following: *java.lang.ClassCastException: front.competitors.CompetitorsService cannot be cast to front.competitors.CompetitorsService*. This happens if I debug code before the service invocation. I warn you: if you make a bad refactor taking a service out of its convention folder, and then refactor back to fix it, you may run into the problem I'm dealing with. Take care and see you soon again in the Java developer diary.
marianong · 14 years ago
Implementing a dynamic null-safe comparator in Groovy
Recently I've run into this requirement: in a Grails GSP view, I had to present a fixed (not user-sortable) list of items ordered first by one column in descending order, and in case of equality by another column. The logic for both orderings was the natural order of Double values that could be null. So the first issue I had to deal with was making a null-safe comparison of Double objects without getting a NullPointerException.
Fortunately we have Jakarta Commons Collections and its pretty cool gems, one of which is the set of ready-to-use utility comparators it provides.
But this was not enough for me. My list of items was (as usual) represented by an ordered list of beans, and none of them was going to be null in my case. What could be null were the concrete properties of the bean I had to take into account when ordering. The NullComparator shipped with this API allows me to include null references in my list, and even give them high or low priority over the rest of the non-null elements, but it doesn't prevent a NullPointerException when the property you're using to compare is null.
After a quick Google search for a ready-made solution to this problem, I wasn't able to find the component I was looking for, so I decided to implement it on my own, making it reusable for any kind of bean and taking advantage of the dynamic features of the Groovy language. These are the steps I followed.
Creating a Groovy class that implements the Comparator interface
Any collection in Groovy has a set of sort() methods that return an ordered list: a new list if the collection is not a List, or the (sorted) list itself if it already is one. The several variants of this method allow us to customize the behaviour of the sorting process. The one that interests us now receives a Comparator object as an argument. So the first step consists in creating a Groovy class that implements this interface and behaves as we need. As we said earlier, we want a generic comparator that allows us to order any list regardless of the type of the objects representing its items, so we will deal with beans whose properties are referenced as instances of the Object class.
In this very first step of construction, our class would look something like this:
/**
 * Dynamic comparator for any Java bean, that makes a null-safe comparison
 * of two objects by the natural ordering of the property whose name is
 * specified in the constructor.
 * User: mnavas
 * Date: 11/23/11
 * Time: 9:51 AM
 */
class DynamicNullSafeComparator implements Comparator {

    private Log log = LogFactory.getLog(getClass())

    List<String> propNames

    ...

    int compare(Object o1, Object o2) {
        ...
    }
}
Just a couple of things to point out in this code snippet:
We have a property in the class, a List of String objects, where we hold the names of the properties we will take into account when ordering a collection. We will make this comparator able to order by any property of our working bean, and in case of equality it will be able to use a second, third, and so on, property to resolve which element comes first. The precedence of all the properties relevant to the comparison is given by the list itself (the order of its elements defines in which order we want to take the bean properties into account).
As we are implementing the Comparator interface, we need a compare method as defined in it. We will dig into the details of this method's implementation later.
Next thing to do: we are implementing a generic comparator, so even though our requirements don't force us to deal with null objects, our comparator will support them. On the other hand, we have to order first by the first element in the property names list, and in case of equality fall through to the next property until we get a non-equal comparison result. Let's have a look at a first version of our compare method trying to achieve this behaviour:
int compare(Object o1, Object o2) {
    int result = 0
    propNames.each { nextName ->
        if (result == 0) {
            NullComparator nullComparator = new NullComparator(nullsAreHigh)
            result = nullComparator.compare(o1, o2)
        }
    }
    result
}
This implementation uses the default internal ComparableComparator of the NullComparator, which uses natural order (given by the implementation of the Comparable interface) of the objects being compared.
First enhancement: selecting the order (asc or desc)
We are already comparing objects in a null-safe fashion, and by two or more fields, but what about direction? Our class doesn't allow us to select ascending or descending order. To fix that, let's look directly at the code that does the trick:
class DynamicNullSafeComparator implements Comparator {

    // Log instance
    private Log log = LogFactory.getLog(getClass())

    // List with the properties to order by
    List<String> propNames

    // The direction of this sorting
    OrderingDirection ordering

    // A Boolean that indicates whether null values should be pushed first or pulled back in the list
    Boolean nullsAreHigh

    // An enumeration that defines the different ordering directions
    static enum OrderingDirection {
        ASC, DESC
    }

    // The interface's method implementation
    int compare(Object o1, Object o2) {
        int result = 0
        propNames.each { nextName ->
            if (result == 0) {
                NullComparator nullComparator = new NullComparator(new LocalComparator(nextName), nullsAreHigh)
                result = nullComparator.compare(o1, o2)
            }
        }
        result
    }

    // A convenient comparator that does the trick of the ordering direction for us
    private class LocalComparator implements Comparator {

        // Bean property holding the name of the property to compare in the two objects
        String propName

        // A public constructor to make it easier to create an instance passing in a property name
        LocalComparator(String propName) {
            this.propName = propName
        }

        int compare(Object o1, Object o2) {
            Object first, second
            if (ordering == OrderingDirection.ASC) {
                first = o1
                second = o2
            } else {
                first = o2
                second = o1
            }
            first."${propName}".compareTo(second."${propName}")
        }
    }
}
The key to the ordering direction is the inner class LocalComparator. As we can see, depending on the ordering direction it considers either object as the first or the second one. We have added several properties to our class in order to allow client code to specify, besides the names of the properties involved in the sorting, whether null values should have high or low priority and whether we want the sorting to be done in ascending or descending order. We also take advantage of the dynamic nature of Groovy to dynamically call the getter for the property being compared on the two beans passed into the compare() method.
Second enhancement: null safety in bean properties
What happens if the objects in the list to be sorted are not null, but in some or all of them one or several of the properties involved are null? With the current implementation we may get a (guess what ...) NullPointerException!!
... first."${propName}".compareTo(second."${propName}") ...
This is a problem we can solve, again, using a NullComparator. The bean property being compared is in fact another object, so we can do something like this in the compare method of the LocalComparator inner class:
int compare(Object o1, Object o2) {
    Object first, second
    if (ordering == OrderingDirection.ASC) {
        first = o1
        second = o2
    } else {
        first = o2
        second = o1
    }
    NullComparator localNc = new NullComparator(nullsAreHigh)
    localNc.compare(first?."${propName}", second?."${propName}")
}
Now, if either of the properties is null, the null-safe Groovy operator (?.) will pass a null value into the compare method of this inner NullComparator without throwing an exception. At the same time, we can control the priority of this null value in the same way we did before.
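As an aside, in Java 8 and later much of this null-safe, multi-key sorting can be assembled from the standard java.util.Comparator factory methods (nullsFirst/nullsLast, thenComparing), without Commons Collections. A sketch with a hypothetical Person bean (names are mine, not from the post):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class Person {
    final String name;
    final Double score; // may be null, like the Double columns in the post

    Person(String name, Double score) {
        this.name = name;
        this.score = score;
    }
}

class NullSafeSortDemo {
    // Sort by score descending with nulls last; ties broken by name ascending.
    // nullsLast handles null keys itself and delegates non-null keys
    // to the wrapped comparator, so no NullPointerException is possible.
    static final Comparator<Person> BY_SCORE_DESC_THEN_NAME =
            Comparator.comparing((Person p) -> p.score,
                            Comparator.nullsLast(Comparator.<Double>reverseOrder()))
                    .thenComparing((Person p) -> p.name,
                            Comparator.nullsLast(Comparator.<String>naturalOrder()));

    static List<Person> sortPeople(List<Person> people) {
        List<Person> copy = new ArrayList<>(people); // don't mutate the input
        copy.sort(BY_SCORE_DESC_THEN_NAME);
        return copy;
    }
}
```

This doesn't give you the dynamic by-property-name lookup of the Groovy version, but for statically known keys it replaces the whole class with two lines.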
Third enhancement: putting it all together, more nicely
For the sake of better encapsulation we will make this comparator class immutable; that is, we set its state at construction time and cannot change it afterwards; we have to create a new object if we want a comparator with different properties. This is a good practice that functional programming brings to us. To do so in Groovy we take these two steps:
Adding a set of constructors that allow client code to create the object with the desired state. We'll make only one of them public and force calling code to establish all the properties of the comparator in one go.
Implementing setters for the properties of the comparator in a way that doesn't allow changing them (neither the list of property names to be considered in sorting, nor the ascending or descending order, nor the priority of null values).
This is the final picture of the class, with comments and other non-executable stuff removed:
package util

import org.apache.commons.collections.comparators.NullComparator
import org.apache.commons.lang.Validate
import org.apache.commons.logging.Log
import org.apache.commons.logging.LogFactory

class DynamicNullSafeComparator implements Comparator {

    private Log log = LogFactory.getLog(getClass())

    List<String> propNames
    OrderingDirection ordering
    Boolean nullsAreHigh

    static enum OrderingDirection {
        ASC, DESC
    }

    private DynamicNullSafeComparator(List<String> propNames) {
        Validate.notNull(propNames, 'The list of property names to take into sorting cannot be null')
        this.propNames = propNames
    }

    private DynamicNullSafeComparator(List<String> propNames, OrderingDirection ordering) {
        this(propNames)
        Validate.notNull(ordering, 'The ordering parameter cannot be null')
        this.ordering = ordering
    }

    public DynamicNullSafeComparator(List<String> propNames, OrderingDirection ordering, Boolean nullsAreHigh) {
        this(propNames, ordering)
        Validate.notNull(nullsAreHigh, 'the nullsAreHight parameter cannot be null')
        this.nullsAreHigh = nullsAreHigh
    }

    public void setPropNames(String pn) {
        throwInmutableException()
    }

    public void setOrdering(OrderingDirection ordering) {
        throwInmutableException()
    }

    public void setNullsAreHigh(Boolean b) {
        throwInmutableException()
    }

    private void throwInmutableException() {
        throw new UnsupportedOperationException('''This is a read-only property, because this object is inmutable.
If you need to compare using another property or another comparison conditions, create another instance''')
    }

    int compare(Object o1, Object o2) {
        int result = 0
        propNames.each { nextName ->
            if (result == 0) {
                NullComparator nullComparator = new NullComparator(new LocalComparator(nextName), nullsAreHigh)
                result = nullComparator.compare(o1, o2)
            }
        }
        result
    }

    private class LocalComparator implements Comparator {

        String propName

        public LocalComparator(String pName) {
            propName = pName
        }

        int compare(Object o1, Object o2) {
            Object first, second
            if (ordering == OrderingDirection.ASC) {
                first = o1
                second = o2
            } else {
                first = o2
                second = o1
            }
            NullComparator localNc = new NullComparator(nullsAreHigh)
            localNc.compare(first?."${propName}", second?."${propName}")
        }
    }
}
What's odd?
If we had decided to do things in the right, orthodox way, we would already have some unit tests to verify that our implementation works well. We include a complete set of tests here, in which we use Groovy Expando dynamic objects to create different sets of fixtures and beans to test several use cases. You can copy and paste this code and verify that everything works well (or not):
import util.DynamicNullSafeComparator.OrderingDirection as OrderingDirection

class DynamicNullSafeComparatorTest extends GroovyTestCase {

    DynamicNullSafeComparator comparator

    /*
     * Tests about state violations
     */
    void testNullAllArgConstructor() {
        shouldFail(IllegalArgumentException.class, {
            comparator = new DynamicNullSafeComparator([null], null, null)
        })
    }

    void testNullOrderingParameter() {
        shouldFail(IllegalArgumentException.class, {
            comparator = new DynamicNullSafeComparator(['aProperty'], null, true)
        })
    }

    private def createNewValidComparator() {
        new DynamicNullSafeComparator(['aProperty'], OrderingDirection.ASC, true)
    }

    void testNullHighOrLowNulls() {
        shouldFail(IllegalArgumentException.class, {
            comparator = new DynamicNullSafeComparator(['aProperty'], OrderingDirection.ASC, null)
        })
    }

    void testInmutablityNameProperty() {
        comparator = createNewValidComparator()
        shouldFail(UnsupportedOperationException.class, {
            comparator.propNames = 'anotherProperty'
        })
    }

    void testInmutabilityOrderingProperty() {
        comparator = createNewValidComparator()
        shouldFail(UnsupportedOperationException.class, {
            comparator.ordering = DynamicNullSafeComparator.OrderingDirection.DESC
        })
    }

    void testInmutabilityNullOrderProperty() {
        comparator = createNewValidComparator()
        shouldFail(UnsupportedOperationException.class, {
            comparator.nullsAreHigh = false
        })
    }

    /*
     * Tests about sorting by a single property
     */
    private def createListHappyPath() {
        [new Expando(name: 'Mariano'), new Expando(name: 'Alfredo'), new Expando(name: 'Jose')]
    }

    private def createListOneNullObject() {
        [null, new Expando(name: 'Jose'), new Expando(name: 'Alfredo')]
    }

    private def createListOneNullPropertyInOneObject() {
        [new Expando(), new Expando(name: 'Jose'), new Expando(name: 'Alfredo')]
    }

    void testHappyPathAscNullsHigh() {
        def els = createListHappyPath()
        doGenericTest(els, 'name', [els[1], els[2], els[0]], OrderingDirection.ASC, true)
    }

    void testOneNullObjectAscNullsHight() {
        def els = createListOneNullObject()
        doGenericTest(els, 'name', [els[2], els[1], els[0]], OrderingDirection.ASC, true)
    }

    void testNullPropertyInOneObjectAscNullsLow() {
        def els = createListOneNullPropertyInOneObject()
        doGenericTest(els, 'name', [els[0], els[2], els[1]], OrderingDirection.ASC, false)
    }

    void testDescOrderNullsHigh() {
        def els = createListHappyPath()
        doGenericTest(els, 'name', [els[0], els[2], els[1]], OrderingDirection.DESC, true)
    }

    void testDescOrderWithNullObjectNullsLow() {
        def els = createListOneNullObject()
        doGenericTest(els, 'name', [els[0], els[1], els[2]], OrderingDirection.DESC, false)
    }

    void testDescOrderWithNullObjectNullsHigh() {
        def els = createListOneNullObject()
        doGenericTest(els, 'name', [els[1], els[2], els[0]], OrderingDirection.DESC, true)
    }

    private void doGenericTest(def beansList, String comparatorPropName, def expectedList, OrderingDirection order, Boolean nullsHigh) {
        if (order) {
            comparator = new DynamicNullSafeComparator([comparatorPropName], order, nullsHigh)
        } else {
            comparator = new DynamicNullSafeComparator([comparatorPropName], OrderingDirection.ASC, nullsHigh ?: false)
        }
        beansList.sort(comparator)
        assert beansList == expectedList
    }

    /*
     * Tests about sorting by two properties
     */
    void testHappyPathWithTwoPropertiesAsc() {
        def input = [new Expando(name: 'Pepe', edad: 18), new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)]
        def expected = [input[1], input[2], input[0]]
        genericTestTwoProperties(input, expected, OrderingDirection.ASC, true, ['name', 'edad'])
    }

    void testTwoPropertiesOneNullObjectAscNullsHigh() {
        def input = [null, new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)]
        def expected = [input[1], input[2], input[0]]
        genericTestTwoProperties(input, expected, OrderingDirection.ASC, true, ['name', 'edad'])
    }

    void testTwoPropertiesOneNullPropertyAscNullsHigh() {
        def input = [new Expando(name: null, edad: 15), new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)]
        def expected = [input[1], input[2], input[0]]
        genericTestTwoProperties(input, expected, OrderingDirection.ASC, true, ['name', 'edad'])
    }

    void testTwoPropertiesSecondOneNullAscNullsHigh() {
        def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: 'Jose', edad: null), new Expando(name: 'Jose', edad: 30)]
        def expected = [input[2], input[1], input[0]]
        genericTestTwoProperties(input, expected, OrderingDirection.ASC, true, ['name', 'edad'])
    }

    void testTwoPropertiesSecondOneNullAscNullsLow() {
        def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: 'Jose', edad: null), new Expando(name: 'Jose', edad: 30)]
        def expected = [input[1], input[2], input[0]]
        genericTestTwoProperties(input, expected, OrderingDirection.ASC, false, ['name', 'edad'])
    }

    void testTwoPropertiesSecondOneNullDescNullsHigh() {
        def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: 'Jose', edad: null), new Expando(name: 'Jose', edad: 30)]
        def expected = [input[0], input[1], input[2]]
        genericTestTwoProperties(input, expected, OrderingDirection.DESC, true, ['name', 'edad'])
    }

    void testTwoPropertiesFirstOneNullAscNullsLow() {
        def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: null, edad: 45), new Expando(name: 'Jose', edad: 30)]
        def expected = [input[1], input[2], input[0]]
        genericTestTwoProperties(input, expected, OrderingDirection.ASC, false, ['name', 'edad'])
    }

    void testTwoPropertiesFirstOneNullAscNullsHigh() {
        def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: null, edad: 45), new Expando(name: 'Jose', edad: 30)]
        def expected = [input[2], input[0], input[1]]
        genericTestTwoProperties(input, expected, OrderingDirection.ASC, true, ['name', 'edad'])
    }

    private void genericTestTwoProperties(def input, def expected, OrderingDirection order, Boolean nullsHigh, List<String> propNames) {
        comparator = new DynamicNullSafeComparator(propNames, order, nullsHigh)
        def actual = input.sort(comparator)
        assert expected == actual
    }

    /*
     * Some corner-case tests
     */
    void testSecondPropertyAlwaysNull() {
        genericTestOnePropertyAlwaysNull('staffValorationsRounded', 'customerValorationsRounded')
    }

    void testFirstPropertyAlwaysNull() {
        genericTestOnePropertyAlwaysNull('customerValorationsRounded', 'staffValorationsRounded')
    }

    private void genericTestOnePropertyAlwaysNull(String prop1, String prop2) {
        Expando b1 = new Expando(name: 'Circo del Sol', customerValorationsRounded: null, staffValorationsRounded: new Double(8.2))
        Expando b2 = new Expando(name: 'Teresa Rabal', customerValorationsRounded: null, staffValorationsRounded: new Double(7.7))
        Expando b3 = new Expando(name: 'Zingaros del mundo', customerValorationsRounded: null, staffValorationsRounded: new Double(7.3))
        Expando b4 = new Expando(name: 'El Gran circo de los niños', customerValorationsRounded: null, staffValorationsRounded: new Double(8.1))
        def inputList = [b2, b4, b3, b1]
        def expected = [b1, b4, b2, b3]
        comparator = new DynamicNullSafeComparator([prop1, prop2], OrderingDirection.DESC, true)
        def actual = inputList.sort(comparator)
        assert expected == actual
    }
}
What's next (another improvement)
A further improvement we can add to our comparator is the possibility of establishing a different ordering direction for each property on the list. Up to now we can sort, say, a list by the property 'name' of our bean and, in case of equality, use the property 'age' to break the tie; but the ordering direction has to be the same for both properties. Making the direction configurable per property is pretty simple: we just replace the order property of our comparator with a List of OrderingDirection elements, this way:
We convert the Ordering direction property into a list of directions (one for each property, in corresponding order).
List<String> propNames
List<OrderingDirection> ordering
We modify the constructors and setters to adapt to this change.
private DynamicNullSafeComparator(List<String> propNames, List<OrderingDirection> ordering) {
    this(propNames)
    Validate.notNull(ordering, 'The ordering parameter cannot be null')
    this.ordering = ordering
}

public DynamicNullSafeComparator(List<String> propNames, List<OrderingDirection> ordering, Boolean nullsAreHigh) {
    this(propNames, ordering)
    Validate.notNull(nullsAreHigh, 'The nullsAreHigh parameter cannot be null')
    this.nullsAreHigh = nullsAreHigh
}

public void setOrdering(List<OrderingDirection> ordering) {
    throwInmutableException()
}
Now we modify the LocalComparator inner class to receive, along with the property name to compare, the ordering direction for that property.
private class LocalComparator implements Comparator {
    String propName
    OrderingDirection directionForThisProperty

    public LocalComparator(String pName, OrderingDirection direction) {
        propName = pName
        directionForThisProperty = direction
    }

    int compare(Object o1, Object o2) {
        Object first, second
        if(directionForThisProperty == OrderingDirection.ASC) {
            first = o1
            second = o2
        } else {
            first = o2
            second = o1
        }
        NullComparator localNc = new NullComparator(nullsAreHigh)
        localNc.compare(first?."${propName}", second?."${propName}")
    }
}
Finally, in the compare() method we modify the call to LocalComparator to pass in the corresponding direction.
int compare(Object o1, Object o2) {
    int result = 0
    int cont = 0
    propNames.each { nextName ->
        if(result == 0) {
            NullComparator nullComparator = new NullComparator(new LocalComparator(nextName, ordering[cont]), nullsAreHigh)
            result = nullComparator.compare(o1, o2)
            cont++
        }
    }
    result
}
And finally we write some tests to verify that everything works fine. First come the new tests, and then the complete test suite with the modifications needed to make the existing tests pass:
//New tests
/*
 * Ordering with a different direction for each property
 */
void testTwoPropertiesBothAsc() {
    def input = [new Expando(name: 'Maria', edad: 15), new Expando(name: 'Jose', edad: 25),
                 new Expando(name: 'Jose', edad: 30)]
    def expected = [input[1], input[2], input[0]]
    genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.ASC], true, ['name', 'edad'])
}

void testTwoPropertiesFirstAscSecondDesc() {
    def input = [new Expando(name: 'Maria', edad: 15), new Expando(name: 'Jose', edad: 25),
                 new Expando(name: 'Jose', edad: 30)]
    def expected = [input[2], input[1], input[0]]
    genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.DESC], true, ['name', 'edad'])
}

void testTwoPropertiesFirstDescSecondAsc() {
    def input = [new Expando(name: 'Maria', edad: 15), new Expando(name: 'Jose', edad: 25),
                 new Expando(name: 'Jose', edad: 30)]
    def expected = [input[0], input[1], input[2]]
    genericTestTwoProperties(input, expected, [OrderingDirection.DESC, OrderingDirection.ASC], false, ['name', 'edad'])
}
....
//Complete test suite with this new three tests included package util import util.DynamicNullSafeComparator.OrderingDirection as OrderingDirection class DynamicNullSafeComparatorTest extends GroovyTestCase{ DynamicNullSafeComparator comparator /* * Tests about state violations */ void testNullAllArgConstructor() { shouldFail(IllegalArgumentException.class, { comparator = new DynamicNullSafeComparator([null], null, null) }) } void testNullOrderingParameter() { shouldFail(IllegalArgumentException.class, { comparator = new DynamicNullSafeComparator(['aProperty'], null, true) }) } private def createNewValidComparator() { new DynamicNullSafeComparator(['aProperty'], [OrderingDirection.ASC], true) } void testNullHighOrLowNulls() { shouldFail(IllegalArgumentException.class, { comparator = new DynamicNullSafeComparator(['aProperty'], [OrderingDirection.ASC], null) }) } void testInmutablityNameProperty() { comparator = createNewValidComparator() shouldFail(UnsupportedOperationException.class, { comparator.propNames='anotherProperty' }) } void testInmutabilityOrderingProperty() { comparator = createNewValidComparator() shouldFail(UnsupportedOperationException.class, { comparator.ordering=[DynamicNullSafeComparator.OrderingDirection.DESC] }) } void testInmutabilityNullOrderProperty() { comparator = createNewValidComparator() shouldFail(UnsupportedOperationException.class, { comparator.nullsAreHigh=false }) } /* * Test about sorting a single property */ private def createListHappyPath() { [new Expando(name: 'Mariano'), new Expando(name: 'Alfredo'), new Expando(name: 'Jose')] } private def createListOneNullObject() { [null, new Expando(name: 'Jose'), new Expando(name: 'Alfredo')] } private def createListOneNullPropertyInOneObject() { [new Expando(), new Expando(name: 'Jose'), new Expando(name: 'Alfredo')] } void testHappyPathAscNullsHigh() { def els = createListHappyPath() doGenericTest(els, 'name', [els[1], els[2], els[0]], [OrderingDirection.ASC], true) } void 
testOneNullObjectAscNullsHight() { def els = createListOneNullObject() doGenericTest(els, 'name', [els[2], els[1], els[0]], [OrderingDirection.ASC], true) } void testNullPropertyInOneObjectAscNullsLow() { def els = createListOneNullPropertyInOneObject() doGenericTest(els, 'name', [els[0], els[2], els[1]], [OrderingDirection.ASC], false) } void testDescOrderNullsHigh() { def els = createListHappyPath() doGenericTest(els, 'name', [els[0], els[2], els[1]], [OrderingDirection.DESC], true) } void testDescOrderWithNullObjectNullsLow() { def els = createListOneNullObject() doGenericTest(els, 'name', [els[0], els[1], els[2]], [OrderingDirection.DESC], false) } void testDescOrderWithNullObjectNullsHigh() { def els = createListOneNullObject() doGenericTest(els, 'name', [els[1], els[2], els[0]], [OrderingDirection.DESC], true) } private void doGenericTest(def beansList, String comparatorPropName, def expectedList, List<OrderingDirection> order, Boolean nullsHigh) { if(order) { comparator = new DynamicNullSafeComparator([comparatorPropName], order, nullsHigh) } else { comparator = new DynamicNullSafeComparator([comparatorPropName], [OrderingDirection.ASC], nullsHigh?:false) } beansList.sort(comparator) assert beansList == expectedList } /* * Test about sorting by two properties */ void testHappyPathWithTwoPropertiesAsc() { def input = [new Expando(name: 'Pepe', edad: 18), new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)] //comparator = new DynamicNullSafeComparator(['name', 'edad'], OrderingDirection.ASC, true) def expected = [input[1], input[2], input[0]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.ASC], true, ['name', 'edad']) } void testTwoPropertiesOneNullObjectAscNullsHigh() { def input = [null, new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)] def expected = [input[1], input[2], input[0]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.ASC], true, 
['name', 'edad']) } void testTwoPropertiesOneNullPropertyAscNullsHigh() { def input = [new Expando(name: null, edad: 15), new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)] def expected = [input[1], input[2], input[0]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.ASC], true, ['name', 'edad']) } void testTwoPropertiesSecondOneNullAscNullsHigh() { def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: 'Jose', edad: null), new Expando(name: 'Jose', edad: 30)] def expected = [input[2], input[1], input[0]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.ASC], true, ['name', 'edad']) } void testTwoPropertiesSecondOneNullAscNullsLow() { def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: 'Jose', edad: null), new Expando(name: 'Jose', edad: 30)] def expected = [input[1], input[2], input[0]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.ASC], false, ['name', 'edad']) } void testTwoPropertiesSecondOneNullDescNullsHigh() { def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: 'Jose', edad: null), new Expando(name: 'Jose', edad: 30)] def expected = [input[0], input[1], input[2]] genericTestTwoProperties(input, expected, [OrderingDirection.DESC], true, ['name', 'edad']) } void testTwoPropertiesFirstOneNullAscNullsLow() { def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: null, edad: 45), new Expando(name: 'Jose', edad: 30)] def expected = [input[1], input[2], input[0]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC], false, ['name', 'edad']) } void testTwoPropertiesFirstOneNullAscNullsHigh() { def input = [new Expando(name: 'Pepe', edad: 15), new Expando(name: null, edad: 45), new Expando(name: 'Jose', edad: 30)] def expected = [input[2], input[0], input[1]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC], true, ['name', 'edad']) } private void 
genericTestTwoProperties(def input, def expected, List<OrderingDirection> order, Boolean nullsHigh, List<String> propNames) { comparator = new DynamicNullSafeComparator(propNames, order, nullsHigh) def actual = input.sort(comparator) assert expected == actual } /* * Corner cases tests */ void testSecondPropertyAlwaysNull() { genericTestOnePropertyAlwaysNull('valoracionReviewsThisYearRounded', 'valoracionSurveyThisYearRounded') } void testFirstPropertyAlwaysNull() { genericTestOnePropertyAlwaysNull('valoracionSurveyThisYearRounded', 'valoracionReviewsThisYearRounded') } private void genericTestOnePropertyAlwaysNull(String prop1, String prop2) { Expando b1 = new Expando(name: '40 Flats', valoracionSurveyThisYearRounded: null, valoracionReviewsThisYearRounded: new Double(8.2)) Expando b2 = new Expando(name: '4C PUERTA EUROPA', valoracionSurveyThisYearRounded: null, valoracionReviewsThisYearRounded: new Double(7.7)) Expando b3 = new Expando(name: '562 Nogaro', valoracionSurveyThisYearRounded: null, valoracionReviewsThisYearRounded: new Double(7.3)) Expando b4 = new Expando(name: 'Van der Valk Brussels Airport', valoracionSurveyThisYearRounded: null, valoracionReviewsThisYearRounded: new Double(8.1)) def inputList = [b2,b4,b3,b1] def expected = [b1,b4,b2,b3] comparator = new DynamicNullSafeComparator([prop1, prop2], [OrderingDirection.DESC], true) def actual = inputList.sort(comparator) assert expected == actual } /* * Ordering by different order in each property */ void testTwoPropertiesBothAsc() { def input = [new Expando(name: 'Maria', edad: 15), new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)] def expected = [input[1], input[2], input[0]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.ASC], true, ['name', 'edad']) } void testTwoPropertiesFirstAscSecondDesc() { def input = [new Expando(name: 'Maria', edad: 15), new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)] def expected = 
[input[2], input[1], input[0]] genericTestTwoProperties(input, expected, [OrderingDirection.ASC, OrderingDirection.DESC], true, ['name', 'edad']) } void testTwoPropertiesFirstDescSecondAsc() { def input = [new Expando(name: 'Maria', edad: 15), new Expando(name: 'Jose', edad: 25), new Expando(name: 'Jose', edad: 30)] def expected = [input[0], input[1], input[2]] genericTestTwoProperties(input, expected, [OrderingDirection.DESC, OrderingDirection.ASC], false, ['name', 'edad']) } }
Some restrictions of this comparator
We've seen so far the simplicity and power of this generic comparator. Unfortunately it has some limitations we have to keep in mind:
It doesn't allow nested properties. We can sort by any direct property of a bean, but we cannot use dot notation to "navigate" to nested properties and order by them. We can instead extract the parent objects of those properties, sort them, associate each object with its corresponding parent, and reconstruct the object graph afterwards. Homework for you: implement an example of this use case scenario.
We can only sort items whose properties implement the Comparable interface, since the algorithm relies on the natural order of the objects in use. We can fix this with another simple improvement, which may be the subject of another post in the near future.
Take care and see you soon on the Java Developer Diary.
marianong · 14 years ago
Text
Tricky issues in Javascript for Java Developers (part 2)
Note: as in the first part of this series, the examples and main ideas of this post are taken from this article. In this series we are focusing only on Javascript as compared to Java, in order to provide a valuable guide for Java developers interested in getting deeper into Javascript. We don't intend either to copy/paste that source or to cover other topics included in it, like jQuery. We take some of its code samples because they show crystal clear what we mean in each chapter.
In the first part of this series we talked about boolean evaluation, comparison operators and object definition in Javascript, pointing out what a Java developer has to keep in mind about them (for a programmer used to one language the interesting thing is what changes, not what remains the same, you know...). In this second part we are focusing on functions, a concept that differs from its Java counterpart (assuming there is a Java counterpart for functions; methods are really a different thing).
Functions as first-class citizens
In Java, methods are always attached to classes. In Javascript we can have standalone functions, either globally scoped or more locally scoped. These functions can be given a name, executed and passed around as self-contained units. This is the concept of a closure, so we can conclude that closures in Javascript are implemented as functions. If you are curious about how a closure in Java would look, have a look at this post.
We have two ways of defining and invoking a function in Javascript:
//Assigning an anonymous function to a variable
var myFunc = function() {
    //do something
};
//or using a function declaration
function myFunc() {
    //...
}
//The two above are nearly equivalent (function declarations are hoisted, function expressions are not)

//Invoking it using its name directly
var myFuncReturnValue = myFunc();
//Or using the call method of the function object
var myFuncReturnValue = myFunc.call();
Whether we use the bare name or the call form of invocation, we can pass any arguments we want to it.
There are two interesting uses of functions in Javascript that have no counterpart in Java methods:
Self executing anonymous functions
These allow us to limit the scope of the variables used in the function to the function body. Otherwise they are equivalent to straight Javascript code, e.g.:
(function(){ var foo = 'Hello world'; })();
In the context of a <script> block this would produce the same effect as var foo = 'Hello world';
With one big difference: in the first case the foo variable is not accessible outside the function. This way we can avoid polluting the global scope with variables that are meant to be local.
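A common extension of this pattern (an illustrative sketch not taken from the original article, sometimes called the module pattern) is to return a value from the self-executing function, keeping internals private while exposing a chosen API:

```javascript
// Hypothetical example: a counter whose internal state stays private
var counter = (function() {
    var count = 0; // private: not visible outside this function
    return {
        increment: function() { count++; },
        value: function() { return count; }
    };
})();

counter.increment();
counter.increment();
console.log(counter.value()); // logs 2
console.log(typeof count);    // logs 'undefined': count is hidden
```

Only the object returned by the function escapes; the count variable itself cannot be touched from the outside.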
Functions as arguments
As I said earlier, functions are the actual implementation of closures in Javascript. Therefore we can pass them as parameters to other functions:
var myFn = function(fn) {
    var result = fn();
    console.log(result);
};

myFn(function() {
    return 'hello world';
}); // logs 'hello world'
The this keyword
In Java all non-static code is executed within an object (in the context of an object), so the meaning of the this keyword is unique: it's a pointer to the object in which the code is executing. In static methods, which are attached to a class and not related to any particular object, the compiler doesn't allow us to use this, so the unique meaning of the keyword holds in every case.
In Javascript there is an execution context that may alter what the this pointer actually refers to. In the simplest case, as in Java, if the function is invoked as a method of an object, this refers to that object. If we invoke the function using call or apply (ways of invoking a function we haven't mentioned yet, but that exist), this will be set to the first argument passed to call or apply. If that argument is absent, this will refer to the global object (by default, the window object). For more information on the global object, take a look at the chapter The global object and global scope below:
var myObject = {
    sayHello : function() {
        console.log('Hi! My name is ' + this.myName);
    },
    saySomethingElse : function() {
        console.log('this value: ' + this);
    },
    myName : 'Rebecca'
};

var secondObject = {
    myName : 'Colin'
};

myObject.sayHello(); // logs 'Hi! My name is Rebecca'
myObject.sayHello.call(secondObject); // logs 'Hi! My name is Colin'
myObject.saySomethingElse(); // logs 'this value: [object Object]'; here this refers to myObject
There is another way to call a Javascript function we haven't mentioned yet: the bind() function method, which creates a "copy" of a function, binding parameters to it at definition time:
var myName = 'the global object',
    sayHello = function () {
        console.log('Hi! My name is ' + this.myName);
    },
    myObject = {
        myName : 'Rebecca'
    };

var myObjectHello = sayHello.bind(myObject);

sayHello(); // logs 'Hi! My name is the global object'
myObjectHello(); // logs 'Hi! My name is Rebecca'
In the example above we "copy" the function sayHello(), binding myObject to it, and assign the resulting function to a variable called myObjectHello. Invoking myObjectHello without parameters results in an invocation of sayHello() with this bound to myObject. As with call() and apply(), the this pointer refers to the first parameter passed to bind().
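bind() can also pre-fill regular arguments, not just the this value; this is part of the standard Function.prototype.bind behaviour, although not covered in the example above. A small illustrative sketch (the names greet and person are made up for this example):

```javascript
var greet = function(greeting, punctuation) {
    return greeting + ', ' + this.myName + punctuation;
};

var person = { myName : 'Rebecca' };

// Bind this to person AND pre-fill the first argument
var hiRebecca = greet.bind(person, 'Hi');

console.log(hiRebecca('!')); // logs 'Hi, Rebecca!'
```

The bound copy only needs the arguments that weren't fixed at bind time, a form of partial application.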
Finally, if our function is not attached to any object (it's a standalone function), the this keyword refers to the global object:
var myFunction = function() {
    console.log(this); // will log the global object
};
myFunction();
The global object and global scope
The global object is the global namespace, in which every DOM element is nested. I don't think we have a Java counterpart for this, so let's explain this Javascript feature in isolation.
Usually we won't use the global namespace explicitly. In fact, even if we define a global variable (a variable declared with the var keyword outside any function) the browser will use the window object as the holding scope for that variable. The next example shows that fact:
var myVar = 'Hello world'; console.log(window.myVar);
The same effect will show up with this code:
var myVar = 'Hello world'; console.log(myVar);
This is because if no object is specified as the scope or owner of a variable, Javascript takes the window object as the default. There is another way of getting a reference to the global (window, by default) object: as we already know, we can obtain it by using the call() invocation of a function and the this pointer inside that function:
(function() {
    var global = (function() { return this; }).call();
    global.Formula = function() {
        // body of function
    };
})();
In the above example, global refers to the global object. We have used a self-executing anonymous function (a technique we saw earlier to limit the scope of a variable) to get the global object, returning this from an inner function invoked with call().
We are not finished yet going through the most important quirks Javascript has to offer Java developers, but I think it's enough for today. In the next part of this series we will talk about arrays and some side effects that the changing meaning of the this keyword we have just seen may have on our code. Also, if I have time, I'll try to dig deeper into how to use closures in Javascript.
See ya!!
marianong · 14 years ago
Text
Tricky issues in Javascript for Java Developers (part 1)
After carefully reading one of the best Javascript and jQuery tutorials I've ever run into (here, if you are curious) I'd like to summarize, in a brief (as always) series of posts, the trickiest things I've found in Javascript that might sound strange to Java developers, but that have to be known in order to code confidently in the most important browser language so far.
First of all we have to take into account that Javascript is an interpreted (OK, Google Chrome uses another approach, but this is the most common one), dynamic, loosely typed language, where variables can point to primitive values, objects or functions (to put it simply). This fact is the source of the idiosyncrasy of the language. From that point, let's dig into the topic.
Boolean evaluation
The first thing to point out is what in Javascript evaluates to true or false. Because this evaluation drives flow control statements, it affects the core of our understanding of any piece of Javascript code. In Java we are used to having the boolean primitive and the Boolean wrapper object, and because of the language's strongly typed nature we cannot use other types with logical and comparison operators. In Javascript almost any value can be used with these operators, a fact that leads us to define what evaluates to true or false in Javascript.
Expressions that evaluate to true:
Any non-empty string (including the string '0').
An empty array ([]) or object ({}); this is a difference from the Groovy language that might mislead everyday Groovy developers.
Any non-zero number (1, 2, 1000, etc).
On the other side, expressions that evaluate to false:
The number zero (0).
An empty String ('').
The Javascript NaN (not a number), null and undefined values.
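As a quick check (an illustrative snippet, not from the referenced article), we can make these rules explicit with the Boolean() function, which applies exactly this truthiness evaluation:

```javascript
// Truthy values
console.log(Boolean('any string')); // true
console.log(Boolean('0'));          // true: a non-empty string
console.log(Boolean([]));           // true: empty array (unlike Groovy!)
console.log(Boolean({}));           // true: empty object
console.log(Boolean(42));           // true: non-zero number

// Falsy values
console.log(Boolean(0));         // false
console.log(Boolean(''));        // false
console.log(Boolean(NaN));       // false
console.log(Boolean(null));      // false
console.log(Boolean(undefined)); // false
```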
Logical operators return values (based on boolean evaluation rules)
Another tricky issue is that Javascript logical operators return one of the operands they are acting upon, on this basis:
The || operator returns the value of the first truthy operand or, in case neither operand is truthy, the last of the two operands.
The && operator returns the value of the first falsy operand, or the value of the last operand if both operands are truthy.
These two assertions are adapted almost literally from the aforementioned article, and they describe perfectly the rules that apply. Some examples:
var foo = 1; // truthy
var bar = 0; // falsy
var baz = 2; // truthy

foo || bar; // returns 1, which is truthy
bar || foo; // returns 1, which is truthy
foo && bar; // returns 0, which is falsy
foo && baz; // returns 2, which is truthy
baz && foo; // returns 1, which is truthy
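One practical consequence of these rules (an idiomatic usage example, not taken from the article) is the common default-value pattern built on ||:

```javascript
function greet(name) {
    // If name is falsy (undefined, '', 0, ...), || returns the fallback
    var who = name || 'stranger';
    return 'Hello, ' + who;
}

console.log(greet('Rebecca')); // logs 'Hello, Rebecca'
console.log(greet());          // logs 'Hello, stranger'
```

Note that the fallback also kicks in for legitimate falsy inputs such as '' or 0, which is exactly the truthiness behaviour described above.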
Comparison operators
In Java we have one comparison operator (==) which, applied to references, returns true if and only if the two references being compared point to the same object. This is what we call an identity comparison. The closest equivalent in Javascript would be (I guess) the identity operator, which is represented by three equals signs (===) instead of two. For example:
3 === 3   // returns true
3 === '3' // returns false
If we want a more 'real world' equivalence comparison, in Java we have the equals method. Its somewhat-equivalent in Javascript is the == operator:
3 == '3' // returns true
This operator tries to guess whether the two expressions are equivalent in some way, even though they might not be identical. When we say "some way" we mean an intuitive, predictable way. In case of doubt, test it to learn how the comparison behaves.
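Some of those coercions are less intuitive than others (these are standard == results, shown here as an illustration), which is one reason many style guides recommend sticking to ===:

```javascript
console.log(0 == '');            // true: '' is coerced to the number 0
console.log(0 == '0');           // true: '0' is coerced to the number 0
console.log('' == '0');          // false: two strings, compared as strings
console.log(null == undefined);  // true: a special case in the language
console.log(null === undefined); // false: different types, so === rejects it
```

The first three lines show that == is not even transitive, which is a good argument for the identity operator.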
Object definition
In Java we code classes, and then we instantiate them, creating objects. In Javascript definition and instantiation can occur together. We define objects in a Groovy-map style, giving first a name and then assigning it a value (what in Java would be a class attribute) or a function (what we would call a method in Java):
var myObject = {
    sayHello : function() {
        console.log('hello');
    },
    myName : 'Rebecca'
};

myObject.sayHello(); // logs 'hello'
console.log(myObject.myName); // logs 'Rebecca'
That's all we have time for in this post. In the second part we will dig into functions and the common pointers used in them, like the this keyword. Stay tuned for the second part, which will show up soon in the Java developer diary.
marianong · 14 years ago
Text
An HTML 5 introduction
In this post I'm not trying to write an HTML 5 tutorial; there is already a lot of very good material about that out there. Today I'll try to summarize the concepts that change the way we are used to coding HTML, mainly in these ways:
The way we layout our pages.
The way we insert media and graphical content.
The way we code static and dynamic content.
RIA has gone too far for the old HTML specification. jQuery or GWT have made our lives easier; still, we feel we are pushing the technology far beyond its primary purpose. That fact is the foundation of HTML 5.
With this in mind, let's point out where exactly the difference in concept lies between this new HTML specification and the former ones. If you are new to HTML 5 and have some background in former specifications, this post is for you.
New layout elements
Formerly we mainly used <div> and <span> elements to build the structure of the document. They've been, in some ways, the bricks of our sites. That means we had only one type of brick, and we had to do all the work necessary to adapt it to each part of the layout it was included in.
In HTML 5 we have new structural elements with defined semantics, which we can use to build each part of our pages. The names of the elements are in most cases self-explanatory, e.g.:
<header>
<footer>
<nav>
<aside>
<section>
An exhaustive list of these with all related information can be found on the official HTML 5 specification site. Perhaps more convenient documentation for everyday work is on the awesome W3Schools site (where you can find a lot of material related to HTML, CSS and Javascript in a very convenient format).
Going back to our topic, these new structural elements allow us to give semantics to parts of the page in the markup itself, allowing the browser to recognize the elements and treat them more accurately. That may happen in the future; for now the first benefit we get is much better readability of our HTML code.
New support for media and graphical content
The most important feature related to media in HTML 5 is possibly the <canvas> element. This new element (which has its roots in the Safari browser and Mac OS dashboard widgets) deserves an entire book. It is accompanied by a complete Javascript API that allows 2D drawing and animation without the need for any plugin (such as the Flash plugin).
Other elements that deserve special attention are the <audio> and <video> tags. These tags allow us to include multimedia content in our pages without an external plugin: any HTML 5 compliant browser will render the content directly. Issues regarding codecs, patents and licences obscure the current panorama of these tags. Time will tell; at the time of this writing the final recommendations and decisions are still to be made.
Other things
In this grab bag I will mention things like local storage, form enhancements (with new input types like date, datetime, email or month, to name a few), and other things like new DOM elements and attributes that will surely make our lives easier. Apart from that, we have new features like web workers (which allow us to execute Javascript code in background processes), geolocation (a Javascript API for accessing e.g. GPS information on mobile devices), a drag & drop API for a much easier implementation of this feature, cross-document messaging that allows documents in different windows and iframes to communicate with each other through messages, new browser history management, and MIME type and protocol handler registration (something done by browsers but until now not accessible through an API). Lots of brand new stuff.
I think that's enough for a brief, first introductory post. I hope you have found it useful if you were looking for an introduction to HTML 5 and the way standards are evolving to meet the current and upcoming needs of the world wide web.
See ya!!
marianong · 14 years ago
Text
A real hands on ATDD case scenario
Today I'm going to talk about a real development task we had to accomplish, and the actual way we did so. Because of the nature of the task, an ATDD (acceptance test driven development) approach fitted this scenario perfectly, so we used it first to gather all the required information (the customer specification) and then to implement the unit tests to be passed by the final implementation of the functionality.

We were working on a customer opinion management system, in which we collect information about our customer's clients' satisfaction and then present that information in tabular and graphical views. One of our sources of client information are surveys that clients fill out themselves after consuming the services. These surveys gather customer satisfaction on several topics related to the service received, and ask the customer for a single overall rating between 1 and 5. If the customer doesn't give us an overall rating, we calculate it automatically as a weighted average of the rest of the ratings.

This is a general description of the requirement we had. It seems pretty clear, but doubts arise when we fall into corner cases. If a topic involved in the weighted average hasn't been evaluated, do we give it a value of 0? Do we ignore it and exclude it from the average? If none of the topics have been evaluated, what value do we give the weighted average?

Following an ATDD approach we asked our customer all these questions, and we gave him real examples for each. To put it clearly: each question is given real values (numbers), and the customer must give us both the way to solve the problem (the actual algorithm to use) and its result (the calculated result for the given sample values). After doing so, we defined an interface for the service responsible for the calculation. Afterwards, we coded a unit test for each question posed to the customer, and made assertions on the expected results.
Finally, we coded the actual service and made sure all the tests pass. That is, in a nutshell, a real ATDD scenario we faced recently. It's interesting to note that a goal-driven TDD approach (like ATDD or BDD, behaviour driven development) gives some useful meaning to our TDD work. We are not just testing code; we are making sure that our code complies with the client's requirements. And the first part of all this work consists of sitting down with the customer and defining the corner cases, as well as the straightforward happy path. That information is the basis for the definition of the tests and the assertions to be made.
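To make the corner cases concrete, here is an illustrative sketch (hypothetical names and policy; the actual algorithm and values were whatever the customer decided, which the post doesn't show) of one possible resolution, in which unevaluated topics are excluded from the weighted average and a fully unevaluated survey yields no overall rating at all:

```javascript
// Hypothetical sketch: weighted average of topic ratings (1-5),
// where unevaluated topics (rating === null) are excluded.
function overallRating(topics) {
    // topics: array of { rating: number|null, weight: number }
    var weightedSum = 0, totalWeight = 0;
    topics.forEach(function(t) {
        if (t.rating !== null) { // corner case: skip unevaluated topics
            weightedSum += t.rating * t.weight;
            totalWeight += t.weight;
        }
    });
    // Corner case: nothing evaluated at all -> no overall rating
    if (totalWeight === 0) return null;
    return weightedSum / totalWeight;
}

console.log(overallRating([
    { rating: 4, weight: 2 },
    { rating: null, weight: 1 }, // excluded from the average
    { rating: 5, weight: 2 }
])); // logs 4.5
```

Each of the questions put to the customer maps directly onto one branch of this function, which is what makes writing one unit test per question so natural.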
marianong · 14 years ago
Text
Working with closures in Java
As I promised a while ago, here comes a brief post about how to code closures in Java. To put it in a few words, closures can be seen as code fragments you can pass around as parameters to methods, constructors, and so on. Closures are supported natively in some languages, like *[Groovy](http://groovy.codehaus.org/)*. That's not the case of Java, which doesn't support closures at the syntax level. Despite this, it is possible to work with closures in Java. I've used a couple of strategies for that so far, so let's review them.

## Using Jakarta Commons Collections API ##

The simplest way of using closures in Java is ... guess what ... to have a Closure class. Such a class is provided by the *[Jakarta commons collections framework](http://commons.apache.org/collections/)*, packaged as the org.apache.commons.collections.Closure interface. This interface defines one method, execute, which returns void and takes an Object as a parameter. The usual approach is to define an anonymous implementation of the interface where needed:

```java
Closure cl = new Closure() {
    public void execute(Object obj) {
        //put your code here ...
    }
};
```

This allows us to pass a code fragment to a method, and implement some kind of template pattern without inheritance. It's useful when different code has to be executed inside a common structure, such as a loop or an if condition.

Another interface we can implement in a similar fashion is the org.apache.commons.collections.Transformer interface. The only difference is that the execute-like method of Transformer is called transform, and it returns an Object. It allows us to use closures with return values and transform an input object into another object.

```java
Transformer tr = new Transformer() {
    public Object transform(Object obj) {
        //your transformation implementation here ...
    }
};
```

One thing we do have to keep in mind, if we are going to use our transformer implementation within the collections framework, is that the transform method specification requires us to keep the input object unchanged, returning a new instance (doesn't it sound like functional programming music?). If we are using this class just as a closure wrapper in our own code, it's up to us how we implement the actual logic.

These two interfaces are not very type safe. If we want to take advantage of the strong typing of Java we can use an alternative implementation of the collections API, which we look at next.

## Using Collections Generic framework ##

The Java 1.5, generics based API analogous to the Jakarta commons collections is the *[Collections Generic API](http://larvalabs.com/collections/)*. This API defines typed versions of the Closure and Transformer interfaces:

```java
public interface Closure<T> {
    void execute(T input);
}
...
public interface Transformer<I, O> {
    O transform(I input);
}
```

Here we can specify, and validate at compile time, the exact type of object a Closure can receive, or a Transformer will return. This allows us to write clearer, safer code, as we have compile time checking of the types passed into the *execute* and *transform* methods. Apart from that, the use of these classes is identical to the untyped ones.

These two APIs will suffice in most cases. But sometimes it is convenient to have a more customized definition for our closure. In Groovy, for example, we can pass an arbitrary number of parameters to a closure. To do this with these two techniques we need to wrap all the parameters into one single object, which is then passed to the *execute* or *transform* method. If we prefer to keep the parameter list as separate elements, we can use a third approach (see below).

## Using a custom inline interface ##

Imagine that we need a closure that takes two Strings, and returns a concatenation of both in the order they are received.
At the same time, we need another closure with the same signature, but one that concatenates in reverse order (the second argument first). Let's see the code first and then point out some things about it:

```java
public class MyClass {

    //Definition of our super custom closure
    private interface OurCustomClosure {
        String concatenate(String str1, String str2);
    }

    //Implementation of both cases
    OurCustomClosure directClosure = new OurCustomClosure() {
        public String concatenate(String str1, String str2) {
            return str1 + str2;
        }
    };

    OurCustomClosure reverseClosure = new OurCustomClosure() {
        public String concatenate(String str1, String str2) {
            return str2 + str1;
        }
    };

    //A method that receives and uses our closure
    public String applyClosure(OurCustomClosure cl) {
        //Let's assume str1 and str2 are variables defined in the actual scope
        return cl.concatenate(str1, str2);
    }

    //A method that uses the previous one
    public String passClosure() {
        //We could have used the reverseClosure instance as well
        return applyClosure(directClosure);
    }
}
```

Not much explanation needed. In this case we create an interface as a strongly typed definition of what our closure receives and returns. This interface is kept local to where it will be used, through an in-class private interface definition (we can widen the scope of this definition if we need the closure more globally). Class attributes provide implementations of the closure, which are then used in other methods.

Although dynamic languages like Groovy allow us to pass closures around without a well defined signature and use the parameters and return value dynamically, this strongly typed approach can be useful when we prefer more compile time control over what we are doing. It's the only way (as far as I've seen) to take advantage of the closure concept in Java, but sometimes even on more dynamic platforms it can be a convenient technique. It is, I think, a matter of taste: dynamic, flexible, quick development vs. strongly typed, clearer code, self-documented and checked at compile time.
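As a closing illustration, the typed Closure and Transformer idea from the Collections Generic section can be tried out in a fully self-contained way. In this sketch the two interfaces are declared locally, mirroring the library's shape, so the example compiles without any jar on the classpath; all other names are illustrative:

```java
// Local stand-ins mirroring the Collections Generic interfaces,
// declared here only so the sketch compiles without the library.
interface Closure<T> {
    void execute(T input);
}

interface Transformer<I, O> {
    O transform(I input);
}

public class TypedClosureDemo {

    // A String-to-Integer transformer: the types are checked at compile time.
    static final Transformer<String, Integer> LENGTH = new Transformer<String, Integer>() {
        public Integer transform(String input) {
            return input.length();
        }
    };

    public static void main(String[] args) {
        Closure<String> shout = new Closure<String>() {
            public void execute(String input) {
                System.out.println(input.toUpperCase());
            }
        };
        shout.execute("hello");                        // prints HELLO
        System.out.println(LENGTH.transform("hello")); // prints 5
    }
}
```

Passing a `Closure<Integer>` where a `Closure<String>` is expected now fails at compile time, which is exactly the safety the untyped Jakarta interfaces cannot give us.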