Java 4 Ever


The coolest new Google search feature

Congrats to Google on the new search results page layout.

There is one feature that I find extremely useful.

The “anytime” filter, where one can search for pages that have been updated in the last X months/weeks/days.

Moreover, one can search for pages within a custom date range, e.g. between last July and last August.

Why do I find it so useful?

First, because I sent exactly such a feature request myself 🙂 (two years ago!!!)

Second, in many cases I found myself searching for some event, yet the first results were much older pages than those I expected. The keyword “new” is especially meaningless, since things were “new” back in 2005 as well.

E.g. when you search for “new features in Java”, you will probably get the new features of JDK 1.4, as those pages have a much higher PageRank, yet you were probably looking for what’s new in Java in the last few months… NOW you can do that.

Another cool feature that solves the same problem is “Sorted by date”.


Quick JDK 8 Suggestion

JDK 7 is just around the corner, and I am really excited about all the goodies it brings with it.

While I was trying to benchmark JDK 7 against older JDKs, I realized that the GC (garbage collector) is an unknown factor: while some piece of code is running, one can never know whether the GC is running in parallel. In that case a specific iteration might take much more time, and the benchmark data gets corrupted.
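A common workaround for that interference is to warm the code up, hint a collection right before each timed run, and take the best of several iterations. Here is a sketch of my own (the class and method names are made up; a dedicated benchmarking harness would do this far more rigorously):

```java
public class GcAwareBenchmark {

    // A stand-in workload for whatever code is actually being benchmarked.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            work(); // warm-up iterations so the JIT compiles the hot path first
        }
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 10; i++) {
            System.gc(); // hint a collection *before* timing, to lower the odds of one *during* timing
            long t0 = System.nanoTime();
            work();
            best = Math.min(best, System.nanoTime() - t0); // best-of-N dampens GC outliers
        }
        System.out.println("best iteration (ns): " + best);
    }
}
```

This only reduces the odds of a collection landing inside a timed run; nothing short of a real noGC guarantee can rule it out.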

So my suggestion is to add an awesome new block type (JDK 8!!!):

no Garbage Collection block – noGC

noGC {
  //some code here;
} catch (AlmostOutOfMemoryError err) {
  //handle the almost-full heap here;
}

While the code inside the noGC block is running, it is guaranteed that the GC won’t run in parallel.

An AlmostOutOfMemoryError would be thrown in case the heap is X% full (where X is configurable via -xnogcf).

Just to be clear, it wouldn’t help me out with benchmarking, since the older JDKs would not support it, yet that was the trigger.

How such a block would work in a multi-threaded environment is another issue…

In my opinion, applications with modest real-time needs would benefit a lot from such a block, much more than from all those real-time Java frameworks out there…

I would love to hear your opinion.

A* implementation

“A* (A-star) is a best-first graph search algorithm that finds the least-cost path from a given initial node to one goal node. It uses a distance-plus-cost heuristic function (usually denoted f(x)) to determine the order in which the search visits nodes in the tree.” (Wikipedia)

I implemented the A* algorithm in Java a few weeks ago. I wonder what you think about this implementation and how it can be improved, mostly design- and performance-wise:

The A* main class

package org.simple.astar;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Set;

public class AStar<S extends State> {

 private S init;
 private S goal;
 private int expendedStates;
 private HuristicFunction<S> h;
 private ArrayList<S> openSet;
 private HashSet<S> closedSet;

 public int getExpendedStates() {
   return expendedStates;
 }

 public AStar(S init, S goal, HuristicFunction<S> h) {
   this.init = init;
   this.goal = goal;
   this.h = h;
   expendedStates = -1;
 }

 @SuppressWarnings("unchecked")
 public LinkedList<S> find() {
   HComparator<S> hcomp = new HComparator<S>(h, goal);
   openSet = new ArrayList<S>();
   closedSet = new HashSet<S>();
   S current = init;
   current.setDistance(0);
   openSet.add(current);
   boolean foundOptimal = false;
   while (!openSet.isEmpty()) {
     Collections.sort(openSet, hcomp); // keep the open set ordered by f(x) = g(x) + h(x)
     current = openSet.get(0);
     removeState(current);
     expendedStates++;
     if (current.equals(goal)) {
       foundOptimal = true;
       break;
     }
     Set<S> ne = current.getNeighbours();
     for (S state : ne) {
       if (closedSet.contains(state)) {
         continue; // already expanded with an optimal distance
       }
       int newDistance = current.getDistance() + 1; // assuming a uniform edge cost of 1
       int stIndex = openSet.indexOf(state);
       if (stIndex == -1) {
         state.setDistance(newDistance);
         state.setPrevious(current);
         openSet.add(state);
       } else if (newDistance < openSet.get(stIndex).getDistance()) {
         // found a shorter path to a state that is already open - relax it
         openSet.get(stIndex).setDistance(newDistance);
         openSet.get(stIndex).setPrevious(current);
       }
     }
   }
   LinkedList<S> result = new LinkedList<S>();
   if (!foundOptimal) {
     return result; // empty list - the goal is unreachable
   }
   result.addFirst(current);
   while (current.getPrevious() != null) {
     current = (S) current.getPrevious();
     result.addFirst(current);
   }
   return result;
 }

 private void removeState(S s) {
   openSet.remove(s);
   closedSet.add(s);
 }
}

Heuristic interface:

package org.simple.astar; 

public interface HuristicFunction<N extends State> {
 public double getEvaluation(N current, N goal);
}

Heuristic example (BFS)

package org.simple.astar;

public class BFS<S extends State> implements HuristicFunction<S> {
	public double getEvaluation(S current, S goal) {
		return 0; // a constant-zero heuristic degenerates A* into a plain uniform-cost search
	}
}


The comparator class

package org.simple.astar;
import java.util.Comparator; 

public class HComparator<S extends State> implements Comparator<S> {
 HuristicFunction<S> h;
 S goal;

 public HComparator(HuristicFunction<S> h, S goal) {
   this.h = h;
   this.goal = goal;
 }

 public int compare(S o1, S o2) {
   Double e1 = o1.getDistance() + h.getEvaluation(o1, goal);
   Double e2 = o2.getDistance() + h.getEvaluation(o2, goal);
   if (e1.doubleValue() == e2.doubleValue())
     return o2.getDistance().compareTo(o1.getDistance()); // tie-break: prefer the deeper node
   return e1.compareTo(e2);
 }
}

State (node) abstract class

package org.simple.astar;
import java.util.Set; 

public abstract class State implements Cloneable {
 Integer distance;
 State previous;

 public Integer getDistance() {
   return distance;
 }

 public void setDistance(int distance) {
   this.distance = distance;
 }

 public State getPrevious() {
   return previous;
 }

 public void setPrevious(State previous) {
   this.previous = previous;
 }

 public abstract Set getNeighbours();
}

You can read more about path finding using A* here.

Avoid memory leaks using Weak & Soft references

Some Java developers believe that there is no such thing as a memory leak in Java (thanks to the fabulous automatic garbage collection concept).

Others have met the OutOfMemoryError and understood that the JVM has encountered some memory issue, but they are not sure whether it’s caused by their code or maybe even by the OS…

The OutOfMemoryError API docs reveal that it is “Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.”

As we know, the JVM has a parameter that sets the maximum heap size (-Xmx), hence we can definitely try to increase the heap size. Yet some code generates new instances all the time; if those instances stay accessible (referenced, possibly transitively, from the main program) for the entire program life span, then the GC won’t reclaim them. The heap keeps growing and eventually an OutOfMemoryError is thrown <- we call that a memory leak.

Our job as Java developers is to release references (that are reachable from the main program) that we won’t use in the future. By doing that we make sure the GC will reclaim those instances (freeing the heap memory they occupy).
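A classic example of such a lingering reference (a sketch of my own, in the spirit of the well-known stack example from Effective Java) is a simplified stack whose popped slots keep pointing at dead objects; the nulling line is the fix:

```java
import java.util.Arrays;

public class LeakyStack {
    private Object[] elements = new Object[16];
    private int size = 0;

    public void push(Object e) {
        if (size == elements.length) {
            elements = Arrays.copyOf(elements, 2 * size);
        }
        elements[size++] = e;
    }

    public Object pop() {
        Object result = elements[--size];
        // Without this line the array slot would keep the popped object
        // strongly reachable forever - a textbook Java memory leak.
        elements[size] = null;
        return result;
    }
}
```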

In some cases we reference an instance from two different roots: one root represents a fast-retrieval space (e.g. a HashMap) and the other manages the real lifespan of that instance. Sometimes we would like to remove the reference from the lifespan-managing root and have the fast-retrieval reference removed automatically.

We wouldn’t want to do that manually; we are not C++ developers, and we don’t want to manage memory by hand…

Weak references

In order to solve that we can use WeakReference.

Instances that are referenced only by weak references will get collected on the next collection (they are weakly reachable); in other words, those references don’t protect their referent from the garbage collector.

Hence, if we would like to manage the life span of an instance through one reference only, we can use the WeakReference object to create all the other references (usage: WeakReference<Object> wr = new WeakReference<Object>(someObject);).

In some apps we would like to add all of our existing references to some static list. Those references should not be strong ones, otherwise we would have to clean them manually. We would add the references to the list using code like this:

private static List<WeakReference<Object>> refList = new ArrayList<WeakReference<Object>>();

public static void addWeakReference(Object o){
 refList.add(new WeakReference<Object>(o));
}

Since most WeakReference use cases need a Map data structure, there is a Map implementation that holds its keys through weak references automatically for you – WeakHashMap.
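A minimal sketch of that behavior (class and variable names are mine): an entry lives as long as its key is strongly reachable, and becomes eligible for removal once the last strong reference to the key is dropped.

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakMapDemo {
    public static void main(String[] args) {
        Map<Object, String> registry = new WeakHashMap<Object, String>();
        Object key = new Object();
        registry.put(key, "payload");
        System.out.println(registry.size()); // 1 - 'key' is still strongly reachable

        key = null;  // drop the only strong reference to the key
        System.gc(); // only a hint - the actual timing is up to the collector
        // After a collection, the entry may silently disappear from the map.
        System.out.println(registry.size());
    }
}
```

Note that the second size is intentionally unasserted: when the collector clears the weakly-held key is up to the JVM.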

Soft References

I saw a few implementations of a cache using weak references (e.g. the cache is just a WeakHashMap => the GC cleans old objects in the cache). Without weak references a naive cache can easily cause memory leaks, and therefore weak references might be a solution for that.

The main problem is that the GC will probably clean the cached objects faster than you need.

Soft references solve that: those references are exactly like weak references, yet the GC won’t reclaim them as eagerly. We can be sure that the JVM won’t throw an OutOfMemoryError before it reclaims all the soft and weak references!

Using soft references for caching is considered the naive generic cache solution (a poor man’s cache).

(usage: SoftReference<Object> sr = new SoftReference<Object>(someObject);)
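A naive generic cache along those lines might look like this sketch of my own (a real cache would also handle concurrency and an eviction policy):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Entries are held through SoftReferences, so they survive until the JVM
// is under memory pressure; cleared entries are pruned lazily, on lookup.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<K, SoftReference<V>>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        if (ref == null) {
            return null; // never cached
        }
        V value = ref.get();
        if (value == null) {
            map.remove(key); // the GC cleared it - drop the stale entry
        }
        return value;
    }
}
```

Note that the map itself still holds the SoftReference wrappers strongly, which is why stale entries must be removed on lookup (or by a ReferenceQueue in a more serious design).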

How to speed up your Java code – myths

In my last post I covered tips that I have collected throughout the years on how to speed up your Java code.

After reviewing the tips and reading my friends’ criticism, I updated the list and created a new list of myths. Here it is:

final: developers might think that final methods are more efficient because the compiler will be able to inline them. It’s false: imagine that you are compiling the class Main against the class Inline, where the non-static method Main.main() creates an instance of Inline and invokes inline.finalMethod(), which is final. At compile time everything looks great, yet at runtime we might use a different version of the compiled Inline class in which finalMethod is not final and can be overridden…

Synchronization blocks: old VMs used to pay a lot of overhead for running a synchronized method; new VMs mostly know how to detect a synchronized method that is not actually contended and treat it as a non-synchronized one.

Calling the garbage collector manually: calling the garbage collector manually (System.gc()) is usually a mistake. The new VMs’ garbage collection mechanisms are state-of-the-art and will most likely invoke the GC at a better time. Moreover, a manual call triggers a full collection of all generations -> that’s not a smart move.

Object pooling: allocating objects on the heap is not cheap, but for non-complex objects it’s not that expensive either; designing an object pool for simple objects will in many cases just add the overhead of managing the pool.

In general it seems like performance tips should always be revisited since new compilers and VMs try to solve exactly those problems.

Immutable objects: in general, immutable objects have many advantages: (1) automatic thread-safety, (2) their hashCode value is cacheable, (3) they are easy to work with.

a quote from Effective Java: “Classes should be immutable unless there’s a very good reason to make them mutable……..If a class cannot be made immutable, limit its mutability as much as possible.”
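As an illustration of those three points, here is a sketch of my own (the class name and fields are made up): a “modifying” operation returns a new instance instead of mutating, and the hashCode can be computed once because the state never changes.

```java
public final class ImmutablePoint {
    private final int x;
    private final int y;
    private final int hash; // (2) cached - safe because the state never changes

    public ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
        this.hash = 31 * x + y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // (1) no setters: "modifying" returns a new instance, so instances
    // can be shared freely between threads without synchronization
    public ImmutablePoint translate(int dx, int dy) {
        return new ImmutablePoint(x + dx, y + dy);
    }

    @Override
    public int hashCode() { return hash; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ImmutablePoint)) return false;
        ImmutablePoint p = (ImmutablePoint) o;
        return x == p.x && y == p.y;
    }
}
```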