JDK 7 is coming; it finally seems it will see the light, without some really nice features like closures, but with other nice improvements like NIO 2.0, Project Coin, and automatic resource management. One new feature that I really like is the inclusion of the new concurrency classes specified in JSR 166y. In this post I will summarize these new classes, which can help us with parallel programming in Java.
Let's briefly introduce each new class and create a simple example:
Interface TransferQueue with its implementation LinkedTransferQueue. A TransferQueue is a BlockingQueue in which producers may wait until a consumer receives the element. Because it is also a BlockingQueue, the programmer can choose to wait until a consumer receives the element (TransferQueue.transfer()) or simply enqueue it without waiting, as in JSR 166 (BlockingQueue.put()). This class should be used when your producer sometimes awaits receipt of elements and sometimes only needs to enqueue them without waiting.
An example where the producer is blocked until the consumer polls an element:
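The original listing is not reproduced here, so the following is a minimal sketch of such a producer/consumer pair that prints the trace below; the class name TransferQueueExample and the five-second delay are my own choices:

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

public class TransferQueueExample {

    public static void main(String[] args) throws InterruptedException {
        final TransferQueue<String> queue = new LinkedTransferQueue<String>();

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("Before Transfer.");
                    // transfer blocks until a consumer receives the element
                    queue.transfer("Hello World!!");
                    System.out.println("After Transfer.");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();

        Thread.sleep(5000); // the consumer delays five seconds before polling
        System.out.println("Before Consumer.");
        System.out.println(queue.take());
        System.out.println("After Consumer.");
        producer.join();
    }
}
```

Note that the producer unblocks the instant the consumer receives the element, so the exact interleaving of the last two lines may vary between runs.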
And the output is:
Before Transfer.
<producer thread waits 5 seconds>
Before Consumer.
Hello World!!
After Consumer.
After Transfer.
But what happens if I change the transfer call to a put call? The output is:
Before Transfer.
After Transfer.
<consumer thread waits 5 seconds>
Before Consumer.
Hello World!!
After Consumer.
The producer finishes its work just after enqueuing the Hello World message.
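Since nothing blocks in the put variant, it can even be sketched in a single thread; this compact version (class name PutExample is mine) prints exactly the trace above:

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

public class PutExample {

    public static void main(String[] args) throws InterruptedException {
        TransferQueue<String> queue = new LinkedTransferQueue<String>();

        System.out.println("Before Transfer.");
        // put enqueues the element and returns at once;
        // LinkedTransferQueue is unbounded, so it never blocks
        queue.put("Hello World!!");
        System.out.println("After Transfer.");

        Thread.sleep(5000); // some time later a consumer picks the element up
        System.out.println("Before Consumer.");
        System.out.println(queue.take());
        System.out.println("After Consumer.");
    }
}
```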
Class Phaser. This class is similar to CyclicBarrier in that it waits until all parties reach the barrier point before continuing thread execution. The difference is that Phaser is more flexible: the number of parties is not fixed as in CyclicBarrier; parties can register and deregister dynamically at any time. Also, each Phaser has a phase number, which enables independent control of actions upon arrival at the phaser and while awaiting others. New methods like arrive and awaitAdvance are provided. Phaser also lets you control termination through the onAdvance method, which is invoked each time all parties reach the barrier point; by default it returns true only when no parties remain registered, but by overriding it you can modify this behavior, for example making all threads perform several iterations of their task before the phaser terminates.
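That iterating behavior can be sketched as follows; this is a hypothetical example of my own (class and task names are not from the original post) where three parties run three phases before onAdvance terminates the phaser:

```java
import java.util.concurrent.Phaser;

public class IterativePhaser {

    public static void main(String[] args) throws InterruptedException {
        // overriding onAdvance keeps the phaser alive for exactly three phases
        final Phaser phaser = new Phaser(3) {
            @Override
            protected boolean onAdvance(int phase, int registeredParties) {
                return phase >= 2 || registeredParties == 0;
            }
        };

        Thread[] threads = new Thread[3];
        for (int i = 0; i < threads.length; i++) {
            final int id = i;
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    do {
                        System.out.println("Task " + id + ", phase " + phaser.getPhase());
                        phaser.arriveAndAwaitAdvance();
                    } while (!phaser.isTerminated());
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
    }
}
```

Each party prints one line per phase (phases 0, 1 and 2), and after the third advance onAdvance returns true, the phaser terminates, and every loop exits.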
Let's see an example of using Phaser as a CountDownLatch, although, as you will notice, there are some differences. The first is that we initialize the Phaser to 1 (the self thread) and then register each party dynamically; with CountDownLatch we would have to initialize statically to 15+1. arriveAndAwaitAdvance behaves like CyclicBarrier.await, and getArrivedParties() returns how many parties have arrived at the barrier point. Note in the following example that one of the parties does not call arriveAndAwaitAdvance() but arrive(): this method notifies the Phaser that the party has arrived at the barrier point but does not block; the thread executes some extra logic, and only after that does it wait until all other parties have arrived, by calling awaitAdvance.
I suppose you are wondering what the return value of the arrive method is. Phaser.arrive is responsible for notifying the phaser that the thread has arrived at the barrier point, and it returns immediately with the current phase number. The phase number is an integer managed by the Phaser class: initially it is 0, and each time all parties arrive at a barrier point it is incremented. Phaser.awaitAdvance stops thread execution until the current phase number has been incremented.
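The original listing is not reproduced here, so this is a minimal sketch of the program that produces the trace below; I assume the party printing Hello World 0 is the one that calls arrive (class name PhaserExample is mine):

```java
import java.util.concurrent.Phaser;

public class PhaserExample {

    public static void main(String[] args) throws InterruptedException {
        final Phaser phaser = new Phaser(1); // "1" registers the parent thread itself

        Thread[] threads = new Thread[15];
        for (int i = 0; i < threads.length; i++) {
            final int id = i;
            phaser.register(); // parties are registered dynamically, one per thread
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    System.out.println("Hello World " + id);
                    if (id == 0) {
                        // arrive() notifies the phaser without blocking and
                        // returns the current phase number
                        int phase = phaser.arrive();
                        try {
                            Thread.sleep(5000); // some extra logic before waiting
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                        // block until the phase number has advanced past 'phase'
                        phaser.awaitAdvance(phase);
                        System.out.println("After Sleep");
                    } else {
                        phaser.arriveAndAwaitAdvance(); // like CyclicBarrier.await
                    }
                }
            });
            threads[i].start();
        }

        phaser.arriveAndAwaitAdvance(); // the parent thread arrives as well
        System.out.println("END");

        for (Thread t : threads) { // keep the demo alive until every party finishes
            t.join();
        }
    }
}
```

Because thread 0 arrives immediately via arrive(), the phase advances long before its five-second sleep ends, so END is printed first and After Sleep appears last.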
Output of previous program:
Hello World 2
Hello World 0
<the thread that prints Hello World 0 is executing Thread.sleep(5000)>
Hello World 6
Hello World 10
Hello World 1
Hello World 3
Hello World 8
Hello World 4
Hello World 13
Hello World 11
Hello World 9
Hello World 14
Hello World 7
Hello World 12
<phase number == 0>
Hello World 5
<phase number == 1>
END
After Sleep
Note that After Sleep is printed only after all threads have arrived at the barrier point, including "the parent thread".
Class ForkJoinTask. This class is a lightweight form of Future. Its main intended use is for computational tasks calculating pure functions or operating on purely isolated objects. The primary coordination mechanisms are fork(), which arranges asynchronous execution, and join(), which does not proceed until the task's result has been computed.
ForkJoinTask has two abstract subclasses that can be extended: RecursiveAction and RecursiveTask. Imagine the following isolated problem: we have a square matrix and we want to sum all its values. Imagine that this matrix is huge and you want to partition it into much smaller matrices so the calculations can be executed in parallel. To simplify the problem and show how to use ForkJoinTask, the matrix will be a 2x2 square matrix, which obviously would not be worth parallelizing in normal circumstances.
The sequential algorithm would be:
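The original listing is not shown here, so this is a minimal sketch (class name SequentialSum is mine) summing the 2x2 matrix {{1, 2}, {3, 4}}:

```java
public class SequentialSum {

    public static void main(String[] args) {
        int[][] matrix = { { 1, 2 }, { 3, 4 } };

        int sum = 0;
        for (int[] row : matrix) {
            for (int value : row) {
                sum += value;
            }
        }
        System.out.println(sum); // prints 10
    }
}
```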
The result is 10.
And now parallel solution using RecursiveTask.
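Again, the original listing is not reproduced here, so this is a sketch of one way to write it (class name MatrixSumTask is mine); it assumes a square matrix whose side is a power of two, splitting each submatrix into four quadrants until it reaches the trivial 1x1 case:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class MatrixSumTask extends RecursiveTask<Integer> {

    private final int[][] matrix;
    private final int row, col, size; // submatrix origin and side length

    public MatrixSumTask(int[][] matrix, int row, int col, int size) {
        this.matrix = matrix;
        this.row = row;
        this.col = col;
        this.size = size;
    }

    @Override
    protected Integer compute() {
        if (size == 1) {
            return matrix[row][col]; // trivial case: a 1x1 submatrix
        }
        int half = size / 2;
        // divide the matrix into four smaller matrices
        MatrixSumTask topLeft = new MatrixSumTask(matrix, row, col, half);
        MatrixSumTask topRight = new MatrixSumTask(matrix, row, col + half, half);
        MatrixSumTask bottomLeft = new MatrixSumTask(matrix, row + half, col, half);
        MatrixSumTask bottomRight = new MatrixSumTask(matrix, row + half, col + half, half);

        topLeft.fork();     // arrange asynchronous execution of three quadrants
        topRight.fork();
        bottomLeft.fork();
        int sum = bottomRight.compute(); // compute the fourth in the current thread
        sum += bottomLeft.join();        // join waits for each forked result
        sum += topRight.join();
        sum += topLeft.join();
        return sum;
    }

    public static void main(String[] args) {
        int[][] matrix = { { 1, 2 }, { 3, 4 } };
        ForkJoinPool pool = new ForkJoinPool(Runtime.getRuntime().availableProcessors());
        System.out.println(pool.invoke(new MatrixSumTask(matrix, 0, 0, 2)));
    }
}
```

Forking three quadrants and computing the fourth directly in the current thread avoids scheduling one task whose result we would wait for anyway.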
And of course the output is 10 too. Note that we create the ForkJoinPool with the number of available processors, to maximize usage of system resources.
See how the trivial case cuts the recursion by returning a value directly, and how the non-trivial case divides the matrix into four smaller matrices and executes their sums in different threads (calling fork()), while join() waits until each subtask's compute method returns a result.
As you can see, there are not many new concurrency classes in JDK 7, but I think they can help with common concurrency problems, especially the Fork/Join classes.
Wednesday, March 23, 2011
Posted by Alex at 7:16 PM