When using a mutable map for memoization, one shall keep in mind that this would cause typical concurrency problems, e.g. doing a get when a write has not completed yet. However, as the thread-safe attempt below suggests, memoization under multithreading is of little value, if any at all, while the cache is being built.
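(For contrast, the hazard in the first sentence is what a plainly unsynchronized memoizer runs into. The sketch below is for illustration only and is not part of the original code; UnsafeMemo is a made-up name, and scala.collection.mutable.Map gives no guarantees under concurrent access.)

// NOT thread-safe: two threads may interleave the lookup and the insert,
// recompute the same value, or even corrupt the map's internal state.
class UnsafeMemo[T, R](f: T => R) extends (T => R) {
  private[this] val cache = scala.collection.mutable.Map.empty[T, R]
  def apply(x: T): R = cache.getOrElseUpdate(x, f(x))
}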
The following thread-safe code creates a memoized fibonacci function and starts a few threads (named 'a' through 'd') that make calls to it. Run it a couple of times (in the REPL) and one can easily see "f(2) set" printed more than once. This means thread A has initiated the calculation of f(2), but thread B has no idea of it and starts its own copy of the calculation. Such ignorance is pervasive during the construction phase of the cache, because every thread sees no sub-solution established yet and therefore enters the else branch.
object ScalaMemoizationMultithread {
  // do not use a case class: there is a mutable member here
  class Memo[-T, +R](f: T => R) extends (T => R) {
    // an immutable.Map would have to be swapped atomically on every insert;
    // ConcurrentHashMap handles concurrent access directly
    private[this] val cache = new java.util.concurrent.ConcurrentHashMap[T, R]

    def apply(x: T): R =
      // no synchronized needed, as nothing is ever removed during memoization
      if (cache containsKey x) {
        Console.println(Thread.currentThread().getName() + ": f(" + x + ") get")
        cache.get(x)
      } else {
        val res = f(x)
        Console.println(Thread.currentThread().getName() + ": f(" + x + ") set")
        cache.putIfAbsent(x, res) // atomic
        res
      }
  }

  object Memo {
    def apply[T, R](f: T => R): T => R = new Memo(f)

    // fixed-point combinator: a single memoized instance is created,
    // so all recursive calls share one cache
    def Y[T, R](F: (T => R) => T => R): T => R = {
      lazy val yf: T => R = Memo(F(yf)(_))
      yf
    }
  }

  val fibonacci: Int => BigInt = {
    def fiboF(f: Int => BigInt)(n: Int): BigInt =
      if (n <= 0) 1
      else if (n == 1) 1
      else f(n - 1) + f(n - 2)

    Memo.Y(fiboF)
  }

  def main(args: Array[String]): Unit = {
    ('a' to 'd').foreach(ch =>
      new Thread(new Runnable() {
        def run(): Unit = {
          (1 to 2).foreach(_ => {
            Thread.currentThread().setName("Thread " + ch)
            fibonacci(5)
          })
        }
      }).start)
  }
}

Update:
Actually, the above code demonstrates that memoization under multithreading is mostly useless during the cache construction phase, because all the threads from 'a' to 'd' do their own copy of the calculation; sometimes even the base cases fibonacci(0) and fibonacci(1) are no exception.
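If that duplicated work ever matters, one possible remedy (a sketch only, not part of the code above; the name OnceMemo and its shape are made up for illustration) is to cache a thunk instead of the finished result. putIfAbsent then picks exactly one winner per key before the expensive computation runs, and every thread forces the same winner's lazy value:

class OnceMemo[T, R](f: T => R) extends (T => R) {
  // the map stores thunks, not results, so the race is decided before f(x) runs
  private[this] val cache =
    new java.util.concurrent.ConcurrentHashMap[T, () => R]

  def apply(x: T): R = {
    val candidate = new Function0[R] {
      lazy val result: R = f(x) // a member lazy val is computed at most once, under the instance's own lock
      def apply(): R = result
    }
    val winner = cache.putIfAbsent(x, candidate) // returns null if candidate won the slot
    (if (winner == null) candidate else winner)()
  }
}

For fibonacci this cannot deadlock: a thread forcing the thunk for f(n) only ever waits on thunks for smaller arguments, so the waiting can never form a cycle.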