Nested for loop as a stream
Please explain why you would want to iterate the same List twice; it seems strange. Unless it is out of simple curiosity ("I just want to know what happens if...") which can justify almost anything in code.
By the way, that doesn't iterate the same List twice; it iterates it n times, where n equals myList.size(). That will get interesting if you have a 1,000,000-element List!
Campbell Ritchie wrote:Obviously that won't work at all well if any of the array elements is 0.
And what then?
Here is a Stream version of OP's code:
Is that shorter or easier? Hmm...
There are three kinds of actuaries: those who can count, and those who can't.
Piet Souris wrote:
Campbell Ritchie wrote: Obviously that won't work at all well if any of the array elements is 0.
And what then? . . .

You can fill the resultant array with 0s and, if there is only one 0 in the original array, calculate the product of all the other elements just for that one element. If there are multiple 0s in the original array, every result reduces to 0. I am sure OP already knows what happens if you don't compensate for 0s.
You will also get incorrect results if you suffer an arithmetic overflow while calculating the product.
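The zero-compensation scheme described above can be sketched like this (a hypothetical helper, not code from the thread; the name `productExceptSelf` is mine):

```java
import java.util.Arrays;

public class ProductExceptSelf {
    // result[i] is the product of all elements except a[i], compensating for 0s:
    // with more than one 0 every result is 0; with exactly one 0 only that
    // position gets the product of the non-zero elements.
    static long[] productExceptSelf(int[] a) {
        long[] result = new long[a.length];     // filled with 0s by default
        int zeroCount = 0;
        long productOfNonZeros = 1L;
        for (int x : a) {
            if (x == 0) zeroCount++;
            else productOfNonZeros *= x;        // note: may overflow for large inputs
        }
        if (zeroCount > 1) return result;       // every result reduces to 0
        for (int i = 0; i < a.length; i++) {
            if (zeroCount == 1) {
                if (a[i] == 0) result[i] = productOfNonZeros;
            } else {
                result[i] = productOfNonZeros / a[i];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(productExceptSelf(new int[]{1, 2, 0, 4})));
        // prints [0, 0, 8, 0]
    }
}
```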
Given two integers, l and z, find the maximal value of a ^ b (a xor b), where a and b satisfy the following condition:
l <= a <= b <= z
For example, if l = 11 and z = 12, then
11 ^ 11 = 0
11 ^ 12 = 7
12 ^ 12 = 0
(12 ^ 11 is skipped, since the condition requires a <= b.)
Our maximum value is 7.
With two nested for loops, it can easily be done.
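The nested-loop version isn't shown in this extract; a sketch reconstructed from the loop quoted later in the thread (the method name `maxXor` is mine) could look like:

```java
import java.util.ArrayList;
import java.util.List;

public class MaxXorLoops {
    // Collects a ^ b for every pair l <= a <= b <= z, then takes the maximum.
    static int maxXor(int l, int z) {
        List<Integer> xorList = new ArrayList<>();
        for (int a = l; a <= z; a++) {
            for (int b = a; b <= z; b++) {   // b starts at a, so pairs are never repeated
                xorList.add(a ^ b);
            }
        }
        return xorList.stream().mapToInt(Integer::intValue).max().orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(maxXor(11, 12)); // prints 7
    }
}
```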
Now with Stream it results in
Exception in thread "main" java.lang.IllegalStateException: stream has already been operated upon or closed
I understand that it may not provide the correct output, but the problem is how to make a nested stream work on the same list (even though I am using two different streams).
Open for all criticism and suggestions.
PS: JR never ceases to amaze you.
I got the following error while posting; the specific message was: "r" is a silly English abbreviation; use "are" instead. So I am replacing the variable r with z.
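OP's code isn't shown in this extract, but the exception can be reproduced with a minimal sketch: a Java stream is a one-shot pipeline, so consuming the same IntStream once per outer element fails on the second pass.

```java
import java.util.stream.IntStream;

public class StreamReuse {
    public static void main(String[] args) {
        int l = 11, z = 12;
        IntStream inner = IntStream.rangeClosed(l, z);  // one pipeline, created once
        try {
            IntStream.rangeClosed(l, z)
                     // the inner stream is fully consumed for a = 11; the attempt
                     // to consume it again for a = 12 throws IllegalStateException
                     .forEach(a -> inner.forEach(b -> System.out.println(a ^ b)));
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
            // stream has already been operated upon or closed
        }
    }
}
```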
"Thanks to Indian media who has over the period of time swiped out intellectual taste from mass Indian population." - Chetan Parekh
After reading https://www.logicbig.com/tutorials/core-java-tutorial/java-util-stream/lazy-evaluation.html again, I think that when the newly created inner stream tries to read the value of "i" at line 8, the outer stream (intStream) has already been terminated, so no "i" is available; hence the exception.
Not a workaround; I think the way to solve this is
Any link explaining the stream's internal operations is welcome.
Any suggestions to improve it are welcome.
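The solution code itself isn't shown in this extract; a flatMap-based sketch consistent with the code quoted in the next reply (flatMap creates a fresh inner stream per outer element, so no stream is consumed twice):

```java
import java.util.stream.IntStream;

public class MaxXorStream {
    // flatMap flattens one fresh inner IntStream per outer element i,
    // producing every i ^ j with l <= i <= j <= z.
    static int maxXor(int l, int z) {
        return IntStream.rangeClosed(l, z)
                .flatMap(i -> IntStream.rangeClosed(i, z).map(j -> i ^ j))
                .max()
                .orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(maxXor(11, 12)); // prints 7
    }
}
```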
Thank you for sharing.
I have a question. If l = 1 and r = 4:
IntStream intStream = IntStream.rangeClosed(l, r);
IntStream xorStream = intStream.flatMap(i -> {
    IntStream intStreamInner = IntStream.rangeClosed(i + 1, r);
    IntStream ins = intStreamInner.map(j -> i ^ j);
    return ins;
});
The intStreamInner will have a closed range like this:
i = 1: the inner stream will be 2, 3, 4; the results will be 1^2, 1^3, 1^4.
i = 2: the inner stream will be 3, 4.
i = 3: the inner stream will be 4.
i = 4: the inner stream will be rangeClosed(5, 4); since the start index is greater than the end index, rangeClosed returns an empty IntStream, and an empty stream will not map to any values.
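The empty-range behavior described above is easy to check directly:

```java
import java.util.stream.IntStream;

public class EmptyRange {
    public static void main(String[] args) {
        // When start > end, rangeClosed yields an empty stream: nothing to map.
        System.out.println(IntStream.rangeClosed(5, 4).count());  // prints 0
        System.out.println(IntStream.rangeClosed(5, 4)
                                    .map(j -> 4 ^ j)
                                    .max()
                                    .isPresent());                // prints false
    }
}
```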
I think it should be intStreamInner = IntStream.rangeClosed(i, 4):
IntStream xorStream = intStream.flatMap(i -> {
    IntStream intStreamInner = IntStream.rangeClosed(i, r);
    IntStream ins = intStreamInner.map(j -> i ^ j);
    return ins;
});
I used your original nested loop for reference:
for (int a = l; a <= r; a++) {
    for (int b = a; b <= r; b++) {
        xorList.add(a ^ b);
    }
}
Result:
Why is the map-version so much slower? And using parallel IntStreams does not make it any better.
Himai Minh wrote:
i = 4: the inner stream will be rangeClosed(5, 4). Since the start index is greater than the end index, rangeClosed returns an empty IntStream, and an empty stream will not map to any values.

I had not thought about it, but I guess now that that is the reason this program terminates; otherwise it would have gone into an infinite loop.

Himai Minh wrote:
I think it should be intStreamInner = IntStream.rangeClosed(i, 4):
I used your original nested loop for reference:
for (int a = l; a <= r; a++) {
    for (int b = a; b <= r; b++) {
        xorList.add(a ^ b);
    }
}

In this approach (please correct me), I think that if the range is, say, 10 to 12, then we will end up XORing 10 and 11 and then 11 and 10, which I was trying to avoid.
Piet Souris wrote:
Why is the map-version so much slower? And using parallel IntStreams does not make it any better.

This is funny.
Please have a look at the following code: basically, no negative numbers in the range, and an artificial map for the normal range list. ("duurde: ... seconden" in the output below is Dutch for "took: ... seconds".)
SIZE : 100000
getMaxXor duurde: 4.77 seconden
max = 2097151
duurde: 4.771 seconden
SIZE : 100000
getMaxXor duurde: 35.124 seconden
max1 = 2097151
duurde: 35.124 seconden
Why ???
It's not about map or list. It's about calling the method a second time.
SIZE : 100000
getMaxXor duurde: 4.921 seconden
max1 = 2097151
duurde: 4.923 seconden
SIZE : 100000
getMaxXor duurde: 32.228 seconden
max = 2097151
duurde: 32.228 seconden
But still, how and why should it matter that it is being called a second time??
SIZE : 100000
getMaxXor duurde: 4.535 seconden
max1 = 2097151
duurde: 4.536 seconden
SIZE : 100000
getMaxXor duurde: 4.956 seconden
max = 2097151
duurde: 4.956 seconden
and
and I did the map in two ways:
and
And I used two Xor-methods:
Stream-version
non-stream version
and my timings were:
So the java-map-with-default-list versions perform poorly, and the ArrayList-list version performs poorly in the non-stream method. Strange.
Just proves once again that the proper choice of algorithm is your best optimization.
That's true.
But I am wondering why the stream/functional method is slow when called again with the same set of data.
I did not profile it, but I can see that my system has enough memory available.
Carey Brown wrote:
only about 13 seconds longer than the brute force approach. doesn't surprise me.

getMaxXor duurde: 4.921 seconden
getMaxXor duurde: 32.228 seconden <<<<<<<<<<< almost 8 times
When the same list is processed again, it takes almost 8 times as long.
Range [0, 1000000]: Actual max XOR = 1048575 duration: 00:02:48.809
Streaming...
Range : 0-->1000000 max XOR = 1048575 duration: 00:03:02.605
The times were identical.
This must mean that this code was having some side effect that impacted run "two".
If the input list is empty (or has one item), the .range(0, list.size() - 1) becomes range(0, -1), which still runs but very inefficiently, depending on JVM optimizations. Even worse, if either list has few values, the short-circuiting, branch prediction, and JVM optimization degrade.
This is why streamPerfTwo() becomes slow after streamPerfOne() runs that expanded version: the JVM memory and GC are now burdened by the creation of several intermediate streams, plus possible overhead from mapMap.get(false).
I made a small change in streamPerfTwo:
public static void streamPerfTwo() {
    long start = System.currentTimeMillis();
    var max = getMaxXor(mapMap.get(true));
    long eind = System.currentTimeMillis();
    System.out.println("max = " + max);
    System.out.println("duurde: " + (eind - start) / 1000. + " seconden");
}
And the following is the result:
SIZE : 100000
getMaxXor duurde: 4.917 seconden
max1 = 2097151
duurde: 4.919 seconden
SIZE : 100000
getMaxXor duurde: 45.041 seconden <<<<<<<<<<<<<<<<< Killing me
max = 2097151
duurde: 45.041 seconden
Please correct me: we are not calling mapMap.get(false) at all.
Let me rephrase my question.
We are processing the same list, but the containers are different; in our case, one container is a List and the other container is a Map.
The first processing is always much faster than the second processing.
Why is the second processing, from a different container but with the same items, slower than the first processing?
I hope I have put my question properly.
If I swap the calls to call two() before one() then two() runs faster.
ChatGPT's analysis:
🔍 Why This Happens
Java’s JVM is adaptive:
JIT Warm-up
The first time you call a method like getMaxXor() with large inputs, the JVM interprets it. After some threshold (e.g. 10,000 invocations), it JIT-compiles it. When the method is called later with different input shapes (e.g. empty vs. large), optimization paths can diverge.
Speculative Optimization Invalidation
If the JVM sees that getMaxXor() is "usually called with big lists," it optimizes accordingly. But once you call it with a tiny or empty list, it may de-optimize or bail out of compiled code.
Memory Effects / GC
Even creating streams with no output (empty list) still allocates stream pipeline objects. That can increase allocation rate and trigger GC before your next benchmark (in this case, streamPerfTwo()).
The following is the output:
SIZE : 100000
getMaxXor duurde: 4.954 seconden
max = 2097151
duurde: 4.955 seconden
SIZE : 100000
getMaxXor duurde: 36.003 seconden <<<<<<<<<<<<<<< Means container is not the issue, second run is the issue.
max1 = 2097151
duurde: 36.003 seconden
Carey Brown wrote:
ChatGPT's analysis:
🔍 Why This Happens
Java’s JVM is adaptive:
JIT Warm-up
The first time you call a method like getMaxXor() with large inputs, the JVM interprets it. After some threshold (e.g. 10,000 invocations), it JIT-compiles it. When the method is called later with different input shapes (e.g. empty vs. large), optimization paths can diverge.
Speculative Optimization Invalidation
If the JVM sees that getMaxXor() is "usually called with big lists," it optimizes accordingly. But once you call it with a tiny or empty list, it may de-optimize or bail out of compiled code.
Memory Effects / GC
Even creating streams with no output (empty list) still allocates stream pipeline objects. That can increase allocation rate and trigger GC before your next benchmark (in this case, streamPerfTwo()).
I am re-reading it again and again, but ChatGPT is not helping me get to an answer.
Will check on this later... until then, it is open for all to provide input.
Thanks a lot.
In getMaxXor() I printed out the Java type of the 'list' variable: when it was ArrayList it ran slowly; when it was java.util.ImmutableCollections.ListN it ran fast. ListN is an internal Java class and you can't instantiate it directly, but you can get it this way...
Apparently ListN is an optimized, compact, immutable counterpart of ArrayList.
Modified code:
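The modified code itself isn't in this extract; `List.copyOf` is the usual way to obtain the internal ListN implementation, as in this sketch (not Carey's exact code):

```java
import java.util.ArrayList;
import java.util.List;

public class ListNDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>(List.of(1, 2, 3));
        // List.copyOf returns an unmodifiable copy; for lists of this size the
        // runtime class is the internal java.util.ImmutableCollections$ListN.
        List<Integer> listN = List.copyOf(arrayList);
        System.out.println(arrayList.getClass().getName()); // java.util.ArrayList
        System.out.println(listN.getClass().getName());
    }
}
```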
The data set is large enough to cause the just-in-time compiler to kick in. So if the first call uses an ArrayList and the subsequent call also uses an ArrayList, there is no problem; similarly for ListN and ListN. If, on the other hand, one is a ListN and the other an ArrayList, the JVM has to deconstruct the JIT code and create new code, thus slowing things down. My machine revs its fans when this happens.
https://coderanch.com/t/713708#3594115
We observed the same thing earlier: if it is a new list, then there is no performance issue.
Please check line #4 and line #11.
R K Singh wrote:
Please revisit the following:
https://coderanch.com/t/713708#3594115
We observed the same thing earlier: if it is a new list, then there is no performance issue.
Please check line #4 and line #11.

If the two lists are of the same type, then there is no performance issue. If the two lists were created differently, it is possible that they are of differing types and would therefore present a performance problem. In general, the ListN type has much better performance with get(n) calls, but if you mix ArrayList and ListN, the JVM gets heavily bogged down with just-in-time compiler work. It is trivial, in both code and performance, to convert any ArrayList to a ListN, making the method perform uniformly regardless of the type of list it is given or the order in which various lists arrive.
R K Singh wrote:
Please revisit the following:
https://coderanch.com/t/713708#3594115
We observed the same thing earlier: if it is a new list, then there is no performance issue.
Please check line #4 and line #11.

In the code you link to here, both lists are of type ArrayList, so performance is OK.
so it is unclear what type listN is. I got some very strange performance issues somewhere in this topic, so I tried to see whether toList or toArrayList made any difference. But so far I find the performance issue very strange.
But the real sticking point was that by giving a very large data set to getMaxXor(), which we were, we inadvertently caused the JVM to JIT-compile the method so it would run faster. I ran so many experiments with this code that I could start to tell when the JIT kicked in, because my system fans would crank up. This by itself is not a bad thing, but based on experimental experience, the JIT code for ArrayList was not compatible with the JIT code for ListN, so the JVM was forced to undo the first compilation and redo it, consuming quite a bit of clock time. This happened any time an ArrayList was followed by a ListN, or a ListN by an ArrayList. If you could guarantee that getMaxXor() would ALWAYS get one or the other, there would be no performance penalty.
If you want to write getMaxXor() so that it doesn't care which one it gets (which you should), a simple workaround is to put a line inside the method that converts any ArrayList to a ListN before entering the loop. Experimentation seems to bear this out: with this change I could intermix ArrayList and ListN calls with no performance penalty.
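Carey's workaround code isn't in this extract; a minimal sketch of the convert-first idea, assuming a loop-based getMaxXor (the class and method names are placeholders):

```java
import java.util.List;

public class MaxXorStable {
    // Normalize the argument to the immutable ListN implementation up front,
    // so the JIT only ever sees one concrete list type inside the hot loop.
    static int getMaxXor(List<Integer> list) {
        List<Integer> stable = List.copyOf(list);  // ArrayList -> ImmutableCollections$ListN
        int max = 0;
        for (int i = 0; i < stable.size(); i++) {
            for (int j = i; j < stable.size(); j++) {
                max = Math.max(max, stable.get(i) ^ stable.get(j));
            }
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(getMaxXor(new java.util.ArrayList<>(List.of(11, 12)))); // prints 7
    }
}
```

List.copyOf is cheap here: if the argument is already an unmodifiable list it is returned as-is, so the conversion only costs anything on the ArrayList path.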
I thought the same, though it would look very absurd to create a new list (to others it would seem a useless step).
Thanks to all for sharing your info.