A new solution
I realized that comparing each and every value in the sections might be inefficient, especially in cases where the sections are long. Instead, we need only the relative ordering of these elements, from which we can compute the number of Less pairs.
Here is my solution:
Edit: Ray Koopman provided a greatly improved counting method (applied to the binarized orderings) which I have now incorporated. To see my original code, including the compiled count function, look at the edit history of this post.
f5[v_, n_] := Min[UnitStep[(n + 0.5) - Ordering /@ Partition[Reverse@v, 2 n, n, -n]].Range[2 n]] - n (n + 1)/2
With long segments this is much faster than my first method (f2) and also a lot faster than Michael's code.
If you desire LessEqual behavior you can use:
f5LE[v_, n_] := Min[UnitStep[Ordering /@ Partition[v, 2 n, n, -n] - (n + 0.5)].Range[2 n]] - n (n + 1)/2
Explanation
This method works on the principle that a direct comparison, as performed in the question and the other answers, is an $O(n^2)$ operation, whereas a sort is on average an $O(n \log n)$ operation.
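To make this concrete (an illustration of my own with made-up segments s1 and s2, not the question's data): counting the pairs in which an element of s2 is less than an element of s1 by direct comparison tests every one of the $n^2$ pairs, whereas the same count can be read off a single Ordering of the joined segments (duplicate handling is discussed further down):

s1 = {4, 9, 6}; s2 = {5, 1, 7};   (* two hypothetical adjacent segments, n = 3 *)

Count[Outer[Less, s2, s1], True, {2}]                     (* all 9 pairs compared *)

6

UnitStep[3.5 - Ordering @ Join[s1, s2]] . Range[6] - 6    (* 3.5 is n + 0.5; 6 is n(n+1)/2 *)

6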
I start by splitting the list into lengths of $2 n$ with an offset of $n$ and an alignment of $-n$, which looks like this:
v = CharacterRange["a", "h"]; n = 2;

Partition[v, 2 n, n, -n]
{{g,h,a,b}, {a,b,c,d}, {c,d,e,f}, {e,f,g,h}}
Each partition represents two adjacent segments, e.g. 4;;6 and 7;;9 in the question's example.
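To make the pairing explicit (a small check of my own, not from the original post), compare the rows above with the plain length-n segments; each row is two adjacent segments joined, with the first row wrapping around to pair the last segment with the first:

Partition[v, n]

{{a,b}, {c,d}, {e,f}, {g,h}}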
I actually need the elements of each of these partitions in reverse order (see below), and it's more efficient to do that before partitioning, and for my purpose equivalent:
Sort[Reverse /@ Partition[v, 2 n, n, -n]] === Sort[Partition[Reverse@v, 2 n, n, -n]]
True
(The Sorts are only to show that the same partitions exist.)
The next step, and the core of the method as explained above, is Ordering, which gives, for each rank in the sorted order, the position that element occupies in the original list. For example:
Ordering @ {"c", "a", "f", "e", "b", "d"}
{2, 5, 1, 6, 4, 3}
Note that "a" would appear at position 1 in a sorted list but is at position 2 in the original - therefore the first element of the ordering is 2; likewise "b" would appear at position 2 in a sorted list but is at position 5 in the original, therefore the second element of the ordering is 5.
In this ordering list I then want to map every value up to n to 0 and every value above n to 1 (here the example partition has six elements, so n = 3), which I did like this:
UnitStep[{2, 5, 1, 6, 4, 3} - 3.5]
{0, 1, 0, 1, 1, 0}
Every zero represents a value from the first half of the partition, representing the first segment, and every one represents a value from the second half of the partition, representing the second segment. Equivalent but slower is a decorate-and-sort as follows:
Transpose @ {{"c", "a", "f", "e", "b", "d"}, {0, 0, 0, 1, 1, 1}} SortBy[%, {First}] Last /@ %
{{"c", 0}, {"a", 0}, {"f", 0}, {"e", 1}, {"b", 1}, {"d", 1}}
{{"a", 0}, {"b", 1}, {"c", 0}, {"d", 1}, {"e", 1}, {"f", 0}}
{0, 1, 0, 1, 1, 0}
We can then count the number of ones that are to the left of every zero to determine how many values from segment 2 are less than a value from segment 1. This is what the compiled count function does. Edit: this is now done with Ray Koopman's dot-product method.
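To sketch the dot-product counting itself (a small illustration of my own, using the marker list from above): for a sorted 0/1 marker list containing n ones, bits . Range[2 n] sums the positions of the ones, and n (n + 1)/2 is the smallest that sum could possibly be, so the difference is the number of (zero, one) pairs in which the zero precedes the one. Counting ones that precede zeros, as described above, simply uses the complementary markers, which is what the (n + 0.5) - Ordering[...] term in f5 produces.

bits = {0, 1, 0, 1, 1, 0}; n = 3;

bits . Range[2 n] - n (n + 1)/2          (* zeros that precede ones *)
(1 - bits) . Range[2 n] - n (n + 1)/2    (* ones that precede zeros *)

5

4

The two counts always sum to $n^2$, since every (one, zero) position pair falls into exactly one of the two cases.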
The Reverse was needed to get the behavior requested for the case of duplicate elements. Note the difference in the ranking of the "c" elements in this example:
s = {"c", "a", "f", "e", "c", "d"}; Transpose @ {s, {1, 1, 1, 0, 0, 0}}; SortBy[%, {First}] Transpose @ {Reverse[s], {0, 0, 0, 1, 1, 1}}; SortBy[%, {First}]
{{"a", 1}, {"c", 1}, {"c", 0}, {"d", 0}, {"e", 0}, {"f", 1}}
{{"a", 1}, {"c", 0}, {"c", 1}, {"d", 0}, {"e", 0}, {"f", 1}}
Room for improvement
I believe that there is still considerable redundancy in this code. Specifically, since we seek only the minimum value, we would ideally abort an Ordering operation as soon as it became apparent that the existing minimum had been reached or exceeded. This would require a manual implementation of the Ordering function with the counting incorporated into it. Doing that efficiently is beyond my experience and my tools, as version 7, which I use, cannot compile to C. Others with the experience and tools may wish to implement this idea to see if it has merit; a rough sketch follows.
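Here is that rough sketch of the early-abort idea (my own addition, not the method above, and not benchmarked): rather than a manual Ordering, it sorts each pair of adjacent length-n segments and counts with a two-pointer merge, stopping as soon as the running count reaches the current best. It counts pairs in which an element of the second segment is strictly less than an element of the first, over non-wrapping adjacent segments only, so it does not reproduce f5's wrap-around pairing or its exact tie handling; the names countLessUpTo and minLessPairs are mine.

(* count pairs with an s2 element strictly less than an s1 element by merging the
   sorted segments; stop once the count reaches cap, since larger counts cannot win.
   On version 8 or later one could try adding CompilationTarget -> "C". *)
countLessUpTo = Compile[{{s1, _Integer, 1}, {s2, _Integer, 1}, {cap, _Integer}},
   Module[{a = Sort[s1], b = Sort[s2], i = 1, j = 1, c = 0},
     While[i <= Length[a] && c < cap,
       While[j <= Length[b] && b[[j]] < a[[i]], j = j + 1];
       c = c + j - 1;        (* all of b[[1 ;; j - 1]] are < a[[i]] *)
       i = i + 1
     ];
     Min[c, cap]
   ]];

(* minimum over adjacent (non-wrapping) segment pairs, feeding the running
   minimum back in as the abort threshold *)
minLessPairs[v_, n_] :=
  Module[{segs = Partition[v, n], best = n^2 + 1},
    Do[
      best = Min[best, countLessUpTo[segs[[k]], segs[[k + 1]], best]],
      {k, Length[segs] - 1}
    ];
    best
  ]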
Timings
With short segments Michael's f0 is fastest:
SetAttributes[timeAvg, HoldFirst]
timeAvg[func_] := Do[If[# > 0.3, Return[#/5^i]] & @@ Timing@Do[func, {5^i}], {i, 0, 15}]

a = RandomInteger[9999, 50000];

f2[a, 5] // timeAvg
f0[a, 5] // timeAvg
f5[a, 5] // timeAvg
0.1934
0.0018224
0.008488
However, with longer segments the picture changes considerably:
a = RandomInteger[9999, 50000];

f2[a, 500] // timeAvg
f0[a, 500] // timeAvg
f5[a, 500] // timeAvg
7.363
0.1154
0.006992
With even longer segments f2 is too slow to compare, and we find that f0 consumes a great deal of memory, while f5 does not:
a = RandomInteger[9999, 150000];

f0[a, 1500] // timeAvg
MaxMemoryUsed[]
0.905
1816949160
a = RandomInteger[9999, 150000];

f5[a, 1500] // timeAvg
MaxMemoryUsed[]
0.02308
19170792
In summary, f5 is efficient for all segment lengths on long vectors:
a = RandomInteger[99999, 1*^6];

timeAvg @ f5[a, #] & /@ {2, 10, 100, 1000, 10000, 100000}
{0.374, 0.1434, 0.1248, 0.1498, 0.1872, 0.312}