I use an algorithm that relies on division.

According to https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations, division has one of the following time complexities:

  • O(n log n log log n)
  • O(n log n · 2^(O(log* n)))
  • O(n^2)
  • O(M(n))

So far I have used this algorithm in Python, but I need to describe it in a platform-independent way. Which of those time complexities is the correct one today for Python (or a similar language)?

  • Nowadays integer division takes only a few cycles within the CPU (floating-point division too); it is hard-wired inside, with a lot of operations done in parallel. Do you mean programming the division like you would do it manually? Commented Dec 17, 2015 at 13:14
  • Where in that link do you see that division has a log n factor? Commented Dec 17, 2015 at 13:15
  • @ringø No, just plain division like 1 / 2; so which definition is correct for this? Commented Dec 17, 2015 at 13:16
  • The time complexity of arithmetic operations like multiplication and division is usually only relevant for arbitrary precision. When you operate with a fixed width of at most 64 or 128 bits, you can safely assume it is O(1), since the runtime is then capped at a few cycles. Commented Dec 17, 2015 at 13:17
  • It depends on what you consider n... Commented Dec 17, 2015 at 13:32

1 Answer

  1. As mentioned, if you use the ALU or FPU for basic variable types

    you can assume the division complexity is O(1), because the instruction overhead is usually comparable with the actual runtime of the division itself. If the HW platform used does not support division (some older MCUs, for example), then it must be computed in software (not as a single instruction) and this no longer applies.

    Also, if you have arbitrary-precision variables (bignums), then the actual bit or digit width of the numbers starts to matter and you are no longer in O(1). In that case see #2; a small Python timing sketch after this list illustrates the difference.

  2. Most division algorithms use multiplication as a core function

    The complexity of division is then determined by the algorithm used and the components it relies on. For example, if you have basic variables but compute the division in software (no HW divider support), then the operations used are still O(1), but the division built from them is not.

    Let us take division by repeated subtraction as an example (a Python version with an iteration counter is sketched after this list).

     unsigned int a=...,b=...,d,r;                 // a = dividend, b = divisor (any unsigned integer type)
     for (d=0,r=a;r>=b;) { r-=b; d++; }            // subtract b until it no longer fits
     // d = a/b   ... quotient
     // r = a%b   ... remainder

    If n is the bit width of the result, then this is O(2^n) for basic variables. But if the variables are arbitrary precision, then the operations used are no longer O(1): the loop uses subtraction, comparison and increment, which are all O(n), so the division complexity becomes O(n*(2^n)) without any change in the algorithm or code. So you always have to know which complexity you are talking about:

    • base algorithm complexity O(2^n)
    • combined total complexity O(n*(2^n))

    This algorithm is not used in practice because it is painfully slow; more advanced methods are used instead. Most division algorithms use multiplication as a core function, so multiplication algorithms such as Schönhage–Strassen and Karatsuba are relevant for division as well.

  3. Now, how to determine the complexity of a custom division?

    Take the base complexity of your algorithm and multiply it by the complexity of its slowest core function. If the core functions are not used in every iteration, this can become quite tricky. Do not forget to use the same meaning of n when combining or comparing complexities!

    If you do not have access to the source code of the algorithm used, then you can measure division on a big set of numbers over a wide enough range of n and try to estimate the complexity from the graph of measured times (a rough Python sketch of this approach follows after this list). This is not reliable, because many things can skew the results: multithreading, scheduling granularity, an unknown n, etc.
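
Since the question mentions Python, here is a minimal sketch of the point from #1 (standard library only; the operand sizes are arbitrary illustration values, not anything prescribed above). For word-sized operands the time per division stays essentially flat, while for bignum operands it grows with the operand width:

    import random
    import timeit

    def time_division(bits, repeats=1000):
        # build a 2n-bit dividend and an n-bit divisor with their top bits set
        a = random.getrandbits(2 * bits) | (1 << (2 * bits - 1))
        b = random.getrandbits(bits) | (1 << (bits - 1))
        return timeit.timeit(lambda: a // b, number=repeats) / repeats

    for bits in (32, 64, 1000, 10000, 100000):
        print(f"{bits:>7}-bit divisor: {time_division(bits):.3e} s per division")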
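
And a Python rendering of the repeated-subtraction division from #2, with an iteration counter. The counter equals the quotient, i.e. roughly 2^n for an n-bit quotient, which is the base O(2^n) term above; on bignums every comparison and subtraction inside the loop is itself O(n), which is where the combined O(n*(2^n)) comes from:

    def divide_by_repeated_subtraction(a, b):
        d, r, steps = 0, a, 0
        while r >= b:          # comparison: O(n) on bignums
            r -= b             # subtraction: O(n) on bignums
            d += 1
            steps += 1
        return d, r, steps     # d = a // b, r = a % b

    for n in (8, 12, 16, 20):
        a = (1 << n) - 1       # n-bit dividend, so the quotient is about 2**n / 3
        q, r, steps = divide_by_repeated_subtraction(a, 3)
        print(f"n = {n:2d} bits: quotient = {q}, iterations = {steps}")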
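
Finally, a sketch of the measurement idea from #3: time bignum division for doubling operand widths and estimate the growth exponent from the slope of a log-log fit. The size range and repeat count are arbitrary and wall-clock timing is noisy, so treat the resulting exponent as a rough estimate only (on CPython it tends to come out near 2, which matches its schoolbook-style long division):

    import math
    import random
    import timeit

    sizes = [2 ** k for k in range(12, 18)]    # 4096 ... 131072 bits (arbitrary)
    times = []
    for bits in sizes:
        a = random.getrandbits(2 * bits) | (1 << (2 * bits - 1))
        b = random.getrandbits(bits) | (1 << (bits - 1))
        times.append(timeit.timeit(lambda: a // b, number=20) / 20)

    # slope of log(time) versus log(bits) ~ exponent of the dominant n**k term
    xs = [math.log(s) for s in sizes]
    ys = [math.log(t) for t in times]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    print(f"estimated exponent: {slope:.2f}")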
