Is there any document that says exactly what range of numbers .NET BigIntegers are designed for?

I'm playing around with the .NET BigInteger, and basically I'm wondering what number (an estimated answer would be fine) is the point of deviation in the graph of (time required for operations) versus (value of the BigInteger).

Or is it designed with no such deviation, so that if we plot the time required for operations against the value of the BigInteger from 1 to infinity, we get a smooth curve all the way?

Of course I know it can support arbitrarily large values, and I know we should use the normal primitive types whenever we can. But usually a component has a target it is designed for, and pushing past that target makes the component perform worse than it otherwise would. So, for instance, is BigInteger designed for numbers 100 times bigger than ULong.MaxValue, or for numbers 100,000 times bigger than ULong.MaxValue? I know it can support numbers 100,000 times bigger than ULong.MaxValue, but is that range part of the design, or is it treated as an out-of-the-ordinary requirement?
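
Just to pin down the magnitudes I mean, here is a rough sketch (my own illustration, not taken from any documentation):

```csharp
using System;
using System.Numerics;

class Scale
{
    static void Main()
    {
        // "100 times bigger than ULong.MaxValue" vs. "100,000 times bigger":
        BigInteger hundredTimes = (BigInteger)ulong.MaxValue * 100;              // ~1.8 * 10^21
        BigInteger hundredThousandTimes = (BigInteger)ulong.MaxValue * 100_000;  // ~1.8 * 10^24

        // Both are still tiny as BigIntegers go: only a handful of bytes of magnitude.
        Console.WriteLine(hundredTimes.ToByteArray().Length);          // 9
        Console.WriteLine(hundredThousandTimes.ToByteArray().Length);  // 11
    }
}
```

So even the larger of the two still fits in a couple of machine words; what I can't tell is whether the design cares about that kind of range or about something far bigger.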

For example, suppose an array-based structure is designed to handle 50 items. With 1 item, operations take f(1) time; with 2 items, f(2) time; with 50 items, f(50) time. But since it is designed for handling only 50 items, the operations done when we have 51 items will take g(51), where g(51) > f(51).
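
To make the analogy concrete, here is a hypothetical container (the type name and the capacity of 50 are made up for illustration, and this has nothing to do with how BigInteger is actually implemented):

```csharp
using System.Collections.Generic;

// A container "designed for" at most 50 items: up to 50 items live in a
// fixed inline array (the cheap f(n) path); item 51 onwards spills into a
// slower fallback structure (the g(n) path).
class FixedFirstList
{
    private readonly int[] _inline = new int[50];
    private int _count;
    private List<int> _overflow;            // only allocated past the design limit

    public void Add(int value)
    {
        if (_count < _inline.Length)
        {
            _inline[_count++] = value;      // f(n): no allocation, one array write
        }
        else
        {
            _overflow ??= new List<int>();
            _overflow.Add(value);           // g(n): extra indirection and allocations
            _count++;
        }
    }

    public int Count => _count;
}
```

Past item 50 every further Add pays for the overflow path, so the time curve has a visible kink at the design limit. That kind of kink, at some magnitude, is what I'm asking whether BigInteger has.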

If implemented properly, the complexity of BigInteger arithmetic should be a smooth curve. For example, the time complexity of multiplication should be O(NM), where N is the number of digits in the first multiplicand and M is the number of digits in the second. Of course there are practical limits: you could pick N and M so large that the numbers wouldn't fit in your machine.
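
In case it helps clarify what I'm asking, here is the kind of measurement I have in mind — a minimal sketch (the operand sizes and iteration count are arbitrary; it only eyeballs the curve and says nothing authoritative about the implementation):

```csharp
using System;
using System.Diagnostics;
using System.Numerics;

class BigIntegerTiming
{
    // Build a random non-negative BigInteger occupying roughly byteCount bytes.
    static BigInteger RandomPositive(Random rng, int byteCount)
    {
        var data = new byte[byteCount + 1];
        rng.NextBytes(data);
        data[byteCount] = 0;                // zero sign byte => non-negative
        return new BigInteger(data);
    }

    static void Main()
    {
        var rng = new Random(12345);
        const int iterations = 50;

        // Double the operand size each step and time a batch of multiplications.
        // If the cost grows smoothly (roughly O(N*M) for schoolbook multiplication),
        // each doubling should increase the per-multiply time by a fairly constant
        // factor; a design cliff at some magnitude would show up as a sudden jump.
        for (int bytes = 16; bytes <= 16 * 1024; bytes *= 2)
        {
            BigInteger a = RandomPositive(rng, bytes);
            BigInteger b = RandomPositive(rng, bytes);

            BigInteger sink = BigInteger.Zero;
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                sink += a * b;              // keep the result live so the work isn't skipped
            sw.Stop();

            Console.WriteLine($"{bytes,6} bytes/operand: " +
                              $"{sw.Elapsed.TotalMilliseconds / iterations:F4} ms per multiply" +
                              $" (sink sign: {sink.Sign})");
        }
    }
}
```

If the plotted times follow one smooth curve with no kink at any particular operand size, that would suggest there is no special "designed-for" range; a kink would suggest the opposite.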

Does anyone know of any documents claiming that it is actually implemented that way?



