user14972

For the given data, you can determine the smallest common divisor simply by inspection: all six entries are extremely close to the arithmetic progression of multiples of the first entry.
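To make the inspection concrete, here is a minimal Python check using the six values from the question:

```python
# The six values from the question.
data = [0.24232239557963048, 0.4846105998701098, 0.7269290249651083,
        0.9692579815933621, 1.2115343992719871, 1.4538572457953847]

# Dividing each entry by the first exposes the arithmetic progression:
# the ratios land within about 1e-3 of 1, 2, 3, 4, 5, 6.
ratios = [x / data[0] for x in data]
print([round(r, 4) for r in ratios])
```

So to the precision of the data, the entries are the first six multiples of the first entry, and the first entry itself is the estimate of the common divisor.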


For other data, you can perform the Euclidean algorithm (or a multi-argument variant), but the main point is that if you ever reach a sudden, dramatic drop in the size of a number, then it's likely that you have computed what should be zero. You can then spin off a new problem where you replace the extremely small number with zero, and see if you get a result that looks good.

(if you don't, you might go back to the original problem and continue without doing the substitution)
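A sketch of that idea in Python: a float Euclidean algorithm where any remainder below a tolerance is treated as "a number that should really be zero". The tolerance `1e-3` here is an assumption about the noise level; in practice you would tune it to your data.

```python
from functools import reduce

# The six values from the question.
data = [0.24232239557963048, 0.4846105998701098, 0.7269290249651083,
        0.9692579815933621, 1.2115343992719871, 1.4538572457953847]

def gcd2(a, b, tol):
    """Euclidean algorithm on non-negative floats.  A remainder that drops
    below tol is treated as 'what should be zero', which ends the loop."""
    while b > tol:
        a, b = b, a % b
    return a

# Fold the pairwise float GCD across all six values.
d = reduce(lambda a, b: gcd2(a, b, 1e-3), data)
print(d)  # ~0.2423
```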


Another mildly ad-hoc approach is to compute the ratio of every term to the first, and use continued fractions to estimate the proportion with a rational number. Hopefully these are the true ratios, at which point you can use an exact GCD algorithm.
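In Python, `fractions.Fraction.limit_denominator` performs exactly this continued-fraction rounding. The denominator bound `100` below is an assumption about how complicated the true ratios can be:

```python
from fractions import Fraction

# The six values from the question.
data = [0.24232239557963048, 0.4846105998701098, 0.7269290249651083,
        0.9692579815933621, 1.2115343992719871, 1.4538572457953847]

# limit_denominator walks the continued-fraction convergents to find the
# closest rational with denominator at most 100.
ratios = [Fraction(x / data[0]).limit_denominator(100) for x in data]
# For this data the ratios come out exactly 1, 2, 3, 4, 5, 6.

# With the ratios pinned down, each x / ratio estimates the common divisor;
# averaging the estimates smooths the noise a little.
d = sum(x / r for x, r in zip(data, ratios)) / len(data)
print(d)  # ~0.2423
```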


The systematic (and probably most reliable) approach to the problem is lattice reduction; the basic version is to set up the matrix

$$\begin{matrix} W & 0 & 0 & 0 & 0 & 0 & 0.24232239557963048 \\ 0 & W & 0 & 0 & 0 & 0 & 0.4846105998701098 \\ 0 & 0 & W & 0 & 0 & 0 & 0.7269290249651083 \\ 0 & 0 & 0 & W & 0 & 0 & 0.9692579815933621 \\ 0 & 0 & 0 & 0 & W & 0 & 1.2115343992719871 \\ 0 & 0 & 0 & 0 & 0 & W & 1.4538572457953847 \\ \end{matrix}$$

and use a lattice reduction algorithm to reduce the rows of the matrix. Hopefully, five of the rows will have an extremely small value in the last column, and the sixth row will have in its last column an estimate of the greatest common divisor of your values.

Setting the value of $W$ is an art. You want:

  • $W$ to be small enough so that lattice reduction will do enough work to find the answer.
  • $W$ to be large enough that lattice reduction will stop doing work before errors accumulate enough to spoil the result.

The second bullet point is the reason for the extra columns, by the way — they keep track of how much arithmetic you've done on the rows, and thus estimate how much error accumulation has happened.

People who actually know the subject can do math to inform their choice of $W$. We mortals should just run lattice reduction repeatedly, decreasing $W$ until we get a result that looks right, and stop when the results of lattice reduction start becoming garbage.
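To make the whole procedure concrete, here is a self-contained Python sketch with a tiny textbook LLL implementation (for anything serious you would use a real library such as fpylll). The value $W = 10^{-3}$ was found by exactly the trial-and-error just described, so both the implementation and that value are assumptions tuned to this particular data:

```python
def lll(b, delta=0.75):
    """Textbook LLL over floats.  Recomputes Gram-Schmidt from scratch after
    every change: inefficient, but simple and adequate for a 6x7 matrix."""
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bs, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bs[j]) / dot(bs[j], bs[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bs[j])]
            bs.append(v)
        return bs, mu

    bs, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):      # size-reduce row k
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bs, mu = gram_schmidt()
        if dot(bs[k], bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bs[k - 1], bs[k - 1]):
            k += 1                          # Lovasz condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]  # swap and step back
            bs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return b

# The six values from the question.
data = [0.24232239557963048, 0.4846105998701098, 0.7269290249651083,
        0.9692579815933621, 1.2115343992719871, 1.4538572457953847]

W = 1e-3  # found by trial and error, as described above
rows = [[W if i == j else 0.0 for j in range(6)] + [x]
        for i, x in enumerate(data)]

reduced = lll(rows)
last = sorted(abs(r[-1]) for r in reduced)
print(last)  # five tiny entries, then one entry near the gcd estimate ~0.2423
```

With this $W$, five rows of the reduced basis end with near-zero entries (the integer relations among the data) and the remaining row's last entry is the estimate of the common divisor.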
