Argyll
  • 10.1k
  • 4
  • 30
  • 52

The sharp contrast is not only due to Matlab's amazing optimization (as discussed by many other answers already), but also due to the way you formulated matrix as an object.

It seems like you made matrix a list of lists? A list of lists contains pointers to lists, which in turn contain your matrix elements. The memory locations of those inner lists are assigned arbitrarily, so as you loop over your first index (the row number?), memory access time becomes very significant. For comparison, why don't you try implementing matrix as a single contiguous list/vector, using the following method?

    #include <vector>

    struct matrix {
        matrix(int x, int y) : n_row(x), n_col(y), M(x * y) {}
        int n_row;
        int n_col;
        std::vector<double> M;  // one contiguous block of n_row * n_col doubles
        double &operator()(int i, int j);
    };

And the accessor:

    double &matrix::operator()(int i, int j) { return M[n_col * i + j]; }

The same multiplication algorithm should be used so that the flop count is the same (n^3 for square matrices of size n).

I'm asking you to time it so that the result is comparable to what you had earlier (on the same machine). With the comparison, you will show exactly how significant memory access time can be!



