While the matrix interface is always identical, performance characteristics are implementation dependent. In general, the performance of a matrix operation is a function of the data structure, the density, and the type and kind of the method arguments. This library takes great care to deliver good performance. When in doubt about the detailed characteristics of an operation, have a look at the source code.
In practice, sparse matrices are used for one of two reasons: to save memory or to speed up computation. Hash based sparse matrices (SparseDoubleMatrix2D) are neither the smallest possible matrix representation nor the fastest. They implement a reasonable trade-off between performance and memory: very good average performance on get/set operations at a quite small memory footprint. They are also suited for special-purpose algorithms that exploit explicit knowledge about which regions are zero and non-zero, though not quite as well as other sparse matrix representations. For example, sparse linear algebra such as matrix multiplication and inversion works better on row-compressed matrices (RCDoubleMatrix2D). However, such alternative sparse representations are really only usable for special purposes, because their get/set performance can be very bad. Hash based sparse matrices, in contrast, are more generally applicable data structures.
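To make this trade-off concrete, here is a minimal sketch that builds both representations. It assumes the standard Colt classes under cern.colt.matrix and cern.colt.matrix.impl; the matrix sizes and fill pattern are arbitrary examples, not a recommendation.

```java
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.RCDoubleMatrix2D;
import cern.colt.matrix.impl.SparseDoubleMatrix2D;

public class SparseChoiceDemo {
    public static void main(String[] args) {
        int rows = 10000, columns = 10000;

        // Hash based: good all-round get/set performance, small footprint.
        DoubleMatrix2D hashed = new SparseDoubleMatrix2D(rows, columns);
        hashed.set(3, 7, 1.5); // cheap, cells may be written in any order

        // Row compressed: preferred for sparse linear algebra (multiplies etc.),
        // but writes are only cheap when cells are filled in upward order.
        DoubleMatrix2D rc = new RCDoubleMatrix2D(rows, columns);
        for (int row = 0; row < 100; row++) {
            rc.setQuick(row, row, 2.0); // upward iteration keeps insertion fast
        }

        System.out.println(hashed.get(3, 7) + " " + rc.getQuick(50, 50));
    }
}
```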
The tables below describe which combinations are particularly optimized. (F is used as a shortcut for cern.jet.math.Functions.)
General note on matrix-matrix and matrix-vector multiplication (zMult): B and C may have any type without hurting performance. For A in {SparseDoubleMatrix2D, RCDoubleMatrix2D} the multiply is only fast if the density of A is small; for A = DenseDoubleMatrix2D the density does not matter. A dense A combined with a sparse B is unproblematic, because even then the quick sparse multiply is used.
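A small sketch of such a multiply, assuming a low-density row-compressed A and dense B and C; the dimensions and values are arbitrary examples.

```java
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.DenseDoubleMatrix2D;
import cern.colt.matrix.impl.RCDoubleMatrix2D;

public class ZMultDemo {
    public static void main(String[] args) {
        int n = 500;

        // A is sparse (row compressed) with low density; B and C can be any type.
        DoubleMatrix2D A = new RCDoubleMatrix2D(n, n);
        for (int i = 0; i < n; i++) A.setQuick(i, i, 2.0); // a sparse diagonal

        DoubleMatrix2D B = new DenseDoubleMatrix2D(n, n).assign(1.0);
        DoubleMatrix2D C = new DenseDoubleMatrix2D(n, n); // result buffer

        A.zMult(B, C); // C = A * B; the sparse multiply path is taken
        System.out.println(C.getQuick(0, 0)); // prints 2.0
    }
}
```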
RCDoubleMatrix2D (sparse row-compressed):

Operation | Method | Comment |
read | get, getQuick | always fast |
write | set, setQuick | only fast if the matrix is really sparse and the cells are filled in a loop iterating upwards over rows and columns |
matrix-matrix and matrix-vector multiplication | zMult | see the general note above |
elementwise scaling | assign(f) where f is one of {F.mult(a), F.div(a)} | x[i,j] = x[i,j] {*,/} a |
elementwise scaling | assign(y,f) where f is one of {F.plus, F.minus, F.mult, F.div, F.plusMult(a), F.minusMult(a)} | x[i,j] = x[i,j] {+,-,*,/} y[i,j] or x[i,j] = x[i,j] {+,-} y[i,j] {*,/} a |
copying | assign(othermatrix) | always fast, best if othermatrix is a RCDoubleMatrix2D |
iteration | forEachNonZero(function) | most of the time the preferred way to iterate and modify |
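The following sketch exercises the assign(f) and forEachNonZero rows of the table above on a RCDoubleMatrix2D, using the Functions.functions alias as the F shortcut; the fill pattern and constants are arbitrary examples.

```java
import cern.colt.function.IntIntDoubleFunction;
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.RCDoubleMatrix2D;
import cern.jet.math.Functions;

public class RCIterationDemo {
    public static void main(String[] args) {
        Functions F = Functions.functions; // the F shortcut used in the tables

        DoubleMatrix2D A = new RCDoubleMatrix2D(5, 5);
        for (int i = 0; i < 5; i++) A.setQuick(i, i, i + 1); // fill upwards

        // Elementwise scaling: A[i,j] = A[i,j] * 3
        A.assign(F.mult(3));

        // Visit (and optionally modify) only the non-zero cells.
        A.forEachNonZero(new IntIntDoubleFunction() {
            public double apply(int row, int column, double value) {
                System.out.println("A[" + row + "," + column + "] = " + value);
                return value; // return the (possibly changed) cell value
            }
        });
    }
}
```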
SparseDoubleMatrix2D (hash based):

Operation | Method | Comment |
read | get, getQuick | always fast |
write | set, setQuick | always fast |
matrix-matrix and matrix-vector multiplication | zMult | slightly slower than RCDoubleMatrix2D when the size is large |
elementwise scaling | assign(f) where f is one of {F.mult(a), F.div(a)} | x[i,j] = x[i,j] {*,/} a |
elementwise scaling | assign(y,f) where f is one of {F.plus, F.minus, F.mult, F.div, F.plusMult(a), F.minusMult(a)} | x[i,j] = x[i,j] {+,-,*,/} y[i,j] or x[i,j] = x[i,j] {+,-} y[i,j] {*,/} a |
copying | assign(othermatrix) | best if othermatrix is a SparseDoubleMatrix2D |
iteration | forEachNonZero(function) | often the preferred way to iterate and modify |
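Analogously, this sketch exercises the assign(y,f) and copying rows of the table on hash based matrices; the values are again arbitrary examples.

```java
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.SparseDoubleMatrix2D;
import cern.jet.math.Functions;

public class HashedSparseDemo {
    public static void main(String[] args) {
        Functions F = Functions.functions;

        DoubleMatrix2D x = new SparseDoubleMatrix2D(4, 4);
        DoubleMatrix2D y = new SparseDoubleMatrix2D(4, 4);
        x.set(0, 0, 1.0);
        y.set(0, 0, 10.0);
        y.set(1, 1, 5.0);

        // x[i,j] = x[i,j] + y[i,j] * 2  (the F.plusMult(a) row of the table)
        x.assign(y, F.plusMult(2.0));
        System.out.println(x.get(0, 0) + " " + x.get(1, 1)); // 21.0 10.0

        // Copying: fastest when the source is another SparseDoubleMatrix2D.
        DoubleMatrix2D copy = new SparseDoubleMatrix2D(4, 4);
        copy.assign(x);
    }
}
```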