Matrix and linear algebra in F#, Part V: Sparse matrix implementation in PowerPack, and PInvoke a large scale SVD library as an application
Posted on 2010-08-16 18:19 by 白途思

In this post, we move to sparse matrices. In many data mining applications, the data table is stored as a sparse matrix. For example, in text mining a document can be represented by a row vector (or column vector), where the value at index i is the number of times word i occurs in the document. Since the vocabulary can be quite large (tens of thousands of words), the document vector is highly sparse, and if we store a collection of documents in a matrix, the matrix is also highly sparse. The same situation occurs in DNA sequence analysis: we need to deal with sparse matrices!
We first dig into the implementation of the F# sparse matrix in PowerPack. This is important, as we need to know how sparse matrices are represented in F#, what algorithms are supported, etc. Based on the F# implementation, we can build our own routines, e.g. converting the row-based sparse representation into a column-based one. Finally, we P/Invoke an SVD library written in C as an example of how to link existing, stable linear algebra routines for sparse matrices.
The sparse matrix implementation in PowerPack
Sparse matrices have several different representations, each with its own strengths. A good introduction is this Wikipedia page.
F# uses the Yale sparse matrix format, also known as Compressed Sparse Row (CSR); see the Wikipedia article above.
Let’s check this out:
> let B = Matrix.initSparse 4 3 [0,0,2.3; 2,0,3.8; 1,1,1.3; 0,2,4.2; 1,2,2.2; 2,2,0.5]
val B : matrix = matrix [[2.3; 0.0; 4.2]
                         [0.0; 1.3; 2.2]
                         [3.8; 0.0; 0.5]
                         [0.0; 0.0; 0.0]]
> B.InternalSparseValues
val it : float [] = [|2.3; 4.2; 1.3; 2.2; 3.8; 0.5|]
> B.InternalSparseColumnValues
val it : int [] = [|0; 2; 1; 2; 0; 2|]
> B.InternalSparseRowOffsets
val it : int [] = [|0; 2; 4; 6; 6|]
Non-zero entries are stored sequentially in B.InternalSparseValues in row-by-row order. B.InternalSparseColumnValues stores their column indices. B.InternalSparseRowOffsets stores the row offsets: the entries of row i occupy positions B.InternalSparseRowOffsets.[i] through B.InternalSparseRowOffsets.[i+1]-1 of the value array (B.InternalSparseValues).
This representation makes it very efficient to take out one row of a sparse matrix. In data mining, data instances/samples are usually stored as row vectors of a matrix, so it supports efficient handling at the instance level.
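To make this concrete, here is a minimal sketch of a row-extraction helper built directly on the Internal* members shown above (my own code, not part of PowerPack):

// Return the (column index, value) pairs of row i of a sparse matrix,
// read straight out of the CSR arrays.
let sparseRow (A:matrix) i =
    let vals = A.InternalSparseValues
    let cols = A.InternalSparseColumnValues
    let offs = A.InternalSparseRowOffsets
    [| for idx in offs.[i] .. offs.[i+1] - 1 -> cols.[idx], vals.[idx] |]

For the matrix B above, sparseRow B 2 should give [|(0, 3.8); (2, 0.5)|].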
However, this representation does not support efficient column operations. In numerical computing, column access is more common (following the old Fortran tradition), so many libraries use the Compressed Sparse Column (CSC) representation for sparse matrices. It is also called the Harwell-Boeing sparse matrix format (HB format for short) in the numerical computing community. An example is here.
PowerPack provides the Internal* members of the matrix type precisely so that users can interoperate with other libraries. However, it does not provide a function to convert the row-based representation to the column-based one. This will be our first exercise:
let toColumnSparse (A:matrix) =
    if A.IsDense then failwith "only for sparse matrix!"
    let nrow = A.NumRows
    let ncol = A.NumCols
    let cidx = A.InternalSparseColumnValues
    let roffset = A.InternalSparseRowOffsets
    let vals = A.InternalSparseValues
    let nval = vals.Length
    // 1. scan to get the column offsets
    let colCnt = Array.create ncol 0
    for i=0 to nrow-1 do
        for idx=roffset.[i] to roffset.[i+1]-1 do
            colCnt.[cidx.[idx]] <- colCnt.[cidx.[idx]] + 1
    let colOffset = Array.zeroCreate (ncol+1)
    for i=1 to ncol do
        colOffset.[i] <- colOffset.[i-1] + colCnt.[i-1]
    Array.Clear(colCnt, 0, ncol) // clear the count array
    // 2. store the values and row indices column by column
    let rowIdx = Array.create nval 0
    let vals2 = Array.create nval 0.0
    for i=0 to nrow-1 do
        for idx=roffset.[i] to roffset.[i+1]-1 do
            let j = cidx.[idx]
            vals2.[colOffset.[j] + colCnt.[j]] <- vals.[idx]
            rowIdx.[colOffset.[j] + colCnt.[j]] <- i
            colCnt.[j] <- colCnt.[j] + 1
    vals2, rowIdx, colOffset
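As a quick check, applying it to the matrix B from the example above should give the following CSC (HB) arrays (worked out by hand in the comments):

// Expected result for the example matrix B:
//   values      : [|2.3; 3.8; 1.3; 4.2; 2.2; 0.5|]
//   row indices : [|0; 2; 1; 0; 1; 2|]
//   col offsets : [|0; 2; 3; 6|]
let vals, rowIdx, colOffset = toColumnSparse B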
This implementation is quite efficient: it does two scans and returns the three arrays of the HB format. It currently only handles sparse matrices; it would also be useful to support dense matrices, as we often need to transform a dense matrix into a sparse one. We could add that support inside this function, or first convert the dense matrix into a sparse one and then apply the function above. Either way works. However, PowerPack currently does not support the dense-to-sparse conversion (it supports sparse-to-dense, which is used more often). Let this dense-to-sparse operation be our second exercise:
let toSparse (A:matrix) =
    if A.IsSparse then failwith "A should be a dense matrix"
    let l =
        seq {
            let B = A.InternalDenseValues // use the internal array
            for i=0 to (Array2D.length1 B)-1 do
                for j=0 to (Array2D.length2 B)-1 do
                    if abs B.[i,j] > 1e-30 then // treat near-zero entries as zero
                        yield (i,j,B.[i,j])
        }
    Matrix.initSparse A.NumRows A.NumCols l
To get the HB format of a dense matrix A, we simply use:
let vals, rowIdx, colOffset = A |> toSparse |> toColumnSparse
To close this section, let's look at an annotated version of Matrix.initSparse from matrix.fs to better understand the F# sparse matrix type:
/// Create a matrix from a sparse sequence
let initSparseMatrixGU maxi maxj ops s =
    (* 1. the matrix is an array of dictionaries;
          tab.[i] is the dictionary for row i *)
    let tab = Array.create maxi null
    let count = ref 0
    for (i,j,v) in s do
        if i < 0 || i >= maxi || j < 0 || j >= maxj then failwith "initial value out of range";
        count := !count + 1;
        let tab2 =
            match tab.[i] with
            | null ->
                let tab2 = new Dictionary<_,_>(3)
                tab.[i] <- tab2;
                tab2
            | tab2 -> tab2
        tab2.[j] <- v
    // 2. calculate the offset for each row
    //    needs to be optimized!
    let offsA =
        let rowsAcc = Array.zeroCreate (maxi + 1)
        let mutable acc = 0
        // 3. this loop could be optimized by using a
        //    sorted map for tab
        for i = 0 to maxi-1 do
            rowsAcc.[i] <- acc;
            acc <- match tab.[i] with
                   | null -> acc
                   | tab2 -> acc + tab2.Count
        rowsAcc.[maxi] <- acc;
        rowsAcc
    // 4. get the column indices and values
    let colsA, valsA =
        let colsAcc = new ResizeArray<_>(!count)
        let valsAcc = new ResizeArray<_>(!count)
        for i = 0 to maxi-1 do
            match tab.[i] with
            | null -> ()
            | tab2 -> tab2 |> Seq.toArray |> Array.sortBy (fun kvp -> kvp.Key) |> Array.iter (fun kvp -> colsAcc.Add(kvp.Key); valsAcc.Add(kvp.Value));
        colsAcc.ToArray(), valsAcc.ToArray()
    // 5. call the SparseMatrix constructor
    SparseMatrix(opsData=ops, sparseValues=valsA, sparseRowOffsets=offsA, ncols=maxj, columnValues=colsA)
As noted in the comments, one possible optimization is to use a sorted map for tab rather than an array of dictionaries; but a sorted map has more overhead, and the current implementation is already good. Another option is to sort the (i,j,v) sequence first, which avoids the overhead of the Dictionary structure altogether.
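For illustration, here is a minimal sketch of the sorting approach (my own code, not PowerPack's): it sorts the triples once and builds the three CSR arrays directly, assuming each (i,j) pair appears at most once in the input.

// Build the CSR arrays (values, column indices, row offsets) by sorting the
// (i,j,v) triples, avoiding the per-row Dictionary.
let initSparseBySort maxi maxj (s: seq<int * int * float>) =
    let entries = s |> Seq.toArray |> Array.sortBy (fun (i, j, _) -> (i, j))
    let valsA = entries |> Array.map (fun (_, _, v) -> v)
    let colsA = entries |> Array.map (fun (_, j, _) -> j)
    let offsA = Array.zeroCreate (maxi + 1)
    for (i, j, _) in entries do
        if i < 0 || i >= maxi || j < 0 || j >= maxj then failwith "initial value out of range"
        offsA.[i + 1] <- offsA.[i + 1] + 1       // count the entries in each row
    for i = 1 to maxi do
        offsA.[i] <- offsA.[i] + offsA.[i - 1]   // prefix sum -> row offsets
    valsA, colsA, offsA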
Only a few matrix operations are implemented for sparse matrices: +, - and * are supported, but map, columns and rows are not. This does not matter much, because when we need sparse matrices we are usually dealing with large datasets, and for large datasets calling a specialized library or writing the code ourselves is the better solution, as the SVD example in the next section shows.
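For instance, a map over the non-zero entries can still be built on top of the Internal* arrays. A minimal sketch (my own helper, not part of PowerPack; note that entries mapped to zero stay explicitly stored):

// Apply f to every non-zero entry of a sparse matrix and rebuild the result
// with Matrix.initSparse. Entries that f maps to 0.0 remain stored explicitly.
let mapSparse (f: float -> float) (A:matrix) =
    if A.IsDense then failwith "only for sparse matrix!"
    let vals = A.InternalSparseValues
    let cols = A.InternalSparseColumnValues
    let offs = A.InternalSparseRowOffsets
    let entries =
        seq {
            for i in 0 .. A.NumRows - 1 do
                for idx in offs.[i] .. offs.[i+1] - 1 do
                    yield (i, cols.[idx], f vals.[idx])
        }
    Matrix.initSparse A.NumRows A.NumCols entries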
PInvoke SVDLIBC
In a previous post, we saw how to write a simple matrix multiplication in C and call it from F# using P/Invoke. Here we move to a more useful case: a large scale SVD library, SVDLIBC. For a small dense SVD, LAPACK's svd is just fine. However, for a 10000-by-10000 sparse matrix, we need something more powerful. (The ARPACK project is dedicated to this kind of decomposition; SVDLIBC is a C translation of a small part of ARPACK.)
SVDLIBC is a very good SVD solver. It also provides a command line tool to run SVD on sparse or dense matrices. However, it uses some non-standard headers for I/O; to make it compile, we need to delete some of the I/O code. The main SVD solver (in las2.c) is:
SVDRec svdLAS2A(SMat A, long dimensions)
A wrapper with a clean interface is needed for this solver:
#define CHECK(ptr) if (!(ptr)) return 0;

__declspec(dllexport)
int svds(int nrow, int ncol, int nval, double *val, int *row, int *offset,
         int dim,
         double *Uval,
         double *Sval,
         double *Vval)
{
    SMat A;
    SVDRec res;
    DMat U, V;
    double *S;

    A = svdNewSMat(nrow, ncol, nval);
    CHECK(A);
    memcpy(A->value, val, sizeof(double) * nval);
    memcpy(A->rowind, row, sizeof(int) * nval);
    memcpy(A->pointr, offset, sizeof(int) * (ncol+1));

    res = svdLAS2A(A, dim);
    CHECK(res);
    CHECK(res->d == dim); /* the dimension passed in must be correct! */

    S = res->S;
    memcpy(Sval, S, sizeof(double) * dim);
    U = svdTransposeD(res->Ut); CHECK(U);
    V = svdTransposeD(res->Vt); CHECK(V);
    memcpy(Uval, &U->value[0][0], sizeof(double) * (U->rows * U->cols));
    memcpy(Vval, &V->value[0][0], sizeof(double) * (V->rows * V->cols));

    svdFreeDMat(U);
    svdFreeDMat(V);
    svdFreeSMat(A);
    svdFreeSVDRec(res);
    return 1; /* success */
}
and in F#:
module Native =
    [<System.Runtime.InteropServices.DllImport(@"svdlibc.dll", EntryPoint="svds")>]
    extern int svds(int nrow, int ncol, int nval, double *vals, int *row, int *offset, int dim, double *Uval, double *Sval, double *Vval);

module LinearAlgebra =
    // Sparse SVD: A = U * S * Vt
    // A: F# sparse matrix
    // di: the number of singular values/vectors to compute
    let svds (A:matrix) (di:int) =
        if A.IsDense then failwith "only for sparse matrix!"
        else
            let nrow = A.NumRows
            let ncol = A.NumCols
            let nval = A.InternalSparseValues.Length
            let dim = max 1 (min (min nrow ncol) di) // choose a valid dimension
            let U = Matrix.zero nrow dim
            let S = Vector.zero dim
            let V = Matrix.zero ncol dim
            let colVals, rowIdx, colOffset = MatrixUtility.toColumnSparse A
            let valsP = NativeUtilities.pinA colVals
            let rowP = NativeUtilities.pinA rowIdx
            let offsetP = NativeUtilities.pinA colOffset
            let Up, Vp = NativeUtilities.pinMM U V
            let Sp = NativeUtilities.pinV S
            let ret = Native.svds(nrow, ncol, nval, valsP.Ptr, rowP.Ptr, offsetP.Ptr, dim, Up.Ptr, Sp.Ptr, Vp.Ptr)
            valsP.Free()
            rowP.Free()
            offsetP.Free()
            Up.Free()
            Vp.Free()
            Sp.Free()
            if ret = 0 then
                failwith "error in pinvoke svds"
            U, S, V

    // default sparse SVD (full dimension)
    // A: F# sparse matrix
    let svds0 (A:matrix) =
        svds A (min A.NumCols A.NumRows)
Let's test it:
let test2() =
    let r = new System.Random()
    let nrow = 1000
    let ncol = 1000
    let nval = 100000
    let l =
        seq {
            for i=1 to nval do
                yield (r.Next() % nrow, r.Next() % ncol, r.NextDouble())
        }
    let A = Matrix.initSparse nrow ncol l
    tic()
    let U, S, V = svds A 100
    toc("svds")
    ()
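To sanity-check the decomposition on small examples, we can compare the rank-k reconstruction U * S * Vt against A entry by entry. Below is a minimal sketch of such a check (my own helper; it touches every entry, so it is only practical for small matrices, and for a truncated SVD it measures the approximation error rather than returning zero):

// Frobenius norm of A - U * diag(S) * V^T, computed entry by entry.
let reconstructionError (A:matrix) (U:matrix) (S:vector) (V:matrix) =
    let k = S.Length
    let mutable err = 0.0
    for i in 0 .. A.NumRows - 1 do
        for j in 0 .. A.NumCols - 1 do
            let mutable approx = 0.0
            for d in 0 .. k - 1 do
                approx <- approx + U.[i, d] * S.[d] * V.[j, d]
            let diff = A.[i, j] - approx
            err <- err + diff * diff
    sqrt err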
In later posts, we will show data mining applications of SVD.
NativeUtilities module
I used the utility functions in NativeUtilities to convert between F# arrays/matrices/vectors and their native array representations. Here is its implementation, taken from the F# math-provider source code:
open System
open System.Runtime.InteropServices
open Microsoft.FSharp.NativeInterop
open Microsoft.FSharp.Math
// from math-provider source code
module NativeUtilities = begin
    let nativeArray_as_CMatrix_colvec (arr: 'T NativeArray) = new CMatrix<_>(arr.Ptr,arr.Length,1)
    let nativeArray_as_FortranMatrix_colvec (arr: 'T NativeArray) = new FortranMatrix<_>(arr.Ptr,arr.Length,1)
    let pinM m = PinnedArray2.of_matrix(m)
    let pinV v = PinnedArray.of_vector(v)
    let pinA arr = PinnedArray.of_array(arr)
    let pinA2 arr = PinnedArray2.of_array2D(arr)
    let pinMV m1 v2 = pinM m1,pinV v2
    let pinVV v1 v2 = pinV v1,pinV v2
    let pinAA v1 v2 = pinA v1,pinA v2
    let pinMVV m1 v2 m3 = pinM m1,pinV v2,pinV m3
    let pinMM m1 m2 = pinM m1,pinM m2
    let pinMMM m1 m2 m3 = pinM m1,pinM m2,pinM m3
    let freeM (pA: PinnedArray2<'T>) = pA.Free()
    let freeV (pA: PinnedArray<'T>) = pA.Free()
    let freeA (pA: PinnedArray<'T>) = pA.Free()
    let freeA2 a = freeM a
    let freeMV (pA: PinnedArray2<'T>,pB : PinnedArray<'T>) = pA.Free(); pB.Free()
    let freeVV (pA: PinnedArray<'T>,pB : PinnedArray<'T>) = pA.Free(); pB.Free()
    let freeAA (pA: PinnedArray<'T>,pB : PinnedArray<'T>) = pA.Free(); pB.Free()
    let freeMM (pA: PinnedArray2<'T>,(pB: PinnedArray2<'T>)) = pA.Free();pB.Free()
    let freeMMM (pA: PinnedArray2<'T>,(pB: PinnedArray2<'T>),(pC: PinnedArray2<'T>)) = pA.Free();pB.Free();pC.Free()
    let freeMVV (pA: PinnedArray2<'T>,(pB: PinnedArray<'T>),(pC: PinnedArray<'T>)) = pA.Free();pB.Free();pC.Free()
    let matrixDims (m:Matrix<_>) = m.NumRows, m.NumCols
    let matrixDim1 (m:Matrix<_>) = m.NumRows
    let matrixDim2 (m:Matrix<_>) = m.NumCols
    let vectorDim (v:Vector<_>) = v.Length
    let assertDimensions functionName (aName,bName) (a,b) =
        if a = b then () else
        failwith (sprintf "Require %s = %s, but %s = %d and %s = %d in %s" aName bName aName a bName b functionName)
end
A Note on Compressed Sparse Row and Compressed Sparse Column (HB format)
We have seen that both representations have deficiencies. Is there a magic structure or an engineering trick that takes advantage of both? Let's check the standard software, Matlab.
In Matlab, we usually write A(:,j) to take the j-th column of a sparse matrix A, or A(i,:) to take the i-th row. Does Matlab have some magic that makes both operations efficient? The answer is NO. The following script tests this:
function [ ret ] = testOctaveSvds ()
  nrow = 10000;
  ncol = 10000;
  nval = 100000;
  rows = floor(rand(1, nval)*nrow)+1;
  cols = floor(rand(1, nval)*ncol)+1;
  vals = rand(1, nval);
  m = sparse(rows, cols, vals, nrow, ncol);

  tic;
  s = 0;
  for i=1:10000,
    c = floor(rand(1) * ncol) + 1;
    s = s + sum(m(:,c));
  end
  fprintf('cols: ')
  toc;

  tic;
  s = 0;
  for i=1:10000,
    r = floor(rand(1) * nrow) + 1;
    s = s + sum(m(r,:));
  end
  fprintf('rows: ')
  toc;
endfunction
In Matlab 2009a, the output is:
cols: Elapsed time is 0.204693 seconds.
rows: Elapsed time is 16.058717 seconds.
Octave gives a similar result. From this we can deduce that Matlab/Octave use the HB format for sparse matrices and do not heavily optimize row extraction. This is reasonable: as mentioned before, when using sparse matrices, the obligation to optimize the program lies with the user/programmer rather than with the matrix library.