Generic Matrices
Having introduced BIDMat's matrix types, we'd now like to encourage you to forget about them. Matrices as mathematical types are just indexed sets of real (or complex) values. When you write high-level functions, you want the code to be able to run on various kinds of input data, and certainly to be able to run on either CPU or GPU. Using the generic matrix type Mat allows you to do this. Going back to the LDA example:
```scala
def uupdate(sdata:Mat, user:Mat, ipass:Int):Unit = {
  for (i <- 0 until opts.uiter) {
    val preds = DDS(mm, user, sdata)
    val dc = sdata.contents
    val pc = preds.contents
    max(opts.weps, pc, pc)
    pc ~ dc / pc
    val unew = user ∘ (mm * preds) + opts.alpha
    if (opts.exppsi) exppsi(unew, unew)
    user <-- unew
  }
}
```
Both the input matrices (sdata, user) and the model matrix mm have type Mat. But the operators and functions can accept a variety of specializations: sdata can be SMat, FMat, SDMat, DMat, GMat or GSMat, i.e. it can be any dense or sparse floating-point type, on CPU or GPU. And user and mm can be any dense floating-point type on CPU or GPU.
Not all combinations of specializations are possible, and most would not be useful (e.g. mixing GPU and CPU matrices would force expensive copies between host and device). Still, the code author doesn't need to worry about any of this, since it's only the logic of the program that matters.
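Here is a minimal sketch of what this looks like in practice. The step function and its arithmetic are illustrative only, not part of BIDMat; rand and grand are BIDMat's dense random-matrix constructors for CPU (FMat) and GPU (GMat) respectively:

```scala
import BIDMat.{Mat, FMat, GMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._

// Hypothetical routine written only against the generic Mat type.
// The operators dispatch on the runtime types of their arguments.
def step(model: Mat, data: Mat): Mat = {
  model + (model * data)
}

// CPU specialization: rand produces an FMat, used here as a Mat.
val cpuModel: Mat = rand(8, 8)
val cpuData: Mat  = rand(8, 8)
val updated = step(cpuModel, cpuData)

// GPU specialization (assuming a CUDA device is present): grand produces a GMat,
// and the identical code runs on the GPU.
// val gpuModel: Mat = grand(8, 8)
// val gpuData: Mat  = grand(8, 8)
// val gupdated = step(gpuModel, gpuData)
```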
This reasoning applies to high-level code, but not to low-level code. By low-level, we mean routines which explicitly manipulate matrix elements. While it's technically possible to write generic element-level code over Scala's "Numeric" type, it will be slow. It's better to write custom code for each specialization of the matrix element type.
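For example, a low-level element sum might be specialized by matching on the concrete matrix type and looping over the backing data array. This is a sketch only; sumElems is a hypothetical helper, and it assumes the dense CPU types expose their data arrays directly:

```scala
import BIDMat.{Mat, FMat, DMat}

// Dispatch on the concrete matrix type and run a tight loop over the raw
// element array for each specialization, instead of a generic Numeric loop.
def sumElems(m: Mat): Double = m match {
  case f: FMat =>
    var s = 0.0
    var i = 0
    while (i < f.length) { s += f.data(i); i += 1 }   // loop over Float elements
    s
  case d: DMat =>
    var s = 0.0
    var i = 0
    while (i < d.length) { s += d.data(i); i += 1 }   // loop over Double elements
    s
  case _ =>
    throw new RuntimeException("sumElems: unsupported matrix type")
}
```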