Compute $$X_{row} X_{row}^T$$ for a Filebacked Big Matrix X after applying a particular scaling to it.

big_tcrossprodSelf(
X,
fun.scaling = big_scale(center = FALSE, scale = FALSE),
ind.row = rows_along(X),
ind.col = cols_along(X),
block.size = block_size(nrow(X))
)

# S4 method for FBM,missing
tcrossprod(x, y)

## Arguments

• X: An object of class FBM.

• fun.scaling: A function that returns a named list of mean and sd for every column, used to scale each element as follows: $$\frac{X_{i,j} - mean_j}{sd_j}.$$ Default doesn't use any scaling.

• ind.row: An optional vector of the row indices that are used. If not specified, all rows are used. Don't use negative indices.

• ind.col: An optional vector of the column indices that are used. If not specified, all columns are used. Don't use negative indices.

• block.size: Maximum number of columns read at once. Default uses block_size.

• x: A 'double' FBM.

• y: Missing.

## Value

A temporary FBM, with the following two attributes:

• a numeric vector center of column scaling,

• a numeric vector scale of column scaling.
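As a minimal sketch of how these two attributes can be read back (using attr() from base R; the FBM construction mirrors the Examples below):

```r
library(bigstatsr)

X <- FBM(10, 5, init = rnorm(50))

# Scaled self cross-product; K is a temporary FBM
K <- big_tcrossprodSelf(X, fun.scaling = big_scale())

# The two attributes documented above
attr(K, "center")  # numeric vector of column centers used by the scaling
attr(K, "scale")   # numeric vector of column scales used by the scaling
```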

## Matrix parallelization

Large matrix computations are performed block-wise and are not parallelized, to avoid having to reduce the size of these blocks. Instead, you may use Microsoft R Open or OpenBLAS to accelerate these block matrix computations. You can also control the number of cores used with bigparallelr::set_blas_ncores().
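A short sketch of controlling BLAS-level parallelism as described above (this assumes the bigparallelr package is installed; nb_cores() is its helper for detecting a sensible number of cores):

```r
library(bigparallelr)

# Let BLAS use several cores for the block matrix computations
set_blas_ncores(nb_cores())
```

This only helps when R is linked against a multithreaded BLAS such as OpenBLAS; with the reference BLAS it has no effect.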

## Examples

X <- FBM(13, 17, init = rnorm(221))
true <- tcrossprod(X[])

# No scaling
K1 <- tcrossprod(X)
class(K1)
#>  "matrix"
all.equal(K1, true)
#>  TRUE
K2 <- big_tcrossprodSelf(X)
class(K2)#>  "FBM"
#> attr(,"package")
#>  "bigstatsr"
K2$backingfile
#>  "C:\\Users\\au639593\\AppData\\Local\\Temp\\Rtmpq8tKLv\\file2e34663cc85.bk"
all.equal(K2[], true)
#>  TRUE
# big_tcrossprodSelf() provides some scaling and subsetting
# Example using only half of the data:
n <- nrow(X)
ind <- sort(sample(n, n/2))
K3 <- big_tcrossprodSelf(X, fun.scaling = big_scale(), ind.row = ind)
true2 <- tcrossprod(scale(X[ind, ]))
all.equal(K3[], true2)
#>  TRUE