From 3fe3042a62caaca8d50c746c8f92901ecd76841f Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Fri, 14 Jun 2024 11:53:36 +0000
Subject: [PATCH] build based on fa2a3ff

---
 dev/.documenter-siteinfo.json    |   2 +-
 dev/demos/amino-acids/index.html |   4 +-
 dev/demos/main/index.html        |   2 +-
 dev/dev/kernels/index.html       |   2 +-
 dev/dev/private/index.html       |   2 +-
 dev/index.html                   |   2 +-
 dev/man/algorithms/index.html    |   4 +-
 dev/man/constraints/index.html   |   2 +-
 dev/man/losses/index.html        |   4 +-
 dev/man/main/index.html          |   6 +-
 dev/objects.inv                  | Bin 1145 -> 1131 bytes
 dev/quickstart/index.html        | 102 +++++++++++++++----------------
 dev/search_index.js              |   2 +-
 13 files changed, 67 insertions(+), 67 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index dc1c91c..baecada 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-06-14T09:24:16","documenter_version":"1.4.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-06-14T11:53:33","documenter_version":"1.4.1"}}
\ No newline at end of file

diff --git a/dev/demos/amino-acids/index.html b/dev/demos/amino-acids/index.html
index 9d20ed6..0db380d 100644
--- a/dev/demos/amino-acids/index.html
+++ b/dev/demos/amino-acids/index.html
@@ -226,7 +226,7 @@

+
+
diff --git a/dev/demos/main/index.html b/dev/demos/main/index.html
index 805070b..3f7e6dd 100644
--- a/dev/demos/main/index.html
+++ b/dev/demos/main/index.html
@@ -1,2 +1,2 @@
-Overview · GCPDecompositions.jl

Overview of demos

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

+Overview · GCPDecompositions.jl

Overview of demos

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

diff --git a/dev/dev/kernels/index.html b/dev/dev/kernels/index.html index 15ddc77..cfbfcd7 100644 --- a/dev/dev/kernels/index.html +++ b/dev/dev/kernels/index.html @@ -1,2 +1,2 @@ -Tensor Kernels · GCPDecompositions.jl

Tensor Kernels

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.TensorKernels.mttkrpFunction
mttkrp(X, (U1, U2, ..., UN), n)

Compute the Matricized Tensor Times Khatri-Rao Product (MTTKRP) of an N-way tensor X with the matrices U1, U2, ..., UN along mode n.

See also: mttkrp!

source
GCPDecompositions.TensorKernels.mttkrp!Function
mttkrp!(G, X, (U1, U2, ..., UN), n, buffer=create_mttkrp_buffer(X, U, n))

Compute the Matricized Tensor Times Khatri-Rao Product (MTTKRP) of an N-way tensor X with the matrices U1, U2, ..., UN along mode n and store the result in G.

Optionally, provide a buffer for intermediate calculations. Always use create_mttkrp_buffer to make the buffer; the internal details of buffer may change in the future and should not be relied upon.

Algorithm is based on Section III-B of the paper:

Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. DOI: 10.1109/TSP.2013.2269903

See also: mttkrp, create_mttkrp_buffer

source
GCPDecompositions.TensorKernels.create_mttkrp_bufferFunction
create_mttkrp_buffer(X, U, n)

Create buffer to hold intermediate calculations in mttkrp!.

Always use create_mttkrp_buffer to make a buffer for mttkrp!; the internal details of buffer may change in the future and should not be relied upon.

See also: mttkrp!

source
GCPDecompositions.TensorKernels.mttkrpsFunction
mttkrps(X, (U1, U2, ..., UN))

Compute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS) of an N-way tensor X with the matrices U1, U2, ..., UN.

See also: mttkrps!

source
GCPDecompositions.TensorKernels.mttkrps!Function
mttkrps!(G, X, (U1, U2, ..., UN))

Compute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS) of an N-way tensor X with the matrices U1, U2, ..., UN and store the result in G.

See also: mttkrps

source
+Tensor Kernels · GCPDecompositions.jl

Tensor Kernels

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.TensorKernels.mttkrpFunction
mttkrp(X, (U1, U2, ..., UN), n)

Compute the Matricized Tensor Times Khatri-Rao Product (MTTKRP) of an N-way tensor X with the matrices U1, U2, ..., UN along mode n.

See also: mttkrp!

source
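For concreteness, here is a brief usage sketch of the signature above (the tensor sizes and the rank of 5 are arbitrary choices for illustration):

using GCPDecompositions.TensorKernels: mttkrp

X = randn(10, 20, 30)                            # random 3-way tensor
U = (randn(10, 5), randn(20, 5), randn(30, 5))   # rank-5 factor matrices
G = mttkrp(X, U, 2)                              # mode-2 MTTKRP; G is 20×5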
GCPDecompositions.TensorKernels.mttkrp!Function
mttkrp!(G, X, (U1, U2, ..., UN), n, buffer=create_mttkrp_buffer(X, U, n))

Compute the Matricized Tensor Times Khatri-Rao Product (MTTKRP) of an N-way tensor X with the matrices U1, U2, ..., UN along mode n and store the result in G.

Optionally, provide a buffer for intermediate calculations. Always use create_mttkrp_buffer to make the buffer; the internal details of buffer may change in the future and should not be relied upon.

Algorithm is based on Section III-B of the paper:

Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. DOI: 10.1109/TSP.2013.2269903

See also: mttkrp, create_mttkrp_buffer

source
GCPDecompositions.TensorKernels.create_mttkrp_bufferFunction
create_mttkrp_buffer(X, U, n)

Create buffer to hold intermediate calculations in mttkrp!.

Always use create_mttkrp_buffer to make a buffer for mttkrp!; the internal details of buffer may change in the future and should not be relied upon.

See also: mttkrp!

source
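A sketch of how the in-place kernel and its buffer fit together (sizes are arbitrary; the point is that the output and the buffer are allocated once and then reused across calls, e.g., inside an ALS loop):

using GCPDecompositions.TensorKernels: mttkrp!, create_mttkrp_buffer

X = randn(10, 20, 30)
U = (randn(10, 5), randn(20, 5), randn(30, 5))
G = zeros(20, 5)                         # holds the mode n = 2 result
buffer = create_mttkrp_buffer(X, U, 2)   # always made via create_mttkrp_buffer
mttkrp!(G, X, U, 2, buffer)              # fills G in place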
GCPDecompositions.TensorKernels.mttkrpsFunction
mttkrps(X, (U1, U2, ..., UN))

Compute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS) of an N-way tensor X with the matrices U1, U2, ..., UN.

See also: mttkrps!

source
GCPDecompositions.TensorKernels.mttkrps!Function
mttkrps!(G, X, (U1, U2, ..., UN))

Compute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS) of an N-way tensor X with the matrices U1, U2, ..., UN and store the result in G.

See also: mttkrps

source
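A usage sketch for the sequence kernel (arbitrary sizes again):

using GCPDecompositions.TensorKernels: mttkrps

X = randn(10, 20, 30)
U = (randn(10, 5), randn(20, 5), randn(30, 5))
G = mttkrps(X, U)   # G[n] is the mode-n MTTKRP, matching the size of U[n]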
diff --git a/dev/dev/private/index.html b/dev/dev/private/index.html index 1334226..4152bfa 100644 --- a/dev/dev/private/index.html +++ b/dev/dev/private/index.html @@ -1,2 +1,2 @@ -Private functions · GCPDecompositions.jl

Private functions

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.GCPAlgorithms._gcpFunction
_gcp(X, r, loss, constraints, algorithm)

Internal function to compute an approximate rank-r CP decomposition of the tensor X with respect to the loss function loss and the constraints constraints using the algorithm algorithm, returning a CPD object.

source
GCPDecompositions.TensorKernels._checked_mttkrp_dimsFunction
_checked_mttkrp_dims(X, (U1, U2, ..., UN), n)

Check that X and U have compatible dimensions for the mode-n MTTKRP. If so, return a tuple of the number of rows and the shared number of columns for the Khatri-Rao product. If not, throw an error.

source
GCPDecompositions.TensorKernels._checked_mttkrps_dimsFunction
_checked_mttkrps_dims(X, (U1, U2, ..., UN))

Check that X and U have compatible dimensions for the MTTKRPS (i.e., for the mode-n MTTKRP for every mode n). If so, return a tuple of the number of rows and the shared number of columns for the Khatri-Rao product. If not, throw an error.

source
+Private functions · GCPDecompositions.jl

Private functions

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.GCPAlgorithms._gcpFunction
_gcp(X, r, loss, constraints, algorithm)

Internal function to compute an approximate rank-r CP decomposition of the tensor X with respect to the loss function loss and the constraints constraints using the algorithm algorithm, returning a CPD object.

source
GCPDecompositions.TensorKernels._checked_mttkrp_dimsFunction
_checked_mttkrp_dims(X, (U1, U2, ..., UN), n)

Check that X and U have compatible dimensions for the mode-n MTTKRP. If so, return a tuple of the number of rows and the shared number of columns for the Khatri-Rao product. If not, throw an error.

source
GCPDecompositions.TensorKernels._checked_mttkrps_dimsFunction
_checked_mttkrps_dims(X, (U1, U2, ..., UN))

Check that X and U have compatible dimensions for the MTTKRPS (i.e., for the mode-n MTTKRP for every mode n). If so, return a tuple of the number of rows and the shared number of columns for the Khatri-Rao product. If not, throw an error.

source
diff --git a/dev/index.html b/dev/index.html index 1558fcf..0e6c187 100644 --- a/dev/index.html +++ b/dev/index.html @@ -19,4 +19,4 @@ number = "4", pages = "1066--1095", DOI = "10.1137/19M1266265", -} +} diff --git a/dev/man/algorithms/index.html b/dev/man/algorithms/index.html index 8f2cdac..33053ae 100644 --- a/dev/man/algorithms/index.html +++ b/dev/man/algorithms/index.html @@ -1,6 +1,6 @@ -Algorithms · GCPDecompositions.jl

Algorithms

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.GCPAlgorithms.ALSType
ALS

Alternating Least Squares. Workhorse algorithm for LeastSquaresLoss with no constraints.

Algorithm parameters:

  • maxiters::Int : max number of iterations (default: 200)
source
GCPDecompositions.GCPAlgorithms.FastALSType
FastALS

Fast Alternating Least Squares.

Efficient ALS algorithm proposed in:

Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. DOI: 10.1109/TSP.2013.2269903

Algorithm parameters:

  • maxiters::Int : max number of iterations (default: 200)
source
GCPDecompositions.GCPAlgorithms.LBFGSBType
LBFGSB

Limited-memory BFGS with Box constraints.

Brief description of algorithm parameters:

  • m::Int : max number of variable metric corrections (default: 10)
  • factr::Float64 : function tolerance in units of machine epsilon (default: 1e7)
  • pgtol::Float64 : (projected) gradient tolerance (default: 1e-5)
  • maxfun::Int : max number of function evaluations (default: 15000)
  • maxiter::Int : max number of iterations (default: 15000)
  • iprint::Int : verbosity (default: -1)
    • iprint < 0 means no output
    • iprint = 0 prints only one line at the last iteration
    • 0 < iprint < 99 prints f and |proj g| every iprint iterations
    • iprint = 99 prints details of every iteration except n-vectors
    • iprint = 100 also prints the changes of active set and final x
    • iprint > 100 prints details of every iteration including x and g

See documentation of LBFGSB.jl for more details.

source
GCPDecompositions.GCPAlgorithms.FastALS_iter!Method
FastALS_iter!(X, U, λ) 
+Algorithms · GCPDecompositions.jl

Algorithms

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.GCPAlgorithms.ALSType
ALS

Alternating Least Squares. Workhorse algorithm for LeastSquares loss with no constraints.

Algorithm parameters:

  • maxiters::Int : max number of iterations (default: 200)
source
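A sketch of selecting this algorithm explicitly, assuming a keyword constructor matching the parameter listed above:

using GCPDecompositions

X = randn(10, 20, 30)
M = gcp(X, 2; loss = GCPLosses.LeastSquares(),
              algorithm = GCPAlgorithms.ALS(; maxiters = 100))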
GCPDecompositions.GCPAlgorithms.FastALSType
FastALS

Fast Alternating Least Squares.

Efficient ALS algorithm proposed in:

Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. DOI: 10.1109/TSP.2013.2269903

Algorithm parameters:

  • maxiters::Int : max number of iterations (default: 200)
source
GCPDecompositions.GCPAlgorithms.LBFGSBType
LBFGSB

Limited-memory BFGS with Box constraints.

Brief description of algorithm parameters:

  • m::Int : max number of variable metric corrections (default: 10)
  • factr::Float64 : function tolerance in units of machine epsilon (default: 1e7)
  • pgtol::Float64 : (projected) gradient tolerance (default: 1e-5)
  • maxfun::Int : max number of function evaluations (default: 15000)
  • maxiter::Int : max number of iterations (default: 15000)
  • iprint::Int : verbosity (default: -1)
    • iprint < 0 means no output
    • iprint = 0 prints only one line at the last iteration
    • 0 < iprint < 99 prints f and |proj g| every iprint iterations
    • iprint = 99 prints details of every iteration except n-vectors
    • iprint = 100 also prints the changes of active set and final x
    • iprint > 100 prints details of every iteration including x and g

See documentation of LBFGSB.jl for more details.

source
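A sketch of passing LBFGSB with non-default parameters, again assuming keyword construction from the list above (the Poisson loss and parameter values are arbitrary examples):

using GCPDecompositions

X = rand(10, 20, 30)
M = gcp(X, 2; loss = GCPLosses.Poisson(),
              algorithm = GCPAlgorithms.LBFGSB(; factr = 1e6, maxiter = 500))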
GCPDecompositions.GCPAlgorithms.FastALS_iter!Method
FastALS_iter!(X, U, λ) 
 
 Algorithm for computing MTTKRP sequences is from "Fast Alternating LS Algorithms
 for High Order CANDECOMP/PARAFAC Tensor Factorizations" by Phan et al., specifically
-section III-C.
source
+section III-C.
source
diff --git a/dev/man/constraints/index.html b/dev/man/constraints/index.html index 9d1b8bc..be36f5d 100644 --- a/dev/man/constraints/index.html +++ b/dev/man/constraints/index.html @@ -1,2 +1,2 @@ -Constraints · GCPDecompositions.jl

Constraints

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

+Constraints · GCPDecompositions.jl

Constraints

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

diff --git a/dev/man/losses/index.html b/dev/man/losses/index.html index 02009bd..bf84026 100644 --- a/dev/man/losses/index.html +++ b/dev/man/losses/index.html @@ -1,4 +1,4 @@ -Loss functions · GCPDecompositions.jl

Loss functions

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.GCPLosses.AbstractLossType
AbstractLoss

Abstract type for GCP loss functions $f(x,m)$, where $x$ is the data entry and $m$ is the model entry.

Concrete types ConcreteLoss <: AbstractLoss should implement:

  • value(loss::ConcreteLoss, x, m) that computes the value of the loss function $f(x,m)$
  • deriv(loss::ConcreteLoss, x, m) that computes the value of the partial derivative $\partial_m f(x,m)$ with respect to $m$
  • domain(loss::ConcreteLoss) that returns an Interval from IntervalSets.jl defining the domain for $m$
source
GCPDecompositions.GCPLosses.BernoulliLogitLossType
BernoulliLogitLoss(eps::Real = 1e-10)

Loss corresponding to the statistical assumption of Bernoulli data X with log odds of success given by the low-rank model tensor M

  • Distribution: $x_i \sim \operatorname{Bernoulli}(\rho_i)$
  • Link function: $m_i = \log(\frac{\rho_i}{1 - \rho_i})$
  • Loss function: $f(x, m) = \log(1 + e^m) - xm$
  • Domain: $m \in \mathbb{R}$
source
GCPDecompositions.GCPLosses.BernoulliOddsLossType
BernoulliOddsLoss(eps::Real = 1e-10)

Loss corresponding to the statistical assumption of Bernoulli data X with odds of success given by the low-rank model tensor M

  • Distribution: $x_i \sim \operatorname{Bernoulli}(\rho_i)$
  • Link function: $m_i = \frac{\rho_i}{1 - \rho_i}$
  • Loss function: $f(x, m) = \log(m + 1) - x\log(m + \epsilon)$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.BetaDivergenceLossType
BetaDivergenceLoss(β::Real, eps::Real)
+Loss functions · GCPDecompositions.jl

Loss functions

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.GCPLosses.AbstractLossType
AbstractLoss

Abstract type for GCP loss functions $f(x,m)$, where $x$ is the data entry and $m$ is the model entry.

Concrete types ConcreteLoss <: AbstractLoss should implement:

  • value(loss::ConcreteLoss, x, m) that computes the value of the loss function $f(x,m)$
  • deriv(loss::ConcreteLoss, x, m) that computes the value of the partial derivative $\partial_m f(x,m)$ with respect to $m$
  • domain(loss::ConcreteLoss) that returns an Interval from IntervalSets.jl defining the domain for $m$
source
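A minimal sketch of this interface; MyAbsolute ($f(x, m) = |x - m|$) is a hypothetical loss written for illustration, not part of the package:

using GCPDecompositions, IntervalSets
import GCPDecompositions.GCPLosses: AbstractLoss, value, deriv, domain

struct MyAbsolute <: AbstractLoss end
value(::MyAbsolute, x, m) = abs(x - m)        # f(x, m) = |x - m|
deriv(::MyAbsolute, x, m) = sign(m - x)       # ∂f/∂m (a subgradient at x = m)
domain(::MyAbsolute) = Interval(-Inf, +Inf)   # m ∈ ℝ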
GCPDecompositions.GCPLosses.BernoulliLogitType
BernoulliLogit(eps::Real = 1e-10)

Loss corresponding to the statistical assumption of Bernoulli data X with log odds of success given by the low-rank model tensor M

  • Distribution: $x_i \sim \operatorname{Bernoulli}(\rho_i)$
  • Link function: $m_i = \log(\frac{\rho_i}{1 - \rho_i})$
  • Loss function: $f(x, m) = \log(1 + e^m) - xm$
  • Domain: $m \in \mathbb{R}$
source
GCPDecompositions.GCPLosses.BernoulliOddsType
BernoulliOdds(eps::Real = 1e-10)

Loss corresponding to the statistical assumption of Bernoulli data X with odds of success given by the low-rank model tensor M

  • Distribution: $x_i \sim \operatorname{Bernoulli}(\rho_i)$
  • Link function: $m_i = \frac{\rho_i}{1 - \rho_i}$
  • Loss function: $f(x, m) = \log(m + 1) - x\log(m + \epsilon)$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.BetaDivergenceType
BetaDivergence(β::Real, eps::Real)
 
-BetaDivergence Loss for given β
  • Loss function: $f(x, m; \beta) = \begin{cases} \frac{1}{\beta}m^{\beta} - \frac{1}{\beta - 1}xm^{\beta - 1} & \beta \in \mathbb{R} \setminus \{0, 1\} \\ m - x\log(m) & \beta = 1 \\ \frac{x}{m} + \log(m) & \beta = 0 \end{cases}$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.GammaLossType
GammaLoss(eps::Real = 1e-10)

Loss corresponding to a statistical assumption of Gamma-distributed data X with scale given by the low-rank model tensor M.

  • Distribution: $x_i \sim \operatorname{Gamma}(k, \sigma_i)$
  • Link function: $m_i = k \sigma_i$
  • Loss function: $f(x,m) = \frac{x}{m + \epsilon} + \log(m + \epsilon)$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.HuberLossType
HuberLoss(Δ::Real)

Huber Loss for given Δ

  • Loss function: $f(x, m) = \begin{cases} (x - m)^2 & |x - m| \leq \Delta \\ 2\Delta|x - m| - \Delta^2 & \text{otherwise} \end{cases}$
  • Domain: $m \in \mathbb{R}$
source
GCPDecompositions.GCPLosses.LeastSquaresLossType
LeastSquaresLoss()

Loss corresponding to conventional CP decomposition. Corresponds to a statistical assumption of Gaussian data X with mean given by the low-rank model tensor M.

  • Distribution: $x_i \sim \mathcal{N}(\mu_i, \sigma)$
  • Link function: $m_i = \mu_i$
  • Loss function: $f(x,m) = (x-m)^2$
  • Domain: $m \in \mathbb{R}$
source
GCPDecompositions.GCPLosses.NegativeBinomialOddsLossType
NegativeBinomialOddsLoss(r::Integer, eps::Real = 1e-10)

Loss corresponding to the statistical assumption of Negative Binomial data X with failure odds given by the low-rank model tensor M

  • Distribution: $x_i \sim \operatorname{NegativeBinomial}(r, \rho_i)$
  • Link function: $m = \frac{\rho}{1 - \rho}$
  • Loss function: $f(x, m) = (r + x) \log(1 + m) - x\log(m + \epsilon)$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.NonnegativeLeastSquaresLossType
NonnegativeLeastSquaresLoss()

Loss corresponding to nonnegative CP decomposition. Corresponds to a statistical assumption of Gaussian data X with nonnegative mean given by the low-rank model tensor M.

  • Distribution: $x_i \sim \mathcal{N}(\mu_i, \sigma)$
  • Link function: $m_i = \mu_i$
  • Loss function: $f(x,m) = (x-m)^2$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.PoissonLogLossType
PoissonLogLoss()

Loss corresponding to a statistical assumption of Poisson data X with log-rate given by the low-rank model tensor M.

  • Distribution: $x_i \sim \operatorname{Poisson}(\lambda_i)$
  • Link function: $m_i = \log \lambda_i$
  • Loss function: $f(x,m) = e^m - x m$
  • Domain: $m \in \mathbb{R}$
source
GCPDecompositions.GCPLosses.PoissonLossType
PoissonLoss(eps::Real = 1e-10)

Loss corresponding to a statistical assumption of Poisson data X with rate given by the low-rank model tensor M.

  • Distribution: $x_i \sim \operatorname{Poisson}(\lambda_i)$
  • Link function: $m_i = \lambda_i$
  • Loss function: $f(x,m) = m - x \log(m + \epsilon)$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.RayleighLossType
RayleighLoss(eps::Real = 1e-10)

Loss corresponding to the statistical assumption of Rayleigh data X with scale given by the low-rank model tensor M

  • Distribution: $x_i \sim \operatorname{Rayleigh}(\theta_i)$
  • Link function: $m_i = \sqrt{\frac{\pi}{2}}\,\theta_i$
  • Loss function: $f(x, m) = 2\log(m + \epsilon) + \frac{\pi}{4}(\frac{x}{m + \epsilon})^2$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.UserDefinedLossType
UserDefinedLoss

Type for user-defined loss functions $f(x,m)$, where $x$ is the data entry and $m$ is the model entry.

Contains three fields:

  1. func::Function : function that evaluates the loss function $f(x,m)$
  2. deriv::Function : function that evaluates the partial derivative $\partial_m f(x,m)$ with respect to $m$
  3. domain::Interval : Interval from IntervalSets.jl defining the domain for $m$

The constructor is UserDefinedLoss(func; deriv, domain). If not provided,

  • deriv is automatically computed from func using forward-mode automatic differentiation
  • domain gets a default value of Interval(-Inf, +Inf)
source
GCPDecompositions.GCPLosses.grad_U!Function
grad_U!(GU, M::CPD, X::AbstractArray, loss)

Compute the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss, and store the result in GU = (GU[1],...,GU[N]).

source
+BetaDivergence Loss for given β
  • Loss function: $f(x, m; \beta) = \begin{cases} \frac{1}{\beta}m^{\beta} - \frac{1}{\beta - 1}xm^{\beta - 1} & \beta \in \mathbb{R} \setminus \{0, 1\} \\ m - x\log(m) & \beta = 1 \\ \frac{x}{m} + \log(m) & \beta = 0 \end{cases}$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.GammaType
Gamma(eps::Real = 1e-10)

Loss corresponding to a statistical assumption of Gamma-distributed data X with scale given by the low-rank model tensor M.

  • Distribution: $x_i \sim \operatorname{Gamma}(k, \sigma_i)$
  • Link function: $m_i = k \sigma_i$
  • Loss function: $f(x,m) = \frac{x}{m + \epsilon} + \log(m + \epsilon)$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.HuberType
Huber(Δ::Real)

Huber Loss for given Δ

  • Loss function: $f(x, m) = \begin{cases} (x - m)^2 & |x - m| \leq \Delta \\ 2\Delta|x - m| - \Delta^2 & \text{otherwise} \end{cases}$
  • Domain: $m \in \mathbb{R}$
source
GCPDecompositions.GCPLosses.LeastSquaresType
LeastSquares()

Loss corresponding to conventional CP decomposition. Corresponds to a statistical assumption of Gaussian data X with mean given by the low-rank model tensor M.

  • Distribution: $x_i \sim \mathcal{N}(\mu_i, \sigma)$
  • Link function: $m_i = \mu_i$
  • Loss function: $f(x,m) = (x-m)^2$
  • Domain: $m \in \mathbb{R}$
source
GCPDecompositions.GCPLosses.NegativeBinomialOddsType
NegativeBinomialOdds(r::Integer, eps::Real = 1e-10)

Loss corresponding to the statistical assumption of Negative Binomial data X with failure odds given by the low-rank model tensor M

  • Distribution: $x_i \sim \operatorname{NegativeBinomial}(r, \rho_i)$
  • Link function: $m = \frac{\rho}{1 - \rho}$
  • Loss function: $f(x, m) = (r + x) \log(1 + m) - x\log(m + \epsilon)$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.NonnegativeLeastSquaresType
NonnegativeLeastSquares()

Loss corresponding to nonnegative CP decomposition. Corresponds to a statistical assumption of Gaussian data X with nonnegative mean given by the low-rank model tensor M.

  • Distribution: $x_i \sim \mathcal{N}(\mu_i, \sigma)$
  • Link function: $m_i = \mu_i$
  • Loss function: $f(x,m) = (x-m)^2$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.PoissonType
Poisson(eps::Real = 1e-10)

Loss corresponding to a statistical assumption of Poisson data X with rate given by the low-rank model tensor M.

  • Distribution: $x_i \sim \operatorname{Poisson}(\lambda_i)$
  • Link function: $m_i = \lambda_i$
  • Loss function: $f(x,m) = m - x \log(m + \epsilon)$
  • Domain: $m \in [0, \infty)$
source
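For example, a sketch of fitting synthetic count data with this loss (the data and rank are arbitrary choices for illustration):

using GCPDecompositions

X = float.(rand(0:10, 15, 15, 15))          # nonnegative synthetic counts
M = gcp(X, 2; loss = GCPLosses.Poisson())   # rank-2 fit under the Poisson loss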
GCPDecompositions.GCPLosses.PoissonLogType
PoissonLog()

Loss corresponding to a statistical assumption of Poisson data X with log-rate given by the low-rank model tensor M.

  • Distribution: $x_i \sim \operatorname{Poisson}(\lambda_i)$
  • Link function: $m_i = \log \lambda_i$
  • Loss function: $f(x,m) = e^m - x m$
  • Domain: $m \in \mathbb{R}$
source
GCPDecompositions.GCPLosses.RayleighType
Rayleigh(eps::Real = 1e-10)

Loss corresponding to the statistical assumption of Rayleigh data X with scale given by the low-rank model tensor M

  • Distribution: $x_i \sim \operatorname{Rayleigh}(\theta_i)$
  • Link function: $m_i = \sqrt{\frac{\pi}{2}}\,\theta_i$
  • Loss function: $f(x, m) = 2\log(m + \epsilon) + \frac{\pi}{4}(\frac{x}{m + \epsilon})^2$
  • Domain: $m \in [0, \infty)$
source
GCPDecompositions.GCPLosses.UserDefinedType
UserDefined

Type for user-defined loss functions $f(x,m)$, where $x$ is the data entry and $m$ is the model entry.

Contains three fields:

  1. func::Function : function that evaluates the loss function $f(x,m)$
  2. deriv::Function : function that evaluates the partial derivative $\partial_m f(x,m)$ with respect to $m$
  3. domain::Interval : Interval from IntervalSets.jl defining the domain for $m$

The constructor is UserDefined(func; deriv, domain). If not provided,

  • deriv is automatically computed from func using forward-mode automatic differentiation
  • domain gets a default value of Interval(-Inf, +Inf)
source
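A sketch using the constructor documented above; the hand-written loss here is just least squares restricted to $m \geq 0$:

using GCPDecompositions, IntervalSets

myloss = GCPLosses.UserDefined((x, m) -> (x - m)^2; domain = Interval(0.0, Inf))
X = rand(10, 20, 30)
M = gcp(X, 2; loss = myloss)   # deriv is filled in by forward-mode AD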
GCPDecompositions.GCPLosses.grad_U!Function
grad_U!(GU, M::CPD, X::AbstractArray, loss)

Compute the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss, and store the result in GU = (GU[1],...,GU[N]).

source
diff --git a/dev/man/main/index.html b/dev/man/main/index.html index 7f87862..0064b29 100644 --- a/dev/man/main/index.html +++ b/dev/man/main/index.html @@ -1,6 +1,6 @@ -Overview · GCPDecompositions.jl

Overview

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositionsModule

Generalized CP Decomposition module. Provides approximate CP tensor decomposition with respect to general losses.

source
GCPDecompositions.gcpFunction
gcp(X::Array, r;
-    loss = GCPLosses.LeastSquaresLoss(),
+Overview · GCPDecompositions.jl

Overview

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositionsModule

Generalized CP Decomposition module. Provides approximate CP tensor decomposition with respect to general losses.

source
GCPDecompositions.gcpFunction
gcp(X::Array, r;
+    loss = GCPLosses.LeastSquares(),
     constraints = default_constraints(loss),
     algorithm = default_algorithm(X, r, loss, constraints),
-    init = default_init(X, r, loss, constraints, algorithm))

Compute an approximate rank-r CP decomposition of the tensor X with respect to the loss function loss and return a CPD object.

Keyword arguments:

  • constraints : a Tuple of constraints on the factor matrices U = (U[1],...,U[N]).
  • algorithm : algorithm to use

Conventional CP corresponds to the default GCPLosses.LeastSquaresLoss() loss with the default of no constraints (i.e., constraints = ()).

If the LossFunctions.jl package is also loaded, loss can also be a loss function from that package. Check GCPDecompositions.LossFunctionsExt.SupportedLosses to see what losses are supported.

See also: CPD, GCPLosses, GCPConstraints, GCPAlgorithms.

source
GCPDecompositions.CPDType
CPD

Tensor decomposition type for the canonical polyadic decompositions (CPD) of a tensor (i.e., a multi-dimensional array) A. This is the return type of gcp(_), the corresponding tensor decomposition function.

If M::CPD is the decomposition object, the weights λ and the factor matrices U = (U[1],...,U[N]) can be obtained via M.λ and M.U, such that A = Σ_j λ[j] U[1][:,j] ∘ ⋯ ∘ U[N][:,j].

source
GCPDecompositions.default_algorithmFunction
default_algorithm(X, r, loss, constraints)

Return a default algorithm for the data tensor X, rank r, loss function loss, and tuple of constraints constraints.

See also: gcp.

source
GCPDecompositions.default_initFunction
default_init([rng=default_rng()], X, r, loss, constraints, algorithm)

Return a default initialization for the data tensor X, rank r, loss function loss, tuple of constraints constraints, and algorithm algorithm, using the random number generator rng if needed.

See also: gcp.

source
+ init = default_init(X, r, loss, constraints, algorithm))

Compute an approximate rank-r CP decomposition of the tensor X with respect to the loss function loss and return a CPD object.

Keyword arguments:

  • constraints : a Tuple of constraints on the factor matrices U = (U[1],...,U[N]).
  • algorithm : algorithm to use

Conventional CP corresponds to the default GCPLosses.LeastSquares() loss with the default of no constraints (i.e., constraints = ()).

If the LossFunctions.jl package is also loaded, loss can also be a loss function from that package. Check GCPDecompositions.LossFunctionsExt.SupportedLosses to see what losses are supported.

See also: CPD, GCPLosses, GCPConstraints, GCPAlgorithms.

source
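A sketch combining these keyword arguments (a nonnegative least-squares fit obtained via a lower-bound constraint, with an explicit algorithm choice):

using GCPDecompositions

X = rand(10, 20, 30)
M = gcp(X, 2;
    loss = GCPLosses.LeastSquares(),
    constraints = (GCPConstraints.LowerBound(0.0),),
    algorithm = GCPAlgorithms.LBFGSB())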
GCPDecompositions.CPDType
CPD

Tensor decomposition type for the canonical polyadic decompositions (CPD) of a tensor (i.e., a multi-dimensional array) A. This is the return type of gcp(_), the corresponding tensor decomposition function.

If M::CPD is the decomposition object, the weights λ and the factor matrices U = (U[1],...,U[N]) can be obtained via M.λ and M.U, such that A = Σ_j λ[j] U[1][:,j] ∘ ⋯ ∘ U[N][:,j].

source
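As a sketch of what this identity means computationally, here is a small helper (hypothetical, not part of the package) that materializes the full model tensor from the weights and factor matrices:

using GCPDecompositions

function reconstruct(λ, U)   # illustration only
    A = zeros(ntuple(n -> size(U[n], 1), length(U)))
    for I in CartesianIndices(A), j in eachindex(λ)
        A[I] += λ[j] * prod(U[n][I[n], j] for n in eachindex(U))
    end
    return A
end

# e.g., reconstruct(M.λ, M.U) gives the low-rank model tensor for M::CPD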
GCPDecompositions.default_algorithmFunction
default_algorithm(X, r, loss, constraints)

Return a default algorithm for the data tensor X, rank r, loss function loss, and tuple of constraints constraints.

See also: gcp.

source
GCPDecompositions.default_initFunction
default_init([rng=default_rng()], X, r, loss, constraints, algorithm)

Return a default initialization for the data tensor X, rank r, loss function loss, tuple of constraints constraints, and algorithm algorithm, using the random number generator rng if needed.

See also: gcp.

source
diff --git a/dev/objects.inv b/dev/objects.inv index 1493131a9fa12f65a84ec01d84cc1397515619bc..909cf97d7788134ecbcf051d2742424ad3fdda48 100644 GIT binary patch delta 1010 zcmV#*i7A1M!|~+_anl19Z}eP^!W(#$l#~h7K%8A*xr~~eYDWOXn&LNKrz8%0|@kklG>4S zP_-o=&8ruKBjHv^rgo77t5$q8EuTFKk%*l%Dr+r~%t#`}1ttNbv5TlJ>QmAeG)-SL z_UP|HatJL{Y4*`q;IWYCQ}%8k2g!~^fep#cK3eAMRq+;rP)vy;1{0Dsgr~#1dyUe) zDBbJLt#X$*yMN#sg}rA52Yf%lUlNWV=gS8OM05}_GZ8fLwGO(=V1xVV8DP4O?k7mE zB4^c`HkoZiSmV|N_yoTcjkt-C9>J()FL(8b5#aKOuoC$se4w8iIU!YEz zt>$_!U6q=>qAapUWedeS<2LJv8+}&Q`a)hpVB1}%T3rk&zA+|ku!6YKJ#|qlhCyZP zBW_0$CO3VO6ys5tKbE&UnCa4rBn(+kS!*rIQkps|EH`f)`h4ZhK#>Q1x#bpk06jV_ zZ?HZeKkt(@$bQOlBl?j<5xJd9N0F{`SBNvIv)`UU=ft*b=e?^HnAz8`3)_JNEf delta 1024 zcmV+b1poW%2>A$*hkvzKO_SO<5WVv&l$vl`;lNZSRosB7Om;&}*i3S!gw=q8EfGm3 znN96~Us<;ChYa{*a{{&WTHXCxPcq{aej=smlnNi=F+v|i2!rnt^AAcBsf%;D)kA;B z|537J7cmNhkZ>;~j297^%|o9~AWsZ_*kyrG3RHDrV)f9{{C}ct#C^>KR~F!#KUx|` z+CdejJhaFM7Ds}ekWBAG2bRxxXmq>)<-i6ll6GA=Oj8C5Q#Flly)2WS=# z4EE$5KynBwsI%EaUw|ti(Z^(LAxFthL_-ylnmx2i=kww%1)-P{O)MrL8wl#dyL*k| zwJ2UYSX9b=#(z}{Zcupetl^07r}%Tk@#A9k0KN#1LNOab8DHq2y9748pIrdP^XPtx z%q((d2eUG>r3ee$+5n&6r^d>-yAsW76t6;dRg%$UivKj3k|1cE)g;d9GT3G|nWA(> zO?LKlVe1mx({(Ly_bA2%U951)X@drcE4^5u5^Rw5yMJ3|dme0o)4s#!zN>DQ_o!L# zc4@W;7F%p1s|z;U?i`TVCNl|L+pyIH$#02M#$?{0JN*3D`eC zoibNKzjNU6zSkYNJE8)>VMq-UrNAExf>Nq|CMl+P6;X+$_7ZCaNw(rp5@jdGQQ|pu zJUZ-&mVZnV@d|wlrWp=Z;%V$aR1|HiIE(0f6VD^+bAn}VdShwtWZ zhm*x{^2Tg1eD_crVY#fK)(3gKMofL47szSvD1R?H1tSJM3%=g@b;%SI=XwmYP24-a zwk>Y_=$9jO2*XfQL09jgX1u6gM<(@g>A>V!e);zi$*j#sZlF;?nD(S@n-bP=d zyML}X{)hIu*?5KeV4o;$-&5Ijz&O+G`B;=Mky88(8mU~UwZ8}d0Catp z$u(tR^>%8YQy1JOMEnk)s-x4K#3#Wy>_~i<_L2x=sH|^xqUNBF3kTsP5n3$kba*|WVzBc3lsbKKaiXX<|tg8o)-b?_Mg diff --git a/dev/quickstart/index.html b/dev/quickstart/index.html index ebdf222..f0e05ed 100644 --- a/dev/quickstart/index.html +++ b/dev/quickstart/index.html @@ -25,57 +25,57 @@ \in \mathbb{R}^{10 \times 20 \times 30} .\]

We can extract each of these as follows:

julia> M.λ    # to write `λ`, type `\lambda` then hit tab
1-element Vector{Float64}:
- 76.75927031034217
julia> M.U[1]
10×1 Matrix{Float64}:
- 0.310752978963137
- 0.32713262426667544
- 0.30932462812712513
- 0.30274349165829867
- 0.33986647535969816
- 0.32882154547389214
- 0.31213408424338657
- 0.31144809948788715
- 0.31523415747059946
- 0.302734992691554
julia> M.U[2]
20×1 Matrix{Float64}:
- 0.2238551407772389
- 0.23354127655632634
- 0.2156721660010898
- 0.19522286521159765
- 0.2223622245996549
- 0.2259487574302278
- 0.2595416510762822
- 0.24785872743221574
- 0.21850883838244434
- 0.20299280989988994
- 0.237979808167972
- 0.19872732240562474
- 0.22684198904922712
- 0.2045269514882981
- 0.2271935537952971
- 0.20129934439021221
- 0.20646119387582373
- 0.22478792948719384
- 0.24352794347512716
- 0.24178410514689877
julia> M.U[3]
30×1 Matrix{Float64}:
- 0.1926559467631958
- 0.160487683648533
- 0.1721606158946293
- 0.19473858154214607
- 0.2139981802096812
- 0.18229717337081208
- 0.17490495697227865
- 0.1715476796347638
- 0.18975384715247073
- 0.16749265635376887
+ 77.5914989341606
julia> M.U[1]
10×1 Matrix{Float64}:
+ 0.3122197388481964
+ 0.3335235863064808
+ 0.31810420486035285
+ 0.3365764844808427
+ 0.3176770983688962
+ 0.328633959280547
+ 0.3081066795350176
+ 0.3078652309331893
+ 0.300605011257006
+ 0.29633379792145365
julia> M.U[2]
20×1 Matrix{Float64}:
+ 0.22624340535318735
+ 0.23386183292306872
+ 0.19863441208473548
+ 0.22864473431239174
+ 0.19891951283947462
+ 0.22341691279807133
+ 0.24375878470576728
+ 0.20264100160339846
+ 0.22568581525014605
+ 0.22036057686545937
+ 0.22761998079126
+ 0.2191099332406937
+ 0.25061867409760147
+ 0.2207186268571073
+ 0.22996059575908698
+ 0.22320768598927948
+ 0.21434433685656026
+ 0.23783723994975345
+ 0.21370076938397853
+ 0.22517054806813
julia> M.U[3]
30×1 Matrix{Float64}:
+ 0.1775386288721477
+ 0.1749026179348701
+ 0.18813579690583387
+ 0.17811537709685396
+ 0.18619603580981378
+ 0.18004976279903548
+ 0.16521007850557676
+ 0.1672560766611624
+ 0.18433031853112303
+ 0.18764761737633992
 ⋮
- 0.19373885077464628
- 0.18549244834742695
- 0.17325820712032763
- 0.1745570907602657
- 0.19864427622623881
- 0.1935246803658731
- 0.17431316825565052
- 0.1814216094326709
- 0.17093417688289494
Let's check how close the factor matrices $U_1$, $U_2$, and $U_3$ (which were estimated from the noisy data) are to the true signal components $\mathbf{1}_{10}$, $\mathbf{1}_{20}$, and $\mathbf{1}_{30}$. We use the angle between the vectors since the scale of each factor matrix isn't meaningful on its own (read Overview to learn more):

julia> using LinearAlgebra: normalize
julia> vecangle(u, v) = acos(normalize(u)'*normalize(v)) # angle between vectors in radians
vecangle (generic function with 1 method)
julia> vecangle(M.U[1][:,1], ones(10))
0.03631185154070675
julia> vecangle(M.U[2][:,1], ones(20))
0.0777240675497008
julia> vecangle(M.U[3][:,1], ones(30))
0.06544675200217101

The decomposition does a pretty good job of extracting the signal from the noise!

The power of Generalized CP Decomposition is that we can fit CP decompositions to data using different losses (i.e., different notions of fit). By default, gcp uses the (conventional) least-squares loss, but we can easily try another! For example, to try non-negative least-squares simply run

julia> M_nonneg = gcp(X, 1; loss = GCPLosses.NonnegativeLeastSquaresLoss())
10×20×30 CPD{Float64, 3, Vector{Float64}, Matrix{Float64}} with 1 component
+ 0.19265643067880456
+ 0.1902947207229845
+ 0.20176035239985882
+ 0.17154959335559222
+ 0.18220159766937652
+ 0.17173217640579802
+ 0.16764161654530416
+ 0.20738347870207136
+ 0.18493503819719267

Let's check how close the factor matrices $U_1$, $U_2$, and $U_3$ (which were estimated from the noisy data) are to the true signal components $\mathbf{1}_{10}$, $\mathbf{1}_{20}$, and $\mathbf{1}_{30}$. We use the angle between the vectors since the scale of each factor matrix isn't meaningful on its own (read Overview to learn more):

julia> using LinearAlgebra: normalize
julia> vecangle(u, v) = acos(normalize(u)'*normalize(v)) # angle between vectors in radians
vecangle (generic function with 1 method)
julia> vecangle(M.U[1][:,1], ones(10))
0.04080160115157532
julia> vecangle(M.U[2][:,1], ones(20))
0.05861605002045093
julia> vecangle(M.U[3][:,1], ones(30))
0.06383084883004053

The decomposition does a pretty good job of extracting the signal from the noise!

The power of Generalized CP Decomposition is that we can fit CP decompositions to data using different losses (i.e., different notions of fit). By default, gcp uses the (conventional) least-squares loss, but we can easily try another! For example, to try non-negative least-squares simply run

julia> M_nonneg = gcp(X, 1; loss = GCPLosses.NonnegativeLeastSquares())
10×20×30 CPD{Float64, 3, Vector{Float64}, Matrix{Float64}} with 1 component
 λ weights:
 1-element Vector{Float64}: …
 U[1] factor matrix:
@@ -83,4 +83,4 @@
 U[2] factor matrix:
 20×1 Matrix{Float64}: …
 U[3] factor matrix:
-30×1 Matrix{Float64}: …
Congratulations!

Congratulations! You have successfully installed GCPDecompositions and run some Generalized CP decompositions!

Next steps

Ready to learn more?

  • If you are new to tensor decompositions (or to GCP decompositions in particular), check out the Overview page in the manual. Also check out some of the demos!
  • To learn about all the different loss functions you can use or about how to add your own, check out the Loss functions page in the manual.
  • To learn about different constraints you can add, check out the Constraints page in the manual.
  • To learn about different algorithms you can choose, check out the Algorithms page in the manual.

Want to understand the internals and possibly contribute? Check out the developer docs.

+30×1 Matrix{Float64}: …
Congratulations!

Congratulations! You have successfully installed GCPDecompositions and run some Generalized CP decompositions!

Next steps

Ready to learn more?

  • If you are new to tensor decompositions (or to GCP decompositions in particular), check out the Overview page in the manual. Also check out some of the demos!
  • To learn about all the different loss functions you can use or about how to add your own, check out the Loss functions page in the manual.
  • To learn about different constraints you can add, check out the Constraints page in the manual.
  • To learn about different algorithms you can choose, check out the Algorithms page in the manual.

Want to understand the internals and possibly contribute? Check out the developer docs.

diff --git a/dev/search_index.js b/dev/search_index.js index 44cb683..c9c7fec 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"man/main/#Overview","page":"Overview","title":"Overview","text":"","category":"section"},{"location":"man/main/","page":"Overview","title":"Overview","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"man/main/","page":"Overview","title":"Overview","text":"GCPDecompositions\ngcp\nCPD\nncomponents\nGCPDecompositions.default_constraints\nGCPDecompositions.default_algorithm\nGCPDecompositions.default_init","category":"page"},{"location":"man/main/#GCPDecompositions","page":"Overview","title":"GCPDecompositions","text":"Generalized CP Decomposition module. Provides approximate CP tensor decomposition with respect to general losses.\n\n\n\n\n\n","category":"module"},{"location":"man/main/#GCPDecompositions.gcp","page":"Overview","title":"GCPDecompositions.gcp","text":"gcp(X::Array, r;\n loss = GCPLosses.LeastSquaresLoss(),\n constraints = default_constraints(loss),\n algorithm = default_algorithm(X, r, loss, constraints),\n init = default_init(X, r, loss, constraints, algorithm))\n\nCompute an approximate rank-r CP decomposition of the tensor X with respect to the loss function loss and return a CPD object.\n\nKeyword arguments:\n\nconstraints : a Tuple of constraints on the factor matrices U = (U[1],...,U[N]).\nalgorithm : algorithm to use\n\nConventional CP corresponds to the default GCPLosses.LeastSquaresLoss() loss with the default of no constraints (i.e., constraints = ()).\n\nIf the LossFunctions.jl package is also loaded, loss can also be a loss function from that package. Check GCPDecompositions.LossFunctionsExt.SupportedLosses to see what losses are supported.\n\nSee also: CPD, GCPLosses, GCPConstraints, GCPAlgorithms.\n\n\n\n\n\n","category":"function"},{"location":"man/main/#GCPDecompositions.CPD","page":"Overview","title":"GCPDecompositions.CPD","text":"CPD\n\nTensor decomposition type for the canonical polyadic decompositions (CPD) of a tensor (i.e., a multi-dimensional array) A. 
This is the return type of gcp(_), the corresponding tensor decomposition function.\n\nIf M::CPD is the decomposition object, the weights λ and the factor matrices U = (U[1],...,U[N]) can be obtained via M.λ and M.U, such that A = Σ_j λ[j] U[1][:,j] ∘ ⋯ ∘ U[N][:,j].\n\n\n\n\n\n","category":"type"},{"location":"man/main/#GCPDecompositions.ncomponents","page":"Overview","title":"GCPDecompositions.ncomponents","text":"ncomponents(M::CPD)\n\nReturn the number of components in M.\n\nSee also: ndims, size.\n\n\n\n\n\n","category":"function"},{"location":"man/main/#GCPDecompositions.default_constraints","page":"Overview","title":"GCPDecompositions.default_constraints","text":"default_constraints(loss)\n\nReturn a default tuple of constraints for the loss function loss.\n\nSee also: gcp.\n\n\n\n\n\n","category":"function"},{"location":"man/main/#GCPDecompositions.default_algorithm","page":"Overview","title":"GCPDecompositions.default_algorithm","text":"default_algorithm(X, r, loss, constraints)\n\nReturn a default algorithm for the data tensor X, rank r, loss function loss, and tuple of constraints constraints.\n\nSee also: gcp.\n\n\n\n\n\n","category":"function"},{"location":"man/main/#GCPDecompositions.default_init","page":"Overview","title":"GCPDecompositions.default_init","text":"default_init([rng=default_rng()], X, r, loss, constraints, algorithm)\n\nReturn a default initialization for the data tensor X, rank r, loss function loss, tuple of constraints constraints, and algorithm algorithm, using the random number generator rng if needed.\n\nSee also: gcp.\n\n\n\n\n\n","category":"function"},{"location":"dev/private/#Private-functions","page":"Private functions","title":"Private functions","text":"","category":"section"},{"location":"dev/private/","page":"Private functions","title":"Private functions","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"dev/private/","page":"Private functions","title":"Private functions","text":"GCPAlgorithms._gcp\nGCPDecompositions.TensorKernels._checked_khatrirao_dims\nGCPDecompositions.TensorKernels._checked_mttkrp_dims\nGCPDecompositions.TensorKernels._checked_mttkrps_dims","category":"page"},{"location":"dev/private/#GCPDecompositions.GCPAlgorithms._gcp","page":"Private functions","title":"GCPDecompositions.GCPAlgorithms._gcp","text":"_gcp(X, r, loss, constraints, algorithm)\n\nInternal function to compute an approximate rank-r CP decomposition of the tensor X with respect to the loss function loss and the constraints constraints using the algorithm algorithm, returning a CPD object.\n\n\n\n\n\n","category":"function"},{"location":"dev/private/#GCPDecompositions.TensorKernels._checked_khatrirao_dims","page":"Private functions","title":"GCPDecompositions.TensorKernels._checked_khatrirao_dims","text":"_checked_khatrirao_dims(A1, A2, ...)\n\nCheck that A1, A2, etc. have compatible dimensions for the Khatri-Rao product. If so, return a tuple of the number of rows and the shared number of columns. If not, throw an error.\n\n\n\n\n\n","category":"function"},{"location":"dev/private/#GCPDecompositions.TensorKernels._checked_mttkrp_dims","page":"Private functions","title":"GCPDecompositions.TensorKernels._checked_mttkrp_dims","text":"_checked_mttkrp_dims(X, (U1, U2, ..., UN), n)\n\nCheck that X and U have compatible dimensions for the mode-n MTTKRP. If so, return a tuple of the number of rows and the shared number of columns for the Khatri-Rao product. 
If not, throw an error.\n\n\n\n\n\n","category":"function"},{"location":"dev/private/#GCPDecompositions.TensorKernels._checked_mttkrps_dims","page":"Private functions","title":"GCPDecompositions.TensorKernels._checked_mttkrps_dims","text":"_checked_mttkrps_dims(X, (U1, U2, ..., UN))\n\nCheck that X and U have compatible dimensions for the mode-n MTTKRP. If so, return a tuple of the number of rows and the shared number of columns for the Khatri-Rao product. If not, throw an error.\n\n\n\n\n\n","category":"function"},{"location":"quickstart/#Quick-start-guide","page":"Quick start guide","title":"Quick start guide","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Let's install GCPDecompositions and run our first Generalized CP Tensor Decomposition!","category":"page"},{"location":"quickstart/#Step-1:-Install-Julia","page":"Quick start guide","title":"Step 1: Install Julia","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Go to https://julialang.org/downloads and install the current stable release.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"To check your installation, open up Julia and try a simple calculation like 1+1:","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"1 + 1","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"More info: https://docs.julialang.org/en/v1/manual/getting-started/","category":"page"},{"location":"quickstart/#Step-2:-Install-GCPDecompositions","page":"Quick start guide","title":"Step 2: Install GCPDecompositions","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"GCPDecompositions can be installed using Julia's excellent builtin package manager.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"julia> import Pkg; Pkg.add(\"GCPDecompositions\")","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"This downloads, installs, and precompiles GCPDecompositions (and all its dependencies). Don't worry if it takes a few minutes to complete.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"tip: Tip: Interactive package management with the Pkg REPL mode\nHere we used the functional API for the builtin package manager. Pkg also has a very nice interactive interface (called the Pkg REPL) that is built right into the Julia REPL!Learn more here: https://pkgdocs.julialang.org/v1/getting-started/","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"tip: Tip: Pkg environments\nThe package manager has excellent support for creating separate installation environments. We strongly recommend using environments to create isolated and reproducible setups.Learn more here: https://pkgdocs.julialang.org/v1/environments/","category":"page"},{"location":"quickstart/#Step-3:-Run-GCPDecompositions","page":"Quick start guide","title":"Step 3: Run GCPDecompositions","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Let's create a simple three-way (a.k.a. 
order-three) data tensor that has a rank-one signal plus noise!","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"dims = (10, 20, 30)\nX = ones(dims) + randn(dims); # semicolon suppresses the output","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Mathematically, this data tensor can be written as follows:","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"X\n=\nunderbrace\n mathbf1_10 circ mathbf1_20 circ mathbf1_30\n_textrank-one signal\n+\nN\nin\nmathbbR^10 times 20 times 30\n","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"where mathbf1_n in mathbbR^n denotes a vector of n ones, circ denotes the outer product, and N_ijk oversetiidsim mathcalN(01).","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Now, to get a rank r=1 GCP decomposition simply load the package and run gcp.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"using GCPDecompositions\nr = 1 # desired rank\nM = gcp(X, r)","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"This returns a CPD (short for CP Decomposition) with weights lambda and factor matrices U_1, U_2, and U_3. Mathematically, this is the following decomposition (read Overview to learn more):","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"M\n=\nsum_i=1^r\nlambdai\ncdot\nU_1i circ U_2i circ U_3i\nin\nmathbbR^10 times 20 times 30\n","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"We can extract each of these as follows:","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"M.λ # to write `λ`, type `\\lambda` then hit tab\nM.U[1]\nM.U[2]\nM.U[3]","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Let's check how close the factor matrices U_1, U_2, and U_3 (which were estimated from the noisy data) are to the true signal components mathbf1_10, mathbf1_20, and mathbf1_30. We use the angle between the vectors since the scale of each factor matrix isn't meaningful on its own (read Overview to learn more):","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"using LinearAlgebra: normalize\nvecangle(u, v) = acos(normalize(u)'*normalize(v)) # angle between vectors in radians\nvecangle(M.U[1][:,1], ones(10))\nvecangle(M.U[2][:,1], ones(20))\nvecangle(M.U[3][:,1], ones(30))","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"The decomposition does a pretty good job of extracting the signal from the noise!","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"The power of Generalized CP Decomposition is that we can fit CP decompositions to data using different losses (i.e., different notions of fit). By default, gcp uses the (conventional) least-squares loss, but we can easily try another! 
For example, to try non-negative least-squares simply run","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"M_nonneg = gcp(X, 1; loss = GCPLosses.NonnegativeLeastSquaresLoss())","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"tip: Congratulations!\nCongratulations! You have successfully installed GCPDecompositions and run some Generalized CP decompositions!","category":"page"},{"location":"quickstart/#Next-steps","page":"Quick start guide","title":"Next steps","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Ready to learn more?","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"If you are new to tensor decompositions (or to GCP decompositions in particular), check out the Overview page in the manual. Also check out some of the demos!\nTo learn about all the different loss functions you can use or about how to add your own, check out the Loss functions page in the manual.\nTo learn about different constraints you can add, check out the Constraints page in the manual.\nTo learn about different algorithms you can choose, check out the Algorithms page in the manual.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Want to understand the internals and possibly contribute? Check out the developer docs.","category":"page"},{"location":"dev/kernels/#Tensor-Kernels","page":"Tensor Kernels","title":"Tensor Kernels","text":"","category":"section"},{"location":"dev/kernels/","page":"Tensor Kernels","title":"Tensor Kernels","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"dev/kernels/","page":"Tensor Kernels","title":"Tensor Kernels","text":"GCPDecompositions.TensorKernels\nGCPDecompositions.TensorKernels.khatrirao\nGCPDecompositions.TensorKernels.khatrirao!\nGCPDecompositions.TensorKernels.mttkrp\nGCPDecompositions.TensorKernels.mttkrp!\nGCPDecompositions.TensorKernels.create_mttkrp_buffer\nGCPDecompositions.TensorKernels.mttkrps\nGCPDecompositions.TensorKernels.mttkrps!","category":"page"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels","text":"Tensor kernels for Generalized CP Decomposition.\n\n\n\n\n\n","category":"module"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.khatrirao","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.khatrirao","text":"khatrirao(A1, A2, ...)\n\nCompute the Khatri-Rao product (i.e., the column-wise Kronecker product) of the matrices A1, A2, etc.\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.khatrirao!","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.khatrirao!","text":"khatrirao!(K, A1, A2, ...)\n\nCompute the Khatri-Rao product (i.e., the column-wise Kronecker product) of the matrices A1, A2, etc. 
and store the result in K.\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.mttkrp","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.mttkrp","text":"mttkrp(X, (U1, U2, ..., UN), n)\n\nCompute the Matricized Tensor Times Khatri-Rao Product (MTTKRP) of an N-way tensor X with the matrices U1, U2, ..., UN along mode n.\n\nSee also: mttkrp!\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.mttkrp!","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.mttkrp!","text":"mttkrp!(G, X, (U1, U2, ..., UN), n, buffer=create_mttkrp_buffer(X, U, n))\n\nCompute the Matricized Tensor Times Khatri-Rao Product (MTTKRP) of an N-way tensor X with the matrices U1, U2, ..., UN along mode n and store the result in G.\n\nOptionally, provide a buffer for intermediate calculations. Always use create_mttkrp_buffer to make the buffer; the internal details of buffer may change in the future and should not be relied upon.\n\nAlgorithm is based on Section III-B of the paper:\n\nFast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. DOI: 10.1109/TSP.2013.2269903\n\nSee also: mttkrp, create_mttkrp_buffer\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.create_mttkrp_buffer","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.create_mttkrp_buffer","text":"create_mttkrp_buffer(X, U, n)\n\nCreate buffer to hold intermediate calculations in mttkrp!.\n\nAlways use create_mttkrp_buffer to make a buffer for mttkrp!; the internal details of buffer may change in the future and should not be relied upon.\n\nSee also: mttkrp!\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.mttkrps","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.mttkrps","text":"mttkrps(X, (U1, U2, ..., UN))\n\nCompute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS) of an N-way tensor X with the matrices U1, U2, ..., UN.\n\nSee also: mttkrps!\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.mttkrps!","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.mttkrps!","text":"mttkrps!(G, X, (U1, U2, ..., UN))\n\nCompute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS) of an N-way tensor X with the matrices U1, U2, ..., UN and store the result in G.\n\nSee also: mttkrps\n\n\n\n\n\n","category":"function"},{"location":"man/algorithms/#Algorithms","page":"Algorithms","title":"Algorithms","text":"","category":"section"},{"location":"man/algorithms/","page":"Algorithms","title":"Algorithms","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. 
Check back later!","category":"page"},{"location":"man/algorithms/","page":"Algorithms","title":"Algorithms","text":"GCPAlgorithms\nGCPAlgorithms.AbstractAlgorithm","category":"page"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms","text":"Algorithms for Generalized CP Decomposition.\n\n\n\n\n\n","category":"module"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.AbstractAlgorithm","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.AbstractAlgorithm","text":"AbstractAlgorithm\n\nAbstract type for GCP algorithms.\n\nConcrete types ConcreteAlgorithm <: AbstractAlgorithm should implement _gcp(X, r, loss, constraints, algorithm::ConcreteAlgorithm) that returns a CPD.\n\n\n\n\n\n","category":"type"},{"location":"man/algorithms/","page":"Algorithms","title":"Algorithms","text":"Modules = [GCPAlgorithms]\nFilter = t -> t in subtypes(GCPAlgorithms.AbstractAlgorithm) || (t isa Function && t != GCPAlgorithms._gcp)","category":"page"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.ALS","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.ALS","text":"ALS\n\nAlternating Least Squares. Workhorse algorithm for LeastSquaresLoss with no constraints.\n\nAlgorithm parameters:\n\nmaxiters::Int : max number of iterations (default: 200)\n\n\n\n\n\n","category":"type"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.FastALS","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.FastALS","text":"FastALS\n\nFast Alternating Least Squares.\n\nEfficient ALS algorithm proposed in:\n\nFast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. 
DOI: 10.1109/TSP.2013.2269903\n\nAlgorithm parameters:\n\nmaxiters::Int : max number of iterations (default: 200)\n\n\n\n\n\n","category":"type"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.LBFGSB","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.LBFGSB","text":"LBFGSB\n\nLimited-memory BFGS with Box constraints.\n\nBrief description of algorithm parameters:\n\nm::Int : max number of variable metric corrections (default: 10)\nfactr::Float64 : function tolerance in units of machine epsilon (default: 1e7)\npgtol::Float64 : (projected) gradient tolerance (default: 1e-5)\nmaxfun::Int : max number of function evaluations (default: 15000)\nmaxiter::Int : max number of iterations (default: 15000)\niprint::Int : verbosity (default: -1)\niprint < 0 means no output\niprint = 0 prints only one line at the last iteration\n0 < iprint < 99 prints f and |proj g| every iprint iterations\niprint = 99 prints details of every iteration except n-vectors\niprint = 100 also prints the changes of active set and final x\niprint > 100 prints details of every iteration including x and g\n\nSee documentation of LBFGSB.jl for more details.\n\n\n\n\n\n","category":"type"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.FastALS_iter!-NTuple{6, Any}","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.FastALS_iter!","text":"FastALS_iter!(X, U, λ) \n\nAlgorithm for computing MTTKRP sequences is from \"Fast Alternating LS Algorithms\nfor High Order CANDECOMP/PARAFAC Tensor Factorizations\" by Phan et al., specifically\nsection III-C.\n\n\n\n\n\n","category":"method"},{"location":"man/constraints/#Constraints","page":"Constraints","title":"Constraints","text":"","category":"section"},{"location":"man/constraints/","page":"Constraints","title":"Constraints","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"man/constraints/","page":"Constraints","title":"Constraints","text":"GCPConstraints\nGCPConstraints.AbstractConstraint","category":"page"},{"location":"man/constraints/#GCPDecompositions.GCPConstraints","page":"Constraints","title":"GCPDecompositions.GCPConstraints","text":"Constraints for Generalized CP Decomposition.\n\n\n\n\n\n","category":"module"},{"location":"man/constraints/#GCPDecompositions.GCPConstraints.AbstractConstraint","page":"Constraints","title":"GCPDecompositions.GCPConstraints.AbstractConstraint","text":"AbstractConstraint\n\nAbstract type for GCP constraints on the factor matrices U = (U[1],...,U[N]).\n\n\n\n\n\n","category":"type"},{"location":"man/constraints/","page":"Constraints","title":"Constraints","text":"Modules = [GCPConstraints]\nFilter = t -> t in subtypes(GCPConstraints.AbstractConstraint)","category":"page"},{"location":"man/constraints/#GCPDecompositions.GCPConstraints.LowerBound","page":"Constraints","title":"GCPDecompositions.GCPConstraints.LowerBound","text":"LowerBound(value::Real)\n\nLower-bound constraint on the entries of the factor matrices U = (U[1],...,U[N]), i.e., U[i][j,k] >= value.\n\n\n\n\n\n","category":"type"},{"location":"demos/amino-acids/","page":"Amino Acids","title":"Amino Acids","text":"\n\n\n\n\n

Amino Acids

\"Download\"Author\"\"Created\"

\n\n\n

This demo considers a well-known dataset of fluorescence measurements for three amino acids.

Website: https://ucphchemometrics.com/2023/05/04/amino-acids-fluorescence-data/

Analogous tutorial in Tensor Toolbox: https://www.tensortoolbox.org/cp_als_doc.html

Relevant papers:

  1. Bro, R. (1998). Multi-way Analysis in the Food Industry: Models, Algorithms, and Applications. Ph.D. Thesis, University of Amsterdam (NL) & Royal Veterinary and Agricultural University (DK).

  2. Kiers, H.A.L. (1998) A three-step algorithm for Candecomp/Parafac analysis of large data sets with multicollinearity, Journal of Chemometrics, 12, 155-171.

\n\n
using CairoMakie, GCPDecompositions, LinearAlgebra
\n\n\n","category":"page"},{"location":"demos/amino-acids/#Load-data","page":"Amino Acids","title":"Load data","text":"","category":"section"},{"location":"demos/amino-acids/","page":"Amino Acids","title":"Amino Acids","text":"
\n

The following code downloads the data file, extracts the data, and caches it.

\n\n
using CacheVariables, MAT, ZipFile
\n\n\n
data = @cache \"amino-acids-cache/data.bson\" let\n    # Download file\n    url = \"https://ucphchemometrics.com/wp-content/uploads/2023/05/claus.zip\"\n    zipname = download(url, tempname(@__DIR__))\n\n    # Extract MAT file\n    zipfile = ZipFile.Reader(zipname)\n    matname = tempname(@__DIR__)\n    write(matname, only(zipfile.files))\n    close(zipfile)\n\n    # Extract data\n    data = matread(matname)\n\n    # Clean up and output data\n    rm(zipname)\n    rm(matname)\n    data\nend
\n
Dict{String, Any} with 6 entries:\n  \"evalme\" => \"delta=input(' How many nm above exc. would you like to cut off: ');id=on…\n  \"EmAx\"   => [250.0 251.0 … 449.0 450.0]\n  \"X\"      => [0.858 0.32 … 11.931 12.159; 1.312 1.742 … 2.376 1.716; … ; 1.243 -0.071 …\n  \"ExAx\"   => [240.0 241.0 … 299.0 300.0]\n  \"readme\" => Any[\"1.Column Trp in M\", \"2.Column Tyr in M\", \"3.Column Phe in M\"]\n  \"y\"      => [2.67e-6 0.0 0.0; 0.0 1.33e-5 0.0; … ; 1.58e-6 5.44e-6 0.000355; 8.79e-7 …
\n\n\n

The data tensor X is 5×201×61 and consists of fluorescence measurements across 5 samples, 201 emission wavelengths, and 61 excitation wavelengths.

\n\n
X = data[\"X\"]
\n
5×201×61 Array{Float64, 3}:\n[:, :, 1] =\n 0.858   0.32   0.863   0.301  -0.16   …  14.291  15.286  14.53   11.931  12.159\n 1.312   1.742  0.987  -0.166   0.928      2.33    2.23    1.975   2.376   1.716\n 1.248  -0.319  0.658   0.414  -0.518      2.989   2.938   2.739   2.782   3.264\n 1.243  -0.071  1.017  -0.253  -1.151     10.961   9.597   8.76    7.894   7.189\n 3.963   1.425  1.764   1.769   1.107      6.05    6.34    5.011   5.558   4.724\n\n[:, :, 2] =\n  1.868  1.01    1.904   1.198   0.559  …  14.359  14.087  13.352  13.123  11.265\n  1.659  1.703  -0.317   0.882   0.91       1.934   2.378   2.221   1.675   1.443\n -1.364  1.393   0.814  -0.744  -0.322      3.194   2.536   2.81    2.381   2.026\n -0.246  2.12    0.247   0.442   0.157      9.526   9.73    8.261   8.198   7.19\n  2.635  3.581   2.284   1.63    1.298      5.654   5.92    5.058   5.224   4.642\n\n[:, :, 3] =\n 3.017   2.368  2.014   0.323  -0.24   …  14.335  14.674  14.09   13.342  13.222\n 3.014   2.774  2.727   1.206   0.325      2.288   1.786   2.39    1.825   1.912\n 0.373  -0.129  0.432   0.103  -0.92       3.154   3.12    3.332   2.696   3.175\n 1.303   0.397  1.933  -1.355  -0.101      9.673   9.766   9.351   8.018   8.948\n 4.02    3.929  4.573   0.297   1.691      6.076   5.386   6.276   5.264   5.301\n\n;;; … \n\n[:, :, 59] =\n  0.067   0.035   0.102   0.011   0.074  0.111  …  7.214  7.337  6.813  7.07   6.23\n -0.05   -0.025   0.035  -0.093  -0.066  0.037     1.993  1.825  1.794  1.856  1.525\n -0.032  -0.008  -0.001  -0.086   0.004  0.078     0.899  1.093  0.974  0.943  0.918\n  0.077   0.106   0.1     0.018   0.005  0.103     4.614  4.481  4.553  4.214  4.206\n  0.071   0.096   0.101   0.013  -0.074  0.035     2.899  2.876  2.52   2.911  2.201\n\n[:, :, 60] =\n 0.104  0.111   0.071  0.236  0.107  0.117  …  6.268  6.174  6.293  5.325  5.7\n 0.0    0.051  -0.033  0.1    0.008  0.052     1.967  1.784  1.788  1.535  1.537\n 0.11   0.052  -0.01   0.025  0.102  0.051     1.077  1.168  1.103  0.826  0.823\n 0.035  0.051   0.07   0.037  0.101  0.121     3.767  3.935  4.128  3.616  3.337\n 0.103  0.118   0.068  0.103  0.104  0.051     2.428  2.578  2.333  2.64   2.194\n\n[:, :, 61] =\n 0.111  0.092  -0.037  0.05   0.037   0.058  …  5.26   4.682  4.518  4.678  4.52\n 0.034  0.033  -0.088  0.017  0.001  -0.008     1.767  1.847  1.494  1.439  1.522\n 0.091  0.032  -0.03   0.108  0.075  -0.007     1.029  1.047  0.805  0.7    0.775\n 0.001  0.093  -0.033  0.052  0.032  -0.007     3.476  3.523  3.151  3.204  2.728\n 0.035  0.101   0.022  0.12   0.102   0.06      1.869  2.142  1.901  1.782  1.993
\n\n\n

The emission and excitation wavelengths are:

\n\n
em_wave = dropdims(data[\"EmAx\"]; dims=1)
\n
201-element Vector{Float64}:\n 250.0\n 251.0\n 252.0\n 253.0\n 254.0\n 255.0\n 256.0\n   ⋮\n 445.0\n 446.0\n 447.0\n 448.0\n 449.0\n 450.0
\n\n
ex_wave = dropdims(data[\"ExAx\"]; dims=1)
\n
61-element Vector{Float64}:\n 240.0\n 241.0\n 242.0\n 243.0\n 244.0\n 245.0\n 246.0\n   ⋮\n 295.0\n 296.0\n 297.0\n 298.0\n 299.0\n 300.0
\n\n\n

Next, we plot the fluorescence landscape for each sample.

\n\n
with_theme() do\n    fig = Figure(; size=(800,500))\n\n    # Loop through samples\n    for i in 1:size(X,1)\n        ax = Axis3(fig[fldmod1(i, 3)...]; title=\"Sample $i\",\n            xlabel=\"Emission\\nWavelength\", xticks=250:100:450,\n            ylabel=\"Excitation\\nWavelength\", yticks=240:30:300,\n            zlabel=\"\"\n        )\n        surface!(ax, em_wave, ex_wave, X[i,:,:])\n    end\n\n    rowgap!(fig.layout, 40)\n    colgap!(fig.layout, 50)\n    resize_to_layout!(fig)\n    fig\nend
\n\n\n","category":"page"},{"location":"demos/amino-acids/#Run-CP-Decomposition","page":"Amino Acids","title":"Run CP Decomposition","text":"","category":"section"},{"location":"demos/amino-acids/","page":"Amino Acids","title":"Amino Acids","text":"
\n
\n\n\n

Conventional CP decomposition (i.e., with respect to the least-squares loss) can be computed using gcp with its default arguments.

\n\n
M = gcp(X, 3)
\n
5×201×61 CPD{Float64, 3, Vector{Float64}, Matrix{Float64}} with 3 components\nλ weights:\n3-element Vector{Float64}: …\nU[1] factor matrix:\n5×3 Matrix{Float64}: …\nU[2] factor matrix:\n201×3 Matrix{Float64}: …\nU[3] factor matrix:\n61×3 Matrix{Float64}: …
\n\n\n

Now, we plot the (normalized) factors.

\n\n
with_theme() do\n    fig = Figure()\n\n    # Plot factors (normalized by max)\n    for row in 1:ncomponents(M)\n        barplot(fig[row,1], 1:size(X,1), normalize(M.U[1][:,row], Inf))\n        lines(fig[row,2], em_wave, normalize(M.U[2][:,row], Inf))\n        lines(fig[row,3], ex_wave, normalize(M.U[3][:,row], Inf))\n    end\n\n    # Link and hide x axes\n    linkxaxes!(contents(fig[:,1])...)\n    linkxaxes!(contents(fig[:,2])...)\n    linkxaxes!(contents(fig[:,3])...)\n    hidexdecorations!.(contents(fig[1:2,:]); ticks=false, grid=false)\n\n    # Link and hide y axes\n    linkyaxes!(contents(fig.layout)...)\n    hideydecorations!.(contents(fig.layout); ticks=false, grid=false)\n\n    # Add labels\n    Label(fig[0,1], \"Samples\"; tellwidth=false, fontsize=20)\n    Label(fig[0,2], \"Emission\"; tellwidth=false, fontsize=20)\n    Label(fig[0,3], \"Excitation\"; tellwidth=false, fontsize=20)\n\n    fig\nend
\n\n
\n

Built with Julia 1.10.4 and

\nCacheVariables 0.1.4
\nCairoMakie 0.12.2
\nGCPDecompositions 0.1.2
\nMAT 0.10.7
\nZipFile 0.10.1\n
\n\n","category":"page"},{"location":"demos/amino-acids/","page":"Amino Acids","title":"Amino Acids","text":"EditURL = \"https://github.com/dahong67/GCPDecompositions.jl/blob/master/docs/src/demos/amino-acids.jl\"","category":"page"},{"location":"#GCPDecompositions:-Generalized-CP-Decompositions","page":"Home","title":"GCPDecompositions: Generalized CP Decompositions","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Documentation for GCPDecompositions.","category":"page"},{"location":"","page":"Home","title":"Home","text":"👋 This package provides research code and work is ongoing. If you are interested in using it in your own research, I'd love to hear from you and collaborate! Feel free to write: hong@udel.edu","category":"page"},{"location":"","page":"Home","title":"Home","text":"Please cite the following papers for this technique:","category":"page"},{"location":"","page":"Home","title":"Home","text":"David Hong, Tamara G. Kolda, Jed A. Duersch. \"Generalized Canonical Polyadic Tensor Decomposition\", SIAM Review 62:133-163, 2020. https://doi.org/10.1137/18M1203626 https://arxiv.org/abs/1808.07452Tamara G. Kolda, David Hong. \"Stochastic Gradients for Large-Scale Tensor Decomposition\", SIAM Journal on Mathematics of Data Science 2:1066-1095, 2020. https://doi.org/10.1137/19M1266265 https://arxiv.org/abs/1906.01687","category":"page"},{"location":"","page":"Home","title":"Home","text":"In BibTeX form:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@Article{hkd2020gcp,\n title = \"Generalized Canonical Polyadic Tensor Decomposition\",\n author = \"David Hong and Tamara G. Kolda and Jed A. Duersch\",\n journal = \"{SIAM} Review\",\n year = \"2020\",\n volume = \"62\",\n number = \"1\",\n pages = \"133--163\",\n DOI = \"10.1137/18M1203626\",\n}\n\n@Article{kh2020sgf,\n title = \"Stochastic Gradients for Large-Scale Tensor Decomposition\",\n author = \"Tamara G. Kolda and David Hong\",\n journal = \"{SIAM} Journal on Mathematics of Data Science\",\n year = \"2020\",\n volume = \"2\",\n number = \"4\",\n pages = \"1066--1095\",\n DOI = \"10.1137/19M1266265\",\n}","category":"page"},{"location":"demos/main/#Overview-of-demos","page":"Overview","title":"Overview of demos","text":"","category":"section"},{"location":"demos/main/","page":"Overview","title":"Overview","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"man/losses/#Loss-functions","page":"Loss functions","title":"Loss functions","text":"","category":"section"},{"location":"man/losses/","page":"Loss functions","title":"Loss functions","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. 
Check back later!","category":"page"},{"location":"man/losses/","page":"Loss functions","title":"Loss functions","text":"GCPLosses\nGCPLosses.AbstractLoss","category":"page"},{"location":"man/losses/#GCPDecompositions.GCPLosses","page":"Loss functions","title":"GCPDecompositions.GCPLosses","text":"Loss functions for Generalized CP Decomposition.\n\n\n\n\n\n","category":"module"},{"location":"man/losses/#GCPDecompositions.GCPLosses.AbstractLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.AbstractLoss","text":"AbstractLoss\n\nAbstract type for GCP loss functions f(xm), where x is the data entry and m is the model entry.\n\nConcrete types ConcreteLoss <: AbstractLoss should implement:\n\nvalue(loss::ConcreteLoss, x, m) that computes the value of the loss function f(xm)\nderiv(loss::ConcreteLoss, x, m) that computes the value of the partial derivative partial_m f(xm) with respect to m\ndomain(loss::ConcreteLoss) that returns an Interval from IntervalSets.jl defining the domain for m\n\n\n\n\n\n","category":"type"},{"location":"man/losses/","page":"Loss functions","title":"Loss functions","text":"Modules = [GCPLosses]\nFilter = t -> t in subtypes(GCPLosses.AbstractLoss)","category":"page"},{"location":"man/losses/#GCPDecompositions.GCPLosses.BernoulliLogitLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.BernoulliLogitLoss","text":"BernoulliLogitLoss(eps::Real = 1e-10)\n\nLoss corresponding to the statistical assumption of Bernouli data X with log odds-success rate given by the low-rank model tensor M\n\nDistribution: x_i sim operatornameBernouli(rho_i)\nLink function: m_i = log(fracrho_i1 - rho_i)\nLoss function: f(x m) = log(1 + e^m) - xm\nDomain: m in mathbbR\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.BernoulliOddsLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.BernoulliOddsLoss","text":"BernoulliOddsLoss(eps::Real = 1e-10)\n\nLoss corresponding to the statistical assumption of Bernouli data X with odds-sucess rate given by the low-rank model tensor M\n\nDistribution: x_i sim operatornameBernouli(rho_i)\nLink function: m_i = fracrho_i1 - rho_i\nLoss function: f(x m) = log(m + 1) - xlog(m + epsilon)\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.BetaDivergenceLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.BetaDivergenceLoss","text":"BetaDivergenceLoss(β::Real, eps::Real)\n\nBetaDivergence Loss for given β\n\nLoss function: f(x m β) = frac1betam^beta - frac1beta - 1xm^beta - 1 if beta in mathbbR 0 1 m - xlog(m) if beta = 1 fracxm + log(m) if beta = 0\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.GammaLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.GammaLoss","text":"GammaLoss(eps::Real = 1e-10)\n\nLoss corresponding to a statistical assumption of Gamma-distributed data X with scale given by the low-rank model tensor M.\n\nDistribution: x_i sim operatornameGamma(k sigma_i)\nLink function: m_i = k sigma_i\nLoss function: f(xm) = fracxm + epsilon + log(m + epsilon)\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.HuberLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.HuberLoss","text":"HuberLoss(Δ::Real)\n\nHuber Loss for given Δ\n\nLoss function: f(x m) = (x - m)^2 if abs(x - m)leqDelta 2Deltaabs(x - m) - Delta^2 otherwise\nDomain: m in 
mathbbR\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.LeastSquaresLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.LeastSquaresLoss","text":"LeastSquaresLoss()\n\nLoss corresponding to conventional CP decomposition. Corresponds to a statistical assumption of Gaussian data X with mean given by the low-rank model tensor M.\n\nDistribution: x_i sim mathcalN(mu_i sigma)\nLink function: m_i = mu_i\nLoss function: f(xm) = (x-m)^2\nDomain: m in mathbbR\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.NegativeBinomialOddsLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.NegativeBinomialOddsLoss","text":"NegativeBinomialOddsLoss(r::Integer, eps::Real = 1e-10)\n\nLoss corresponding to the statistical assumption of Negative Binomial data X with odds-failure rate given by the low-rank model tensor M\n\nDistribution: x_i sim operatornameNegativeBinomial(r rho_i)\nLink function: m = fracrho1 - rho\nLoss function: f(x m) = (r + x) log(1 + m) - xlog(m + epsilon)\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.NonnegativeLeastSquaresLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.NonnegativeLeastSquaresLoss","text":"NonnegativeLeastSquaresLoss()\n\nLoss corresponding to nonnegative CP decomposition. Corresponds to a statistical assumption of Gaussian data X with nonnegative mean given by the low-rank model tensor M.\n\nDistribution: x_i sim mathcalN(mu_i sigma)\nLink function: m_i = mu_i\nLoss function: f(xm) = (x-m)^2\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.PoissonLogLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.PoissonLogLoss","text":"PoissonLogLoss()\n\nLoss corresponding to a statistical assumption of Poisson data X with log-rate given by the low-rank model tensor M.\n\nDistribution: x_i sim operatornamePoisson(lambda_i)\nLink function: m_i = log lambda_i\nLoss function: f(xm) = e^m - x m\nDomain: m in mathbbR\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.PoissonLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.PoissonLoss","text":"PoissonLoss(eps::Real = 1e-10)\n\nLoss corresponding to a statistical assumption of Poisson data X with rate given by the low-rank model tensor M.\n\nDistribution: x_i sim operatornamePoisson(lambda_i)\nLink function: m_i = lambda_i\nLoss function: f(xm) = m - x log(m + epsilon)\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.RayleighLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.RayleighLoss","text":"RayleighLoss(eps::Real = 1e-10)\n\nLoss corresponding to the statistical assumption of Rayleigh data X with scale given by the low-rank model tensor M\n\nDistribution: x_i sim operatornameRayleigh(theta_i)\nLink function: m_i = sqrtfracpi2theta_i\nLoss function: f(x m) = 2log(m + epsilon) + fracpi4(fracxm + epsilon)^2\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.UserDefinedLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.UserDefinedLoss","text":"UserDefinedLoss\n\nType for user-defined loss functions f(xm), where x is the data entry and m is the model entry.\n\nContains three fields:\n\nfunc::Function : function that evaluates the loss function f(xm)\nderiv::Function : function that 
evaluates the partial derivative partial_m f(xm) with respect to m\ndomain::Interval : Interval from IntervalSets.jl defining the domain for m\n\nThe constructor is UserDefinedLoss(func; deriv, domain). If not provided,\n\nderiv is automatically computed from func using forward-mode automatic differentiation\ndomain gets a default value of Interval(-Inf, +Inf)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/","page":"Loss functions","title":"Loss functions","text":"GCPLosses.value\nGCPLosses.deriv\nGCPLosses.domain\nGCPLosses.objective\nGCPLosses.grad_U!","category":"page"},{"location":"man/losses/#GCPDecompositions.GCPLosses.value","page":"Loss functions","title":"GCPDecompositions.GCPLosses.value","text":"value(loss, x, m)\n\nCompute the value of the (entrywise) loss function loss for data entry x and model entry m.\n\n\n\n\n\n","category":"function"},{"location":"man/losses/#GCPDecompositions.GCPLosses.deriv","page":"Loss functions","title":"GCPDecompositions.GCPLosses.deriv","text":"deriv(loss, x, m)\n\nCompute the derivative of the (entrywise) loss function loss at the model entry m for the data entry x.\n\n\n\n\n\n","category":"function"},{"location":"man/losses/#GCPDecompositions.GCPLosses.domain","page":"Loss functions","title":"GCPDecompositions.GCPLosses.domain","text":"domain(loss)\n\nReturn the domain of the (entrywise) loss function loss.\n\n\n\n\n\n","category":"function"},{"location":"man/losses/#GCPDecompositions.GCPLosses.objective","page":"Loss functions","title":"GCPDecompositions.GCPLosses.objective","text":"objective(M::CPD, X::AbstractArray, loss)\n\nCompute the GCP objective function for the model tensor M, data tensor X, and loss function loss.\n\n\n\n\n\n","category":"function"},{"location":"man/losses/#GCPDecompositions.GCPLosses.grad_U!","page":"Loss functions","title":"GCPDecompositions.GCPLosses.grad_U!","text":"grad_U!(GU, M::CPD, X::AbstractArray, loss)\n\nCompute the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss, and store the result in GU = (GU[1],...,GU[N]).\n\n\n\n\n\n","category":"function"}] +[{"location":"man/main/#Overview","page":"Overview","title":"Overview","text":"","category":"section"},{"location":"man/main/","page":"Overview","title":"Overview","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"man/main/","page":"Overview","title":"Overview","text":"GCPDecompositions\ngcp\nCPD\nncomponents\nGCPDecompositions.default_constraints\nGCPDecompositions.default_algorithm\nGCPDecompositions.default_init","category":"page"},{"location":"man/main/#GCPDecompositions","page":"Overview","title":"GCPDecompositions","text":"Generalized CP Decomposition module. 
Provides approximate CP tensor decomposition with respect to general losses.\n\n\n\n\n\n","category":"module"},{"location":"man/main/#GCPDecompositions.gcp","page":"Overview","title":"GCPDecompositions.gcp","text":"gcp(X::Array, r;\n loss = GCPLosses.LeastSquares(),\n constraints = default_constraints(loss),\n algorithm = default_algorithm(X, r, loss, constraints),\n init = default_init(X, r, loss, constraints, algorithm))\n\nCompute an approximate rank-r CP decomposition of the tensor X with respect to the loss function loss and return a CPD object.\n\nKeyword arguments:\n\nconstraints : a Tuple of constraints on the factor matrices U = (U[1],...,U[N]).\nalgorithm : algorithm to use\n\nConventional CP corresponds to the default GCPLosses.LeastSquares() loss with the default of no constraints (i.e., constraints = ()).\n\nIf the LossFunctions.jl package is also loaded, loss can also be a loss function from that package. Check GCPDecompositions.LossFunctionsExt.SupportedLosses to see what losses are supported.\n\nSee also: CPD, GCPLosses, GCPConstraints, GCPAlgorithms.\n\n\n\n\n\n","category":"function"},{"location":"man/main/#GCPDecompositions.CPD","page":"Overview","title":"GCPDecompositions.CPD","text":"CPD\n\nTensor decomposition type for the canonical polyadic decompositions (CPD) of a tensor (i.e., a multi-dimensional array) A. This is the return type of gcp(_), the corresponding tensor decomposition function.\n\nIf M::CPD is the decomposition object, the weights λ and the factor matrices U = (U[1],...,U[N]) can be obtained via M.λ and M.U, such that A = Σ_j λ[j] U[1][:,j] ∘ ⋯ ∘ U[N][:,j].\n\n\n\n\n\n","category":"type"},{"location":"man/main/#GCPDecompositions.ncomponents","page":"Overview","title":"GCPDecompositions.ncomponents","text":"ncomponents(M::CPD)\n\nReturn the number of components in M.\n\nSee also: ndims, size.\n\n\n\n\n\n","category":"function"},{"location":"man/main/#GCPDecompositions.default_constraints","page":"Overview","title":"GCPDecompositions.default_constraints","text":"default_constraints(loss)\n\nReturn a default tuple of constraints for the loss function loss.\n\nSee also: gcp.\n\n\n\n\n\n","category":"function"},{"location":"man/main/#GCPDecompositions.default_algorithm","page":"Overview","title":"GCPDecompositions.default_algorithm","text":"default_algorithm(X, r, loss, constraints)\n\nReturn a default algorithm for the data tensor X, rank r, loss function loss, and tuple of constraints constraints.\n\nSee also: gcp.\n\n\n\n\n\n","category":"function"},{"location":"man/main/#GCPDecompositions.default_init","page":"Overview","title":"GCPDecompositions.default_init","text":"default_init([rng=default_rng()], X, r, loss, constraints, algorithm)\n\nReturn a default initialization for the data tensor X, rank r, loss function loss, tuple of constraints constraints, and algorithm algorithm, using the random number generator rng if needed.\n\nSee also: gcp.\n\n\n\n\n\n","category":"function"},{"location":"dev/private/#Private-functions","page":"Private functions","title":"Private functions","text":"","category":"section"},{"location":"dev/private/","page":"Private functions","title":"Private functions","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. 
Check back later!","category":"page"},{"location":"dev/private/","page":"Private functions","title":"Private functions","text":"GCPAlgorithms._gcp\nGCPDecompositions.TensorKernels._checked_khatrirao_dims\nGCPDecompositions.TensorKernels._checked_mttkrp_dims\nGCPDecompositions.TensorKernels._checked_mttkrps_dims","category":"page"},{"location":"dev/private/#GCPDecompositions.GCPAlgorithms._gcp","page":"Private functions","title":"GCPDecompositions.GCPAlgorithms._gcp","text":"_gcp(X, r, loss, constraints, algorithm)\n\nInternal function to compute an approximate rank-r CP decomposition of the tensor X with respect to the loss function loss and the constraints constraints using the algorithm algorithm, returning a CPD object.\n\n\n\n\n\n","category":"function"},{"location":"dev/private/#GCPDecompositions.TensorKernels._checked_khatrirao_dims","page":"Private functions","title":"GCPDecompositions.TensorKernels._checked_khatrirao_dims","text":"_checked_khatrirao_dims(A1, A2, ...)\n\nCheck that A1, A2, etc. have compatible dimensions for the Khatri-Rao product. If so, return a tuple of the number of rows and the shared number of columns. If not, throw an error.\n\n\n\n\n\n","category":"function"},{"location":"dev/private/#GCPDecompositions.TensorKernels._checked_mttkrp_dims","page":"Private functions","title":"GCPDecompositions.TensorKernels._checked_mttkrp_dims","text":"_checked_mttkrp_dims(X, (U1, U2, ..., UN), n)\n\nCheck that X and U have compatible dimensions for the mode-n MTTKRP. If so, return a tuple of the number of rows and the shared number of columns for the Khatri-Rao product. If not, throw an error.\n\n\n\n\n\n","category":"function"},{"location":"dev/private/#GCPDecompositions.TensorKernels._checked_mttkrps_dims","page":"Private functions","title":"GCPDecompositions.TensorKernels._checked_mttkrps_dims","text":"_checked_mttkrps_dims(X, (U1, U2, ..., UN))\n\nCheck that X and U have compatible dimensions for the mode-n MTTKRP. If so, return a tuple of the number of rows and the shared number of columns for the Khatri-Rao product. 
If not, throw an error.\n\n\n\n\n\n","category":"function"},{"location":"quickstart/#Quick-start-guide","page":"Quick start guide","title":"Quick start guide","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Let's install GCPDecompositions and run our first Generalized CP Tensor Decomposition!","category":"page"},{"location":"quickstart/#Step-1:-Install-Julia","page":"Quick start guide","title":"Step 1: Install Julia","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Go to https://julialang.org/downloads and install the current stable release.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"To check your installation, open up Julia and try a simple calculation like 1+1:","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"1 + 1","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"More info: https://docs.julialang.org/en/v1/manual/getting-started/","category":"page"},{"location":"quickstart/#Step-2:-Install-GCPDecompositions","page":"Quick start guide","title":"Step 2: Install GCPDecompositions","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"GCPDecompositions can be installed using Julia's excellent builtin package manager.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"julia> import Pkg; Pkg.add(\"GCPDecompositions\")","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"This downloads, installs, and precompiles GCPDecompositions (and all its dependencies). Don't worry if it takes a few minutes to complete.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"tip: Tip: Interactive package management with the Pkg REPL mode\nHere we used the functional API for the builtin package manager. Pkg also has a very nice interactive interface (called the Pkg REPL) that is built right into the Julia REPL!Learn more here: https://pkgdocs.julialang.org/v1/getting-started/","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"tip: Tip: Pkg environments\nThe package manager has excellent support for creating separate installation environments. We strongly recommend using environments to create isolated and reproducible setups.Learn more here: https://pkgdocs.julialang.org/v1/environments/","category":"page"},{"location":"quickstart/#Step-3:-Run-GCPDecompositions","page":"Quick start guide","title":"Step 3: Run GCPDecompositions","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Let's create a simple three-way (a.k.a. 
order-three) data tensor that has a rank-one signal plus noise!","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"dims = (10, 20, 30)\nX = ones(dims) + randn(dims); # semicolon suppresses the output","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Mathematically, this data tensor can be written as follows:","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"X\n=\nunderbrace\n mathbf1_10 circ mathbf1_20 circ mathbf1_30\n_textrank-one signal\n+\nN\nin\nmathbbR^10 times 20 times 30\n","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"where mathbf1_n in mathbbR^n denotes a vector of n ones, circ denotes the outer product, and N_ijk oversetiidsim mathcalN(01).","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Now, to get a rank r=1 GCP decomposition simply load the package and run gcp.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"using GCPDecompositions\nr = 1 # desired rank\nM = gcp(X, r)","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"This returns a CPD (short for CP Decomposition) with weights lambda and factor matrices U_1, U_2, and U_3. Mathematically, this is the following decomposition (read Overview to learn more):","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"M\n=\nsum_i=1^r\nlambdai\ncdot\nU_1i circ U_2i circ U_3i\nin\nmathbbR^10 times 20 times 30\n","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"We can extract each of these as follows:","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"M.λ # to write `λ`, type `\\lambda` then hit tab\nM.U[1]\nM.U[2]\nM.U[3]","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Let's check how close the factor matrices U_1, U_2, and U_3 (which were estimated from the noisy data) are to the true signal components mathbf1_10, mathbf1_20, and mathbf1_30. We use the angle between the vectors since the scale of each factor matrix isn't meaningful on its own (read Overview to learn more):","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"using LinearAlgebra: normalize\nvecangle(u, v) = acos(normalize(u)'*normalize(v)) # angle between vectors in radians\nvecangle(M.U[1][:,1], ones(10))\nvecangle(M.U[2][:,1], ones(20))\nvecangle(M.U[3][:,1], ones(30))","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"The decomposition does a pretty good job of extracting the signal from the noise!","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"The power of Generalized CP Decomposition is that we can fit CP decompositions to data using different losses (i.e., different notions of fit). By default, gcp uses the (conventional) least-squares loss, but we can easily try another! 
For example, to try non-negative least-squares simply run","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"M_nonneg = gcp(X, 1; loss = GCPLosses.NonnegativeLeastSquares())","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"tip: Congratulations!\nCongratulations! You have successfully installed GCPDecompositions and run some Generalized CP decompositions!","category":"page"},{"location":"quickstart/#Next-steps","page":"Quick start guide","title":"Next steps","text":"","category":"section"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Ready to learn more?","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"If you are new to tensor decompositions (or to GCP decompositions in particular), check out the Overview page in the manual. Also check out some of the demos!\nTo learn about all the different loss functions you can use or about how to add your own, check out the Loss functions page in the manual.\nTo learn about different constraints you can add, check out the Constraints page in the manual.\nTo learn about different algorithms you can choose, check out the Algorithms page in the manual.","category":"page"},{"location":"quickstart/","page":"Quick start guide","title":"Quick start guide","text":"Want to understand the internals and possibly contribute? Check out the developer docs.","category":"page"},{"location":"dev/kernels/#Tensor-Kernels","page":"Tensor Kernels","title":"Tensor Kernels","text":"","category":"section"},{"location":"dev/kernels/","page":"Tensor Kernels","title":"Tensor Kernels","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"dev/kernels/","page":"Tensor Kernels","title":"Tensor Kernels","text":"GCPDecompositions.TensorKernels\nGCPDecompositions.TensorKernels.khatrirao\nGCPDecompositions.TensorKernels.khatrirao!\nGCPDecompositions.TensorKernels.mttkrp\nGCPDecompositions.TensorKernels.mttkrp!\nGCPDecompositions.TensorKernels.create_mttkrp_buffer\nGCPDecompositions.TensorKernels.mttkrps\nGCPDecompositions.TensorKernels.mttkrps!","category":"page"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels","text":"Tensor kernels for Generalized CP Decomposition.\n\n\n\n\n\n","category":"module"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.khatrirao","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.khatrirao","text":"khatrirao(A1, A2, ...)\n\nCompute the Khatri-Rao product (i.e., the column-wise Kronecker product) of the matrices A1, A2, etc.\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.khatrirao!","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.khatrirao!","text":"khatrirao!(K, A1, A2, ...)\n\nCompute the Khatri-Rao product (i.e., the column-wise Kronecker product) of the matrices A1, A2, etc. 
and store the result in K.\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.mttkrp","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.mttkrp","text":"mttkrp(X, (U1, U2, ..., UN), n)\n\nCompute the Matricized Tensor Times Khatri-Rao Product (MTTKRP) of an N-way tensor X with the matrices U1, U2, ..., UN along mode n.\n\nSee also: mttkrp!\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.mttkrp!","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.mttkrp!","text":"mttkrp!(G, X, (U1, U2, ..., UN), n, buffer=create_mttkrp_buffer(X, U, n))\n\nCompute the Matricized Tensor Times Khatri-Rao Product (MTTKRP) of an N-way tensor X with the matrices U1, U2, ..., UN along mode n and store the result in G.\n\nOptionally, provide a buffer for intermediate calculations. Always use create_mttkrp_buffer to make the buffer; the internal details of buffer may change in the future and should not be relied upon.\n\nAlgorithm is based on Section III-B of the paper:\n\nFast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. DOI: 10.1109/TSP.2013.2269903\n\nSee also: mttkrp, create_mttkrp_buffer\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.create_mttkrp_buffer","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.create_mttkrp_buffer","text":"create_mttkrp_buffer(X, U, n)\n\nCreate buffer to hold intermediate calculations in mttkrp!.\n\nAlways use create_mttkrp_buffer to make a buffer for mttkrp!; the internal details of buffer may change in the future and should not be relied upon.\n\nSee also: mttkrp!\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.mttkrps","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.mttkrps","text":"mttkrps(X, (U1, U2, ..., UN))\n\nCompute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS) of an N-way tensor X with the matrices U1, U2, ..., UN.\n\nSee also: mttkrps!\n\n\n\n\n\n","category":"function"},{"location":"dev/kernels/#GCPDecompositions.TensorKernels.mttkrps!","page":"Tensor Kernels","title":"GCPDecompositions.TensorKernels.mttkrps!","text":"mttkrps!(G, X, (U1, U2, ..., UN))\n\nCompute the Matricized Tensor Times Khatri-Rao Product Sequence (MTTKRPS) of an N-way tensor X with the matrices U1, U2, ..., UN and store the result in G.\n\nSee also: mttkrps\n\n\n\n\n\n","category":"function"},{"location":"man/algorithms/#Algorithms","page":"Algorithms","title":"Algorithms","text":"","category":"section"},{"location":"man/algorithms/","page":"Algorithms","title":"Algorithms","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. 
Check back later!","category":"page"},{"location":"man/algorithms/","page":"Algorithms","title":"Algorithms","text":"GCPAlgorithms\nGCPAlgorithms.AbstractAlgorithm","category":"page"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms","text":"Algorithms for Generalized CP Decomposition.\n\n\n\n\n\n","category":"module"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.AbstractAlgorithm","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.AbstractAlgorithm","text":"AbstractAlgorithm\n\nAbstract type for GCP algorithms.\n\nConcrete types ConcreteAlgorithm <: AbstractAlgorithm should implement _gcp(X, r, loss, constraints, algorithm::ConcreteAlgorithm) that returns a CPD.\n\n\n\n\n\n","category":"type"},{"location":"man/algorithms/","page":"Algorithms","title":"Algorithms","text":"Modules = [GCPAlgorithms]\nFilter = t -> t in subtypes(GCPAlgorithms.AbstractAlgorithm) || (t isa Function && t != GCPAlgorithms._gcp)","category":"page"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.ALS","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.ALS","text":"ALS\n\nAlternating Least Squares. Workhorse algorithm for LeastSquares loss with no constraints.\n\nAlgorithm parameters:\n\nmaxiters::Int : max number of iterations (default: 200)\n\n\n\n\n\n","category":"type"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.FastALS","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.FastALS","text":"FastALS\n\nFast Alternating Least Squares.\n\nEfficient ALS algorithm proposed in:\n\nFast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. 
DOI: 10.1109/TSP.2013.2269903\n\nAlgorithm parameters:\n\nmaxiters::Int : max number of iterations (default: 200)\n\n\n\n\n\n","category":"type"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.LBFGSB","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.LBFGSB","text":"LBFGSB\n\nLimited-memory BFGS with Box constraints.\n\nBrief description of algorithm parameters:\n\nm::Int : max number of variable metric corrections (default: 10)\nfactr::Float64 : function tolerance in units of machine epsilon (default: 1e7)\npgtol::Float64 : (projected) gradient tolerance (default: 1e-5)\nmaxfun::Int : max number of function evaluations (default: 15000)\nmaxiter::Int : max number of iterations (default: 15000)\niprint::Int : verbosity (default: -1)\niprint < 0 means no output\niprint = 0 prints only one line at the last iteration\n0 < iprint < 99 prints f and |proj g| every iprint iterations\niprint = 99 prints details of every iteration except n-vectors\niprint = 100 also prints the changes of active set and final x\niprint > 100 prints details of every iteration including x and g\n\nSee documentation of LBFGSB.jl for more details.\n\n\n\n\n\n","category":"type"},{"location":"man/algorithms/#GCPDecompositions.GCPAlgorithms.FastALS_iter!-NTuple{6, Any}","page":"Algorithms","title":"GCPDecompositions.GCPAlgorithms.FastALS_iter!","text":"FastALS_iter!(X, U, λ) \n\nAlgorithm for computing MTTKRP sequences is from \"Fast Alternating LS Algorithms\nfor High Order CANDECOMP/PARAFAC Tensor Factorizations\" by Phan et al., specifically\nsection III-C.\n\n\n\n\n\n","category":"method"},{"location":"man/constraints/#Constraints","page":"Constraints","title":"Constraints","text":"","category":"section"},{"location":"man/constraints/","page":"Constraints","title":"Constraints","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"man/constraints/","page":"Constraints","title":"Constraints","text":"GCPConstraints\nGCPConstraints.AbstractConstraint","category":"page"},{"location":"man/constraints/#GCPDecompositions.GCPConstraints","page":"Constraints","title":"GCPDecompositions.GCPConstraints","text":"Constraints for Generalized CP Decomposition.\n\n\n\n\n\n","category":"module"},{"location":"man/constraints/#GCPDecompositions.GCPConstraints.AbstractConstraint","page":"Constraints","title":"GCPDecompositions.GCPConstraints.AbstractConstraint","text":"AbstractConstraint\n\nAbstract type for GCP constraints on the factor matrices U = (U[1],...,U[N]).\n\n\n\n\n\n","category":"type"},{"location":"man/constraints/","page":"Constraints","title":"Constraints","text":"Modules = [GCPConstraints]\nFilter = t -> t in subtypes(GCPConstraints.AbstractConstraint)","category":"page"},{"location":"man/constraints/#GCPDecompositions.GCPConstraints.LowerBound","page":"Constraints","title":"GCPDecompositions.GCPConstraints.LowerBound","text":"LowerBound(value::Real)\n\nLower-bound constraint on the entries of the factor matrices U = (U[1],...,U[N]), i.e., U[i][j,k] >= value.\n\n\n\n\n\n","category":"type"},{"location":"demos/amino-acids/","page":"Amino Acids","title":"Amino Acids","text":"\n\n\n\n\n

Amino Acids

\"Download\"Author\"\"Created\"

\n\n\n

This demo considers a well-known dataset of fluorescence measurements for three amino acids.

Website: https://ucphchemometrics.com/2023/05/04/amino-acids-fluorescence-data/

Analogous tutorial in Tensor Toolbox: https://www.tensortoolbox.org/cp_als_doc.html

Relevant papers:

  1. Bro, R. (1998). Multi-way Analysis in the Food Industry: Models, Algorithms, and Applications. Ph.D. Thesis, University of Amsterdam (NL) & Royal Veterinary and Agricultural University (DK).

  2. Kiers, H.A.L. (1998) A three-step algorithm for Candecomp/Parafac analysis of large data sets with multicollinearity, Journal of Chemometrics, 12, 155-171.

\n\n
using CairoMakie, GCPDecompositions, LinearAlgebra
\n\n\n","category":"page"},{"location":"demos/amino-acids/#Load-data","page":"Amino Acids","title":"Load data","text":"","category":"section"},{"location":"demos/amino-acids/","page":"Amino Acids","title":"Amino Acids","text":"
\n

The following code downloads the data file, extracts the data, and caches it.

\n\n
using CacheVariables, MAT, ZipFile
\n\n\n
data = @cache \"amino-acids-cache/data.bson\" let\n    # Download file\n    url = \"https://ucphchemometrics.com/wp-content/uploads/2023/05/claus.zip\"\n    zipname = download(url, tempname(@__DIR__))\n\n    # Extract MAT file\n    zipfile = ZipFile.Reader(zipname)\n    matname = tempname(@__DIR__)\n    write(matname, only(zipfile.files))\n    close(zipfile)\n\n    # Extract data\n    data = matread(matname)\n\n    # Clean up and output data\n    rm(zipname)\n    rm(matname)\n    data\nend
\n
Dict{String, Any} with 6 entries:\n  \"evalme\" => \"delta=input(' How many nm above exc. would you like to cut off: ');id=on…\n  \"EmAx\"   => [250.0 251.0 … 449.0 450.0]\n  \"X\"      => [0.858 0.32 … 11.931 12.159; 1.312 1.742 … 2.376 1.716; … ; 1.243 -0.071 …\n  \"ExAx\"   => [240.0 241.0 … 299.0 300.0]\n  \"readme\" => Any[\"1.Column Trp in M\", \"2.Column Tyr in M\", \"3.Column Phe in M\"]\n  \"y\"      => [2.67e-6 0.0 0.0; 0.0 1.33e-5 0.0; … ; 1.58e-6 5.44e-6 0.000355; 8.79e-7 …
\n\n\n

The data tensor X is 5×201×61 and consists of fluorescence measurements across 5 samples, 201 emission wavelengths, and 61 excitation wavelengths.

\n\n
X = data[\"X\"]
\n
5×201×61 Array{Float64, 3}:\n[:, :, 1] =\n 0.858   0.32   0.863   0.301  -0.16   …  14.291  15.286  14.53   11.931  12.159\n 1.312   1.742  0.987  -0.166   0.928      2.33    2.23    1.975   2.376   1.716\n 1.248  -0.319  0.658   0.414  -0.518      2.989   2.938   2.739   2.782   3.264\n 1.243  -0.071  1.017  -0.253  -1.151     10.961   9.597   8.76    7.894   7.189\n 3.963   1.425  1.764   1.769   1.107      6.05    6.34    5.011   5.558   4.724\n\n[:, :, 2] =\n  1.868  1.01    1.904   1.198   0.559  …  14.359  14.087  13.352  13.123  11.265\n  1.659  1.703  -0.317   0.882   0.91       1.934   2.378   2.221   1.675   1.443\n -1.364  1.393   0.814  -0.744  -0.322      3.194   2.536   2.81    2.381   2.026\n -0.246  2.12    0.247   0.442   0.157      9.526   9.73    8.261   8.198   7.19\n  2.635  3.581   2.284   1.63    1.298      5.654   5.92    5.058   5.224   4.642\n\n[:, :, 3] =\n 3.017   2.368  2.014   0.323  -0.24   …  14.335  14.674  14.09   13.342  13.222\n 3.014   2.774  2.727   1.206   0.325      2.288   1.786   2.39    1.825   1.912\n 0.373  -0.129  0.432   0.103  -0.92       3.154   3.12    3.332   2.696   3.175\n 1.303   0.397  1.933  -1.355  -0.101      9.673   9.766   9.351   8.018   8.948\n 4.02    3.929  4.573   0.297   1.691      6.076   5.386   6.276   5.264   5.301\n\n;;; … \n\n[:, :, 59] =\n  0.067   0.035   0.102   0.011   0.074  0.111  …  7.214  7.337  6.813  7.07   6.23\n -0.05   -0.025   0.035  -0.093  -0.066  0.037     1.993  1.825  1.794  1.856  1.525\n -0.032  -0.008  -0.001  -0.086   0.004  0.078     0.899  1.093  0.974  0.943  0.918\n  0.077   0.106   0.1     0.018   0.005  0.103     4.614  4.481  4.553  4.214  4.206\n  0.071   0.096   0.101   0.013  -0.074  0.035     2.899  2.876  2.52   2.911  2.201\n\n[:, :, 60] =\n 0.104  0.111   0.071  0.236  0.107  0.117  …  6.268  6.174  6.293  5.325  5.7\n 0.0    0.051  -0.033  0.1    0.008  0.052     1.967  1.784  1.788  1.535  1.537\n 0.11   0.052  -0.01   0.025  0.102  0.051     1.077  1.168  1.103  0.826  0.823\n 0.035  0.051   0.07   0.037  0.101  0.121     3.767  3.935  4.128  3.616  3.337\n 0.103  0.118   0.068  0.103  0.104  0.051     2.428  2.578  2.333  2.64   2.194\n\n[:, :, 61] =\n 0.111  0.092  -0.037  0.05   0.037   0.058  …  5.26   4.682  4.518  4.678  4.52\n 0.034  0.033  -0.088  0.017  0.001  -0.008     1.767  1.847  1.494  1.439  1.522\n 0.091  0.032  -0.03   0.108  0.075  -0.007     1.029  1.047  0.805  0.7    0.775\n 0.001  0.093  -0.033  0.052  0.032  -0.007     3.476  3.523  3.151  3.204  2.728\n 0.035  0.101   0.022  0.12   0.102   0.06      1.869  2.142  1.901  1.782  1.993
\n\n\n

The emission and excitation wavelengths are:

\n\n
em_wave = dropdims(data[\"EmAx\"]; dims=1)
\n
201-element Vector{Float64}:\n 250.0\n 251.0\n 252.0\n 253.0\n 254.0\n 255.0\n 256.0\n   ⋮\n 445.0\n 446.0\n 447.0\n 448.0\n 449.0\n 450.0
\n\n
ex_wave = dropdims(data[\"ExAx\"]; dims=1)
\n
61-element Vector{Float64}:\n 240.0\n 241.0\n 242.0\n 243.0\n 244.0\n 245.0\n 246.0\n   ⋮\n 295.0\n 296.0\n 297.0\n 298.0\n 299.0\n 300.0
\n\n\n

Next, we plot the fluorescence landscape for each sample.

\n\n
with_theme() do\n    fig = Figure(; size=(800,500))\n\n    # Loop through samples\n    for i in 1:size(X,1)\n        ax = Axis3(fig[fldmod1(i, 3)...]; title=\"Sample $i\",\n            xlabel=\"Emission\\nWavelength\", xticks=250:100:450,\n            ylabel=\"Excitation\\nWavelength\", yticks=240:30:300,\n            zlabel=\"\"\n        )\n        surface!(ax, em_wave, ex_wave, X[i,:,:])\n    end\n\n    rowgap!(fig.layout, 40)\n    colgap!(fig.layout, 50)\n    resize_to_layout!(fig)\n    fig\nend
\n\n\n","category":"page"},{"location":"demos/amino-acids/#Run-CP-Decomposition","page":"Amino Acids","title":"Run CP Decomposition","text":"","category":"section"},{"location":"demos/amino-acids/","page":"Amino Acids","title":"Amino Acids","text":"
\n
\n\n\n

Conventional CP decomposition (i.e., with respect to the least-squares loss) can be computed using gcp with its default arguments.

\n\n
M = gcp(X, 3)
\n
5×201×61 CPD{Float64, 3, Vector{Float64}, Matrix{Float64}} with 3 components\nλ weights:\n3-element Vector{Float64}: …\nU[1] factor matrix:\n5×3 Matrix{Float64}: …\nU[2] factor matrix:\n201×3 Matrix{Float64}: …\nU[3] factor matrix:\n61×3 Matrix{Float64}: …
\n\n\n

Now, we plot the (normalized) factors.

\n\n
with_theme() do\n    fig = Figure()\n\n    # Plot factors (normalized by max)\n    for row in 1:ncomponents(M)\n        barplot(fig[row,1], 1:size(X,1), normalize(M.U[1][:,row], Inf))\n        lines(fig[row,2], em_wave, normalize(M.U[2][:,row], Inf))\n        lines(fig[row,3], ex_wave, normalize(M.U[3][:,row], Inf))\n    end\n\n    # Link and hide x axes\n    linkxaxes!(contents(fig[:,1])...)\n    linkxaxes!(contents(fig[:,2])...)\n    linkxaxes!(contents(fig[:,3])...)\n    hidexdecorations!.(contents(fig[1:2,:]); ticks=false, grid=false)\n\n    # Link and hide y axes\n    linkyaxes!(contents(fig.layout)...)\n    hideydecorations!.(contents(fig.layout); ticks=false, grid=false)\n\n    # Add labels\n    Label(fig[0,1], \"Samples\"; tellwidth=false, fontsize=20)\n    Label(fig[0,2], \"Emission\"; tellwidth=false, fontsize=20)\n    Label(fig[0,3], \"Excitation\"; tellwidth=false, fontsize=20)\n\n    fig\nend
\n\n
\n

Built with Julia 1.10.4 and

\nCacheVariables 0.1.4
\nCairoMakie 0.12.2
\nGCPDecompositions 0.1.2
\nMAT 0.10.7
\nZipFile 0.10.1\n
\n\n","category":"page"},{"location":"demos/amino-acids/","page":"Amino Acids","title":"Amino Acids","text":"EditURL = \"https://github.com/dahong67/GCPDecompositions.jl/blob/master/docs/src/demos/amino-acids.jl\"","category":"page"},{"location":"#GCPDecompositions:-Generalized-CP-Decompositions","page":"Home","title":"GCPDecompositions: Generalized CP Decompositions","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Documentation for GCPDecompositions.","category":"page"},{"location":"","page":"Home","title":"Home","text":"👋 This package provides research code and work is ongoing. If you are interested in using it in your own research, I'd love to hear from you and collaborate! Feel free to write: hong@udel.edu","category":"page"},{"location":"","page":"Home","title":"Home","text":"Please cite the following papers for this technique:","category":"page"},{"location":"","page":"Home","title":"Home","text":"David Hong, Tamara G. Kolda, Jed A. Duersch. \"Generalized Canonical Polyadic Tensor Decomposition\", SIAM Review 62:133-163, 2020. https://doi.org/10.1137/18M1203626 https://arxiv.org/abs/1808.07452Tamara G. Kolda, David Hong. \"Stochastic Gradients for Large-Scale Tensor Decomposition\", SIAM Journal on Mathematics of Data Science 2:1066-1095, 2020. https://doi.org/10.1137/19M1266265 https://arxiv.org/abs/1906.01687","category":"page"},{"location":"","page":"Home","title":"Home","text":"In BibTeX form:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@Article{hkd2020gcp,\n title = \"Generalized Canonical Polyadic Tensor Decomposition\",\n author = \"David Hong and Tamara G. Kolda and Jed A. Duersch\",\n journal = \"{SIAM} Review\",\n year = \"2020\",\n volume = \"62\",\n number = \"1\",\n pages = \"133--163\",\n DOI = \"10.1137/18M1203626\",\n}\n\n@Article{kh2020sgf,\n title = \"Stochastic Gradients for Large-Scale Tensor Decomposition\",\n author = \"Tamara G. Kolda and David Hong\",\n journal = \"{SIAM} Journal on Mathematics of Data Science\",\n year = \"2020\",\n volume = \"2\",\n number = \"4\",\n pages = \"1066--1095\",\n DOI = \"10.1137/19M1266265\",\n}","category":"page"},{"location":"demos/main/#Overview-of-demos","page":"Overview","title":"Overview of demos","text":"","category":"section"},{"location":"demos/main/","page":"Overview","title":"Overview","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. Check back later!","category":"page"},{"location":"man/losses/#Loss-functions","page":"Loss functions","title":"Loss functions","text":"","category":"section"},{"location":"man/losses/","page":"Loss functions","title":"Loss functions","text":"warning: Work-in-progress\nThis page of the docs is still a work-in-progress. 
Check back later!","category":"page"},{"location":"man/losses/","page":"Loss functions","title":"Loss functions","text":"GCPLosses\nGCPLosses.AbstractLoss","category":"page"},{"location":"man/losses/#GCPDecompositions.GCPLosses","page":"Loss functions","title":"GCPDecompositions.GCPLosses","text":"Loss functions for Generalized CP Decomposition.\n\n\n\n\n\n","category":"module"},{"location":"man/losses/#GCPDecompositions.GCPLosses.AbstractLoss","page":"Loss functions","title":"GCPDecompositions.GCPLosses.AbstractLoss","text":"AbstractLoss\n\nAbstract type for GCP loss functions f(xm), where x is the data entry and m is the model entry.\n\nConcrete types ConcreteLoss <: AbstractLoss should implement:\n\nvalue(loss::ConcreteLoss, x, m) that computes the value of the loss function f(xm)\nderiv(loss::ConcreteLoss, x, m) that computes the value of the partial derivative partial_m f(xm) with respect to m\ndomain(loss::ConcreteLoss) that returns an Interval from IntervalSets.jl defining the domain for m\n\n\n\n\n\n","category":"type"},{"location":"man/losses/","page":"Loss functions","title":"Loss functions","text":"Modules = [GCPLosses]\nFilter = t -> t in subtypes(GCPLosses.AbstractLoss)","category":"page"},{"location":"man/losses/#GCPDecompositions.GCPLosses.BernoulliLogit","page":"Loss functions","title":"GCPDecompositions.GCPLosses.BernoulliLogit","text":"BernoulliLogit(eps::Real = 1e-10)\n\nLoss corresponding to the statistical assumption of Bernouli data X with log odds-success rate given by the low-rank model tensor M\n\nDistribution: x_i sim operatornameBernouli(rho_i)\nLink function: m_i = log(fracrho_i1 - rho_i)\nLoss function: f(x m) = log(1 + e^m) - xm\nDomain: m in mathbbR\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.BernoulliOdds","page":"Loss functions","title":"GCPDecompositions.GCPLosses.BernoulliOdds","text":"BernoulliOdds(eps::Real = 1e-10)\n\nLoss corresponding to the statistical assumption of Bernouli data X with odds-sucess rate given by the low-rank model tensor M\n\nDistribution: x_i sim operatornameBernouli(rho_i)\nLink function: m_i = fracrho_i1 - rho_i\nLoss function: f(x m) = log(m + 1) - xlog(m + epsilon)\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.BetaDivergence","page":"Loss functions","title":"GCPDecompositions.GCPLosses.BetaDivergence","text":"BetaDivergence(β::Real, eps::Real)\n\nBetaDivergence Loss for given β\n\nLoss function: f(x m β) = frac1betam^beta - frac1beta - 1xm^beta - 1 if beta in mathbbR 0 1 m - xlog(m) if beta = 1 fracxm + log(m) if beta = 0\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.Gamma","page":"Loss functions","title":"GCPDecompositions.GCPLosses.Gamma","text":"Gamma(eps::Real = 1e-10)\n\nLoss corresponding to a statistical assumption of Gamma-distributed data X with scale given by the low-rank model tensor M.\n\nDistribution: x_i sim operatornameGamma(k sigma_i)\nLink function: m_i = k sigma_i\nLoss function: f(xm) = fracxm + epsilon + log(m + epsilon)\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/#GCPDecompositions.GCPLosses.Huber","page":"Loss functions","title":"GCPDecompositions.GCPLosses.Huber","text":"Huber(Δ::Real)\n\nHuber Loss for given Δ\n\nLoss function: f(x m) = (x - m)^2 if abs(x - m)leqDelta 2Deltaabs(x - m) - Delta^2 otherwise\nDomain: m in 
{"location":"man/losses/#GCPDecompositions.GCPLosses.LeastSquares","page":"Loss functions","title":"GCPDecompositions.GCPLosses.LeastSquares","text":"LeastSquares()\n\nLoss corresponding to conventional CP decomposition. Corresponds to a statistical assumption of Gaussian data X with mean given by the low-rank model tensor M.\n\nDistribution: x_i sim mathcalN(mu_i sigma)\nLink function: m_i = mu_i\nLoss function: f(xm) = (x-m)^2\nDomain: m in mathbbR\n\n\n\n\n\n","category":"type"},
{"location":"man/losses/#GCPDecompositions.GCPLosses.NegativeBinomialOdds","page":"Loss functions","title":"GCPDecompositions.GCPLosses.NegativeBinomialOdds","text":"NegativeBinomialOdds(r::Integer, eps::Real = 1e-10)\n\nLoss corresponding to the statistical assumption of Negative Binomial data X with odds of failure given by the low-rank model tensor M\n\nDistribution: x_i sim operatornameNegativeBinomial(r rho_i)\nLink function: m_i = fracrho_i1 - rho_i\nLoss function: f(x m) = (r + x) log(1 + m) - xlog(m + epsilon)\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},
{"location":"man/losses/#GCPDecompositions.GCPLosses.NonnegativeLeastSquares","page":"Loss functions","title":"GCPDecompositions.GCPLosses.NonnegativeLeastSquares","text":"NonnegativeLeastSquares()\n\nLoss corresponding to nonnegative CP decomposition. Corresponds to a statistical assumption of Gaussian data X with nonnegative mean given by the low-rank model tensor M.\n\nDistribution: x_i sim mathcalN(mu_i sigma)\nLink function: m_i = mu_i\nLoss function: f(xm) = (x-m)^2\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},
{"location":"man/losses/#GCPDecompositions.GCPLosses.Poisson","page":"Loss functions","title":"GCPDecompositions.GCPLosses.Poisson","text":"Poisson(eps::Real = 1e-10)\n\nLoss corresponding to a statistical assumption of Poisson data X with rate given by the low-rank model tensor M.\n\nDistribution: x_i sim operatornamePoisson(lambda_i)\nLink function: m_i = lambda_i\nLoss function: f(xm) = m - x log(m + epsilon)\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},
{"location":"man/losses/#GCPDecompositions.GCPLosses.PoissonLog","page":"Loss functions","title":"GCPDecompositions.GCPLosses.PoissonLog","text":"PoissonLog()\n\nLoss corresponding to a statistical assumption of Poisson data X with log-rate given by the low-rank model tensor M.\n\nDistribution: x_i sim operatornamePoisson(lambda_i)\nLink function: m_i = log lambda_i\nLoss function: f(xm) = e^m - x m\nDomain: m in mathbbR\n\n\n\n\n\n","category":"type"},
{"location":"man/losses/#GCPDecompositions.GCPLosses.Rayleigh","page":"Loss functions","title":"GCPDecompositions.GCPLosses.Rayleigh","text":"Rayleigh(eps::Real = 1e-10)\n\nLoss corresponding to the statistical assumption of Rayleigh data X with scale given by the low-rank model tensor M\n\nDistribution: x_i sim operatornameRayleigh(theta_i)\nLink function: m_i = sqrtfracpi2theta_i\nLoss function: f(x m) = 2log(m + epsilon) + fracpi4(fracxm + epsilon)^2\nDomain: m in 0 infty)\n\n\n\n\n\n","category":"type"},
{"location":"man/losses/#GCPDecompositions.GCPLosses.UserDefined","page":"Loss functions","title":"GCPDecompositions.GCPLosses.UserDefined","text":"UserDefined\n\nType for user-defined loss functions f(xm), where x is the data entry and m is the model entry.\n\nContains three fields:\n\nfunc::Function : function that evaluates the loss function f(xm)\nderiv::Function : function that evaluates the partial derivative partial_m f(xm) with respect to m\ndomain::Interval : 
Interval from IntervalSets.jl defining the domain for m\n\nThe constructor is UserDefined(func; deriv, domain). If not provided,\n\nderiv is automatically computed from func using forward-mode automatic differentiation\ndomain gets a default value of Interval(-Inf, +Inf)\n\n\n\n\n\n","category":"type"},{"location":"man/losses/","page":"Loss functions","title":"Loss functions","text":"GCPLosses.value\nGCPLosses.deriv\nGCPLosses.domain\nGCPLosses.objective\nGCPLosses.grad_U!","category":"page"},{"location":"man/losses/#GCPDecompositions.GCPLosses.value","page":"Loss functions","title":"GCPDecompositions.GCPLosses.value","text":"value(loss, x, m)\n\nCompute the value of the (entrywise) loss function loss for data entry x and model entry m.\n\n\n\n\n\n","category":"function"},{"location":"man/losses/#GCPDecompositions.GCPLosses.deriv","page":"Loss functions","title":"GCPDecompositions.GCPLosses.deriv","text":"deriv(loss, x, m)\n\nCompute the derivative of the (entrywise) loss function loss at the model entry m for the data entry x.\n\n\n\n\n\n","category":"function"},{"location":"man/losses/#GCPDecompositions.GCPLosses.domain","page":"Loss functions","title":"GCPDecompositions.GCPLosses.domain","text":"domain(loss)\n\nReturn the domain of the (entrywise) loss function loss.\n\n\n\n\n\n","category":"function"},{"location":"man/losses/#GCPDecompositions.GCPLosses.objective","page":"Loss functions","title":"GCPDecompositions.GCPLosses.objective","text":"objective(M::CPD, X::AbstractArray, loss)\n\nCompute the GCP objective function for the model tensor M, data tensor X, and loss function loss.\n\n\n\n\n\n","category":"function"},{"location":"man/losses/#GCPDecompositions.GCPLosses.grad_U!","page":"Loss functions","title":"GCPDecompositions.GCPLosses.grad_U!","text":"grad_U!(GU, M::CPD, X::AbstractArray, loss)\n\nCompute the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss, and store the result in GU = (GU[1],...,GU[N]).\n\n\n\n\n\n","category":"function"}] }
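For custom losses, the UserDefined docstring above specifies the constructor UserDefined(func; deriv, domain). Below is a minimal sketch, assuming only that documented interface; the squared-error func is purely illustrative:

```julia
using GCPDecompositions: GCPLosses

# Illustrative loss equivalent to LeastSquares: f(x, m) = (x - m)^2.
# `deriv` is omitted, so per the docstring it is computed from `func` by
# forward-mode automatic differentiation; `domain` defaults to Interval(-Inf, +Inf).
loss = GCPLosses.UserDefined((x, m) -> (x - m)^2)

GCPLosses.value(loss, 1.0, 0.5)    # (1.0 - 0.5)^2 = 0.25
GCPLosses.deriv(loss, 1.0, 0.5)    # -2 * (1.0 - 0.5) = -1.0, via AD
GCPLosses.domain(loss)             # Interval(-Inf, +Inf)
```

A loss built this way can then be passed anywhere an AbstractLoss is expected, e.g. to objective(M, X, loss) as documented above.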