| Type: | Package |
| Title: | Gaussian Process Fitting |
| Version: | 0.2.16 |
| Maintainer: | Collin Erickson <collinberickson@gmail.com> |
| Description: | Fits a Gaussian process model to data. Gaussian processes are commonly used in computer experiments to fit an interpolating model. The model is stored as an 'R6' object and can be easily updated with new data. There are options to run in parallel, and 'Rcpp' has been used to speed up calculations. For more info about Gaussian process software, see Erickson et al. (2018) <doi:10.1016/j.ejor.2017.10.002>. |
| License: | GPL-3 |
| LinkingTo: | Rcpp, RcppArmadillo |
| Imports: | ggplot2, Rcpp, R6, lbfgs |
| RoxygenNote: | 7.3.2 |
| Depends: | mixopt (> 0.1.0), numDeriv, rmarkdown, tidyr |
| Suggests: | ContourFunctions, dplyr, ggrepel, gridExtra, knitr, lhs, MASS, microbenchmark, rlang, splitfngr, testthat, testthatmulti |
| VignetteBuilder: | knitr |
| URL: | https://github.com/CollinErickson/GauPro |
| BugReports: | https://github.com/CollinErickson/GauPro/issues |
| Encoding: | UTF-8 |
| NeedsCompilation: | yes |
| Packaged: | 2025-08-26 00:53:16 UTC; colli |
| Author: | Collin Erickson [aut, cre] |
| Repository: | CRAN |
| Date/Publication: | 2025-08-26 04:10:02 UTC |
Kernel product
Description
Kernel product
Usage
## S3 method for class 'GauPro_kernel'
k1 * k2
Arguments
k1 |
First kernel |
k2 |
Second kernel |
Value
Kernel which is product of two kernels
Examples
k1 <- Exponential$new(beta=1)
k2 <- Matern32$new(beta=0)
k <- k1 * k2
k$k(matrix(c(2,1), ncol=1))
Kernel sum
Description
Kernel sum
Usage
## S3 method for class 'GauPro_kernel'
k1 + k2
Arguments
k1 |
First kernel |
k2 |
Second kernel |
Value
Kernel which is sum of two kernels
Examples
k1 <- Exponential$new(beta=1)
k2 <- Matern32$new(beta=0)
k <- k1 + k2
k$k(matrix(c(2,1), ncol=1))
Cubic Kernel R6 class
Description
Cubic Kernel R6 class
Cubic Kernel R6 class
Usage
k_Cubic(
beta,
s2 = 1,
D,
beta_lower = -8,
beta_upper = 6,
beta_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
isotropic = FALSE
)
Arguments
beta |
Initial beta value |
s2 |
Initial variance |
D |
Number of input dimensions of data |
beta_lower |
Lower bound for beta |
beta_upper |
Upper bound for beta |
beta_est |
Should beta be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster. |
isotropic |
If isotropic then a single beta/theta is used for all dimensions. If not (anisotropic) then a separate beta/theta is used for each dimension. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro_kernel -> GauPro::GauPro_kernel_beta -> GauPro_kernel_Cubic
Methods
Public methods
Inherited methods
GauPro::GauPro_kernel$plot(), GauPro::GauPro_kernel_beta$C_dC_dparams(), GauPro::GauPro_kernel_beta$initialize(), GauPro::GauPro_kernel_beta$param_optim_lower(), GauPro::GauPro_kernel_beta$param_optim_start(), GauPro::GauPro_kernel_beta$param_optim_start0(), GauPro::GauPro_kernel_beta$param_optim_upper(), GauPro::GauPro_kernel_beta$s2_from_params(), GauPro::GauPro_kernel_beta$set_params_from_optim()
Method k()
Calculate covariance between two points
Usage
Cubic$k(x, y = NULL, beta = self$beta, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
Cubic$kone(x, y, beta, theta, s2)
Arguments
xvector
yvector
betacorrelation parameters on log scale
thetacorrelation parameters on regular scale
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
Cubic$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
Cubic$dC_dx(XX, X, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method print()
Print this object
Usage
Cubic$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Cubic$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Cubic$new(beta=runif(6)-.5)
plot(k1)
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel=Cubic$new(1),
parallel=FALSE, restarts=0)
gp$predict(.454)
Exponential Kernel R6 class
Description
Exponential Kernel R6 class
Exponential Kernel R6 class
Usage
k_Exponential(
beta,
s2 = 1,
D,
beta_lower = -8,
beta_upper = 6,
beta_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
isotropic = FALSE
)
Arguments
beta |
Initial beta value |
s2 |
Initial variance |
D |
Number of input dimensions of data |
beta_lower |
Lower bound for beta |
beta_upper |
Upper bound for beta |
beta_est |
Should beta be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster. |
isotropic |
If isotropic then a single beta/theta is used for all dimensions. If not (anisotropic) then a separate beta/theta is used for each dimension. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro_kernel -> GauPro::GauPro_kernel_beta -> GauPro_kernel_Exponential
Methods
Public methods
Inherited methods
GauPro::GauPro_kernel$plot(), GauPro::GauPro_kernel_beta$C_dC_dparams(), GauPro::GauPro_kernel_beta$initialize(), GauPro::GauPro_kernel_beta$param_optim_lower(), GauPro::GauPro_kernel_beta$param_optim_start(), GauPro::GauPro_kernel_beta$param_optim_start0(), GauPro::GauPro_kernel_beta$param_optim_upper(), GauPro::GauPro_kernel_beta$s2_from_params(), GauPro::GauPro_kernel_beta$set_params_from_optim()
Method k()
Calculate covariance between two points
Usage
Exponential$k(x, y = NULL, beta = self$beta, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
Exponential$kone(x, y, beta, theta, s2)
Arguments
xvector
yvector
betacorrelation parameters on log scale
thetacorrelation parameters on regular scale
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
Exponential$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
Exponential$dC_dx(XX, X, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method print()
Print this object
Usage
Exponential$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Exponential$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Exponential$new(beta=0)
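The line above only constructs the kernel; the following is a minimal hedged sketch, modeled on the Cubic example earlier in this manual, that also fits a model with it. The data and settings (parallel=FALSE, restarts=0) are illustrative only.
plot(k1)
n <- 12
x <- matrix(seq(0, 1, length.out=n), ncol=1)
y <- sin(2*pi*x) + rnorm(n, 0, 1e-1)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel=Exponential$new(1),
                              parallel=FALSE, restarts=0)
gp$predict(.454)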
Factor Kernel R6 class
Description
Initialize kernel object
Usage
k_FactorKernel(
s2 = 1,
D,
nlevels,
xindex,
p_lower = 0,
p_upper = 0.9,
p_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
p,
useC = TRUE,
offdiagequal = 1 - 1e-06
)
Arguments
s2 |
Initial variance |
D |
Number of input dimensions of data |
nlevels |
Number of levels for the factor |
xindex |
Index of the factor (which column of X) |
p_lower |
Lower bound for p |
p_upper |
Upper bound for p |
p_est |
Should p be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
p |
Vector of correlations |
useC |
Should C code be used? Not implemented for FactorKernel yet. |
offdiagequal |
What should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget. |
Format
R6Class object.
Details
For a factor that has been converted to its indices. Each factor will need a separate kernel.
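As a hedged illustration of what "converted to its indices" means, the sketch below turns a made-up factor column into integer codes and builds a kernel for it; the variable names are hypothetical.
# Hypothetical factor column converted to its integer indices
colr <- factor(c("red", "blue", "red", "green"))
xcol <- as.integer(colr)   # levels sort alphabetically, so blue=1, green=2, red=3
xcol
# The kernel only needs to know which column holds the indices and how many levels there are
kk <- FactorKernel$new(D=1, nlevels=nlevels(colr), xindex=1)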
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_FactorKernel
Public fields
p: Parameter for correlation
p_est: Should p be estimated?
p_lower: Lower bound of p
p_upper: Upper bound of p
p_length: Length of p
s2: Variance
s2_est: Is s2 estimated?
logs2: Log of s2
logs2_lower: Lower bound of logs2
logs2_upper: Upper bound of logs2
xindex: Index of the factor (which column of X)
nlevels: Number of levels for the factor
offdiagequal: What should off-diagonal values be set to when the indices are the same? Used to avoid decomposition errors, similar to adding a nugget.
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
FactorKernel$new( s2 = 1, D, nlevels, xindex, p_lower = 0, p_upper = 0.9, p_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, p, useC = TRUE, offdiagequal = 1 - 1e-06 )
Arguments
s2Initial variance
DNumber of input dimensions of data
nlevelsNumber of levels for the factor
xindexIndex of the factor (which column of X)
p_lowerLower bound for p
p_upperUpper bound for p
p_estShould p be estimated?
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
pVector of correlations
useCShould C code be used? Not implemented for FactorKernel yet.
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Method k()
Calculate covariance between two points
Usage
FactorKernel$k(x, y = NULL, p = self$p, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
pCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
FactorKernel$kone(x, y, p, s2, isdiag = TRUE, offdiagequal = self$offdiagequal)
Arguments
xvector
yvector
pcorrelation parameters on regular scale
s2Variance parameter
isdiagIs this on the diagonal of the covariance?
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
FactorKernel$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
FactorKernel$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
FactorKernel$dC_dx(XX, X, ...)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
...Additional args, not used
Method param_optim_start()
Starting point for parameters for optimization
Usage
FactorKernel$param_optim_start( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
FactorKernel$param_optim_start0( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
FactorKernel$param_optim_lower(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
FactorKernel$param_optim_upper(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
FactorKernel$set_params_from_optim( optim_out, p_est = self$p_est, s2_est = self$s2_est )
Arguments
optim_outOutput from optimization
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method s2_from_params()
Get s2 from params vector
Usage
FactorKernel$s2_from_params(params, s2_est = self$s2_est)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method print()
Print this object
Usage
FactorKernel$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
FactorKernel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
kk <- FactorKernel$new(D=1, nlevels=5, xindex=1)
kk$p <- (1:10)/100
kmat <- outer(1:5, 1:5, Vectorize(kk$k))
kmat
kk$plot()
# 2D, Gaussian on 1D, index on 2nd dim
if (requireNamespace("dplyr", quietly=TRUE)) {
library(dplyr)
n <- 20
X <- cbind(matrix(runif(n,2,6), ncol=1),
matrix(sample(1:2, size=n, replace=TRUE), ncol=1))
X <- rbind(X, c(3.3,3))
n <- nrow(X)
Z <- X[,1] - (X[,2]-1.8)^2 + rnorm(n,0,.1)
tibble(X=X, Z) %>% arrange(X,Z)
k2a <- IgnoreIndsKernel$new(k=Gaussian$new(D=1), ignoreinds = 2)
k2b <- FactorKernel$new(D=2, nlevels=3, xindex=2)
k2 <- k2a * k2b
k2b$p_upper <- .65*k2b$p_upper
gp <- GauPro_kernel_model$new(X=X, Z=Z, kernel = k2, verbose = 5,
nug.min=1e-2, restarts=0)
gp$kernel$k1$kernel$beta
gp$kernel$k2$p
gp$kernel$k(x = gp$X)
tibble(X=X, Z=Z, pred=gp$predict(X)) %>% arrange(X, Z)
tibble(X=X[,2], Z) %>% group_by(X) %>% summarize(n=n(), mean(Z))
curve(gp$pred(cbind(matrix(x,ncol=1),1)),2,6, ylim=c(min(Z), max(Z)))
points(X[X[,2]==1,1], Z[X[,2]==1])
curve(gp$pred(cbind(matrix(x,ncol=1),2)), add=TRUE, col=2)
points(X[X[,2]==2,1], Z[X[,2]==2], col=2)
curve(gp$pred(cbind(matrix(x,ncol=1),3)), add=TRUE, col=3)
points(X[X[,2]==3,1], Z[X[,2]==3], col=3)
legend(legend=1:3, fill=1:3, x="topleft")
# See which points affect (5.5, 3) the most
data.frame(X, cov=gp$kernel$k(X, c(5.5,3))) %>% arrange(-cov)
plot(k2b)
}
GauPro_selector
Description
GauPro_selector
Usage
GauPro(..., type = "Gauss")
Arguments
... |
Pass on |
type |
Type of Gaussian process, or the kind of correlation function. |
Value
A GauPro object
Examples
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
#y <- sin(2*pi*x) + rnorm(n,0,1e-1)
y <- (2*x) %%1
gp <- GauPro(X=x, Z=y, parallel=FALSE)
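A hedged follow-up showing the returned object in use; the prediction point is arbitrary and predict() and plot1D() are methods inherited from the base class documented elsewhere in this manual.
gp$predict(matrix(.454, ncol=1))   # predicted value at a new input
gp$plot1D()                        # 1-D plot of the fitted model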
Corr Gauss GP using inherited optim
Description
Corr Gauss GP using inherited optim
Corr Gauss GP using inherited optim
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro -> GauPro_Gauss
Public fields
corr: Name of correlation
theta: Correlation parameters
theta_length: Length of theta
theta_map: Map for theta
theta_short: Short vector for theta
separable: Are the dimensions separable?
Methods
Public methods
Inherited methods
GauPro::GauPro$cool1Dplot(), GauPro::GauPro$deviance_searchnug(), GauPro::GauPro$fit(), GauPro::GauPro$grad_norm(), GauPro::GauPro$initialize_GauPr(), GauPro::GauPro$loglikelihood(), GauPro::GauPro$nugget_update(), GauPro::GauPro$optim(), GauPro::GauPro$optimRestart(), GauPro::GauPro$plot(), GauPro::GauPro$plot1D(), GauPro::GauPro$plot2D(), GauPro::GauPro$pred(), GauPro::GauPro$pred_LOO(), GauPro::GauPro$pred_mean(), GauPro::GauPro$pred_meanC(), GauPro::GauPro$pred_one_matrix(), GauPro::GauPro$pred_var(), GauPro::GauPro$predict(), GauPro::GauPro$sample(), GauPro::GauPro$update(), GauPro::GauPro$update_K_and_estimates(), GauPro::GauPro$update_corrparams(), GauPro::GauPro$update_data(), GauPro::GauPro$update_nugget()
Method new()
Create GauPro object
Usage
GauPro_Gauss$new( X, Z, verbose = 0, separable = T, useC = F, useGrad = T, parallel = FALSE, nug = 1e-06, nug.min = 1e-08, nug.est = T, param.est = T, theta = NULL, theta_short = NULL, theta_map = NULL, ... )
Arguments
XMatrix whose rows are the input points
ZOutput points corresponding to X
verboseAmount of stuff to print. 0 is little, 2 is a lot.
separableAre dimensions separable?
useCShould C code be used when possible? Should be faster.
useGradShould the gradient be used?
parallelShould code be run in parallel? Makes optimization faster but uses more computer resources.
nugValue for the nugget. The starting value if estimating it.
nug.minMinimum allowable value for the nugget.
nug.estShould the nugget be estimated?
param.estShould the kernel parameters be estimated?
thetaCorrelation parameters
theta_shortCorrelation parameters, not recommended
theta_mapCorrelation parameters, not recommended
...Not used
Method corr_func()
Correlation function
Usage
GauPro_Gauss$corr_func(x, x2 = NULL, theta = self$theta)
Arguments
xFirst point
x2Second point
thetaCorrelation parameter
Method deviance_theta()
Calculate deviance
Usage
GauPro_Gauss$deviance_theta(theta)
Arguments
thetaCorrelation parameter
Method deviance_theta_log()
Calculate deviance
Usage
GauPro_Gauss$deviance_theta_log(beta)
Arguments
betaCorrelation parameter on log scale
Method deviance()
Calculate deviance
Usage
GauPro_Gauss$deviance(theta = self$theta, nug = self$nug)
Arguments
thetaCorrelation parameter
nugNugget
Method deviance_grad()
Calculate deviance gradient
Usage
GauPro_Gauss$deviance_grad( theta = NULL, nug = self$nug, joint = NULL, overwhat = if (self$nug.est) "joint" else "theta" )
Arguments
thetaCorrelation parameter
nugNugget
jointCalculate over theta and nug at same time?
overwhatCalculate over theta and nug at same time?
Method deviance_fngr()
Calculate deviance and gradient at same time
Usage
GauPro_Gauss$deviance_fngr( theta = NULL, nug = NULL, overwhat = if (self$nug.est) "joint" else "theta" )
Arguments
thetaCorrelation parameter
nugNugget
overwhatCalculate over theta and nug at same time?
jointCalculate over theta and nug at same time?
Method deviance_log()
Calculate deviance gradient
Usage
GauPro_Gauss$deviance_log(beta = NULL, nug = self$nug, joint = NULL)
Arguments
betaCorrelation parameter on log scale
nugNugget
jointCalculate over theta and nug at same time?
Method deviance_log2()
Calculate deviance on log scale
Usage
GauPro_Gauss$deviance_log2(beta = NULL, lognug = NULL, joint = NULL)
Arguments
betaCorrelation parameter on log scale
lognugLog of nugget
jointCalculate over theta and nug at same time?
Method deviance_log_grad()
Calculate deviance gradient on log scale
Usage
GauPro_Gauss$deviance_log_grad( beta = NULL, nug = self$nug, joint = NULL, overwhat = if (self$nug.est) "joint" else "theta" )
Arguments
betaCorrelation parameter
nugNugget
jointCalculate over theta and nug at same time?
overwhatCalculate over theta and nug at same time?
Method deviance_log2_grad()
Calculate deviance gradient on log scale
Usage
GauPro_Gauss$deviance_log2_grad( beta = NULL, lognug = NULL, joint = NULL, overwhat = if (self$nug.est) "joint" else "theta" )
Arguments
betaCorrelation parameter
lognugLog of nugget
jointCalculate over theta and nug at same time?
overwhatCalculate over theta and nug at same time?
Method deviance_log2_fngr()
Calculate deviance and gradient on log scale
Usage
GauPro_Gauss$deviance_log2_fngr( beta = NULL, lognug = NULL, joint = NULL, overwhat = if (self$nug.est) "joint" else "theta" )
Arguments
betaCorrelation parameter
lognugLog of nugget
jointCalculate over theta and nug at same time?
overwhatCalculate over theta and nug at same time?
Method get_optim_functions()
Get optimization functions
Usage
GauPro_Gauss$get_optim_functions(param_update, nug.update)
Arguments
param_updateShould the parameters be updated?
nug.updateShould the nugget be updated?
Method param_optim_lower()
Lower bound of params
Usage
GauPro_Gauss$param_optim_lower()
Method param_optim_upper()
Upper bound of params
Usage
GauPro_Gauss$param_optim_upper()
Method param_optim_start()
Start value of params for optim
Usage
GauPro_Gauss$param_optim_start()
Method param_optim_start0()
Start value of params for optim
Usage
GauPro_Gauss$param_optim_start0()
Method param_optim_jitter()
Jitter value of params for optim
Usage
GauPro_Gauss$param_optim_jitter(param_value)
Arguments
param_valueparam value to add jitter to
Method update_params()
Update value of params after optim
Usage
GauPro_Gauss$update_params(restarts, param_update, nug.update)
Arguments
restartsNumber of restarts
param_updateAre the params being updated?
nug.updateIs the nugget being updated?
Method grad()
Calculate the gradient
Usage
GauPro_Gauss$grad(XX)
Arguments
XXPoints to calculate grad at
Method grad_dist()
Calculate the gradient distribution
Usage
GauPro_Gauss$grad_dist(XX)
Arguments
XXPoints to calculate grad at
Method hessian()
Calculate the hessian
Usage
GauPro_Gauss$hessian(XX, useC = self$useC)
Arguments
XXPoints to calculate grad at
useCShould C code be used to speed up?
Method print()
Print this object
Usage
GauPro_Gauss$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
GauPro_Gauss$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_Gauss$new(X=x, Z=y, parallel=FALSE)
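A brief hedged continuation of the example above; the query points are arbitrary, and pred() and cool1Dplot() are among the inherited methods listed for this class.
XX <- matrix(seq(0, 1, length.out=5), ncol=1)
gp$pred(XX, se.fit=TRUE)   # predicted mean and standard error at five query points
gp$cool1Dplot()            # inherited 1-D plot of the fit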
Corr Gauss GP using inherited optim
Description
Corr Gauss GP using inherited optim
Corr Gauss GP using inherited optim
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro -> GauPro::GauPro_Gauss -> GauPro_Gauss_LOO
Public fields
use_LOO: Should the leave-one-out correction be used?
tmod: Second GP model fit to the t-values of leave-one-out predictions
Methods
Public methods
Inherited methods
GauPro::GauPro$cool1Dplot(), GauPro::GauPro$deviance_searchnug(), GauPro::GauPro$fit(), GauPro::GauPro$grad_norm(), GauPro::GauPro$initialize_GauPr(), GauPro::GauPro$loglikelihood(), GauPro::GauPro$nugget_update(), GauPro::GauPro$optim(), GauPro::GauPro$optimRestart(), GauPro::GauPro$plot(), GauPro::GauPro$plot1D(), GauPro::GauPro$plot2D(), GauPro::GauPro$pred(), GauPro::GauPro$pred_LOO(), GauPro::GauPro$pred_mean(), GauPro::GauPro$pred_meanC(), GauPro::GauPro$pred_var(), GauPro::GauPro$predict(), GauPro::GauPro$sample(), GauPro::GauPro$update_K_and_estimates(), GauPro::GauPro$update_corrparams(), GauPro::GauPro$update_data(), GauPro::GauPro$update_nugget(), GauPro::GauPro_Gauss$corr_func(), GauPro::GauPro_Gauss$deviance(), GauPro::GauPro_Gauss$deviance_fngr(), GauPro::GauPro_Gauss$deviance_grad(), GauPro::GauPro_Gauss$deviance_log(), GauPro::GauPro_Gauss$deviance_log2(), GauPro::GauPro_Gauss$deviance_log2_fngr(), GauPro::GauPro_Gauss$deviance_log2_grad(), GauPro::GauPro_Gauss$deviance_log_grad(), GauPro::GauPro_Gauss$deviance_theta(), GauPro::GauPro_Gauss$deviance_theta_log(), GauPro::GauPro_Gauss$get_optim_functions(), GauPro::GauPro_Gauss$grad(), GauPro::GauPro_Gauss$grad_dist(), GauPro::GauPro_Gauss$hessian(), GauPro::GauPro_Gauss$initialize(), GauPro::GauPro_Gauss$param_optim_jitter(), GauPro::GauPro_Gauss$param_optim_lower(), GauPro::GauPro_Gauss$param_optim_start(), GauPro::GauPro_Gauss$param_optim_start0(), GauPro::GauPro_Gauss$param_optim_upper(), GauPro::GauPro_Gauss$update_params()
Method update()
Update the model, can be data and parameters
Usage
GauPro_Gauss_LOO$update( Xnew = NULL, Znew = NULL, Xall = NULL, Zall = NULL, restarts = 5, param_update = self$param.est, nug.update = self$nug.est, no_update = FALSE )
Arguments
XnewNew X matrix
ZnewNew Z values
XallMatrix with all X values
ZallAll Z values
restartsNumber of optimization restarts
param_updateShould the parameters be updated?
nug.updateShould the nugget be updated?
no_updateShould none of the parameters/nugget be updated?
Method pred_one_matrix()
Predict mean and se for given matrix
Usage
GauPro_Gauss_LOO$pred_one_matrix(XX, se.fit = F, covmat = F)
Arguments
XXPoints to predict at
se.fitShould the se be returned?
covmatShould the covariance matrix be returned?
Method print()
Print this object
Usage
GauPro_Gauss_LOO$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
GauPro_Gauss_LOO$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_Gauss_LOO$new(X=x, Z=y, parallel=FALSE)
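A hedged continuation using the pred_one_matrix() method documented above; the query points are arbitrary, and whether the LOO correction is applied depends on use_LOO.
XX <- matrix(c(.2, .5, .8), ncol=1)
gp$pred_one_matrix(XX, se.fit=TRUE)   # predictions; se uses the LOO correction when use_LOO is TRUE
gp$use_LOO                            # whether the leave-one-out correction is active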
Class providing object with methods for fitting a GP model
Description
Class providing object with methods for fitting a GP model
Class providing object with methods for fitting a GP model
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Methods
new(X, Z, corr="Gauss", verbose=0, separable=T, useC=F,useGrad=T, parallel=T, nug.est=T, ...)This method is used to create object of this class with
XandZas the data.update(Xnew=NULL, Znew=NULL, Xall=NULL, Zall=NULL, restarts = 5, param_update = T, nug.update = self$nug.est)This method updates the model, adding new data if given, then running optimization again.
Public fields
X: Design matrix
Z: Responses
N: Number of data points
D: Dimension of data
nug.min: Minimum value of nugget
nug: Value of the nugget; it is estimated unless told otherwise
verbose: 0 means nothing printed, 1 prints some, 2 prints most.
useGrad: Should the gradient be used?
useC: Should C code be used?
parallel: Should the code be run in parallel?
parallel_cores: How many cores to use. It is detected automatically; do not set this yourself.
nug.est: Should the nugget be estimated?
param.est: Should the parameters be estimated?
mu_hat: Mean estimate
s2_hat: Variance estimate
K: Covariance matrix
Kchol: Cholesky factorization of K
Kinv: Inverse of K
Methods
Public methods
Method corr_func()
Correlation function
Usage
GauPro_base$corr_func(...)
Arguments
...Does nothing
Method new()
Create GauPro object
Usage
GauPro_base$new( X, Z, verbose = 0, useC = F, useGrad = T, parallel = FALSE, nug = 1e-06, nug.min = 1e-08, nug.est = T, param.est = TRUE, ... )
Arguments
XMatrix whose rows are the input points
ZOutput points corresponding to X
verboseAmount of stuff to print. 0 is little, 2 is a lot.
useCShould C code be used when possible? Should be faster.
useGradShould the gradient be used?
parallelShould code be run in parallel? Makes optimization faster but uses more computer resources.
nugValue for the nugget. The starting value if estimating it.
nug.minMinimum allowable value for the nugget.
nug.estShould the nugget be estimated?
param.estShould the kernel parameters be estimated?
...Not used
Method initialize_GauPr()
Not used
Usage
GauPro_base$initialize_GauPr()
Method fit()
Fit the model, never use this function
Usage
GauPro_base$fit(X, Z)
Arguments
XNot used
ZNot used
Method update_K_and_estimates()
Update Covariance matrix and estimated parameters
Usage
GauPro_base$update_K_and_estimates()
Method predict()
Predict mean and se for given matrix
Usage
GauPro_base$predict(XX, se.fit = F, covmat = F, split_speed = T)
Arguments
XXPoints to predict at
se.fitShould the se be returned?
covmatShould the covariance matrix be returned?
split_speedShould the predictions be split up for speed?
Method pred()
Predict mean and se for given matrix
Usage
GauPro_base$pred(XX, se.fit = F, covmat = F, split_speed = T)
Arguments
XXPoints to predict at
se.fitShould the se be returned?
covmatShould the covariance matrix be returned?
split_speedShould the predictions be split up for speed?
Method pred_one_matrix()
Predict mean and se for given matrix
Usage
GauPro_base$pred_one_matrix(XX, se.fit = F, covmat = F)
Arguments
XXPoints to predict at
se.fitShould the se be returned?
covmatShould the covariance matrix be returned?
Method pred_mean()
Predict mean
Usage
GauPro_base$pred_mean(XX, kx.xx)
Arguments
XXPoints to predict at
kx.xxCovariance matrix between X and XX
Method pred_meanC()
Predict mean using C code
Usage
GauPro_base$pred_meanC(XX, kx.xx)
Arguments
XXPoints to predict at
kx.xxCovariance matrix between X and XX
Method pred_var()
Predict variance
Usage
GauPro_base$pred_var(XX, kxx, kx.xx, covmat = F)
Arguments
XXPoints to predict at
kxxCovariance matrix of XX with itself
kx.xxCovariance matrix between X and XX
covmatNot used
Method pred_LOO()
Predict at X using leave-one-out. Can use for diagnostics.
Usage
GauPro_base$pred_LOO(se.fit = FALSE)
Arguments
se.fitShould the standard error and t values be returned?
Method plot()
Plot the object
Usage
GauPro_base$plot(...)
Arguments
...Parameters passed to cool1Dplot(), plot2D(), or plotmarginal()
Method cool1Dplot()
Make cool 1D plot
Usage
GauPro_base$cool1Dplot( n2 = 20, nn = 201, col2 = "gray", xlab = "x", ylab = "y", xmin = NULL, xmax = NULL, ymin = NULL, ymax = NULL )
Arguments
n2Number of things to plot
nnNumber of things to plot
col2color
xlabx label
ylaby label
xminxmin
xmaxxmax
yminymin
ymaxymax
Method plot1D()
Make 1D plot
Usage
GauPro_base$plot1D( n2 = 20, nn = 201, col2 = 2, xlab = "x", ylab = "y", xmin = NULL, xmax = NULL, ymin = NULL, ymax = NULL )
Arguments
n2Number of things to plot
nnNumber of things to plot
col2Color of the prediction interval
xlabx label
ylaby label
xminxmin
xmaxxmax
yminymin
ymaxymax
Method plot2D()
Make 2D plot
Usage
GauPro_base$plot2D()
Method loglikelihood()
Calculate the log likelihood, don't use this
Usage
GauPro_base$loglikelihood(mu = self$mu_hat, s2 = self$s2_hat)
Arguments
muMean vector
s2s2 param
Method optim()
Optimize parameters
Usage
GauPro_base$optim( restarts = 5, param_update = T, nug.update = self$nug.est, parallel = self$parallel, parallel_cores = self$parallel_cores )
Arguments
restartsNumber of restarts to do
param_updateShould parameters be updated?
nug.updateShould nugget be updated?
parallelShould restarts be done in parallel?
parallel_coresIf running parallel, how many cores should be used?
Method optimRestart()
Run a single optimization restart.
Usage
GauPro_base$optimRestart( start.par, start.par0, param_update, nug.update, optim.func, optim.grad, optim.fngr, lower, upper, jit = T )
Arguments
start.parStarting parameters
start.par0Starting parameters
param_updateShould parameters be updated?
nug.updateShould nugget be updated?
optim.funcFunction to optimize.
optim.gradGradient of function to optimize.
optim.fngrFunction that returns the function value and its gradient.
lowerLower bounds for optimization
upperUpper bounds for optimization
jitIs jitter being used?
Method update()
Update the model, can be data and parameters
Usage
GauPro_base$update( Xnew = NULL, Znew = NULL, Xall = NULL, Zall = NULL, restarts = 5, param_update = self$param.est, nug.update = self$nug.est, no_update = FALSE )
Arguments
XnewNew X matrix
ZnewNew Z values
XallMatrix with all X values
ZallAll Z values
restartsNumber of optimization restarts
param_updateShould the parameters be updated?
nug.updateShould the nugget be updated?
no_updateShould none of the parameters/nugget be updated?
Method update_data()
Update the data
Usage
GauPro_base$update_data(Xnew = NULL, Znew = NULL, Xall = NULL, Zall = NULL)
Arguments
XnewNew X matrix
ZnewNew Z values
XallMatrix with all X values
ZallAll Z values
Method update_corrparams()
Update the correlation parameters
Usage
GauPro_base$update_corrparams(...)
Arguments
...Args passed to update
Method update_nugget()
Update the nugget
Usage
GauPro_base$update_nugget(...)
Arguments
...Args passed to update
Method deviance_searchnug()
Optimize deviance for nugget
Usage
GauPro_base$deviance_searchnug()
Method nugget_update()
Update the nugget
Usage
GauPro_base$nugget_update()
Method grad_norm()
Calculate the norm of the gradient at XX
Usage
GauPro_base$grad_norm(XX)
Arguments
XXPoints to calculate at
Method sample()
Sample at XX
Usage
GauPro_base$sample(XX, n = 1)
Arguments
XXInput points to sample at
nNumber of samples
Method print()
Print object
Usage
GauPro_base$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
GauPro_base$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
#n <- 12
#x <- matrix(seq(0,1,length.out = n), ncol=1)
#y <- sin(2*pi*x) + rnorm(n,0,1e-1)
#gp <- GauPro(X=x, Z=y, parallel=FALSE)
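Since this base class is not normally constructed directly, here is a hedged sketch of the commented example above, using the GauPro() selector documented elsewhere in this manual together with the update() method described above; the added point is arbitrary.
n <- 12
x <- matrix(seq(0, 1, length.out=n), ncol=1)
y <- sin(2*pi*x) + rnorm(n, 0, 1e-1)
gp <- GauPro(X=x, Z=y, parallel=FALSE)
gp$predict(x)                                               # predictions at the design points
gp$update(Xnew=matrix(1.05, ncol=1), Znew=sin(2*pi*1.05))   # add one observation and re-optimize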
Kernel R6 class
Description
Kernel R6 class
Kernel R6 class
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Public fields
D: Number of input dimensions of data
useC: Should C code be used when possible? Can be much faster.
Methods
Public methods
Method plot()
Plot kernel decay.
Usage
GauPro_kernel$plot(X = NULL)
Arguments
XMatrix of points the kernel is used with. Some will be used to demonstrate how the covariance changes.
Method print()
Print this object
Usage
GauPro_kernel$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
GauPro_kernel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
#k <- GauPro_kernel$new()
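The base kernel class is abstract, so a hedged sketch with a concrete kernel from this package illustrates the plot() method documented above; the kernel choice and points are arbitrary.
k <- Matern32$new(beta=0)
k$plot()
k$plot(X=matrix(runif(20), ncol=1))   # supply points the kernel is used with (see the X argument above)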
Beta Kernel R6 class
Description
Beta Kernel R6 class
Beta Kernel R6 class
Format
R6Class object.
Details
This is the base structure for a kernel that uses beta = log10(theta) for the lengthscale parameter. It standardizes the params because they all use the same underlying structure. Kernels that inherit this only need to implement kone and dC_dparams.
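A one-line hedged illustration of this parameterization: converting beta back to theta is just base-10 exponentiation.
beta <- 1
theta <- 10^beta   # beta = log10(theta), so beta = 1 corresponds to theta = 10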
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_beta
Public fields
beta: Parameter for correlation. Log of theta.
beta_est: Should beta be estimated?
beta_lower: Lower bound of beta
beta_upper: Upper bound of beta
beta_length: Length of beta
s2: Variance
logs2: Log of s2
logs2_lower: Lower bound of logs2
logs2_upper: Upper bound of logs2
s2_est: Should s2 be estimated?
useC: Should C code be used? Much faster.
isotropic: If isotropic then a single beta/theta is used for all dimensions. If not (anisotropic) then a separate beta/theta is used for each dimension.
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
GauPro_kernel_beta$new( beta, s2 = 1, D, beta_lower = -8, beta_upper = 6, beta_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, useC = TRUE, isotropic = FALSE )
Arguments
betaInitial beta value
s2Initial variance
DNumber of input dimensions of data
beta_lowerLower bound for beta
beta_upperUpper bound for beta
beta_estShould beta be estimated?
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
useCShould C code be used? Much faster.
isotropicIf isotropic then a single beta/theta is used for all dimensions. If not (anisotropic) then a separate beta/theta is used for each dimension.
Method k()
Calculate covariance between two points
Usage
GauPro_kernel_beta$k( x, y = NULL, beta = self$beta, s2 = self$s2, params = NULL )
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters. Log of theta.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Calculate covariance between two points
Usage
GauPro_kernel_beta$kone(x, y, beta, theta, s2)
Arguments
xvector.
yvector.
betaCorrelation parameters. Log of theta.
thetaCorrelation parameters.
s2Variance parameter.
Method param_optim_start()
Starting point for parameters for optimization
Usage
GauPro_kernel_beta$param_optim_start( jitter = F, y, beta_est = self$beta_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
beta_estIs beta being estimated?
s2_estIs s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
GauPro_kernel_beta$param_optim_start0( jitter = F, y, beta_est = self$beta_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
beta_estIs beta being estimated?
s2_estIs s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
GauPro_kernel_beta$param_optim_lower( beta_est = self$beta_est, s2_est = self$s2_est )
Arguments
beta_estIs beta being estimated?
s2_estIs s2 being estimated?
p_estIs p being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
GauPro_kernel_beta$param_optim_upper( beta_est = self$beta_est, s2_est = self$s2_est )
Arguments
beta_estIs beta being estimated?
s2_estIs s2 being estimated?
p_estIs p being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
GauPro_kernel_beta$set_params_from_optim( optim_out, beta_est = self$beta_est, s2_est = self$s2_est )
Arguments
optim_outOutput from optimization
beta_estIs beta being estimated?
s2_estIs s2 being estimated?
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
GauPro_kernel_beta$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method s2_from_params()
Get s2 from params vector
Usage
GauPro_kernel_beta$s2_from_params(params, s2_est = self$s2_est)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method clone()
The objects of this class are cloneable with this method.
Usage
GauPro_kernel_beta$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
#k1 <- Matern52$new(beta=0)
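A hedged sketch with a concrete kernel that inherits from this class, showing the k() and param_optim_lower()/param_optim_upper() methods documented above; the inputs are arbitrary.
k1 <- Matern52$new(beta=0)
k1$k(matrix(c(.1, .9), ncol=1))   # 2 x 2 covariance matrix between the two rows
k1$param_optim_lower()            # parameter lower bounds used during optimization
k1$param_optim_upper()            # parameter upper bounds used during optimization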
Gaussian process model with kernel
Description
Class providing object with methods for fitting a GP model. Allows for different kernel and trend functions to be used. The object is an R6 object with many methods that can be called.
'gpkm()' is equivalent to 'GauPro_kernel_model$new()', but is easier to type and gives parameter autocomplete suggestions.
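A hedged illustration of the stated equivalence, assuming gpkm() accepts the same X, Z, and kernel arguments as GauPro_kernel_model$new(); the data are synthetic.
n <- 12
x <- matrix(seq(0, 1, length.out=n), ncol=1)
y <- sin(2*pi*x) + rnorm(n, 0, 1e-1)
gp1 <- gpkm(X=x, Z=y, kernel="gauss", parallel=FALSE)
gp2 <- GauPro_kernel_model$new(X=x, Z=y, kernel="gauss", parallel=FALSE)
# Both calls return the same kind of R6 model object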
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Methods
new(X, Z, corr="Gauss", verbose=0, separable=T, useC=F, useGrad=T, parallel=T, nug.est=T, ...)-
This method is used to create object of this class with
XandZas the data. update(Xnew=NULL, Znew=NULL, Xall=NULL, Zall=NULL, restarts = 0, param_update = T, nug.update = self$nug.est)This method updates the model, adding new data if given, then running optimization again.
Public fields
X: Design matrix
Z: Responses
N: Number of data points
D: Dimension of data
nug.min: Minimum value of nugget
nug.max: Maximum value of the nugget.
nug.est: Should the nugget be estimated?
nug: Value of the nugget; it is estimated unless told otherwise
param.est: Should the kernel parameters be estimated?
verbose: 0 means nothing printed, 1 prints some, 2 prints most.
useGrad: Should grad be used?
useC: Should C code be used?
parallel: Should the code be run in parallel?
parallel_cores: How many cores to use. By default it is detected automatically.
kernel: The kernel to determine the correlations.
trend: The trend.
mu_hatX: Predicted trend value for each point in X.
s2_hat: Variance parameter estimate
K: Covariance matrix
Kchol: Cholesky factorization of K
Kinv: Inverse of K
Kinv_Z_minus_mu_hatX: K inverse times (Z minus the predicted trend at X).
restarts: Number of optimization restarts to do when updating.
normalize: Should the inputs be normalized?
normalize_mean: If using normalize, the mean of each column.
normalize_sd: If using normalize, the standard deviation of each column.
optimizer: What algorithm should be used to optimize the parameters.
track_optim: Should it track the parameters evaluated while optimizing?
track_optim_inputs: If track_optim is TRUE, this will keep a list of parameters evaluated. View them with plot_track_optim.
track_optim_dev: If track_optim is TRUE, this will keep a vector of the deviance values calculated while optimizing parameters. View them with plot_track_optim.
formula: Formula
convert_formula_data: List for storing data to convert data using the formula
Methods
Public methods
Method new()
Create kernel_model object
Usage
GauPro_kernel_model$new( X, Z, kernel, trend, verbose = 0, useC = TRUE, useGrad = TRUE, parallel = FALSE, parallel_cores = "detect", nug = 1e-06, nug.min = 1e-08, nug.max = 100, nug.est = TRUE, param.est = TRUE, restarts = 0, normalize = FALSE, optimizer = "L-BFGS-B", track_optim = FALSE, formula, data, ... )
Arguments
XMatrix whose rows are the input points
ZOutput points corresponding to X
kernelThe kernel to use. E.g., Gaussian$new().
trendTrend to use. E.g., trend_constant$new().
verboseAmount of stuff to print. 0 is little, 2 is a lot.
useCShould C code be used when possible? Should be faster.
useGradShould the gradient be used?
parallelShould code be run in parallel? Makes optimization faster but uses more computer resources.
parallel_coresWhen using parallel, how many cores should be used?
nugValue for the nugget. The starting value if estimating it.
nug.minMinimum allowable value for the nugget.
nug.maxMaximum allowable value for the nugget.
nug.estShould the nugget be estimated?
param.estShould the kernel parameters be estimated?
restartsHow many optimization restarts should be used when estimating parameters?
normalizeShould the data be normalized?
optimizerWhat algorithm should be used to optimize the parameters.
track_optimShould it track the parameters evaluated while optimizing?
formulaFormula for the data if giving in a data frame.
dataData frame of data. Use in conjunction with formula.
...Not used
Method fit()
Fit model
Usage
GauPro_kernel_model$fit(X, Z)
Arguments
XInputs
ZOutputs
Method update_K_and_estimates()
Update covariance matrix and estimates
Usage
GauPro_kernel_model$update_K_and_estimates()
Method predict()
Predict for a matrix of points
Usage
GauPro_kernel_model$predict( XX, se.fit = F, covmat = F, split_speed = F, mean_dist = FALSE, return_df = TRUE )
Arguments
XXpoints to predict at
se.fitShould standard error be returned?
covmatShould covariance matrix be returned?
split_speedShould the matrix be split for faster predictions?
mean_distShould the error be for the distribution of the mean?
return_dfWhen returning se.fit, should it be returned in a data frame? Otherwise it will be a list, which is faster.
Method pred()
Predict for a matrix of points
Usage
GauPro_kernel_model$pred( XX, se.fit = F, covmat = F, split_speed = F, mean_dist = FALSE, return_df = TRUE )
Arguments
XXpoints to predict at
se.fitShould standard error be returned?
covmatShould covariance matrix be returned?
split_speedShould the matrix be split for faster predictions?
mean_distShould the error be for the distribution of the mean?
return_dfWhen returning se.fit, should it be returned in a data frame? Otherwise it will be a list, which is faster.
Method pred_one_matrix()
Predict for a matrix of points
Usage
GauPro_kernel_model$pred_one_matrix( XX, se.fit = F, covmat = F, return_df = FALSE, mean_dist = FALSE )
Arguments
XXpoints to predict at
se.fitShould standard error be returned?
covmatShould covariance matrix be returned?
return_dfWhen returning se.fit, should it be returned in a data frame? Otherwise it will be a list, which is faster.
mean_distShould the error be for the distribution of the mean?
Method pred_mean()
Predict mean
Usage
GauPro_kernel_model$pred_mean(XX, kx.xx)
Arguments
XXpoints to predict at
kx.xxCovariance of X with XX
Method pred_meanC()
Predict mean using C
Usage
GauPro_kernel_model$pred_meanC(XX, kx.xx)
Arguments
XXpoints to predict at
kx.xxCovariance of X with XX
Method pred_var()
Predict variance
Usage
GauPro_kernel_model$pred_var(XX, kxx, kx.xx, covmat = F)
Arguments
XXpoints to predict at
kxxCovariance of XX with itself
kx.xxCovariance of X with XX
covmatShould the covariance matrix be returned?
Method pred_LOO()
Leave-one-out predictions
Usage
GauPro_kernel_model$pred_LOO(se.fit = FALSE)
Arguments
se.fitShould standard errors be included?
Method pred_var_after_adding_points()
Predict variance after adding points
Usage
GauPro_kernel_model$pred_var_after_adding_points(add_points, pred_points)
Arguments
add_pointsPoints to add
pred_pointsPoints to predict at
Method pred_var_after_adding_points_sep()
Predict variance reductions after adding each point separately
Usage
GauPro_kernel_model$pred_var_after_adding_points_sep(add_points, pred_points)
Arguments
add_pointsPoints to add
pred_pointsPoints to predict at
Method pred_var_reduction()
Predict variance reduction for a single point
Usage
GauPro_kernel_model$pred_var_reduction(add_point, pred_points)
Arguments
add_pointPoint to add
pred_pointsPoints to predict at
Method pred_var_reductions()
Predict variance reductions
Usage
GauPro_kernel_model$pred_var_reductions(add_points, pred_points)
Arguments
add_pointsPoints to add
pred_pointsPoints to predict at
Method plot()
Plot the object
Usage
GauPro_kernel_model$plot(...)
Arguments
...Parameters passed to cool1Dplot(), plot2D(), or plotmarginal()
Method cool1Dplot()
Make cool 1D plot
Usage
GauPro_kernel_model$cool1Dplot( n2 = 20, nn = 201, col2 = "green", xlab = "x", ylab = "y", xmin = NULL, xmax = NULL, ymin = NULL, ymax = NULL, gg = TRUE )
Arguments
n2Number of things to plot
nnNumber of things to plot
col2color
xlabx label
ylaby label
xminxmin
xmaxxmax
yminymin
ymaxymax
ggShould ggplot2 be used to make plot?
Method plot1D()
Make 1D plot
Usage
GauPro_kernel_model$plot1D( n2 = 20, nn = 201, col2 = 2, col3 = 3, xlab = "x", ylab = "y", xmin = NULL, xmax = NULL, ymin = NULL, ymax = NULL, gg = TRUE )
Arguments
n2Number of things to plot
nnNumber of things to plot
col2Color of the prediction interval
col3Color of the interval for the mean
xlabx label
ylaby label
xminxmin
xmaxxmax
yminymin
ymaxymax
ggShould ggplot2 be used to make plot?
Method plot2D()
Make 2D plot
Usage
GauPro_kernel_model$plot2D(se = FALSE, mean = TRUE, horizontal = TRUE, n = 50)
Arguments
seShould the standard error of prediction be plotted?
meanShould the mean be plotted?
horizontalIf plotting mean and se, should they be next to each other?
nNumber of points along each dimension
Method plotmarginal()
Plot marginal. For each input, hold all others at a constant value and adjust it along its range to see how the prediction changes.
Usage
GauPro_kernel_model$plotmarginal(npt = 5, ncol = NULL)
Arguments
nptNumber of lines to make. Each line represents changing a single variable while holding the others at the same values.
ncolNumber of columns for the plot
Method plotmarginalrandom()
Plot marginal prediction for random sample of inputs
Usage
GauPro_kernel_model$plotmarginalrandom(npt = 100, ncol = NULL)
Arguments
nptNumber of random points to evaluate
ncolNumber of columns in the plot
Method plotkernel()
Plot the kernel
Usage
GauPro_kernel_model$plotkernel(X = self$X)
Arguments
XX matrix for kernel plot
Method plotLOO()
Plot leave one out predictions for design points
Usage
GauPro_kernel_model$plotLOO()
Method plot_track_optim()
If track_optim, this will plot the parameters in the order they were evaluated.
Usage
GauPro_kernel_model$plot_track_optim(minindex = NULL)
Arguments
minindexMinimum index to plot.
Method loglikelihood()
Calculate loglikelihood of parameters
Usage
GauPro_kernel_model$loglikelihood(mu = self$mu_hatX, s2 = self$s2_hat)
Arguments
muMean parameters
s2Variance parameter
Method AIC()
AIC (Akaike information criterion)
Usage
GauPro_kernel_model$AIC()
Method get_optim_functions()
Get optimization functions
Usage
GauPro_kernel_model$get_optim_functions(param_update, nug.update)
Arguments
param_updateShould parameters be updated?
nug.updateShould nugget be updated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
GauPro_kernel_model$param_optim_lower(nug.update)
Arguments
nug.updateIs the nugget being updated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
GauPro_kernel_model$param_optim_upper(nug.update)
Arguments
nug.updateIs the nugget being updated?
Method param_optim_start()
Starting point for parameters for optimization
Usage
GauPro_kernel_model$param_optim_start(nug.update, jitter)
Arguments
nug.updateIs nugget being updated?
jitterShould there be a jitter?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
GauPro_kernel_model$param_optim_start0(nug.update, jitter)
Arguments
nug.updateIs nugget being updated?
jitterShould there be a jitter?
Method param_optim_start_mat()
Get matrix for starting points of optimization
Usage
GauPro_kernel_model$param_optim_start_mat(restarts, nug.update, l)
Arguments
restartsNumber of restarts to use
nug.updateIs nugget being updated?
lNot used
Method optim()
Optimize parameters
Usage
GauPro_kernel_model$optim( restarts = self$restarts, n0 = 5 * self$D, param_update = T, nug.update = self$nug.est, parallel = self$parallel, parallel_cores = self$parallel_cores )
Arguments
restartsNumber of restarts to do
n0This many starting parameters are chosen and evaluated. The best ones are used as the starting points for optimization.
param_updateShould parameters be updated?
nug.updateShould nugget be updated?
parallelShould restarts be done in parallel?
parallel_coresIf running parallel, how many cores should be used?
Method optimRestart()
Run a single optimization restart.
Usage
GauPro_kernel_model$optimRestart( start.par, start.par0, param_update, nug.update, optim.func, optim.grad, optim.fngr, lower, upper, jit = T, start.par.i )
Arguments
start.parStarting parameters
start.par0Starting parameters
param_updateShould parameters be updated?
nug.updateShould nugget be updated?
optim.funcFunction to optimize.
optim.gradGradient of function to optimize.
optim.fngrFunction that returns the function value and its gradient.
lowerLower bounds for optimization
upperUpper bounds for optimization
jitIs jitter being used?
start.par.iStarting parameters for this restart
Method update()
Update the model. Give either (Xnew and Znew) or (Xall and Zall), not both.
Usage
GauPro_kernel_model$update( Xnew = NULL, Znew = NULL, Xall = NULL, Zall = NULL, restarts = self$restarts, param_update = self$param.est, nug.update = self$nug.est, no_update = FALSE )
Arguments
XnewNew X values to add.
ZnewNew Z values to add.
XallAll X values to be used. Will replace existing X.
ZallAll Z values to be used. Will replace existing Z.
restartsNumber of optimization restarts.
param_updateAre the parameters being updated?
nug.updateIs the nugget being updated?
no_updateAre no parameters being updated?
Method update_fast()
Fast update when adding new data.
Usage
GauPro_kernel_model$update_fast(Xnew = NULL, Znew = NULL)
Arguments
XnewNew X values to add.
ZnewNew Z values to add.
Method update_params()
Update the parameters.
Usage
GauPro_kernel_model$update_params(..., nug.update)
Arguments
...Passed to optim.
nug.updateIs the nugget being updated?
Method update_data()
Update the data. Give either (Xnew and Znew) or (Xall and Zall), not both.
Usage
GauPro_kernel_model$update_data( Xnew = NULL, Znew = NULL, Xall = NULL, Zall = NULL )
Arguments
XnewNew X values to add.
ZnewNew Z values to add.
XallAll X values to be used. Will replace existing X.
ZallAll Z values to be used. Will replace existing Z.
Method update_corrparams()
Update correlation parameters. Not the nugget.
Usage
GauPro_kernel_model$update_corrparams(...)
Arguments
...Passed to self$update()
Method update_nugget()
Update the nugget, not the correlation parameters.
Usage
GauPro_kernel_model$update_nugget(...)
Arguments
...Passed to self$update()
Method deviance()
Calculate the deviance.
Usage
GauPro_kernel_model$deviance( params = NULL, nug = self$nug, nuglog, trend_params = NULL )
Arguments
paramsKernel parameters
nugNugget
nuglogLog of nugget. Only give in nug or nuglog.
trend_paramsParameters for the trend.
Method deviance_grad()
Calculate the gradient of the deviance.
Usage
GauPro_kernel_model$deviance_grad( params = NULL, kernel_update = TRUE, X = self$X, nug = self$nug, nug.update, nuglog, trend_params = NULL, trend_update = TRUE )
Arguments
paramsKernel parameters
kernel_updateIs the kernel being updated? If yes, it's part of the gradient.
XInput matrix
nugNugget
nug.updateIs the nugget being updated? If yes, it's part of the gradient.
nuglogLog of the nugget.
trend_paramsTrend parameters
trend_updateIs the trend being updated? If yes, it's part of the gradient.
Method deviance_fngr()
Calculate the deviance along with its gradient.
Usage
GauPro_kernel_model$deviance_fngr( params = NULL, kernel_update = TRUE, X = self$X, nug = self$nug, nug.update, nuglog, trend_params = NULL, trend_update = TRUE )
Arguments
paramsKernel parameters
kernel_updateIs the kernel being updated? If yes, it's part of the gradient.
XInput matrix
nugNugget
nug.updateIs the nugget being updated? If yes, it's part of the gradient.
nuglogLog of the nugget.
trend_paramsTrend parameters
trend_updateIs the trend being updated? If yes, it's part of the gradient.
Method grad()
Calculate gradient
Usage
GauPro_kernel_model$grad(XX, X = self$X, Z = self$Z)
Arguments
XXpoints to calculate at
XX points
Zoutput points
Method grad_norm()
Calculate norm of gradient
Usage
GauPro_kernel_model$grad_norm(XX)
Arguments
XXpoints to calculate at
Method grad_dist()
Calculate distribution of gradient
Usage
GauPro_kernel_model$grad_dist(XX)
Arguments
XXpoints to calculate at
Method grad_sample()
Sample gradient at points
Usage
GauPro_kernel_model$grad_sample(XX, n)
Arguments
XXpoints to calculate at
nNumber of samples
Method grad_norm2_mean()
Calculate mean of gradient norm squared
Usage
GauPro_kernel_model$grad_norm2_mean(XX)
Arguments
XXpoints to calculate at
Method grad_norm2_dist()
Calculate distribution of gradient norm squared
Usage
GauPro_kernel_model$grad_norm2_dist(XX)
Arguments
XXpoints to calculate at
Method grad_norm2_sample()
Get samples of squared norm of gradient
Usage
GauPro_kernel_model$grad_norm2_sample(XX, n)
Arguments
XXpoints to sample at
nNumber of samples
Method hessian()
Calculate Hessian
Usage
GauPro_kernel_model$hessian(XX, as_array = FALSE)
Arguments
XXPoints to calculate Hessian at
as_arrayShould result be an array?
Method gradpredvar()
Calculate gradient of the predictive variance
Usage
GauPro_kernel_model$gradpredvar(XX)
Arguments
XXpoints to calculate at
Method sample()
Sample at rows of XX
Usage
GauPro_kernel_model$sample(XX, n = 1)
Arguments
XXInput matrix
nNumber of samples
Method optimize_fn()
Optimize any function of the GP prediction over the valid input space. If there are inputs that should only be optimized over a discrete set of values, specify 'mopar' for all parameters. Factor inputs will be handled automatically.
Usage
GauPro_kernel_model$optimize_fn( fn = NULL, lower = apply(self$X, 2, min), upper = apply(self$X, 2, max), n0 = 100, minimize = FALSE, fn_args = NULL, gr = NULL, fngr = NULL, mopar = NULL, groupeval = FALSE )
Arguments
fnFunction to optimize
lowerLower bounds to search within
upperUpper bounds to search within
n0Number of points to evaluate in initial stage
minimizeAre you trying to minimize the output?
fn_argsArguments to pass to the function fn.
grGradient of function to optimize.
fngrFunction that returns a list with named elements "fn" for the function value and "gr" for the gradient. Useful when it is slow to evaluate and fn/gr would duplicate calculations if done separately.
moparList of parameters using mixopt
groupevalCan a matrix of points be evaluated? Otherwise just a single point at a time.
Method EI()
Calculate expected improvement
Usage
GauPro_kernel_model$EI(x, minimize = FALSE, eps = 0, return_grad = FALSE, ...)
Arguments
xVector to calculate EI of, or matrix for whose rows it should be calculated
minimizeAre you trying to minimize the output?
epsExploration parameter
return_gradShould the gradient be returned?
...Additional args
Method maxEI()
Find the point that maximizes the expected improvement. If there are inputs that should only be optimized over a discrete set of values, specify 'mopar' for all parameters.
Usage
GauPro_kernel_model$maxEI( lower = apply(self$X, 2, min), upper = apply(self$X, 2, max), n0 = 100, minimize = FALSE, eps = 0, dontconvertback = FALSE, EItype = "corrected", mopar = NULL, usegrad = FALSE )
Arguments
lowerLower bounds to search within
upperUpper bounds to search within
n0Number of points to evaluate in initial stage
minimizeAre you trying to minimize the output?
epsExploration parameter
dontconvertbackIf the data was given with a formula, should it be converted back to the original scale?
EItypeType of EI to calculate. One of "EI", "Augmented", or "Corrected"
moparList of parameters using mixopt
usegradShould the gradient be used when optimizing? Can make it faster.
Method maxqEI()
Find the multiple points that maximize the expected improvement. Currently only implements the constant liar method.
Usage
GauPro_kernel_model$maxqEI( npoints, method = "pred", lower = apply(self$X, 2, min), upper = apply(self$X, 2, max), n0 = 100, minimize = FALSE, eps = 0, EItype = "corrected", dontconvertback = FALSE, mopar = NULL )
Arguments
npointsNumber of points to add
methodMethod to use for setting the output value for the points chosen as a placeholder. Can be one of: "CL" for constant liar, which uses the best value seen so far; or "pred", which uses the predicted value, also called the Kriging Believer method in the literature.
lowerLower bounds to search within
upperUpper bounds to search within
n0Number of points to evaluate in initial stage
minimizeAre you trying to minimize the output?
epsExploration parameter
EItypeType of EI to calculate. One of "EI", "Augmented", or "Corrected"
dontconvertbackIf the data was given with a formula, should it be converted back to the original scale?
moparList of parameters using mixopt
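A sketch, assuming gp is a fitted model as in the Examples below:
# Propose a batch of 3 points, using the predicted value as the placeholder
gp$maxqEI(npoints = 3, method = "pred")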
Method KG()
Calculate Knowledge Gradient
Usage
GauPro_kernel_model$KG(x, minimize = FALSE, eps = 0, current_extreme = NULL)
Arguments
xPoint to calculate at
minimizeIs the objective to minimize?
epsExploration parameter
current_extremeUsed for recursive solving
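A sketch, assuming gp is a fitted one-dimensional model as in the Examples below:
# Knowledge Gradient at a single input value
gp$KG(x = 0.5)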
Method AugmentedEI()
Calculate Augmented EI
Usage
GauPro_kernel_model$AugmentedEI( x, minimize = FALSE, eps = 0, return_grad = F, ... )
Arguments
xVector to calculate EI at, or a matrix whose rows should each be evaluated
minimizeAre you trying to minimize the output?
epsExploration parameter
return_gradShould the gradient be returned?
...Additional args
fThe reference max; the user shouldn't change this.
Method CorrectedEI()
Calculate Corrected EI
Usage
GauPro_kernel_model$CorrectedEI( x, minimize = FALSE, eps = 0, return_grad = F, ... )
Arguments
xVector to calculate EI at, or a matrix whose rows should each be evaluated
minimizeAre you trying to minimize the output?
epsExploration parameter
return_gradShould the gradient be returned?
...Additional args
Method importance()
Feature importance
Usage
GauPro_kernel_model$importance(plot = TRUE, print_bars = TRUE)
Arguments
plotShould the plot be made?
print_barsShould the importances be printed as bars?
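A sketch, assuming gp is a fitted model with several inputs, such as the 7-input model in the Examples below:
# Permutation-style feature importance; skip the plot and just return the values
gp$importance(plot = FALSE)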
Method print()
Print this object
Usage
GauPro_kernel_model$print()
Method summary()
Summary
Usage
GauPro_kernel_model$summary(...)
Arguments
...Additional arguments
Method clone()
The objects of this class are cloneable with this method.
Usage
GauPro_kernel_model$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
References
https://scikit-learn.org/stable/modules/permutation_importance.html#id2
Examples
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel="gauss")
gp$predict(.454)
gp$plot1D()
gp$cool1Dplot()
n <- 200
d <- 7
x <- matrix(runif(n*d), ncol=d)
f <- function(x) {x[1]*x[2] + cos(x[3]) + x[4]^2}
y <- apply(x, 1, f)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel=Gaussian)
Corr Gauss GP using inherited optim
Description
Corr Gauss GP using inherited optim
Corr Gauss GP using inherited optim
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro -> GauPro_kernel_model_LOO
Public fields
tmodA second GP model for the t-values of leave-one-out predictions
use_LOOShould the leave-one-out error corrections be used?
Methods
Public methods
Inherited methods
GauPro::GauPro$AIC()GauPro::GauPro$AugmentedEI()GauPro::GauPro$CorrectedEI()GauPro::GauPro$EI()GauPro::GauPro$KG()GauPro::GauPro$cool1Dplot()GauPro::GauPro$deviance()GauPro::GauPro$deviance_fngr()GauPro::GauPro$deviance_grad()GauPro::GauPro$fit()GauPro::GauPro$get_optim_functions()GauPro::GauPro$grad()GauPro::GauPro$grad_dist()GauPro::GauPro$grad_norm()GauPro::GauPro$grad_norm2_dist()GauPro::GauPro$grad_norm2_mean()GauPro::GauPro$grad_norm2_sample()GauPro::GauPro$grad_sample()GauPro::GauPro$gradpredvar()GauPro::GauPro$hessian()GauPro::GauPro$importance()GauPro::GauPro$loglikelihood()GauPro::GauPro$maxEI()GauPro::GauPro$maxqEI()GauPro::GauPro$optim()GauPro::GauPro$optimRestart()GauPro::GauPro$optimize_fn()GauPro::GauPro$param_optim_lower()GauPro::GauPro$param_optim_start()GauPro::GauPro$param_optim_start0()GauPro::GauPro$param_optim_start_mat()GauPro::GauPro$param_optim_upper()GauPro::GauPro$plot()GauPro::GauPro$plot1D()GauPro::GauPro$plot2D()GauPro::GauPro$plotLOO()GauPro::GauPro$plot_track_optim()GauPro::GauPro$plotkernel()GauPro::GauPro$plotmarginal()GauPro::GauPro$plotmarginalrandom()GauPro::GauPro$pred()GauPro::GauPro$pred_LOO()GauPro::GauPro$pred_mean()GauPro::GauPro$pred_meanC()GauPro::GauPro$pred_var()GauPro::GauPro$pred_var_after_adding_points()GauPro::GauPro$pred_var_after_adding_points_sep()GauPro::GauPro$pred_var_reduction()GauPro::GauPro$pred_var_reductions()GauPro::GauPro$predict()GauPro::GauPro$print()GauPro::GauPro$sample()GauPro::GauPro$summary()GauPro::GauPro$update_K_and_estimates()GauPro::GauPro$update_corrparams()GauPro::GauPro$update_data()GauPro::GauPro$update_fast()GauPro::GauPro$update_nugget()GauPro::GauPro$update_params()
Method new()
Create a kernel model that uses a leave-one-out GP model to correct the standard error predictions.
Usage
GauPro_kernel_model_LOO$new(..., LOO_kernel, LOO_options = list())
Arguments
...Passed to super$initialize.
LOO_kernelThe kernel that should be used for the leave-one-out model. Shouldn't be too smooth.
LOO_optionsOptions passed to the leave-one-out model.
Method update()
Update the model with new data. Supply either (Xnew and Znew) or (Xall and Zall), not both.
Usage
GauPro_kernel_model_LOO$update( Xnew = NULL, Znew = NULL, Xall = NULL, Zall = NULL, restarts = 5, param_update = self$param.est, nug.update = self$nug.est, no_update = FALSE )
Arguments
XnewNew X values to add.
ZnewNew Z values to add.
XallAll X values to be used. Will replace existing X.
ZallAll Z values to be used. Will replace existing Z.
restartsNumber of optimization restarts.
param_updateAre the parameters being updated?
nug.updateIs the nugget being updated?
no_updateAre no parameters being updated?
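A sketch of adding new data to an already fitted model, assuming gp was created as in the Examples below; the new points here are made up for illustration:
xnew <- matrix(runif(3), ncol = 1)
znew <- sin(2*pi*xnew[,1]) + rnorm(3, 0, 1e-1)
gp$update(Xnew = xnew, Znew = znew)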
Method pred_one_matrix()
Predict for a matrix of points
Usage
GauPro_kernel_model_LOO$pred_one_matrix( XX, se.fit = F, covmat = F, return_df = FALSE, mean_dist = FALSE )
Arguments
XXpoints to predict at
se.fitShould standard error be returned?
covmatShould covariance matrix be returned?
return_dfWhen returning se.fit, should it be returned in a data frame?
mean_distShould mean distribution be returned?
Method clone()
The objects of this class are cloneable with this method.
Usage
GauPro_kernel_model_LOO$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_kernel_model_LOO$new(X=x, Z=y, kernel=Gaussian)
y <- x^2 * sin(2*pi*x) + rnorm(n,0,1e-3)
gp <- GauPro_kernel_model_LOO$new(X=x, Z=y, kernel=Matern52)
y <- exp(-1.4*x)*cos(7*pi*x/2)
gp <- GauPro_kernel_model_LOO$new(X=x, Z=y, kernel=Matern52)
Trend R6 class
Description
Trend R6 class
Trend R6 class
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Public fields
DNumber of input dimensions of data
Methods
Public methods
Method clone()
The objects of this class are cloneable with this method.
Usage
GauPro_trend$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
#k <- GauPro_trend$new()
Gaussian Kernel R6 class
Description
Gaussian Kernel R6 class
Gaussian Kernel R6 class
Usage
k_Gaussian(
beta,
s2 = 1,
D,
beta_lower = -8,
beta_upper = 6,
beta_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
isotropic = FALSE
)
Arguments
beta |
Initial beta value |
s2 |
Initial variance |
D |
Number of input dimensions of data |
beta_lower |
Lower bound for beta |
beta_upper |
Upper bound for beta |
beta_est |
Should beta be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster. |
isotropic |
If isotropic, a single beta/theta is used for all dimensions. If not (anisotropic), a separate beta/theta is used for each dimension. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro_kernel -> GauPro::GauPro_kernel_beta -> GauPro_kernel_Gaussian
Methods
Public methods
Inherited methods
GauPro::GauPro_kernel$plot()GauPro::GauPro_kernel_beta$initialize()GauPro::GauPro_kernel_beta$param_optim_lower()GauPro::GauPro_kernel_beta$param_optim_start()GauPro::GauPro_kernel_beta$param_optim_start0()GauPro::GauPro_kernel_beta$param_optim_upper()GauPro::GauPro_kernel_beta$s2_from_params()GauPro::GauPro_kernel_beta$set_params_from_optim()
Method k()
Calculate covariance between two points
Usage
Gaussian$k(x, y = NULL, beta = self$beta, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
Gaussian$kone(x, y, beta, theta, s2)
Arguments
xvector
yvector
betacorrelation parameters on log scale
thetacorrelation parameters on regular scale
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
Gaussian$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
Gaussian$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
Gaussian$dC_dx(XX, X, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method d2C_dx2()
Second derivative of covariance with respect to X
Usage
Gaussian$d2C_dx2(XX, X, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method d2C_dudv()
Second derivative of covariance with respect to X and XX each once.
Usage
Gaussian$d2C_dudv(XX, X, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method d2C_dudv_ueqvrows()
Second derivative of covariance with respect to X and XX when they equal the same value
Usage
Gaussian$d2C_dudv_ueqvrows(XX, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method print()
Print this object
Usage
Gaussian$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Gaussian$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Gaussian$new(beta=0)
plot(k1)
k1 <- Gaussian$new(beta=c(0,-1, 1))
plot(k1)
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel=Gaussian$new(1),
parallel=FALSE)
gp$predict(.454)
gp$plot1D()
gp$cool1Dplot()
Calculate the Gaussian deviance in C
Description
Calculate the Gaussian deviance in C
Usage
Gaussian_devianceC(theta, nug, X, Z)
Arguments
theta |
Theta vector |
nug |
Nugget |
X |
Matrix X |
Z |
Matrix Z |
Value
The calculated deviance
Examples
Gaussian_devianceC(c(1,1), 1e-8, matrix(c(1,0,0,1),2,2), matrix(c(1,0),2,1))
Calculate Hessian for a GP with Gaussian correlation
Description
Calculate Hessian for a GP with Gaussian correlation
Usage
Gaussian_hessianC(XX, X, Z, Kinv, mu_hat, theta)
Arguments
XX |
The vector at which to calculate the Hessian |
X |
The input points |
Z |
The output values |
Kinv |
The inverse of the correlation matrix |
mu_hat |
Estimate of mu |
theta |
Theta parameters for the correlation |
Value
Matrix, the Hessian at XX
Examples
set.seed(0)
n <- 40
x <- matrix(runif(n*2), ncol=2)
f1 <- function(a) {sin(2*pi*a[1]) + sin(6*pi*a[2])}
y <- apply(x,1,f1) + rnorm(n,0,.01)
gp <- GauPro(x,y, verbose=2, parallel=FALSE);gp$theta
gp$hessian(c(.2,.75), useC=TRUE) # Should be -38.3, -5.96, -5.96, -389.4 as 2x2 matrix
Gaussian hessian in C
Description
Gaussian hessian in C
Usage
Gaussian_hessianCC(XX, X, Z, Kinv, mu_hat, theta)
Arguments
XX |
point to find Hessian at |
X |
matrix of data points |
Z |
matrix of output |
Kinv |
inverse of correlation matrix |
mu_hat |
mean estimate |
theta |
correlation parameters |
Value
Hessian matrix
Calculate Hessian for a GP with Gaussian correlation
Description
Calculate Hessian for a GP with Gaussian correlation
Usage
Gaussian_hessianR(XX, X, Z, Kinv, mu_hat, theta)
Arguments
XX |
The vector at which to calculate the Hessian |
X |
The input points |
Z |
The output values |
Kinv |
The inverse of the correlation matrix |
mu_hat |
Estimate of mu |
theta |
Theta parameters for the correlation |
Value
Matrix, the Hessian at XX
Examples
set.seed(0)
n <- 40
x <- matrix(runif(n*2), ncol=2)
f1 <- function(a) {sin(2*pi*a[1]) + sin(6*pi*a[2])}
y <- apply(x,1,f1) + rnorm(n,0,.01)
gp <- GauPro(x,y, verbose=2, parallel=FALSE);gp$theta
gp$hessian(c(.2,.75), useC=FALSE) # Should be -38.3, -5.96, -5.96, -389.4 as 2x2 matrix
Gower factor Kernel R6 class
Description
Gower factor Kernel R6 class
Gower factor Kernel R6 class
Usage
k_GowerFactorKernel(
s2 = 1,
D,
nlevels,
xindex,
p_lower = 0,
p_upper = 0.9,
p_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
p,
useC = TRUE,
offdiagequal = 1 - 1e-06
)
Arguments
s2 |
Initial variance |
D |
Number of input dimensions of data |
nlevels |
Number of levels for the factor |
xindex |
Index of the factor (which column of X) |
p_lower |
Lower bound for p |
p_upper |
Upper bound for p |
p_est |
Should p be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
p |
Vector of correlations |
useC |
Should C code be used? Not implemented for FactorKernel yet. |
offdiagequal |
What should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget. |
Format
R6Class object.
Details
For a factor that has been converted to its numeric indices. Each factor needs a separate kernel.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_GowerFactorKernel
Public fields
pParameter for correlation
p_estShould p be estimated?
p_lowerLower bound of p
p_upperUpper bound of p
s2variance
s2_estIs s2 estimated?
logs2Log of s2
logs2_lowerLower bound of logs2
logs2_upperUpper bound of logs2
xindexIndex of the factor (which column of X)
nlevelsNumber of levels for the factor
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
GowerFactorKernel$new( s2 = 1, D, nlevels, xindex, p_lower = 0, p_upper = 0.9, p_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, p, useC = TRUE, offdiagequal = 1 - 1e-06 )
Arguments
s2Initial variance
DNumber of input dimensions of data
nlevelsNumber of levels for the factor
xindexIndex of the factor (which column of X)
p_lowerLower bound for p
p_upperUpper bound for p
p_estShould p be estimated?
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
pVector of correlations
useCShould C code be used? Not implemented for FactorKernel yet.
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Method k()
Calculate covariance between two points
Usage
GowerFactorKernel$k(x, y = NULL, p = self$p, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
pCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
GowerFactorKernel$kone( x, y, p, s2, isdiag = TRUE, offdiagequal = self$offdiagequal )
Arguments
xvector
yvector
pcorrelation parameters on regular scale
s2Variance parameter
isdiagIs this on the diagonal of the covariance?
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
GowerFactorKernel$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
GowerFactorKernel$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
GowerFactorKernel$dC_dx(XX, X, ...)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
...Additional args, not used
Method param_optim_start()
Starting point for parameters for optimization
Usage
GowerFactorKernel$param_optim_start( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
s2_estIs s2 being estimated?
alpha_estIs alpha being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
GowerFactorKernel$param_optim_start0( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
s2_estIs s2 being estimated?
alpha_estIs alpha being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
GowerFactorKernel$param_optim_lower(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_estIs p being estimated?
s2_estIs s2 being estimated?
alpha_estIs alpha being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
GowerFactorKernel$param_optim_upper(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_estIs p being estimated?
s2_estIs s2 being estimated?
alpha_estIs alpha being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
GowerFactorKernel$set_params_from_optim( optim_out, p_est = self$p_est, s2_est = self$s2_est )
Arguments
optim_outOutput from optimization
p_estIs p being estimated?
s2_estIs s2 being estimated?
alpha_estIs alpha being estimated?
Method s2_from_params()
Get s2 from params vector
Usage
GowerFactorKernel$s2_from_params(params, s2_est = self$s2_est)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method print()
Print this object
Usage
GowerFactorKernel$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
GowerFactorKernel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
kk <- GowerFactorKernel$new(D=1, nlevels=5, xindex=1, p=.2)
kmat <- outer(1:5, 1:5, Vectorize(kk$k))
kmat
kk$plot()
# 2D, Gaussian on 1D, index on 2nd dim
if (requireNamespace("dplyr", quietly=TRUE)) {
library(dplyr)
n <- 20
X <- cbind(matrix(runif(n,2,6), ncol=1),
matrix(sample(1:2, size=n, replace=TRUE), ncol=1))
X <- rbind(X, c(3.3,3))
n <- nrow(X)
Z <- X[,1] - (X[,2]-1.8)^2 + rnorm(n,0,.1)
tibble(X=X, Z) %>% arrange(X,Z)
k2a <- IgnoreIndsKernel$new(k=Gaussian$new(D=1), ignoreinds = 2)
k2b <- GowerFactorKernel$new(D=2, nlevels=3, xind=2)
k2 <- k2a * k2b
k2b$p_upper <- .65*k2b$p_upper
gp <- GauPro_kernel_model$new(X=X, Z=Z, kernel = k2, verbose = 5,
nug.min=1e-2, restarts=0)
gp$kernel$k1$kernel$beta
gp$kernel$k2$p
gp$kernel$k(x = gp$X)
tibble(X=X, Z=Z, pred=gp$predict(X)) %>% arrange(X, Z)
tibble(X=X[,2], Z) %>% group_by(X) %>% summarize(n=n(), mean(Z))
curve(gp$pred(cbind(matrix(x,ncol=1),1)),2,6, ylim=c(min(Z), max(Z)))
points(X[X[,2]==1,1], Z[X[,2]==1])
curve(gp$pred(cbind(matrix(x,ncol=1),2)), add=TRUE, col=2)
points(X[X[,2]==2,1], Z[X[,2]==2], col=2)
curve(gp$pred(cbind(matrix(x,ncol=1),3)), add=TRUE, col=3)
points(X[X[,2]==3,1], Z[X[,2]==3], col=3)
legend(legend=1:3, fill=1:3, x="topleft")
# See which points affect (5.5, 3) the most
data.frame(X, cov=gp$kernel$k(X, c(5.5,3))) %>% arrange(-cov)
plot(k2b)
}
Kernel R6 class
Description
Kernel R6 class
Kernel R6 class
Usage
k_IgnoreIndsKernel(k, ignoreinds, useC = TRUE)
Arguments
k |
Kernel to use on the non-ignored indices |
ignoreinds |
Indices of columns of X to ignore. |
useC |
Should C code be used? Not implemented for IgnoreInds. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_IgnoreInds
Public fields
DNumber of input dimensions of data
kernelKernel to use on indices that aren't ignored
ignoreindsIndices to ignore. For a matrix X, these are the columns to ignore. For example, when those dimensions will be given a different kernel, such as for factors.
Active bindings
s2_estIs s2 being estimated?
s2Value of s2 (variance)
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
IgnoreIndsKernel$new(k, ignoreinds, useC = TRUE)
Arguments
kKernel to use on the non-ignored indices
ignoreindsIndices of columns of X to ignore.
useCShould C code be used? Not implemented for IgnoreInds.
Method k()
Calculate covariance between two points
Usage
IgnoreIndsKernel$k(x, y = NULL, ...)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
...Passed to kernel
Method kone()
Find covariance of two points
Usage
IgnoreIndsKernel$kone(x, y, ...)
Arguments
xvector
yvector
...Passed to kernel
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
IgnoreIndsKernel$dC_dparams(params = NULL, X, ...)
Arguments
paramsKernel parameters
Xmatrix of points in rows
...Passed to kernel
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
IgnoreIndsKernel$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
IgnoreIndsKernel$dC_dx(XX, X, ...)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
...Additional arguments passed on to the kernel
Method param_optim_start()
Starting point for parameters for optimization
Usage
IgnoreIndsKernel$param_optim_start(...)
Arguments
...Passed to kernel
Method param_optim_start0()
Starting point for parameters for optimization
Usage
IgnoreIndsKernel$param_optim_start0(...)
Arguments
...Passed to kernel
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
IgnoreIndsKernel$param_optim_lower(...)
Arguments
...Passed to kernel
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
IgnoreIndsKernel$param_optim_upper(...)
Arguments
...Passed to kernel
Method set_params_from_optim()
Set parameters from optimization output
Usage
IgnoreIndsKernel$set_params_from_optim(...)
Arguments
...Passed to kernel
Method s2_from_params()
Get s2 from params vector
Usage
IgnoreIndsKernel$s2_from_params(...)
Arguments
...Passed to kernel
Method print()
Print this object
Usage
IgnoreIndsKernel$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
IgnoreIndsKernel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
kg <- Gaussian$new(D=3)
kig <- GauPro::IgnoreIndsKernel$new(k = Gaussian$new(D=3), ignoreinds = 2)
Xtmp <- as.matrix(expand.grid(1:2, 1:2, 1:2))
cbind(Xtmp, kig$k(Xtmp))
cbind(Xtmp, kg$k(Xtmp))
Latent Factor Kernel R6 class
Description
Latent Factor Kernel R6 class
Latent Factor Kernel R6 class
Usage
k_LatentFactorKernel(
s2 = 1,
D,
nlevels,
xindex,
latentdim,
p_lower = 0,
p_upper = 1,
p_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
offdiagequal = 1 - 1e-06
)
Arguments
s2 |
Initial variance |
D |
Number of input dimensions of data |
nlevels |
Number of levels for the factor |
xindex |
Index of X to use the kernel on |
latentdim |
Dimension of embedding space |
p_lower |
Lower bound for p |
p_upper |
Upper bound for p |
p_est |
Should p be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster. |
offdiagequal |
What should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget. |
Format
R6Class object.
Details
Used for a single factor variable (one dimension). Each level of the factor is mapped into a latent space; the distances between levels in that space determine their correlations.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_LatentFactorKernel
Public fields
pParameter for correlation
p_estShould p be estimated?
p_lowerLower bound of p
p_upperUpper bound of p
p_lengthlength of p
s2variance
s2_estIs s2 estimated?
logs2Log of s2
logs2_lowerLower bound of logs2
logs2_upperUpper bound of logs2
xindexIndex of the factor (which column of X)
nlevelsNumber of levels for the factor
latentdimDimension of embedding space
pf_to_p_logLogical vector used to convert pf to p
p_to_pf_indsVector of indexes used to convert p to pf
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
LatentFactorKernel$new( s2 = 1, D, nlevels, xindex, latentdim, p_lower = 0, p_upper = 1, p_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, useC = TRUE, offdiagequal = 1 - 1e-06 )
Arguments
s2Initial variance
DNumber of input dimensions of data
nlevelsNumber of levels for the factor
xindexIndex of X to use the kernel on
latentdimDimension of embedding space
p_lowerLower bound for p
p_upperUpper bound for p
p_estShould p be estimated?
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
useCShould C code be used? Much faster.
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Method k()
Calculate covariance between two points
Usage
LatentFactorKernel$k(x, y = NULL, p = self$p, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
pCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
LatentFactorKernel$kone( x, y, pf, s2, isdiag = TRUE, offdiagequal = self$offdiagequal )
Arguments
xvector
yvector
pfcorrelation parameters on regular scale, includes zeroes for first level.
s2Variance parameter
isdiagIs this on the diagonal of the covariance?
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
LatentFactorKernel$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
LatentFactorKernel$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
LatentFactorKernel$dC_dx(XX, X, ...)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
...Additional args, not used
Method param_optim_start()
Starting point for parameters for optimization
Usage
LatentFactorKernel$param_optim_start( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
LatentFactorKernel$param_optim_start0( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
LatentFactorKernel$param_optim_lower(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
LatentFactorKernel$param_optim_upper(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
LatentFactorKernel$set_params_from_optim( optim_out, p_est = self$p_est, s2_est = self$s2_est )
Arguments
optim_outOutput from optimization
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method p_to_pf()
Convert p (short parameter vector) to pf (long parameter vector with zeros).
Usage
LatentFactorKernel$p_to_pf(p)
Arguments
pParameter vector
Method s2_from_params()
Get s2 from params vector
Usage
LatentFactorKernel$s2_from_params(params, s2_est = self$s2_est)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method plotLatent()
Plot the points in the latent space
Usage
LatentFactorKernel$plotLatent()
Method print()
Print this object
Usage
LatentFactorKernel$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
LatentFactorKernel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
References
https://stackoverflow.com/questions/27086195/linear-index-upper-triangular-matrix
Examples
# Create a new kernel for a single factor with 5 levels,
# mapped into two latent dimensions.
kk <- LatentFactorKernel$new(D=1, nlevels=5, xindex=1, latentdim=2)
# Random initial parameter values
kk$p
# Plots to understand
kk$plotLatent()
kk$plot()
# 5 levels, 1/4 are similar and 2/3/5 are similar
n <- 30
x <- matrix(sample(1:5, n, TRUE))
y <- c(ifelse(x == 1 | x == 4, 4, -3) + rnorm(n,0,.1))
plot(c(x), y)
m5 <- GauPro_kernel_model$new(
X=x, Z=y,
kernel=LatentFactorKernel$new(D=1, nlevels = 5, xindex = 1, latentdim = 2))
m5$kernel$p
# We should see 1/4 and 2/3/5 in separate clusters
m5$kernel$plotLatent()
if (requireNamespace("dplyr", quietly=TRUE)) {
library(dplyr)
n <- 20
X <- cbind(matrix(runif(n,2,6), ncol=1),
matrix(sample(1:2, size=n, replace=TRUE), ncol=1))
X <- rbind(X, c(3.3,3), c(3.7,3))
n <- nrow(X)
Z <- X[,1] - (4-X[,2])^2 + rnorm(n,0,.1)
plot(X[,1], Z, col=X[,2])
tibble(X=X, Z) %>% arrange(X,Z)
k2a <- IgnoreIndsKernel$new(k=Gaussian$new(D=1), ignoreinds = 2)
k2b <- LatentFactorKernel$new(D=2, nlevels=3, xind=2, latentdim=2)
k2 <- k2a * k2b
k2b$p_upper <- .65*k2b$p_upper
gp <- GauPro_kernel_model$new(X=X, Z=Z, kernel = k2, verbose = 5,
nug.min=1e-2, restarts=1)
gp$kernel$k1$kernel$beta
gp$kernel$k2$p
gp$kernel$k(x = gp$X)
tibble(X=X, Z=Z, pred=gp$predict(X)) %>% arrange(X, Z)
tibble(X=X[,2], Z) %>% group_by(X) %>% summarize(n=n(), mean(Z))
curve(gp$pred(cbind(matrix(x,ncol=1),1)),2,6, ylim=c(min(Z), max(Z)))
points(X[X[,2]==1,1], Z[X[,2]==1])
curve(gp$pred(cbind(matrix(x,ncol=1),2)), add=TRUE, col=2)
points(X[X[,2]==2,1], Z[X[,2]==2], col=2)
curve(gp$pred(cbind(matrix(x,ncol=1),3)), add=TRUE, col=3)
points(X[X[,2]==3,1], Z[X[,2]==3], col=3)
legend(legend=1:3, fill=1:3, x="topleft")
# See which points affect (5.5, 3) the most
data.frame(X, cov=gp$kernel$k(X, c(5.5,3))) %>% arrange(-cov)
plot(k2b)
}
Matern 3/2 Kernel R6 class
Description
Matern 3/2 Kernel R6 class
Matern 3/2 Kernel R6 class
Usage
k_Matern32(
beta,
s2 = 1,
D,
beta_lower = -8,
beta_upper = 6,
beta_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
isotropic = FALSE
)
Arguments
beta |
Initial beta value |
s2 |
Initial variance |
D |
Number of input dimensions of data |
beta_lower |
Lower bound for beta |
beta_upper |
Upper bound for beta |
beta_est |
Should beta be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster. |
isotropic |
If isotropic, a single beta/theta is used for all dimensions. If not (anisotropic), a separate beta/theta is used for each dimension. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro_kernel -> GauPro::GauPro_kernel_beta -> GauPro_kernel_Matern32
Public fields
sqrt3Saved value of square root of 3
Methods
Public methods
Inherited methods
GauPro::GauPro_kernel$plot()GauPro::GauPro_kernel_beta$C_dC_dparams()GauPro::GauPro_kernel_beta$initialize()GauPro::GauPro_kernel_beta$param_optim_lower()GauPro::GauPro_kernel_beta$param_optim_start()GauPro::GauPro_kernel_beta$param_optim_start0()GauPro::GauPro_kernel_beta$param_optim_upper()GauPro::GauPro_kernel_beta$s2_from_params()GauPro::GauPro_kernel_beta$set_params_from_optim()
Method k()
Calculate covariance between two points
Usage
Matern32$k(x, y = NULL, beta = self$beta, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
Matern32$kone(x, y, beta, theta, s2)
Arguments
xvector
yvector
betacorrelation parameters on log scale
thetacorrelation parameters on regular scale
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
Matern32$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
Matern32$dC_dx(XX, X, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method print()
Print this object
Usage
Matern32$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Matern32$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Matern32$new(beta=0)
plot(k1)
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel=Matern32$new(1),
parallel=FALSE)
gp$predict(.454)
gp$plot1D()
gp$cool1Dplot()
Matern 5/2 Kernel R6 class
Description
Matern 5/2 Kernel R6 class
Matern 5/2 Kernel R6 class
Usage
k_Matern52(
beta,
s2 = 1,
D,
beta_lower = -8,
beta_upper = 6,
beta_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
isotropic = FALSE
)
Arguments
beta |
Initial beta value |
s2 |
Initial variance |
D |
Number of input dimensions of data |
beta_lower |
Lower bound for beta |
beta_upper |
Upper bound for beta |
beta_est |
Should beta be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster. |
isotropic |
If isotropic, a single beta/theta is used for all dimensions. If not (anisotropic), a separate beta/theta is used for each dimension. |
Format
R6Class object.
Details
k(x, y) = s2 * (1 + t1 + t1^2 / 3) * exp(-t1)
where
t1 = sqrt(5) * sqrt(sum(theta * (x-y)^2))
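A quick numerical check of the formula above (a sketch, not one of the package's documented examples). It assumes beta is on the log10 scale, so beta = 0 corresponds to theta = 1, and uses the default s2 = 1:
k <- Matern52$new(beta = 0, D = 1)   # assumes theta = 10^beta, so theta = 1 here
x <- 0.2; y <- 0.7
t1 <- sqrt(5) * sqrt(sum(1 * (x - y)^2))
(1 + t1 + t1^2/3) * exp(-t1)         # value from the formula
k$k(x, y)                            # should agree if the assumptions hold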
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro_kernel -> GauPro::GauPro_kernel_beta -> GauPro_kernel_Matern52
Public fields
sqrt5Saved value of square root of 5
Methods
Public methods
Inherited methods
GauPro::GauPro_kernel$plot()GauPro::GauPro_kernel_beta$C_dC_dparams()GauPro::GauPro_kernel_beta$initialize()GauPro::GauPro_kernel_beta$param_optim_lower()GauPro::GauPro_kernel_beta$param_optim_start()GauPro::GauPro_kernel_beta$param_optim_start0()GauPro::GauPro_kernel_beta$param_optim_upper()GauPro::GauPro_kernel_beta$s2_from_params()GauPro::GauPro_kernel_beta$set_params_from_optim()
Method k()
Calculate covariance between two points
Usage
Matern52$k(x, y = NULL, beta = self$beta, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
Matern52$kone(x, y, beta, theta, s2)
Arguments
xvector
yvector
betacorrelation parameters on log scale
thetacorrelation parameters on regular scale
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
Matern52$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
Matern52$dC_dx(XX, X, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method print()
Print this object
Usage
Matern52$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Matern52$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Matern52$new(beta=0)
plot(k1)
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel=Matern52$new(1),
parallel=FALSE)
gp$predict(.454)
gp$plot1D()
gp$cool1Dplot()
Ordered Factor Kernel R6 class
Description
Ordered Factor Kernel R6 class
Ordered Factor Kernel R6 class
Usage
k_OrderedFactorKernel(
s2 = 1,
D,
nlevels,
xindex,
p_lower = 1e-08,
p_upper = 5,
p_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
offdiagequal = 1 - 1e-06
)
Arguments
s2 |
Initial variance |
D |
Number of input dimensions of data |
nlevels |
Number of levels for the factor |
xindex |
Index of the factor (which column of X) |
p_lower |
Lower bound for p |
p_upper |
Upper bound for p |
p_est |
Should p be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Not implemented for FactorKernel yet. |
offdiagequal |
What should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget. |
Format
R6Class object.
Details
Use for factor inputs that have a natural ordering.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_OrderedFactorKernel
Public fields
pParameter for correlation
p_estShould p be estimated?
p_lowerLower bound of p
p_upperUpper bound of p
p_lengthlength of p
s2variance
s2_estIs s2 estimated?
logs2Log of s2
logs2_lowerLower bound of logs2
logs2_upperUpper bound of logs2
xindexIndex of the factor (which column of X)
nlevelsNumber of levels for the factor
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
OrderedFactorKernel$new( s2 = 1, D = NULL, nlevels, xindex, p_lower = 1e-08, p_upper = 5, p_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, useC = TRUE, offdiagequal = 1 - 1e-06 )
Arguments
s2Initial variance
DNumber of input dimensions of data
nlevelsNumber of levels for the factor
xindexIndex of X to use the kernel on
p_lowerLower bound for p
p_upperUpper bound for p
p_estShould p be estimated?
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
useCShould C code be used? Much faster.
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
pVector of distances in latent space
Method k()
Calculate covariance between two points
Usage
OrderedFactorKernel$k(x, y = NULL, p = self$p, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
pCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
OrderedFactorKernel$kone( x, y, p, s2, isdiag = TRUE, offdiagequal = self$offdiagequal )
Arguments
xvector
yvector
pcorrelation parameters on regular scale
s2Variance parameter
isdiagIs this on the diagonal of the covariance?
offdiagequalWhat should offdiagonal values be set to when the indices are the same? Use to avoid decomposition errors, similar to adding a nugget.
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
OrderedFactorKernel$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
OrderedFactorKernel$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
OrderedFactorKernel$dC_dx(XX, X, ...)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
...Additional args, not used
Method param_optim_start()
Starting point for parameters for optimization
Usage
OrderedFactorKernel$param_optim_start( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
OrderedFactorKernel$param_optim_start0( jitter = F, y, p_est = self$p_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
OrderedFactorKernel$param_optim_lower(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
OrderedFactorKernel$param_optim_upper(p_est = self$p_est, s2_est = self$s2_est)
Arguments
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
OrderedFactorKernel$set_params_from_optim( optim_out, p_est = self$p_est, s2_est = self$s2_est )
Arguments
optim_outOutput from optimization
p_estIs p being estimated?
s2_estIs s2 being estimated?
Method s2_from_params()
Get s2 from params vector
Usage
OrderedFactorKernel$s2_from_params(params, s2_est = self$s2_est)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method plotLatent()
Plot the points in the latent space
Usage
OrderedFactorKernel$plotLatent()
Method print()
Print this object
Usage
OrderedFactorKernel$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
OrderedFactorKernel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
References
https://stackoverflow.com/questions/27086195/linear-index-upper-triangular-matrix
Examples
kk <- OrderedFactorKernel$new(D=1, nlevels=5, xindex=1)
kk$p <- (1:10)/100
kmat <- outer(1:5, 1:5, Vectorize(kk$k))
kmat
if (requireNamespace("dplyr", quietly=TRUE)) {
library(dplyr)
n <- 20
X <- cbind(matrix(runif(n,2,6), ncol=1),
matrix(sample(1:2, size=n, replace=TRUE), ncol=1))
X <- rbind(X, c(3.3,3), c(3.7,3))
n <- nrow(X)
Z <- X[,1] - (4-X[,2])^2 + rnorm(n,0,.1)
plot(X[,1], Z, col=X[,2])
tibble(X=X, Z) %>% arrange(X,Z)
k2a <- IgnoreIndsKernel$new(k=Gaussian$new(D=1), ignoreinds = 2)
k2b <- OrderedFactorKernel$new(D=2, nlevels=3, xind=2)
k2 <- k2a * k2b
k2b$p_upper <- .65*k2b$p_upper
gp <- GauPro_kernel_model$new(X=X, Z=Z, kernel = k2, verbose = 5,
nug.min=1e-2, restarts=0)
gp$kernel$k1$kernel$beta
gp$kernel$k2$p
gp$kernel$k(x = gp$X)
tibble(X=X, Z=Z, pred=gp$predict(X)) %>% arrange(X, Z)
tibble(X=X[,2], Z) %>% group_by(X) %>% summarize(n=n(), mean(Z))
curve(gp$pred(cbind(matrix(x,ncol=1),1)),2,6, ylim=c(min(Z), max(Z)))
points(X[X[,2]==1,1], Z[X[,2]==1])
curve(gp$pred(cbind(matrix(x,ncol=1),2)), add=TRUE, col=2)
points(X[X[,2]==2,1], Z[X[,2]==2], col=2)
curve(gp$pred(cbind(matrix(x,ncol=1),3)), add=TRUE, col=3)
points(X[X[,2]==3,1], Z[X[,2]==3], col=3)
legend(legend=1:3, fill=1:3, x="topleft")
# See which points affect (5.5, 3) the most
data.frame(X, cov=gp$kernel$k(X, c(5.5,3))) %>% arrange(-cov)
plot(k2b)
}
Periodic Kernel R6 class
Description
Periodic Kernel R6 class
Periodic Kernel R6 class
Usage
k_Periodic(
p,
alpha = 1,
s2 = 1,
D,
p_lower = 0,
p_upper = 100,
p_est = TRUE,
alpha_lower = 0,
alpha_upper = 100,
alpha_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE
)
Arguments
p |
Periodic parameter |
alpha |
Periodic parameter |
s2 |
Initial variance |
D |
Number of input dimensions of data |
p_lower |
Lower bound for p |
p_upper |
Upper bound for p |
p_est |
Should p be estimated? |
alpha_lower |
Lower bound for alpha |
alpha_upper |
Upper bound for alpha |
alpha_est |
Should alpha be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster if implemented. |
Format
R6Class object.
Details
p is the period for each dimension; alpha is a single scaling parameter.
k(x, y) = s2 * exp(-sum(alpha * sin(p * (x-y))^2))
k(x, y) = \sigma^2 \exp(-\alpha \sum_i \sin(p_i (x_i - y_i))^2)
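A quick numerical check of the stated formula (a sketch; it assumes the implementation follows the formula as written, with the default s2 = 1):
k <- Periodic$new(p = 1, alpha = 1, D = 1)
x <- 0.2; y <- 0.7
exp(-1 * sin(1 * (x - y))^2)  # value from the formula
k$k(x, y)                     # should agree if the assumption holds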
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_Periodic
Public fields
pParameter for correlation
p_estShould p be estimated?
logpLog of p
logp_lowerLower bound of logp
logp_upperUpper bound of logp
p_lengthlength of p
alphaParameter for correlation
alpha_estShould alpha be estimated?
logalphaLog of alpha
logalpha_lowerLower bound of logalpha
logalpha_upperUpper bound of logalpha
s2variance
s2_estIs s2 estimated?
logs2Log of s2
logs2_lowerLower bound of logs2
logs2_upperUpper bound of logs2
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
Periodic$new( p, alpha = 1, s2 = 1, D, p_lower = 0, p_upper = 100, p_est = TRUE, alpha_lower = 0, alpha_upper = 100, alpha_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, useC = TRUE )
Arguments
pPeriodic parameter
alphaPeriodic parameter
s2Initial variance
DNumber of input dimensions of data
p_lowerLower bound for p
p_upperUpper bound for p
p_estShould p be estimated?
alpha_lowerLower bound for alpha
alpha_upperUpper bound for alpha
alpha_estShould alpha be estimated?
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
useCShould C code be used? Much faster if implemented.
Method k()
Calculate covariance between two points
Usage
Periodic$k( x, y = NULL, logp = self$logp, logalpha = self$logalpha, s2 = self$s2, params = NULL )
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
logpCorrelation parameters.
logalphaCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
Periodic$kone(x, y, logp, p, alpha, s2)
Arguments
xvector
yvector
logpcorrelation parameters on log scale
pcorrelation parameters on regular scale
alphacorrelation parameter
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
Periodic$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
Periodic$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
Periodic$dC_dx(XX, X, logp = self$logp, logalpha = self$logalpha, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
logplog of p
logalphalog of alpha
s2Variance parameter
Method param_optim_start()
Starting point for parameters for optimization
Usage
Periodic$param_optim_start( jitter = F, y, p_est = self$p_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
Periodic$param_optim_start0( jitter = F, y, p_est = self$p_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
p_estIs p being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
Periodic$param_optim_lower( p_est = self$p_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
p_estIs p being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
Periodic$param_optim_upper( p_est = self$p_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
p_estIs p being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
Periodic$set_params_from_optim( optim_out, p_est = self$p_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
optim_outOutput from optimization
p_estIs p being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method s2_from_params()
Get s2 from params vector
Usage
Periodic$s2_from_params(params, s2_est = self$s2_est)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method print()
Print this object
Usage
Periodic$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Periodic$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Periodic$new(p=1, alpha=1)
plot(k1)
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel=Periodic$new(D=1),
parallel=FALSE)
gp$predict(.454)
gp$plot1D()
gp$cool1Dplot()
plot(gp$kernel)
Power Exponential Kernel R6 class
Description
Power Exponential Kernel R6 class
Power Exponential Kernel R6 class
Usage
k_PowerExp(
alpha = 1.95,
beta,
s2 = 1,
D,
beta_lower = -8,
beta_upper = 6,
beta_est = TRUE,
alpha_lower = 1e-08,
alpha_upper = 2,
alpha_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE
)
Arguments
alpha |
Initial alpha value (the exponent). Between 0 and 2. |
beta |
Initial beta value |
s2 |
Initial variance |
D |
Number of input dimensions of data |
beta_lower |
Lower bound for beta |
beta_upper |
Upper bound for beta |
beta_est |
Should beta be estimated? |
alpha_lower |
Lower bound for alpha |
alpha_upper |
Upper bound for alpha |
alpha_est |
Should alpha be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster if implemented. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro_kernel -> GauPro::GauPro_kernel_beta -> GauPro_kernel_PowerExp
Public fields
alphaalpha value (the exponent). Between 0 and 2.
alpha_lowerLower bound for alpha
alpha_upperUpper bound for alpha
alpha_estShould alpha be estimated?
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
PowerExp$new( alpha = 1.95, beta, s2 = 1, D, beta_lower = -8, beta_upper = 6, beta_est = TRUE, alpha_lower = 1e-08, alpha_upper = 2, alpha_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, useC = TRUE )
Arguments
alphaInitial alpha value (the exponent). Between 0 and 2.
betaInitial beta value
s2Initial variance
DNumber of input dimensions of data
beta_lowerLower bound for beta
beta_upperUpper bound for beta
beta_estShould beta be estimated?
alpha_lowerLower bound for alpha
alpha_upperUpper bound for alpha
alpha_estShould alpha be estimated?
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
useCShould C code be used? Much faster if implemented.
Method k()
Calculate covariance between two points
Usage
PowerExp$k( x, y = NULL, beta = self$beta, alpha = self$alpha, s2 = self$s2, params = NULL )
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters.
alphaalpha value (the exponent). Between 0 and 2.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
PowerExp$kone(x, y, beta, theta, alpha, s2)
Arguments
xvector
yvector
betacorrelation parameters on log scale
thetacorrelation parameters on regular scale
alphaalpha value (the exponent). Between 0 and 2.
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
PowerExp$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
PowerExp$dC_dx( XX, X, theta, beta = self$beta, alpha = self$alpha, s2 = self$s2 )
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
alphaalpha value (the exponent). Between 0 and 2.
s2Variance parameter
Method param_optim_start()
Starting point for parameters for optimization
Usage
PowerExp$param_optim_start( jitter = F, y, beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
PowerExp$param_optim_start0( jitter = F, y, beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
PowerExp$param_optim_lower( beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
PowerExp$param_optim_upper( beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
PowerExp$set_params_from_optim( optim_out, beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
optim_outOutput from optimization
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method print()
Print this object
Usage
PowerExp$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
PowerExp$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- PowerExp$new(beta=0, alpha=0)
Rational Quadratic Kernel R6 class
Description
Rational Quadratic Kernel R6 class
Rational Quadratic Kernel R6 class
Usage
k_RatQuad(
beta,
alpha = 1,
s2 = 1,
D,
beta_lower = -8,
beta_upper = 6,
beta_est = TRUE,
alpha_lower = 1e-08,
alpha_upper = 100,
alpha_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE
)
Arguments
beta |
Initial beta value |
alpha |
Initial alpha value |
s2 |
Initial variance |
D |
Number of input dimensions of data |
beta_lower |
Lower bound for beta |
beta_upper |
Upper bound for beta |
beta_est |
Should beta be estimated? |
alpha_lower |
Lower bound for alpha |
alpha_upper |
Upper bound for alpha |
alpha_est |
Should alpha be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster if implemented. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro_kernel -> GauPro::GauPro_kernel_beta -> GauPro_kernel_RatQuad
Public fields
alphaalpha value (the exponent)
logalphaLog of alpha
logalpha_lowerLower bound for log of alpha
logalpha_upperUpper bound for log of alpha
alpha_estShould alpha be estimated?
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
RatQuad$new( beta, alpha = 1, s2 = 1, D, beta_lower = -8, beta_upper = 6, beta_est = TRUE, alpha_lower = 1e-08, alpha_upper = 100, alpha_est = TRUE, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, useC = TRUE )
Arguments
betaInitial beta value
alphaInitial alpha value
s2Initial variance
DNumber of input dimensions of data
beta_lowerLower bound for beta
beta_upperUpper bound for beta
beta_estShould beta be estimated?
alpha_lowerLower bound for alpha
alpha_upperUpper bound for alpha
alpha_estShould alpha be estimated?
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
useCShould C code be used? Much faster if implemented.
Method k()
Calculate covariance between two points
Usage
RatQuad$k( x, y = NULL, beta = self$beta, logalpha = self$logalpha, s2 = self$s2, params = NULL )
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters.
logalphaA correlation parameter
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
RatQuad$kone(x, y, beta, theta, alpha, s2)
Arguments
xvector
yvector
betacorrelation parameters on log scale
thetacorrelation parameters on regular scale
alphaA correlation parameter
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
RatQuad$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
RatQuad$dC_dx(XX, X, theta, beta = self$beta, alpha = self$alpha, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
alphaparameter
s2Variance parameter
Method param_optim_start()
Starting point for parameters for optimization
Usage
RatQuad$param_optim_start( jitter = F, y, beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
RatQuad$param_optim_start0( jitter = F, y, beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
jitterShould there be a jitter?
yOutput
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
RatQuad$param_optim_lower( beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
RatQuad$param_optim_upper( beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
RatQuad$set_params_from_optim( optim_out, beta_est = self$beta_est, alpha_est = self$alpha_est, s2_est = self$s2_est )
Arguments
optim_outOutput from optimization
beta_estIs beta being estimated?
alpha_estIs alpha being estimated?
s2_estIs s2 being estimated?
Method print()
Print this object
Usage
RatQuad$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
RatQuad$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- RatQuad$new(beta=0, alpha=0)
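As a brief sketch (arbitrary but in-bounds parameter values), the kernel can be evaluated at a pair of 1-D points like any other kernel:
k2 <- RatQuad$new(beta=0, alpha=1)  # arbitrary in-bounds parameters
k2$k(matrix(c(0.2, 0.8), ncol=1))   # 2x2 covariance matrix of the two points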
Triangle Kernel R6 class
Description
Triangle Kernel R6 class
Triangle Kernel R6 class
Usage
k_Triangle(
beta,
s2 = 1,
D,
beta_lower = -8,
beta_upper = 6,
beta_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
isotropic = FALSE
)
Arguments
beta |
Initial beta value |
s2 |
Initial variance |
D |
Number of input dimensions of data |
beta_lower |
Lower bound for beta |
beta_upper |
Upper bound for beta |
beta_est |
Should beta be estimated? |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Much faster. |
isotropic |
If isotropic then a single beta/theta is used for all dimensions. If not (anisotropic) then a separate beta/theta is used for each dimension. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super classes
GauPro::GauPro_kernel -> GauPro::GauPro_kernel_beta -> GauPro_kernel_Triangle
Methods
Public methods
Inherited methods
GauPro::GauPro_kernel$plot()GauPro::GauPro_kernel_beta$C_dC_dparams()GauPro::GauPro_kernel_beta$initialize()GauPro::GauPro_kernel_beta$param_optim_lower()GauPro::GauPro_kernel_beta$param_optim_start()GauPro::GauPro_kernel_beta$param_optim_start0()GauPro::GauPro_kernel_beta$param_optim_upper()GauPro::GauPro_kernel_beta$s2_from_params()GauPro::GauPro_kernel_beta$set_params_from_optim()
Method k()
Calculate covariance between two points
Usage
Triangle$k(x, y = NULL, beta = self$beta, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
betaCorrelation parameters.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
Triangle$kone(x, y, beta, theta, s2)
Arguments
xvector
yvector
betacorrelation parameters on log scale
thetacorrelation parameters on regular scale
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
Triangle$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
Triangle$dC_dx(XX, X, theta, beta = self$beta, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
thetaCorrelation parameters
betalog of theta
s2Variance parameter
Method print()
Print this object
Usage
Triangle$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Triangle$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Triangle$new(beta=0)
plot(k1)
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro_kernel_model$new(X=x, Z=y, kernel=Triangle$new(1),
parallel=FALSE)
gp$predict(.454)
gp$plot1D()
gp$cool1Dplot()
White noise Kernel R6 class
Description
White noise Kernel R6 class
Usage
k_White(
s2 = 1,
D,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE
)
Arguments
s2 |
Initial variance |
D |
Number of input dimensions of data |
s2_lower |
Lower bound for s2 |
s2_upper |
Upper bound for s2 |
s2_est |
Should s2 be estimated? |
useC |
Should C code be used? Not implemented for White. |
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_White
Public fields
s2variance
logs2Log of s2
logs2_lowerLower bound of logs2
logs2_upperUpper bound of logs2
s2_estShould s2 be estimated?
Methods
Public methods
Inherited methods
Method new()
Initialize kernel object
Usage
White$new( s2 = 1, D, s2_lower = 1e-08, s2_upper = 1e+08, s2_est = TRUE, useC = TRUE )
Arguments
s2Initial variance
DNumber of input dimensions of data
s2_lowerLower bound for s2
s2_upperUpper bound for s2
s2_estShould s2 be estimated?
useCShould C code be used? Not implemented for White.
Method k()
Calculate covariance between two points
Usage
White$k(x, y = NULL, s2 = self$s2, params = NULL)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
s2Variance parameter.
paramsparameters to use instead of beta and s2.
Method kone()
Find covariance of two points
Usage
White$kone(x, y, s2)
Arguments
xvector
yvector
s2Variance parameter
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
White$dC_dparams(params = NULL, X, C_nonug, C, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
CCovariance with nugget
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
White$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
White$dC_dx(XX, X, s2 = self$s2)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
s2Variance parameter
thetaCorrelation parameters
betalog of theta
Method param_optim_start()
Starting point for parameters for optimization
Usage
White$param_optim_start(jitter = F, y, s2_est = self$s2_est)
Arguments
jitterShould there be a jitter?
yOutput
s2_estIs s2 being estimated?
Method param_optim_start0()
Starting point for parameters for optimization
Usage
White$param_optim_start0(jitter = F, y, s2_est = self$s2_est)
Arguments
jitterShould there be a jitter?
yOutput
s2_estIs s2 being estimated?
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
White$param_optim_lower(s2_est = self$s2_est)
Arguments
s2_estIs s2 being estimated?
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
White$param_optim_upper(s2_est = self$s2_est)
Arguments
s2_estIs s2 being estimated?
Method set_params_from_optim()
Set parameters from optimization output
Usage
White$set_params_from_optim(optim_out, s2_est = self$s2_est)
Arguments
optim_outOutput from optimization
s2_estIs s2 being estimated?
Method s2_from_params()
Get s2 from params vector
Usage
White$s2_from_params(params, s2_est = self$s2_est)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method print()
Print this object
Usage
White$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
White$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- White$new(s2=1e-8)
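A white noise kernel is typically combined with another kernel rather than used on its own. A minimal sketch, assuming the kernel sum operator documented earlier and an arbitrary noise variance:
k2 <- Matern32$new(beta=0) + White$new(s2=1e-4)  # arbitrary noise variance
k2$k(matrix(c(0.2, 0.8), ncol=1))  # adds 1e-4 to the diagonal of the Matern 3/2 covariance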
Cube multiply over first dimension
Description
The result is transposed since that is what apply will give you.
Usage
arma_mult_cube_vec(cub, v)
Arguments
cub |
A cube (3D array) |
v |
A vector |
Value
Transpose of the multiplication over the first dimension of cub times v
Examples
d1 <- 10
d2 <- 1e2
d3 <- 2e2
aa <- array(data = rnorm(d1*d2*d3), dim = c(d1, d2, d3))
bb <- rnorm(d3)
t1 <- apply(aa, 1, function(U) {U%*%bb})
t2 <- arma_mult_cube_vec(aa, bb)
dd <- t1 - t2
summary(dd)
image(dd)
table(dd)
# microbenchmark::microbenchmark(apply(aa, 1, function(U) {U%*%bb}),
# arma_mult_cube_vec(aa, bb))
Correlation Cubic matrix in C (symmetric)
Description
Correlation Cubic matrix in C (symmetric)
Usage
corr_cubic_matrix_symC(x, theta)
Arguments
x |
Matrix x |
theta |
Theta vector |
Value
Correlation matrix
Examples
corr_cubic_matrix_symC(matrix(c(1,0,0,1),2,2),c(1,1))
Correlation Exponential matrix in C (symmetric)
Description
Correlation Exponential matrix in C (symmetric)
Usage
corr_exponential_matrix_symC(x, theta)
Arguments
x |
Matrix x |
theta |
Theta vector |
Value
Correlation matrix
Examples
corr_exponential_matrix_symC(matrix(c(1,0,0,1),2,2),c(1,1))
Correlation Gaussian matrix gradient in C using Armadillo
Description
Correlation Gaussian matrix gradient in C using Armadillo
Usage
corr_gauss_dCdX(XX, X, theta, s2)
Arguments
XX |
Matrix XX to get gradient for |
X |
Matrix X GP was fit to |
theta |
Theta vector |
s2 |
Variance parameter |
Value
3-dim array of correlation derivative
Examples
# corr_gauss_dCdX(matrix(c(1,0,0,1),2,2),c(1,1))
Gaussian correlation
Description
Gaussian correlation
Usage
corr_gauss_matrix(x, x2 = NULL, theta)
Arguments
x |
First data matrix |
x2 |
Second data matrix |
theta |
Correlation parameter |
Value
Correlation matrix
Examples
corr_gauss_matrix(matrix(1:10,ncol=1), matrix(6:15,ncol=1), 1e-2)
Correlation Gaussian matrix in C using Rcpp
Description
Correlation Gaussian matrix in C using Rcpp
Usage
corr_gauss_matrixC(x, y, theta)
Arguments
x |
Matrix x |
y |
Matrix y, must have same number of columns as x |
theta |
Theta vector |
Value
Correlation matrix
Examples
corr_gauss_matrixC(matrix(c(1,0,0,1),2,2), matrix(c(1,0,1,1),2,2), c(1,1))
Correlation Gaussian matrix in C using Armadillo
Description
About 20-25% faster than 'corr_gauss_matrixC'.
Usage
corr_gauss_matrix_armaC(x, y, theta, s2 = 1)
Arguments
x |
Matrix x |
y |
Matrix y, must have same number of columns as x |
theta |
Theta vector |
s2 |
Variance to multiply matrix by |
Value
Correlation matrix
Examples
corr_gauss_matrix_armaC(matrix(c(1,0,0,1),2,2),matrix(c(1,0,1,1),2,2),c(1,1))
x1 <- matrix(runif(100*6), nrow=100, ncol=6)
x2 <- matrix(runif(1e4*6), ncol=6)
th <- runif(6)
t1 <- corr_gauss_matrixC(x1, x2, th)
t2 <- corr_gauss_matrix_armaC(x1, x2, th)
identical(t1, t2)
# microbenchmark::microbenchmark(corr_gauss_matrixC(x1, x2, th),
# corr_gauss_matrix_armaC(x1, x2, th))
Correlation Gaussian matrix in C (symmetric)
Description
Correlation Gaussian matrix in C (symmetric)
Usage
corr_gauss_matrix_symC(x, theta)
Arguments
x |
Matrix x |
theta |
Theta vector |
Value
Correlation matrix
Examples
corr_gauss_matrix_symC(matrix(c(1,0,0,1),2,2),c(1,1))
Correlation Gaussian matrix in C using Armadillo (symmetric)
Description
About 30% faster than 'corr_gauss_matrix_symC'.
Usage
corr_gauss_matrix_sym_armaC(x, theta)
Arguments
x |
Matrix x |
theta |
Theta vector |
Value
Correlation matrix
Examples
corr_gauss_matrix_sym_armaC(matrix(c(1,0,0,1),2,2),c(1,1))
x3 <- matrix(runif(1e3*6), ncol=6)
th <- runif(6)
t3 <- corr_gauss_matrix_symC(x3, th)
t4 <- corr_gauss_matrix_sym_armaC(x3, th)
identical(t3, t4)
# microbenchmark::microbenchmark(corr_gauss_matrix_symC(x3, th),
# corr_gauss_matrix_sym_armaC(x3, th), times=50)
Correlation Latent factor matrix in C (symmetric)
Description
Correlation Latent factor matrix in C (symmetric)
Usage
corr_latentfactor_matrix_symC(x, theta, xindex, latentdim, offdiagequal)
Arguments
x |
Matrix x |
theta |
Theta vector |
xindex |
Index to use |
latentdim |
Number of latent dimensions |
offdiagequal |
Value assigned to off-diagonal entries whose factor values match. |
Value
Correlation matrix
Examples
corr_latentfactor_matrix_symC(matrix(c(1,.5, 2,1.6, 1,0),ncol=2,byrow=TRUE),
c(1.5,1.8), 1, 1, 1-1e-6)
corr_latentfactor_matrix_symC(matrix(c(0,0,0,1,0,0,0,2,0,0,0,3,0,0,0,4),
ncol=4, byrow=TRUE),
c(0.101, -0.714, 0.114, -0.755, 0.117, -0.76, 0.116, -0.752),
4, 2, 1-1e-6) * 6.85
Correlation Latent factor matrix in C (symmetric)
Description
Correlation Latent factor matrix in C (symmetric)
Usage
corr_latentfactor_matrixmatrixC(x, y, theta, xindex, latentdim, offdiagequal)
Arguments
x |
Matrix x |
y |
Matrix y |
theta |
Theta vector |
xindex |
Index to use |
latentdim |
Number of latent dimensions |
offdiagequal |
Value assigned to off-diagonal entries whose factor values match. |
Value
Correlation matrix
Examples
corr_latentfactor_matrixmatrixC(matrix(c(1,.5, 2,1.6, 1,0),ncol=2,byrow=TRUE),
matrix(c(2,1.6, 1,0),ncol=2,byrow=TRUE),
c(1.5,1.8), 1, 1, 1-1e-6)
corr_latentfactor_matrixmatrixC(matrix(c(0,0,0,1,0,0,0,2,0,0,0,3,0,0,0,4),
ncol=4, byrow=TRUE),
matrix(c(0,0,0,2,0,0,0,4,0,0,0,1),
ncol=4, byrow=TRUE),
c(0.101, -0.714, 0.114, -0.755, 0.117, -0.76, 0.116, -0.752),
4, 2, 1-1e-6) * 6.85
Correlation Matern 3/2 matrix in C (symmetric)
Description
Correlation Matern 3/2 matrix in C (symmetric)
Usage
corr_matern32_matrix_symC(x, theta)
Arguments
x |
Matrix x |
theta |
Theta vector |
Value
Correlation matrix
Examples
corr_matern32_matrix_symC(matrix(c(1,0,0,1),2,2),c(1,1))
Correlation Matern 5/2 matrix in C (symmetric)
Description
Correlation Matern 5/2 matrix in C (symmetric)
Usage
corr_matern52_matrix_symC(x, theta)
Arguments
x |
Matrix x |
theta |
Theta vector |
Value
Correlation matrix
Examples
corr_matern52_matrix_symC(matrix(c(1,0,0,1),2,2),c(1,1))
Correlation ordered factor matrix in C (symmetric)
Description
Correlation ordered factor matrix in C (symmetric)
Usage
corr_orderedfactor_matrix_symC(x, theta, xindex, offdiagequal)
Arguments
x |
Matrix x |
theta |
Theta vector |
xindex |
Index to use |
offdiagequal |
Value assigned to off-diagonal entries whose factor values match. |
Value
Correlation matrix
Examples
corr_orderedfactor_matrix_symC(matrix(c(1,.5, 2,1.6, 1,0),ncol=2,byrow=TRUE),
c(1.5,1.8), 1, 1-1e-6)
corr_orderedfactor_matrix_symC(matrix(c(0,0,0,1,0,0,0,2,0,0,0,3,0,0,0,4),
ncol=4, byrow=TRUE),
c(0.101, -0.714, 0.114, -0.755, 0.117, -0.76, 0.116, -0.752),
4, 1-1e-6) * 6.85
Correlation ordered factor matrix in C (symmetric)
Description
Correlation ordered factor matrix in C (symmetric)
Usage
corr_orderedfactor_matrixmatrixC(x, y, theta, xindex, offdiagequal)
Arguments
x |
Matrix x |
y |
Matrix y |
theta |
Theta vector |
xindex |
Index to use |
offdiagequal |
Value assigned to off-diagonal entries whose factor values match. |
Value
Correlation matrix
Examples
corr_orderedfactor_matrixmatrixC(matrix(c(1,.5, 2,1.6, 1,0),ncol=2,byrow=TRUE),
matrix(c(2,1.6, 1,0),ncol=2,byrow=TRUE),
c(1.5,1.8), 1, 1-1e-6)
corr_orderedfactor_matrixmatrixC(matrix(c(0,0,0,1,0,0,0,2,0,0,0,3,0,0,0,4),
ncol=4, byrow=TRUE),
matrix(c(0,0,0,2,0,0,0,4,0,0,0,1),
ncol=4, byrow=TRUE),
c(0.101, -0.714, 0.114, -0.755, 0.117, -0.76, 0.116, -0.752),
4, 1-1e-6) * 6.85
Gaussian process regression model
Description
Fits a Gaussian process regression model to data.
An R6 object is returned with many methods.
'gpkm()' is an alias for 'GauPro_kernel_model$new()'. For full documentation, see documentation for 'GauPro_kernel_model'.
Standard methods that work include 'plot()', 'summary()', and 'predict()'.
Usage
gpkm(
X,
Z,
kernel,
trend,
verbose = 0,
useC = TRUE,
useGrad = TRUE,
parallel = FALSE,
parallel_cores = "detect",
nug = 1e-06,
nug.min = 1e-08,
nug.max = 100,
nug.est = TRUE,
param.est = TRUE,
restarts = 0,
normalize = FALSE,
optimizer = "L-BFGS-B",
track_optim = FALSE,
formula,
data,
...
)
Arguments
X |
Matrix whose rows are the input points |
Z |
Output points corresponding to X |
kernel |
The kernel to use. E.g., Gaussian$new(). |
trend |
Trend to use. E.g., trend_constant$new(). |
verbose |
Amount of stuff to print. 0 is little, 2 is a lot. |
useC |
Should C code be used when possible? Should be faster. |
useGrad |
Should the gradient be used? |
parallel |
Should code be run in parallel? Makes optimization faster but uses more computer resources. |
parallel_cores |
When using parallel, how many cores should be used? |
nug |
Value for the nugget. The starting value if estimating it. |
nug.min |
Minimum allowable value for the nugget. |
nug.max |
Maximum allowable value for the nugget. |
nug.est |
Should the nugget be estimated? |
param.est |
Should the kernel parameters be estimated? |
restarts |
How many optimization restarts should be used when estimating parameters? |
normalize |
Should the data be normalized? |
optimizer |
What algorithm should be used to optimize the parameters? |
track_optim |
Should it track the parameters evaluated while optimizing? |
formula |
Formula for the data if giving in a data frame. |
data |
Data frame of data. Use in conjunction with formula. |
... |
Not used |
Details
The default kernel is a Matern 5/2 kernel, but factor/character inputs will be given factor kernels.
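A minimal sketch of a fit with 'gpkm()' on simulated data (the kernel is left at its default; the variable names are illustrative only):
n <- 30
x <- matrix(runif(2*n), ncol=2)
y <- sin(2*pi*x[,1]) + x[,2]^2 + rnorm(n, 0, 1e-2)
gp <- gpkm(X=x, Z=y, parallel=FALSE)
summary(gp)
predict(gp, matrix(c(0.4, 0.6), ncol=2))
# The formula interface is an alternative, e.g. gpkm(y ~ x1 + x2, data=df) for a data frame df.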
Calculate gradfunc in optimization to speed it up. The dC_dparams array must be aperm'd first. Not exported; only useful inside package functions.
Description
Calculate gradfunc in optimization to speed it up. The dC_dparams array must be aperm'd first. Not exported; only useful inside package functions.
Usage
gradfuncarray(dC_dparams, Cinv, Cinv_yminusmu)
Arguments
dC_dparams |
Derivative matrix for covariance function wrt kernel parameters |
Cinv |
Inverse of covariance matrix |
Cinv_yminusmu |
Vector that is the inverse of C times y minus the mean. |
Value
Vector, one value for each parameter
Examples
gradfuncarray(array(dim=c(2,4,4), data=rnorm(32)), matrix(rnorm(16),4,4), rnorm(4))
Calculate gradfunc in optimization to speed it up. The dC_dparams array must be aperm'd first. Not exported; only useful inside package functions.
Description
Calculate gradfunc in optimization to speed it up. The dC_dparams array must be aperm'd first. Not exported; only useful inside package functions.
Usage
gradfuncarrayR(dC_dparams, Cinv, Cinv_yminusmu)
Arguments
dC_dparams |
Derivative matrix for covariance function wrt kernel parameters |
Cinv |
Inverse of covariance matrix |
Cinv_yminusmu |
Vector that is the inverse of C times y minus the mean. |
Value
Vector, one value for each parameter
Examples
a1 <- array(dim=c(2,4,4), data=rnorm(32))
a2 <- matrix(rnorm(16),4,4)
a3 <- rnorm(4)
#gradfuncarray(a1, a2, a3)
#gradfuncarrayR(a1, a2, a3)
Derivative of cubic kernel covariance matrix in C
Description
Derivative of cubic kernel covariance matrix in C
Usage
kernel_cubic_dC(x, theta, C_nonug, s2_est, beta_est, lenparams_D, s2_nug, s2)
Arguments
x |
Matrix x |
theta |
Theta vector |
C_nonug |
cov mat without nugget |
s2_est |
whether s2 is being estimated |
beta_est |
Whether theta/beta is being estimated |
lenparams_D |
Number of parameters the derivative is being calculated for |
s2_nug |
s2 times the nug |
s2 |
s2 |
Value
Correlation matrix
Derivative of Exponential kernel covariance matrix in C
Description
Derivative of Exponential kernel covariance matrix in C
Usage
kernel_exponential_dC(
x,
theta,
C_nonug,
s2_est,
beta_est,
lenparams_D,
s2_nug,
s2
)
Arguments
x |
Matrix x |
theta |
Theta vector |
C_nonug |
cov mat without nugget |
s2_est |
whether s2 is being estimated |
beta_est |
Whether theta/beta is being estimated |
lenparams_D |
Number of parameters the derivative is being calculated for |
s2_nug |
s2 times the nug |
s2 |
s2 parameter |
Value
Correlation matrix
Derivative of Gaussian kernel covariance matrix in C
Description
Derivative of Gaussian kernel covariance matrix in C
Usage
kernel_gauss_dC(x, theta, C_nonug, s2_est, beta_est, lenparams_D, s2_nug)
Arguments
x |
Matrix x |
theta |
Theta vector |
C_nonug |
cov mat without nugget |
s2_est |
whether s2 is being estimated |
beta_est |
Whether theta/beta is being estimated |
lenparams_D |
Number of parameters the derivative is being calculated for |
s2_nug |
s2 times the nug |
Value
Correlation matrix
Derivative of covariance matrix of X with respect to kernel parameters for the Latent Factor Kernel
Description
Derivative of covariance matrix of X with respect to kernel parameters for the Latent Factor Kernel
Usage
kernel_latentFactor_dC(
x,
pf,
C_nonug,
s2_est,
p_est,
lenparams_D,
s2_nug,
latentdim,
xindex,
nlevels,
s2
)
Arguments
x |
Matrix x |
pf |
pf vector |
C_nonug |
cov mat without nugget |
s2_est |
whether s2 is being estimated |
p_est |
Whether theta/beta is being estimated |
lenparams_D |
Number of parameters the derivative is being calculated for |
s2_nug |
s2 times the nug |
latentdim |
Number of latent dimensions |
xindex |
Which column of x is the indexing variable |
nlevels |
Number of levels |
s2 |
Value of s2 |
Value
Correlation matrix
Derivative of Matern 3/2 kernel covariance matrix in C
Description
Derivative of Matern 3/2 kernel covariance matrix in C
Usage
kernel_matern32_dC(x, theta, C_nonug, s2_est, beta_est, lenparams_D, s2_nug)
Arguments
x |
Matrix x |
theta |
Theta vector |
C_nonug |
cov mat without nugget |
s2_est |
whether s2 is being estimated |
beta_est |
Whether theta/beta is being estimated |
lenparams_D |
Number of parameters the derivative is being calculated for |
s2_nug |
s2 times the nug |
Value
Correlation matrix
Derivative of Matern 5/2 kernel covariance matrix in C
Description
Derivative of Matern 5/2 kernel covariance matrix in C
Usage
kernel_matern52_dC(x, theta, C_nonug, s2_est, beta_est, lenparams_D, s2_nug)
Arguments
x |
Matrix x |
theta |
Theta vector |
C_nonug |
cov mat without nugget |
s2_est |
whether s2 is being estimated |
beta_est |
Whether theta/beta is being estimated |
lenparams_D |
Number of parameters the derivative is being calculated for |
s2_nug |
s2 times the nug |
Value
Correlation matrix
Derivative of covariance matrix of X with respect to kernel parameters for the Ordered Factor Kernel
Description
Derivative of covariance matrix of X with respect to kernel parameters for the Ordered Factor Kernel
Usage
kernel_orderedFactor_dC(
x,
pf,
C_nonug,
s2_est,
p_est,
lenparams_D,
s2_nug,
xindex,
nlevels,
s2
)
Arguments
x |
Matrix x |
pf |
pf vector |
C_nonug |
cov mat without nugget |
s2_est |
whether s2 is being estimated |
p_est |
Whether theta/beta is being estimated |
lenparams_D |
Number of parameters the derivative is being calculated for |
s2_nug |
s2 times the nug |
xindex |
Which column of x is the indexing variable |
nlevels |
Number of levels |
s2 |
Value of s2 |
Value
Correlation matrix
Kernel product R6 class
Description
Kernel product R6 class
Kernel product R6 class
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_product
Public fields
k1kernel 1
k2kernel 2
s2Variance
Active bindings
k1plparam length of kernel 1
k2plparam length of kernel 2
s2_estIs s2 being estimated?
Methods
Public methods
Inherited methods
Method new()
Initialize kernel
Usage
kernel_product$new(k1, k2, useC = TRUE)
Arguments
k1Kernel 1
k2Kernel 2
useCShould C code be used? Not applicable for kernel product.
Method k()
Calculate covariance between two points
Usage
kernel_product$k(x, y = NULL, params, ...)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
paramsparameters to use instead of beta and s2.
...Not used
Method param_optim_start()
Starting point for parameters for optimization
Usage
kernel_product$param_optim_start(jitter = F, y)
Arguments
jitterShould there be a jitter?
yOutput
Method param_optim_start0()
Starting point for parameters for optimization
Usage
kernel_product$param_optim_start0(jitter = F, y)
Arguments
jitterShould there be a jitter?
yOutput
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
kernel_product$param_optim_lower()
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
kernel_product$param_optim_upper()
Method set_params_from_optim()
Set parameters from optimization output
Usage
kernel_product$set_params_from_optim(optim_out)
Arguments
optim_outOutput from optimization
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
kernel_product$dC_dparams(params = NULL, C, X, C_nonug, nug)
Arguments
paramsKernel parameters
CCovariance with nugget
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
kernel_product$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
kernel_product$dC_dx(XX, X)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
Method s2_from_params()
Get s2 from params vector
Usage
kernel_product$s2_from_params(params, s2_est = self$s2_est)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method print()
Print this object
Usage
kernel_product$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
kernel_product$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Exponential$new(beta=1)
k2 <- Matern32$new(beta=2)
k <- k1 * k2
k$k(matrix(c(2,1), ncol=1))
Kernel sum R6 class
Description
Kernel sum R6 class
Kernel sum R6 class
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_kernel -> GauPro_kernel_sum
Public fields
k1kernel 1
k2kernel 2
k1_param_lengthparam length of kernel 1
k2_param_lengthparam length of kernel 2
k1plparam length of kernel 1
k2plparam length of kernel 2
s2variance
s2_estIs s2 being estimated?
Methods
Public methods
Inherited methods
Method new()
Initialize kernel
Usage
kernel_sum$new(k1, k2, useC = TRUE)
Arguments
k1Kernel 1
k2Kernel 2
useCShould C code be used? Not applicable for kernel sum.
Method k()
Calculate covariance between two points
Usage
kernel_sum$k(x, y = NULL, params, ...)
Arguments
xvector.
yvector, optional. If excluded, find correlation of x with itself.
paramsparameters to use instead of beta and s2.
...Not used
Method param_optim_start()
Starting point for parameters for optimization
Usage
kernel_sum$param_optim_start(jitter = F, y)
Arguments
jitterShould there be a jitter?
yOutput
Method param_optim_start0()
Starting point for parameters for optimization
Usage
kernel_sum$param_optim_start0(jitter = F, y)
Arguments
jitterShould there be a jitter?
yOutput
Method param_optim_lower()
Lower bounds of parameters for optimization
Usage
kernel_sum$param_optim_lower()
Method param_optim_upper()
Upper bounds of parameters for optimization
Usage
kernel_sum$param_optim_upper()
Method set_params_from_optim()
Set parameters from optimization output
Usage
kernel_sum$set_params_from_optim(optim_out)
Arguments
optim_outOutput from optimization
Method dC_dparams()
Derivative of covariance with respect to parameters
Usage
kernel_sum$dC_dparams(params = NULL, C, X, C_nonug, nug)
Arguments
paramsKernel parameters
CCovariance with nugget
Xmatrix of points in rows
C_nonugCovariance without nugget added to diagonal
nugValue of nugget
Method C_dC_dparams()
Calculate covariance matrix and its derivative with respect to parameters
Usage
kernel_sum$C_dC_dparams(params = NULL, X, nug)
Arguments
paramsKernel parameters
Xmatrix of points in rows
nugValue of nugget
Method dC_dx()
Derivative of covariance with respect to X
Usage
kernel_sum$dC_dx(XX, X)
Arguments
XXmatrix of points
Xmatrix of points to take derivative with respect to
Method s2_from_params()
Get s2 from params vector
Usage
kernel_sum$s2_from_params(params)
Arguments
paramsparameter vector
s2_estIs s2 being estimated?
Method print()
Print this object
Usage
kernel_sum$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
kernel_sum$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
k1 <- Exponential$new(beta=1)
k2 <- Matern32$new(beta=2)
k <- k1 + k2
k$k(matrix(c(2,1), ncol=1))
Predict for class GauPro
Description
Predict for class GauPro
Usage
## S3 method for class 'GauPro'
predict(object, XX, se.fit = F, covmat = F, split_speed = T, ...)
Arguments
object |
Object of class GauPro |
XX |
new points to predict |
se.fit |
Should standard error be returned (and variance)? |
covmat |
Should the covariance matrix be returned? |
split_speed |
Should the calculation be split up to speed it up? |
... |
Additional parameters |
Value
Prediction from object at XX
Examples
n <- 12
x <- matrix(seq(0,1,length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n,0,1e-1)
gp <- GauPro(X=x, Z=y, parallel=FALSE)
predict(gp, .448)
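If the standard error is also wanted, the 'se.fit' argument documented above can be set; a brief sketch continuing the example:
predict(gp, .448, se.fit=TRUE)  # returns the prediction along with its variance and standard error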
Print summary.GauPro
Description
Print summary.GauPro
Usage
## S3 method for class 'summary.GauPro'
print(x, ...)
Arguments
x |
summary.GauPro object |
... |
Additional args |
Value
Prints the summary and returns the object invisibly
Find the square root of a matrix
Description
Same thing as 'expm::sqrtm', but faster.
Usage
sqrt_matrix(mat, symmetric)
Arguments
mat |
Matrix to find square root matrix of |
symmetric |
Is it symmetric? Passed to eigen. |
Value
Square root of mat
Examples
mat <- matrix(c(1,.1,.1,1), 2, 2)
smat <- sqrt_matrix(mat=mat, symmetric=TRUE)
smat %*% smat
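As a quick sanity check (not part of the original example), the reconstruction error should be near zero:
max(abs(smat %*% smat - mat))  # near zero, up to numerical error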
Summary for GauPro object
Description
Summary for GauPro object
Usage
## S3 method for class 'GauPro'
summary(object, ...)
Arguments
object |
GauPro R6 object |
... |
Additional arguments passed to summary |
Value
Summary
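A minimal sketch, reusing the kind of model fit shown in the 'predict' entry above:
n <- 12
x <- matrix(seq(0, 1, length.out = n), ncol=1)
y <- sin(2*pi*x) + rnorm(n, 0, 1e-1)
gp <- GauPro(X=x, Z=y, parallel=FALSE)
summary(gp)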
Trend R6 class
Description
Trend R6 class
Trend R6 class
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_trend -> GauPro_trend_0
Public fields
mTrend parameters
m_lowerm lower bound
m_upperm upper bound
m_estShould m be estimated?
Methods
Public methods
Method new()
Initialize trend object
Usage
trend_0$new(m = 0, m_lower = 0, m_upper = 0, m_est = FALSE, D = NA)
Arguments
mtrend initial parameters
m_lowertrend lower bounds
m_uppertrend upper bounds
m_estLogical of whether each param should be estimated
DNumber of input dimensions of data
Method Z()
Get trend value for given matrix X
Usage
trend_0$Z(X, m = self$m, params = NULL)
Arguments
Xmatrix of points
mtrend parameters
paramstrend parameters
Method dZ_dparams()
Derivative of trend with respect to trend parameters
Usage
trend_0$dZ_dparams(X, m = self$m_est, params = NULL)
Arguments
Xmatrix of points
mtrend values
paramsoverrides m
Method dZ_dx()
Derivative of trend with respect to X
Usage
trend_0$dZ_dx(X, m = self$m, params = NULL)
Arguments
Xmatrix of points
mtrend values
paramsoverrides m
Method param_optim_start()
Get parameter initial point for optimization
Usage
trend_0$param_optim_start(jitter, trend_est)
Arguments
jitterNot used
trend_estIf the trend should be estimated.
Method param_optim_start0()
Get parameter initial point for optimization
Usage
trend_0$param_optim_start0(jitter, trend_est)
Arguments
jitterNot used
trend_estIf the trend should be estimated.
Method param_optim_lower()
Get parameter lower bounds for optimization
Usage
trend_0$param_optim_lower(jitter, trend_est)
Arguments
jitterNot used
trend_estIf the trend should be estimated.
Method param_optim_upper()
Get parameter upper bounds for optimization
Usage
trend_0$param_optim_upper(jitter, trend_est)
Arguments
jitterNot used
trend_estIf the trend should be estimated.
Method set_params_from_optim()
Set parameters after optimization
Usage
trend_0$set_params_from_optim(optim_out)
Arguments
optim_outOutput from optim
Method clone()
The objects of this class are cloneable with this method.
Usage
trend_0$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
t1 <- trend_0$new()
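The zero trend returns 0 for every row of X; a brief sketch using the 'Z()' method documented above:
t1$Z(matrix(runif(6), ncol=2))  # three 2-D points, all trend values are 0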
Trend R6 class
Description
Trend R6 class
Trend R6 class
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_trend -> GauPro_trend_LM
Public fields
mTrend parameters
m_lowerm lower bound
m_upperm upper bound
m_estShould m be estimated?
btrend parameter
b_lowertrend lower bounds
b_uppertrend upper bounds
b_estShould b be estimated?
Methods
Public methods
Method new()
Initialize trend object
Usage
trend_LM$new( D, m = rep(0, D), m_lower = rep(-Inf, D), m_upper = rep(Inf, D), m_est = rep(TRUE, D), b = 0, b_lower = -Inf, b_upper = Inf, b_est = TRUE )
Arguments
DNumber of input dimensions of data
mtrend initial parameters
m_lowertrend lower bounds
m_uppertrend upper bounds
m_estLogical of whether each param should be estimated
btrend parameter
b_lowertrend lower bounds
b_uppertrend upper bounds
b_estShould b be estimated?
Method Z()
Get trend value for given matrix X
Usage
trend_LM$Z(X, m = self$m, b = self$b, params = NULL)
Arguments
Xmatrix of points
mtrend parameters
btrend parameters (slopes)
paramstrend parameters
Method dZ_dparams()
Derivative of trend with respect to trend parameters
Usage
trend_LM$dZ_dparams(X, m = self$m_est, b = self$b_est, params = NULL)
Arguments
Xmatrix of points
mtrend values
btrend intercept
paramsoverrides m
Method dZ_dx()
Derivative of trend with respect to X
Usage
trend_LM$dZ_dx(X, m = self$m, params = NULL)
Arguments
Xmatrix of points
mtrend values
paramsoverrides m
Method param_optim_start()
Get parameter initial point for optimization
Usage
trend_LM$param_optim_start( jitter = FALSE, b_est = self$b_est, m_est = self$m_est )
Arguments
jitterNot used
b_estIf the mean should be estimated.
m_estIf the linear terms should be estimated.
Method param_optim_start0()
Get parameter initial point for optimization
Usage
trend_LM$param_optim_start0( jitter = FALSE, b_est = self$b_est, m_est = self$m_est )
Arguments
jitterNot used
b_estIf the mean should be estimated.
m_estIf the linear terms should be estimated.
Method param_optim_lower()
Get parameter lower bounds for optimization
Usage
trend_LM$param_optim_lower(b_est = self$b_est, m_est = self$m_est)
Arguments
b_estIf the mean should be estimated.
m_estIf the linear terms should be estimated.
Method param_optim_upper()
Get parameter upper bounds for optimization
Usage
trend_LM$param_optim_upper(b_est = self$b_est, m_est = self$m_est)
Arguments
b_estIf the mean should be estimated.
m_estIf the linear terms should be estimated.
Method set_params_from_optim()
Set parameters after optimization
Usage
trend_LM$set_params_from_optim(optim_out)
Arguments
optim_outOutput from optim
Method clone()
The objects of this class are cloneable with this method.
Usage
trend_LM$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
t1 <- trend_LM$new(D=2)
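A short sketch evaluating the linear trend at a few points and (hypothetically) passing a linear trend into a GP fit via the 'trend' argument of 'gpkm()'; the objects x and y in the commented line are placeholders:
t1$Z(matrix(runif(6), ncol=2))  # intercept plus linear terms at three 2-D points (all 0 with the default parameters)
# gp <- gpkm(X=x, Z=y, trend=trend_LM$new(D=ncol(x)))  # hypothetical use inside a GP fit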
Trend R6 class
Description
Trend R6 class
Trend R6 class
Format
R6Class object.
Value
Object of R6Class with methods for fitting GP model.
Super class
GauPro::GauPro_trend -> GauPro_trend_c
Public fields
mTrend parameters
m_lowerm lower bound
m_upperm upper bound
m_estShould m be estimated?
Methods
Public methods
Method new()
Initialize trend object
Usage
trend_c$new(m = 0, m_lower = -Inf, m_upper = Inf, m_est = TRUE, D = NA)
Arguments
mtrend initial parameters
m_lowertrend lower bounds
m_uppertrend upper bounds
m_estLogical of whether each param should be estimated
DNumber of input dimensions of data
Method Z()
Get trend value for given matrix X
Usage
trend_c$Z(X, m = self$m, params = NULL)
Arguments
Xmatrix of points
mtrend parameters
paramstrend parameters
Method dZ_dparams()
Derivative of trend with respect to trend parameters
Usage
trend_c$dZ_dparams(X, m = self$m, params = NULL)
Arguments
Xmatrix of points
mtrend values
paramsoverrides m
Method dZ_dx()
Derivative of trend with respect to X
Usage
trend_c$dZ_dx(X, m = self$m, params = NULL)
Arguments
Xmatrix of points
mtrend values
paramsoverrides m
Method param_optim_start()
Get parameter initial point for optimization
Usage
trend_c$param_optim_start(jitter = F, m_est = self$m_est)
Arguments
jitterNot used
m_estIf the trend should be estimated.
Method param_optim_start0()
Get parameter initial point for optimization
Usage
trend_c$param_optim_start0(jitter = F, m_est = self$m_est)
Arguments
jitterNot used
m_estIf the trend should be estimated.
Method param_optim_lower()
Get parameter lower bounds for optimization
Usage
trend_c$param_optim_lower(m_est = self$m_est)
Arguments
m_estIf the trend should be estimated.
Method param_optim_upper()
Get parameter upper bounds for optimization
Usage
trend_c$param_optim_upper(m_est = self$m_est)
Arguments
m_estIf the trend should be estimated.
Method set_params_from_optim()
Set parameters after optimization
Usage
trend_c$set_params_from_optim(optim_out)
Arguments
optim_outOutput from optim
Method clone()
The objects of this class are cloneable with this method.
Usage
trend_c$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
t1 <- trend_c$new()
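The constant trend returns its single parameter m for every row of X; a brief sketch:
t1$Z(matrix(runif(4), ncol=2))  # two 2-D points, both trend values equal m (0 by default)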