Majorisation-Minimisation based Estimation for Reduced-Rank Regression with a Cauchy Distribution Assumption. The method is robust in the sense that it assumes a heavy-tailed Cauchy distribution for the innovations, and the parameters are estimated with an iterative Majorisation-Minimisation optimisation algorithm. See References for a similar setting.

RRRR(
  y,
  x,
  z = NULL,
  mu = TRUE,
  r = 1,
  itr = 100,
  earlystop = 1e-04,
  initial_A = matrix(rnorm(P * r), ncol = r),
  initial_B = matrix(rnorm(Q * r), ncol = r),
  initial_D = matrix(rnorm(P * R), ncol = R),
  initial_mu = matrix(rnorm(P)),
  initial_Sigma = diag(P),
  return_data = TRUE
)
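
In the default initial values above, \(P\), \(Q\) and \(R\) are the numbers of columns of y, x and z respectively (see Arguments), so they are determined by the supplied data. Below is a minimal sketch of a call overriding a few defaults; y, x and z stand for user-supplied matrices of dimension N*P, N*Q and N*R.

# Sketch only: y, x, z are placeholder data matrices supplied by the user.
# Fit a rank-2 reduced-rank regression with a larger iteration budget
# and a tighter early-stopping threshold.
fit <- RRRR(y = y, x = x, z = z, r = 2, itr = 500, earlystop = 1e-06)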

Arguments

y

Matrix of dimension N*P. The matrix for the response variables. See Details.

x

Matrix of dimension N*Q. The matrix for the explanatory variables to be projected. See Details.

z

Matrix of dimension N*R. The matrix for the explanatory variables not to be projected. See Details.

mu

Logical. Indicating if a constant term is included.

r

Integer. The rank of the reduced-rank matrix \(AB'\). See Details.

itr

Integer. The maximum number of iterations.

earlystop

Scalar. The criterion to stop the algorithm early. The algorithm stops if the improvement in the objective function is smaller than \(earlystop\) times the objective value from the last iteration; see the sketch after the argument descriptions.

initial_A

Matrix of dimension P*r. The initial value for the matrix \(A\). See Details.

initial_B

Matrix of dimension Q*r. The initial value for the matrix \(B\). See Details.

initial_D

Matrix of dimension P*R. The initial value for the matrix \(D\). See Details.

initial_mu

Matrix of dimension P*1. The initial value for the constant vector \(\mu\). See Details.

initial_Sigma

Matrix of dimension P*P. The initial value for the matrix \(\Sigma\). See Details.

return_data

Logical. Indicating if the data used is returned in the output. If set to TRUE, update.RRRR can update the model by simply providing new data; see also the sketch after the Examples. Set to FALSE to reduce the output size.
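
A conceptual sketch of the early stopping rule described under earlystop (an illustration with hypothetical objective values, not the package's internal code):

obj_last <- 100       # objective value from the previous iteration
obj_new  <- 99.999    # objective value from the current iteration
earlystop <- 1e-04
# The algorithm stops once the improvement falls below earlystop * obj_last:
(obj_last - obj_new) < earlystop * obj_last
#> [1] TRUE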

Value

A list of the estimated parameters of class RRRR with the following components:

spec

The input specifications. \(N\) is the sample size.

history

The path of all the parameters during optimisation and the path of the objective value.

mu

The estimated constant vector. Can be NULL.

A

The estimated exposure matrix.

B

The estimated factor matrix.

D

The estimated coefficient matrix of z.

Sigma

The estimated covariance matrix of the innovation distribution.

obj

The final objective value.

data

The data used in estimation if return_data is set to TRUE. NULL otherwise.

Details

The formulation of the reduced-rank regression is as follows: $$y = \mu + AB'x + Dz + innov,$$ where for each realization \(y\) is a vector of dimension \(P\) for the \(P\) response variables, \(x\) is a vector of dimension \(Q\) for the \(Q\) explanatory variables that will be projected to reduce the rank, \(z\) is a vector of dimension \(R\) for the \(R\) explanatory variables that will not be projected, \(\mu\) is the constant vector of dimension \(P\), \(innov\) is the innovation vector of dimension \(P\), \(D\) is a coefficient matrix for \(z\) with dimension \(P*R\), \(A\) is the so-called exposure matrix with dimension \(P*r\), and \(B\) is the so-called factor matrix with dimension \(Q*r\). The matrix resulting from \(AB'\) is a reduced-rank coefficient matrix with rank \(r\). The function estimates the parameters \(\mu\), \(A\), \(B\), \(D\), and \(\Sigma\), the covariance matrix of the innovation distribution, assuming the innovation has a Cauchy distribution.
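
As a sketch of the estimation criterion (up to additive constants; the exact majoriser used in the Majorisation-Minimisation updates is described in the Reference and may differ in detail), write \(e_i = y_i - \mu - AB'x_i - Dz_i\) for the residual of observation \(i\). The Cauchy assumption leads to minimising the negative log-likelihood $$\frac{N}{2}\log|\Sigma| + \frac{P+1}{2}\sum_{i=1}^{N}\log\left(1 + e_i'\Sigma^{-1}e_i\right).$$ One standard majorisation replaces each \(\log(1 + e_i'\Sigma^{-1}e_i)\) term by its tangent line at the current iterate, so that every step becomes a weighted least-squares-type problem in which observation \(i\) receives a weight proportional to \(1/(1 + e_i'\Sigma^{-1}e_i)\); outlying observations are down-weighted, which is the source of the robustness.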

References

Z. Zhao and D. P. Palomar, "Robust maximum likelihood estimation of sparse vector error correction model," in 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 913--917, IEEE, 2017.

Author

Yangzhuoran Yang

Examples

set.seed(2222)
data <- RRR_sim()
res <- RRRR(y=data$y, x=data$x, z = data$z)
res
#> Robust Reduced-Rank Regression
#> ------
#> Majorisation-Minimisation
#> ------------
#> Specifications:
#>    N    P    Q    R    r 
#> 1000    3    3    1    1 
#> 
#> Coefficients:
#>          mu         A         B         D    Sigma1    Sigma2    Sigma3
#> 1  0.077140 -0.167090  1.557873  0.205806  0.652482 -0.044401  0.048752
#> 2  0.140989  0.442582  0.922494  1.138489 -0.044401  0.652799 -0.064597
#> 3  0.103221  0.799325 -0.694877  1.954476  0.048752 -0.064597  0.693794
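
Because return_data defaults to TRUE, the fitted object above can later be updated with additional observations through the update method mentioned under the return_data argument. A hedged sketch (the exact argument names accepted by update.RRRR should be checked in its own documentation; the new batch below is simulated purely for illustration):

newdata <- RRR_sim()
# Sketch: update the fit above with an additional batch of simulated data.
res2 <- update(res, y = newdata$y, x = newdata$x, z = newdata$z)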