Abstract
To reduce the dimension of the predictors without loss of information about the regression, we develop in this paper a sufficient dimension reduction method that we term cumulative Hessian directions. Unlike many existing sufficient dimension reduction methods, the estimation of our proposal completely avoids the selection of tuning parameters such as the number of slices in slicing estimation or the bandwidth in kernel smoothing. We also investigate the asymptotic properties of our proposal when the dimension of the predictors diverges. Illustrations through simulations and an application are presented to demonstrate the efficacy of our proposal and to compare it with existing methods.
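The paper's exact estimator is not reproduced here; as a rough illustration of the idea described in the abstract, the sketch below implements a generic cumulative, Hessian-type inverse regression estimator in which every observed response value serves as a cut point, so no number of slices or bandwidth has to be chosen. The function name, the specific candidate kernel, and the toy single-index model are illustrative assumptions, not the authors' method.

```python
import numpy as np

def cumulative_hessian_type_directions(X, y, d=1):
    """Estimate d dimension reduction directions with a cumulative,
    Hessian-type (second-moment) inverse regression kernel.
    Illustrative sketch only; not the estimator proposed in the paper."""
    n, p = X.shape
    # Standardize the predictors: Z = Sigma^{-1/2} (X - mean).
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Accumulate a candidate matrix over all observed cut points Y_j,
    # so no slice count or bandwidth needs to be selected.
    M = np.zeros((p, p))
    for cut in y:
        w = (y <= cut).astype(float)
        # Hessian-type kernel at this cut point: E_hat[1(Y<=c) Z Z'] - P_hat(Y<=c) I.
        H = (Z * w[:, None]).T @ Z / n - w.mean() * np.eye(p)
        M += H @ H
    M /= n
    # Leading eigenvectors of M estimate the directions on the Z scale.
    vals, vecs = np.linalg.eigh(M)
    B_z = vecs[:, np.argsort(vals)[::-1][:d]]
    # Map back to the original X scale.
    return Sigma_inv_sqrt @ B_z

# Toy usage: a symmetric single-index model, where second-moment
# (Hessian-type) methods can recover the direction.
rng = np.random.default_rng(0)
n, p = 400, 6
X = rng.standard_normal((n, p))
beta = np.array([1.0, 1.0, 0, 0, 0, 0]) / np.sqrt(2)
y = (X @ beta) ** 2 + 0.2 * rng.standard_normal(n)
b_hat = cumulative_hessian_type_directions(X, y, d=1).ravel()
print(abs(b_hat @ beta) / np.linalg.norm(b_hat))  # close to 1 when recovery succeeds
```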
| Original language | English |
| --- | --- |
| Pages (from-to) | 325–334 |
| Number of pages | 10 |
| Journal | Statistics and Computing |
| Volume | 21 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Jul 2011 |
Scopus Subject Areas
- Theoretical Computer Science
- Statistics and Probability
- Statistics, Probability and Uncertainty
- Computational Theory and Mathematics
User-Defined Keywords
- Central subspace
- Diverging parameters
- Inverse regression
- Sufficient dimension reduction