Parametric Bootstrapping Predictive Estimator for Logistic Regression

Kunio Takezawa

Abstract

This paper proposes a method for constructing a predictive estimator for logistic regression. We make the provisional assumption that the predictive estimator is obtained by multiplying the maximum likelihood estimators by constants, which are estimated using a parametric bootstrap method. The relative merits of the maximum likelihood estimator and the predictive estimator produced by this method are assessed by cross-validation. The results show that, in many instances, the predictive estimators derived by this method yield a smaller deviance than the maximum likelihood estimator.

Keywords:
Log-likelihood, future data, predictive estimator, logistic regression, maximum likelihood estimator, parametric bootstrap method.
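
The procedure described in the abstract — shrink the maximum likelihood coefficients toward zero by a constant chosen so that the shrunken estimator has a small deviance on "future" data generated by a parametric bootstrap — can be sketched as follows. This is a minimal illustration under our own simplifying assumptions (a single scalar shrinkage factor searched over a grid, and binomial responses resampled from the fitted model), not the paper's exact algorithm; all function names here are hypothetical.

```python
import numpy as np

def fit_logistic_mle(X, y, n_iter=50):
    """Fit logistic regression by Newton-Raphson (IRLS); returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        # Newton step: beta += (X' W X)^{-1} X' (y - p); small ridge for stability
        H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

def deviance(beta, X, y):
    """Binomial deviance (-2 x log-likelihood) of beta on data (X, y)."""
    eta = X @ beta
    return 2.0 * np.sum(np.log1p(np.exp(eta)) - y * eta)

def bootstrap_shrinkage(X, y, n_boot=200, grid=None, rng=None):
    """Choose a scalar c so that c * beta_MLE has a small average deviance on
    independent 'future' data simulated from the fitted model (parametric
    bootstrap). Returns (c, beta_MLE); c * beta_MLE is the predictive estimator."""
    rng = np.random.default_rng(rng)
    if grid is None:
        grid = np.linspace(0.2, 1.2, 51)   # candidate shrinkage constants
    beta_hat = fit_logistic_mle(X, y)
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta_hat))
    scores = np.zeros_like(grid)
    for _ in range(n_boot):
        y_boot = rng.binomial(1, p_hat)     # bootstrap "training" responses
        y_future = rng.binomial(1, p_hat)   # independent "future" responses
        beta_boot = fit_logistic_mle(X, y_boot)
        for j, c in enumerate(grid):
            scores[j] += deviance(c * beta_boot, X, y_future)
    return grid[np.argmin(scores)], beta_hat
```

In the same spirit as the abstract, one would then compare `beta_hat` with the predictive estimator `c * beta_hat` by cross-validated deviance on held-out data to decide which to use.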

Article Details

How to Cite
Takezawa, K. (2019). Parametric Bootstrapping Predictive Estimator for Logistic Regression. Journal of Advances in Mathematics and Computer Science, 32(5), 1-15. https://doi.org/10.9734/jamcs/2019/v32i530154
Section
Original Research Article
