Changepoint in dependent and non-stationary panels

Statistical Papers (Regular Article)

Abstract

Detection procedures for a change in the means of panel data are proposed. Unlike classical inference tools for changepoint analysis in the panel data framework, we allow for mutually dependent and generally non-stationary panels with an extremely short follow-up period. Two competing self-normalized test statistics are employed, and their asymptotic properties are derived as the number of available panels grows. Bootstrap extensions are introduced in order to handle this general setup. The novel changepoint methods are able to detect a common break point even when the change occurs immediately after the first time point or just before the last observation period. The developed tests are proved to be consistent, and their empirical properties are investigated through a simulation study. The techniques are applied to option pricing and non-life insurance.

Figs. 1–5 (images not included in this extract)


References

  • Andrews DWK (1984) Non-strong mixing autoregressive processes. J Appl Probab 21(4):930–934
  • Andrews DWK (1993) Tests for parameter instability and structural change with unknown change point. Econometrica 61(4):821–858
  • Antoch J, Hanousek J, Horváth L, Hušková M, Wang S (2019) Structural breaks in panel data: large number of panels and short length time series. Econom Rev 38(7):828–855
  • Bai J (2010) Common breaks in means and variances for panel data. J Econom 157(1):78–92
  • Baltagi BH, Feng Q, Kao C (2016) Estimation of heterogeneous panels with structural breaks. J Econom 191:176–195
  • Betken A (2016) Testing for change-points in long-range dependent time series by means of a self-normalized Wilcoxon test. J Time Ser Anal 37(6):785–809
  • Billingsley P (1968) Convergence of probability measures, 1st edn. Wiley, New York
  • Bradley RC (2005) Basic properties of strong mixing conditions. A survey and some open questions. Probab Surveys 2:107–144
  • Chan J, Horváth L, Hušková M (2013) Darling–Erdős limit results for change-point detection in panel data. J Stat Plan Inference 143(5):955–970
  • Csörgő M, Horváth L (1997) Limit theorems in change-point analysis. Wiley, Chichester
  • De Wachter S, Tzavalis E (2012) Detection of structural breaks in linear dynamic panel data models. Comput Stat Data Anal 56(11):3020–3034
  • Fitzenberger B (1997) The moving block bootstrap and robust inference for linear least squares and quantile regression. J Econom 82:235–287
  • Hall P, Horowitz JL, Jing BY (1995) On blocking rules for the bootstrap with dependent data. Biometrika 82(3):561–574
  • Horváth L, Hušková M (2012) Change-point detection in panel data. J Time Ser Anal 33(4):631–648
  • Horváth L, Horváth Z, Hušková M (2008) Ratio tests for change point detection. In: Balakrishnan N, Peña EA, Silvapulle MJ (eds) Beyond parametrics in interdisciplinary research: Festschrift in honor of Professor Pranab K. Sen, vol 1. Institute of Mathematical Statistics, Beachwood, pp 293–304
  • Hušková M, Kirch C (2012) Bootstrapping sequential change-point tests for linear regression. Metrika 75(5):673–708
  • Hušková M, Kirch C, Prášková Z, Steinebach J (2008) On the detection of changes in autoregressive time series, II. Resampling procedures. J Stat Plan Inference 138(6):1697–1721
  • Ibragimov IA, Linnik YV (1971) Independent and stationary sequences of random variables. Wolters-Noordhoff, Groningen
  • Kim D (2011) Estimating a common deterministic time trend break in large panels with cross sectional dependence. J Econom 164(2):310–330
  • Kirch C (2006) Resampling methods for the change analysis of dependent data. PhD thesis, University of Cologne, Germany
  • Künsch HR (1989) The jackknife and the bootstrap for general stationary observations. Ann Stat 17:1217–1241
  • Lahiri S, Furukawa K, Lee YD (2007) A nonparametric plug-in rule for selecting optimal block lengths for block bootstrap methods. Stat Methodol 4(3):292–321
  • Lindner AM (2009) Stationarity, mixing, distributional properties and moments of GARCH(p, q)-processes. In: Andersen TG, Davis RA, Kreiss JP, Mikosch T (eds) Handbook of financial time series. Springer, Berlin, pp 481–496
  • Liu RY, Singh K (1992) Moving blocks jackknife and bootstrap capture weak dependence. In: LePage R, Billard L (eds) Exploring the limits of bootstrap. Wiley, New York, pp 225–248
  • Maciak M (2020) Quantile LASSO with changepoints in panel data models applied to option pricing. Econom Stat. https://doi.org/10.1016/j.ecosta.2019.12.005
  • Maciak M, Peštová B, Pešta M (2018) Structural breaks in dependent, heteroscedastic, and extremal panel data. Kybernetika 54(6):1106–1121
  • Meyers GG, Shi P (2011) Loss reserving data pulled from NAIC Schedule P. http://www.casact.org/research/index.cfm?fa=loss_reserves_data. Updated 1 Sept 2011. Accessed 10 June 2014
  • Pesaran MH (2006) Estimation and inference in large heterogeneous panels with a multifactor error structure. Econometrica 74(4):967–1012
  • Pešta M (2013) Asymptotics for weakly dependent errors-in-variables. Kybernetika 49(5):692–704
  • Pešta M (2017) Block bootstrap for dependent errors-in-variables. Commun Stat A Theory Methods 46(4):1871–1897
  • Pešta M, Hudecová Š (2012) Asymptotic consistency and inconsistency of the chain ladder. Insur Math Econ 51(2):472–479
  • Pešta M, Wendler M (2020) Nuisance-parameter-free changepoint detection in non-stationary series. TEST 29(2):379–408
  • Peštová B, Pešta M (2015) Testing structural changes in panel data with small fixed panel size and bootstrap. Metrika 78(6):665–689
  • Peštová B, Pešta M (2017) Change point estimation in panel data without boundary issue. Risks 5(1):7
  • Peštová B, Pešta M (2018) Abrupt change in mean using block bootstrap and avoiding variance estimation. Comput Stat 33(1):413–441
  • Pešta M, Peštová B, Maciak M (2020) Changepoint estimation for dependent and non-stationary panels. Appl Math Czech. https://doi.org/10.21136/AM.2020.0296-19
  • Politis DN, White H (2004) Automatic block-length selection for the dependent bootstrap. Econom Rev 23:53–70
  • Qian J, Su L (2016a) Shrinkage estimation of common breaks in panel data models via adaptive group fused Lasso. J Econom 191(1):86–109
  • Qian J, Su L (2016b) Shrinkage estimation of regression models with multiple structural changes. Econom Theor 32(6):1376–1433
  • Rosenblatt M (1971) Markov processes: structure and asymptotic behavior. Springer, Berlin
  • Shao X (2011) A simple test of changes in mean in the possible presence of long-range dependence. J Time Ser Anal 32(6):598–606
  • Shao X, Zhang X (2010) Testing for change points in time series. J Am Stat Assoc 105(491):1228–1240


Acknowledgements

The research of Matúš Maciak and Michal Pešta was supported by the Czech Science Foundation project GAČR No. 18-01781Y.

Author information

Correspondence to Michal Pešta.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Proofs

Proof of Theorem 1

First, we show that a multivariate CLT holds for the sequence of T-dimensional \(\alpha \)-mixing random vectors \(\{[\sum _{s=1}^1\varepsilon _{i,s},\ldots ,\sum _{s=1}^T\varepsilon _{i,s}]^{\top }\}_{i\in \mathbb {N}}\equiv \{\varvec{\theta }_i\}_{i\in \mathbb {N}}\). According to the Cramér–Wold theorem, it is sufficient to verify that all assumptions of the one-dimensional \(\alpha \)-mixing CLT for triangular arrays (Pešta 2013, Lemma 2.1) hold for an arbitrary linear combination \(\varvec{b}^{\top }\varvec{\theta }_i\) with \(\varvec{b}=[b_1,\ldots ,b_T]^{\top }\in \mathbb {R}^T\), \(i\in \mathbb {N}\). Hence, let us consider \(\vartheta _i:=\varvec{b}^{\top }\varvec{\theta }_i\) and keep Assumption \(\mathcal {A}1\) in mind. Then, we get

$$\begin{aligned} \frac{\mathsf {E}\left( \sum _{i=1}^{N}\vartheta _i\right) ^2}{N}=\frac{1}{N}\mathsf {Var}\,\sum _{i=1}^N\varvec{b}^{\top }\varvec{\theta }_i=\varvec{b}^{\top }\frac{\varvec{\Lambda }_N}{N}\varvec{b}\rightarrow \varvec{b}^{\top }\varvec{\Lambda }\varvec{b},\quad N\rightarrow \infty , \end{aligned}$$
(6)

where \(\varvec{\Lambda }_N=\mathsf {Var}\,\sum _{i=1}^{N}[\sum _{s=1}^1\varepsilon _{i,s},\ldots ,\sum _{s=1}^T\varepsilon _{i,s}]^{\top }\). Furthermore,

$$\begin{aligned} \sup _{i\in \mathbb {N}}\mathsf {E}\left| \vartheta _i\right| ^{2+\chi }&\le \sup _{i\in \mathbb {N}}\left( \max _{t=1,\ldots ,T}|b_t|^{2+\chi }\right) \sum _{t=1}^T\mathsf {E}\left| \theta _{t,i}\right| ^{2+\chi }\nonumber \\&\le T\left( \max _{t=1,\ldots ,T}|b_t|^{2+\chi }\right) \max _{t=1,\ldots ,T}\sup _{i\in \mathbb {N}}\mathsf {E}\left| \sum _{s=1}^t\varepsilon _{i,s}\right| ^{2+\chi }\nonumber \\&\le T^2\left( \max _{t=1,\ldots ,T}|b_t|^{2+\chi }\right) \max _{t=1,\ldots ,T}\sup _{i\in \mathbb {N}}\mathsf {E}\left| \varepsilon _{i,t}\right| ^{2+\chi }<\infty . \end{aligned}$$
(7)

The \(\sigma \)-algebra generated by \(\vartheta _i\) is contained in the \(\sigma \)-algebra generated by \(\varvec{\varepsilon }_i\); consequently for all \(I\subseteq \mathbb {N}\) (possibly infinite), \(\sigma \{\vartheta _i:\,i\in I\}\subseteq \sigma \{\varvec{\varepsilon }_i:\,i\in I\}\). For this reason, for all \(i\in \mathbb {N}\), \(\alpha (\vartheta _{\circ },i)\le \alpha (\varvec{\varepsilon }_{\circ },i)\). One obtains

$$\begin{aligned} \sum _{i=1}^{\infty }\alpha (\vartheta _{\circ },i)^{\chi /(2+\chi )}\le \sum _{i=1}^{\infty }\alpha (\varvec{\varepsilon }_{\circ },i)^{\chi /(2+\chi )}<\infty ,\quad \chi >0. \end{aligned}$$
(8)

Relations (6)–(8) imply

$$\begin{aligned} \frac{1}{\sqrt{N}}\sum _{i=1}^{N}\varvec{\theta }_i\xrightarrow [N\rightarrow \infty ]{\mathsf {D}}\mathcal {N}_T\left( \varvec{0},\varvec{\Lambda }\right) . \end{aligned}$$
(9)

Let us define

$$\begin{aligned} U_N(t):=\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{s=1}^t(Y_{i,s}-\mu _i). \end{aligned}$$
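The partial-sum process \(U_N(t)\) is straightforward to compute in practice. The following Python sketch is illustrative only: the AR(1)-type dependence across the panel index i is a hypothetical example of an \(\alpha \)-mixing structure, not the paper's model, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5000, 10  # many panels, short follow-up period (as in the paper's setup)

# Hypothetical errors: cross-sectionally dependent via an AR(1) over the panel
# index i (an illustrative alpha-mixing structure, chosen for this sketch).
rho = 0.3
eps = np.empty((N, T))
eps[0] = rng.normal(size=T)
for i in range(1, N):
    eps[i] = rho * eps[i - 1] + np.sqrt(1 - rho**2) * rng.normal(size=T)

mu = rng.normal(size=N)   # panel-specific means
Y = mu[:, None] + eps     # no changepoint, i.e. data generated under H0

# U_N(t) = N^{-1/2} * sum_i sum_{s<=t} (Y_{i,s} - mu_i), one value per t
U = np.cumsum(Y - mu[:, None], axis=1).sum(axis=0) / np.sqrt(N)
print(U.shape)  # (10,)
```

By the CLT above, the vector `U` is approximately \(\mathcal {N}_T(\varvec{0},\varvec{\Lambda })\) for large N.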

Under \(\mathcal {H}_0\), we have \(Y_{i,s}-\mu _i=\varepsilon _{i,s}\) and, hence, \(U_N(t)=N^{-1/2}\sum _{i=1}^N\theta _{t,i}\). According to (9),

$$\begin{aligned} \left[ U_N(1),\ldots ,U_N(T)\right] ^{\top }\xrightarrow [N\rightarrow \infty ]{\mathsf {D}}\mathcal {N}_T\left( \varvec{0},\varvec{\Lambda }\right) . \end{aligned}$$

Moreover, let us define the reverse analogue to \(U_N(t)\), i.e.,

$$\begin{aligned} V_N(t):=\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{s=t+1}^T(Y_{i,s}-\mu _i)=U_N(T)-U_N(t). \end{aligned}$$

Hence,

$$\begin{aligned} U_{N}(s)-\frac{s}{t}U_{N}(t)&=\frac{1}{\sqrt{N}}\sum _{i=1}^N\left\{ \sum _{r=1}^s\left[ \left( Y_{i,r}-\mu _i\right) -\frac{1}{t}\sum _{v=1}^t \left( Y_{i,v}-\mu _i\right) \right] \right\} \\&=\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{r=1}^s\left( Y_{i,r}-\bar{Y}_{i,t}\right) \end{aligned}$$

and, consequently,

$$\begin{aligned}&V_{N}(s)-\frac{T-s}{T-t}V_{N}(t)\\&\quad =\frac{1}{\sqrt{N}}\sum _{i=1}^N\left\{ \sum _{r=s+1}^T\left[ \left( Y_{i,r}-\mu _i\right) -\frac{1}{T-t}\sum _{v=t+1}^T \left( Y_{i,v}-\mu _i\right) \right] \right\} \\&\quad =\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{r=s+1}^T\left( Y_{i,r}-\widetilde{Y}_{i,t}\right) . \end{aligned}$$
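The centering identities above are purely algebraic: the panel means \(\mu _i\) cancel exactly. A minimal numerical check of the first identity on synthetic data (any distribution works, since no asymptotics are involved):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, s, t = 200, 8, 3, 6
Y = rng.normal(size=(N, T))
mu = rng.normal(size=N)   # arbitrary panel means; they cancel in the identity

# U_N(1), ..., U_N(T)
U = np.cumsum(Y - mu[:, None], axis=1).sum(axis=0) / np.sqrt(N)

# Left-hand side: U_N(s) - (s/t) * U_N(t)
lhs = U[s - 1] - (s / t) * U[t - 1]

# Right-hand side: N^{-1/2} sum_i sum_{r<=s} (Y_{i,r} - mean of Y_{i,1..t})
Ybar_t = Y[:, :t].mean(axis=1, keepdims=True)
rhs = (Y[:, :s] - Ybar_t).sum() / np.sqrt(N)

print(abs(lhs - rhs) < 1e-10)  # True: the mu_i terms cancel exactly
```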

Using the continuous mapping theorem, we end up with the limit distributions of \(\mathcal {Q}_N(T)\) and \(\mathcal {S}_N(T)\) stated in Theorem 1.

\(\square \)

Proof of Theorem 2

Let us define \(L_N(s,t):=\sum _{i=1}^N\sum _{r=1}^s\big (\varepsilon _{i,r}-\bar{\varepsilon }_{i,t}\big )\) and \(R_N(s,t):=\sum _{i=1}^N\sum _{r=s+1}^T\big (\varepsilon _{i,r}-\widetilde{\varepsilon }_{i,t}\big )\) such that \(\bar{\varepsilon }_{i,t}=\frac{1}{t}\sum _{s=1}^t \varepsilon _{i,s}\) and \(\widetilde{\varepsilon }_{i,t}=\frac{1}{T-t}\sum _{s=t+1}^T \varepsilon _{i,s}\). With respect to Assumption \(\mathcal {A}1\) and according to the underlying proof of Theorem 1, we have

$$\begin{aligned} \frac{1}{\sqrt{N}}\max _{s=1,\ldots ,\tau }|L_N(s,\tau )|&=\mathcal {O}_{\mathsf {P}}(1);\\ \frac{1}{\sqrt{N}}\max _{s=\tau ,\ldots ,T-1}|R_N(s,\tau )|&=\mathcal {O}_{\mathsf {P}}(1);\\ \frac{1}{\sqrt{N}}|L_N(\tau ,T)|&=\mathcal {O}_{\mathsf {P}}(1), \end{aligned}$$

as \(N\rightarrow \infty \). Note that there are no changes in the expectation of \(Y_{i,1},\ldots ,Y_{i,\tau }\) as well as in the expectation of \(Y_{i,\tau +1},\ldots ,Y_{i,T}\). Let \(t=\tau \). Then, under \(\mathcal {H}_A\), the drift term \(\frac{\tau }{T}(T-\tau )N^{-1/2}\sum _{i=1}^N\delta _i\) in the numerator diverges, while the self-normalizing denominator remains bounded in probability; consequently, \({{\,\mathrm{plim}\,}}_{N\rightarrow \infty }\mathcal {Q}_N(T)=\infty \) according to Assumption \(\mathcal {A}2\).

Similarly for \(\mathcal {S}_N(T)\), we get that

$$\begin{aligned} {{\,\mathrm{plim}\,}}_{N\rightarrow \infty }\mathcal {S}_N(T)&\ge {{\,\mathrm{plim}\,}}_{N\rightarrow \infty }\frac{\sum _{t=1}^{T-1}\mathcal {L}_N^2(\tau ,T)}{\sum _{s=1}^{\tau }\mathcal {L}_N^2(s,\tau )+\sum _{s=\tau }^{T-1}\mathcal {R}_N^2(s,\tau )}\\&={{\,\mathrm{plim}\,}}_{N\rightarrow \infty }\frac{N^{-1}\sum _{t=1}^{T-1}\left( L_N(\tau ,T)-\frac{\tau }{T}(T-\tau )\sum _{i=1}^N\delta _i\right) ^2}{N^{-1}\sum _{s=1}^{\tau }L_N^2(s,\tau )+N^{-1}\sum _{s=\tau }^{T-1}R_N^2(s,\tau )}=\infty , \end{aligned}$$

where \({{\,\mathrm{plim}\,}}_{N\rightarrow \infty }\) is the probability limit operator corresponding to convergence in probability. The latter mentioned relation holds again because there are no changes in the means of \(Y_{i,1},\ldots ,Y_{i,\tau }\) as well as in the means of \(Y_{i,\tau +1},\ldots ,Y_{i,T}\) and due to Assumption \(\mathcal {A}2\). \(\square \)
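The \(\sqrt{N}\) divergence rate driving consistency can be seen in a noise-free sketch: with a common shift \(\delta \) after time \(\tau \) and no error terms, the centered sum \(\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{r=1}^{\tau }(Y_{i,r}-\bar{Y}_{i,T})\) equals \(-\sqrt{N}\,\frac{\tau (T-\tau )}{T}\delta \) exactly (an illustration of the rate only, not of the full statistic):

```python
import numpy as np

T, tau, delta = 8, 4, 0.5

def centered_sum(N):
    # Noise-free panels with a common mean shift delta after time tau.
    Y = np.zeros((N, T))
    Y[:, tau:] = delta
    Ybar = Y.mean(axis=1, keepdims=True)
    # N^{-1/2} sum_i sum_{r<=tau} (Y_{i,r} - Ybar_{i,T})
    return (Y[:, :tau] - Ybar).sum() / np.sqrt(N)

# Magnitude sqrt(N) * tau*(T-tau)/T * delta -> diverges as N grows
for N in (100, 400, 1600):
    print(N, abs(centered_sum(N)))  # 10.0, 20.0, 40.0
```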

Proof of Theorem 3

Let us define \(\widehat{\epsilon }_{i,t}:=\sum _{s=1}^t\widehat{e}_{i,s}\), \(\widehat{\epsilon }_{i,t}^{(b)}:=\sum _{s=1}^t\widehat{e}_{i,s}^{(b)}\),

$$\begin{aligned} \widehat{U}_N(t):=\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{s=1}^t\widehat{e}_{i,s}=\frac{1}{\sqrt{N}}\sum _{i=1}^N\widehat{\epsilon }_{i,t}, \end{aligned}$$

and

$$\begin{aligned} \widehat{U}_N^{(b)}(t)&:=\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{s=1}^t\widehat{Y}_{i,s}^{(b)}=\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{s=1}^t\left( \widehat{e}_{i,s}^{(b)}-\frac{1}{N}\sum _{i=1}^N\widehat{e}_{i,s}\right) \\&=\frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{s=1}^t\left( \widehat{e}_{i,s}^{(b)}-\widehat{e}_{i,s}\right) =\frac{1}{\sqrt{N}}\sum _{i=1}^N\left( \widehat{\epsilon }_{i,t}^{(b)}-\widehat{\epsilon }_{i,t}\right) . \end{aligned}$$
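A hedged sketch of how \(\widehat{U}_N^{(b)}(t)\) might be computed from resampled residuals, assuming a moving block bootstrap over the panel index i; the paper's exact resampling scheme and block-length choice may differ, and the residuals here are stand-in random numbers rather than fitted values:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, L = 400, 6, 10  # L: block length for a moving block bootstrap over panels

e_hat = rng.normal(size=(N, T))  # stand-in for the estimated residuals e_hat_{i,s}

# Draw N/L blocks of L consecutive panels each (a sketch of one scheme).
starts = rng.integers(0, N - L + 1, size=N // L)
idx = np.concatenate([np.arange(s, s + L) for s in starts])
e_boot = e_hat[idx]  # bootstrap residuals e_hat^{(b)}_{i,s}

# Cumulative sums eps_hat_{i,t} and their bootstrap counterparts.
eps_hat = np.cumsum(e_hat, axis=1)
eps_boot = np.cumsum(e_boot, axis=1)

# U_N^{(b)}(t) = N^{-1/2} * sum_i (eps_boot_{i,t} - eps_hat_{i,t})
U_boot = (eps_boot.sum(axis=0) - eps_hat.sum(axis=0)) / np.sqrt(N)
print(U_boot.shape)  # one value per t = 1, ..., T
```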

Realize that \(\widehat{\epsilon }_{i,t}\) depends on \(\widehat{\tau }_N\) and, hence, on N. Thus, \(\widehat{\epsilon }_{i,t}\equiv \widehat{\epsilon }_{i,t}(N)\). Since Assumption \(\mathcal {B}1\) holds, then according to the bootstrap multivariate CLT for \(\alpha \)-mixing triangular arrays of T-dimensional vectors \(\varvec{\xi }_{N,i}=[\widehat{\epsilon }_{i,1}(N),\ldots ,\widehat{\epsilon }_{i,T}(N)]^{\top }\) with \(k_N=N\) by Pešta (2017, minor modification of Theorem A.5), we have, conditionally on the data and in probability,

$$\begin{aligned} \left[ \widehat{U}_N^{(b)}(1),\ldots ,\widehat{U}_N^{(b)}(T)\right] ^{\top }\xrightarrow [N\rightarrow \infty ]{\mathsf {D}}\mathcal {N}_T\left( \varvec{0},\lim _{N\rightarrow \infty }\widehat{\varvec{\Gamma }}_N\right) , \end{aligned}$$

where \(\widehat{\varvec{\Gamma }}_N=\frac{1}{N}\mathsf {Var}\,\sum _{i=1}^N[\widehat{\epsilon }_{i,1},\ldots ,\widehat{\epsilon }_{i,T}]^{\top }\).

Now, it is sufficient to realize that \([\widehat{U}_N(1),\ldots ,\widehat{U}_N(T)]^{\top }\) has approximately a multivariate normal distribution with zero mean and the covariance matrix \(\lim _{N\rightarrow \infty }\widehat{\varvec{\Gamma }}_N\). Using the law of total variance,

$$\begin{aligned} \mathsf {Var}\,\widehat{\epsilon }_{i,t}&=\mathsf {E}[\mathsf {Var}\,\{\widehat{\epsilon }_{i,t}|\widehat{\tau }_N\}]+\mathsf {Var}\,[\mathsf {E}\{\widehat{\epsilon }_{i,t}|\widehat{\tau }_N\}]\\&=\sum _{\pi =1}^T\mathsf {P}[\widehat{\tau }_N=\pi ]\mathsf {Var}\,[\widehat{\epsilon }_{i,t}|\widehat{\tau }_N=\pi ]+\sum _{\pi =1}^T\mathsf {P}[\widehat{\tau }_N=\pi ]\{\mathsf {E}[\widehat{\epsilon }_{i,t}|\widehat{\tau }_N=\pi ]\}^2\\&\quad -\left\{ \sum _{\pi =1}^T\mathsf {P}[\widehat{\tau }_N=\pi ]\mathsf {E}[\widehat{\epsilon }_{i,t}|\widehat{\tau }_N=\pi ]\right\} ^2. \end{aligned}$$

Since \(\lim _{N\rightarrow \infty }\mathsf {P}[\widehat{\tau }_N=\tau ]=1\) (Assumption \(\mathcal {C}1\)) and \(\mathsf {E}[\widehat{e}_{i,t}|\widehat{\tau }_N=\tau ]=0\), then

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathsf {Var}\,\widehat{\epsilon }_{i,t}=\lim _{N\rightarrow \infty }\mathsf {Var}\,[\widehat{\epsilon }_{i,t}|\widehat{\tau }_N=\tau ]. \end{aligned}$$

Similarly for the covariance, i.e., after applying the law of total covariance, we obtain

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathsf {Cov}\,\left( \widehat{\epsilon }_{i,t},\widehat{\epsilon }_{i,v}\right) =\lim _{N\rightarrow \infty }\mathsf {Cov}\,\left( \widehat{\epsilon }_{i,t},\widehat{\epsilon }_{i,v}|\widehat{\tau }_N=\tau \right) . \end{aligned}$$

Note that

$$\begin{aligned} \left( \widehat{e}_{i,t}|\widehat{\tau }_N=\tau \right) =\left\{ \begin{array}{ll} \varepsilon _{i,t}-\bar{\varepsilon }_{i,\tau },&{} t\le \tau ;\\ \varepsilon _{i,t}-\widetilde{\varepsilon }_{i,\tau },&{} t>\tau . \end{array} \right. \end{aligned}$$

Taking into account the definitions of \(e_{i,t}\)’s from Assumption \(\mathcal {B}1\), we get \(\varvec{\Gamma }=\lim _{N\rightarrow \infty }\widehat{\varvec{\Gamma }}_N\).
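The display for \(\big (\widehat{e}_{i,t}|\widehat{\tau }_N=\tau \big )\) implies that the residuals are centered segment by segment, so their within-segment sums vanish exactly. A quick numerical check with synthetic errors and a known \(\tau \):

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, tau = 50, 9, 4
eps = rng.normal(size=(N, T))

# Residuals conditional on tau_hat = tau: subtract each segment's own mean.
e_hat = eps.copy()
e_hat[:, :tau] -= eps[:, :tau].mean(axis=1, keepdims=True)
e_hat[:, tau:] -= eps[:, tau:].mean(axis=1, keepdims=True)

# Within-segment sums are exactly zero, hence the cumulative sums satisfy
# eps_hat_{i,tau} = 0 and eps_hat_{i,T} = eps_hat_{i,tau}.
print(np.allclose(e_hat[:, :tau].sum(axis=1), 0.0),
      np.allclose(e_hat[:, tau:].sum(axis=1), 0.0))  # True True
```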

Then \(\mathcal {L}_N^{(b)}(t,T)\) from the numerators of \(\mathcal {Q}_N^{(b)}(T)\) and \(\mathcal {S}_N^{(b)}(T)\) can be alternatively rewritten as

$$\begin{aligned} \frac{1}{\sqrt{N}}\sum _{i=1}^N\sum _{r=1}^s\left( \widehat{Y}_{i,r}^{(b)}-\bar{\widehat{Y}}_{i,t}^{(b)}\right)= & {} \frac{1}{\sqrt{N}}\sum _{i=1}^N\left\{ \left[ \sum _{r=1}^s \widehat{Y}_{i,r}^{(b)}\right] -\frac{s}{t}\sum _{v=1}^t \widehat{Y}_{i,v}^{(b)}\right\} \\= & {} \widehat{U}_{N}^{(b)}(s)-\frac{s}{t}\widehat{U}_{N}^{(b)}(t). \end{aligned}$$

Concerning the denominators of \(\mathcal {Q}_N^{(b)}(T)\) and \(\mathcal {S}_N^{(b)}(T)\), one needs to perform a similar calculation as in the proof of Theorem 1 with \(V_N(t)\), i.e., to define \(\widehat{V}_N(t)\) and \(\widehat{V}_N^{(b)}(t)\) analogously to \(\widehat{U}_N(t)\) and \(\widehat{U}_N^{(b)}(t)\) as \(V_N(t)\) is to \(U_N(t)\). Applying the continuous mapping theorem completes the proof. \(\square \)

Proof of Corollary 1

Recall the notation from the proof of Theorem 3. Under \(\mathcal {H}_0\) and Assumption \(\mathcal {C}1\), it holds that \(\lim _{N\rightarrow \infty }\mathsf {P}[\widehat{\tau }_N=T]=1\). Then in view of (4),

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathsf {P}\left[ \widehat{U}_N(s)-\frac{s}{t}\widehat{U}_N(t)=U_N(s)-\frac{s}{t}U_N(t)\right] =1,\quad 1\le s\le t\le T. \end{aligned}$$

\(\square \)


About this article


Cite this article

Maciak, M., Pešta, M. & Peštová, B. Changepoint in dependent and non-stationary panels. Stat Papers 61, 1385–1407 (2020). https://doi.org/10.1007/s00362-020-01180-6

