Value at Risk and Extreme Value Theory


Seminar Paper, 2006

23 Pages, Grade: 1,0


Excerpt


Contents

1 Value at Risk and Risk Management
1.1 Value at Risk

2 Quantile Estimation
2.1 Parametric Methods
2.1.1 Normally distributed returns
2.1.2 Extreme Value Theory
2.2 Non-parametric Methods
2.3 Discussion

3 Extreme Value Theory
3.1 Block Maxima
3.1.1 Fisher-Tippett Theorem
3.1.2 Generalized Extreme Value Distribution
3.1.3 Parameter Estimation
3.1.4 Calculating VaR
3.1.5 Discussion
3.2 Peaks over Threshold
3.2.1 The Distribution of the Excesses
3.2.2 Pickands-Balkema-de-Haan Theorem
3.2.3 Generalized Pareto Distribution
3.2.4 Estimating Tails
3.2.5 Estimating GPD Parameters
3.2.6 Estimating VaR
3.2.7 Mean Excess Function
3.3 Conditional Value at Risk
3.3.1 A Model for the Returns
3.3.2 One Day Value at Risk

4 Case Study
4.1 Backtesting
4.2 Data Analysis
4.3 Results

1 Value at Risk and Risk Management

Complexity in all business areas is growing constantly, and with it risk in all its aspects is becoming an increasingly important issue, not only in management. Risk is usually understood as a measure of a negative event's impact, e.g. a portfolio loss. According to Jorion [7], the purpose of risk management is to identify, measure and control risks.

Risk management has been used for a long time by financial institutions and asset managers on a voluntary basis. Then, after severe shocks in the financial sector, regulators began to demand risk management systems in the regulated industries. Today, even non-financial corporations - especially multinationals - use risk management to control their numerous risk exposures.

In this article I will focus on the aspect of measuring risk. I will first introduce the concept of Value at Risk, a measure that summarizes a complex risk structure in a single figure. In the next section I will discuss different methods for estimating quantiles of distributions. Extreme Value Theory provides an elegant way to estimate quantiles lying in the tails of a distribution. In section 3 I will present the theory behind Extreme Value Theory and how it can be used to estimate Value at Risk. Finally, in section 4 different Value at Risk measures are calculated for a series of returns in order to compare their power.

1.1 Value at Risk

Jorion [7, p. xxii] defines Value at Risk as follows:

Value at Risk (VaR) is the loss of a portfolio that will, under normal market conditions, not be exceeded in the next τ days with probability p.

For the following discussion we will need a more formal representation. Let X_1, ..., X_n be a series of negative returns of a portfolio or asset with common distribution function F, where F(x) = P[X ≤ x].¹ We use negative returns as we are interested in losses, and it will turn out to be useful to work with positive numbers.

Put mathematically, VaR is the upper p-th quantile of the distribution, as shown in figure 1. The unconditional VaR is defined as

VaR_p^τ = F^{-1}(p),

where F^{-1} indicates the inverse of the distribution function of the returns and τ is the prediction horizon. The conditional VaR is defined as

VaR_{p,t}^τ = F_t^{-1}(p),

where F_t indicates the conditional distribution function of the returns at time t, given the information available at time t.

Figure 1: Value at Risk (illustration not included in this excerpt).

Although in most practical applications one is interested in the VaR of a portfolio, I will restrict the theoretical analysis to a single asset. The methods presented can be applied to portfolios as they only require the price series which can be constructed if the individual price series are available.

There are two main parameters characterizing VaR: the confidence probability p and the time horizon τ. The selection of both of them is a trade-off between accuracy and usability. If we use a long time horizon, estimates become less reliable, and the same holds for very small tail probabilities. In the following discussion I will use one-step-ahead VaR but give brief notes on the generalization to multi-period VaR.

As can be seen from the formal definition, the only thing we need for calculating VaR - given the parameters p and τ - is the distribution function of the (negative) returns, or more specifically its upper tail. But as we will never know it for certain, we must rely on estimates. In the next section I will discuss methods for estimating quantiles of unknown distribution functions.

2 Quantile Estimation

The p-th quantile of the distribution function F is a number x_p satisfying

x_p = F^{-1}(p) = inf{ x : F(x) ≥ p }.    (1)

There are several methods available to estimate a quantile of a sample. They can be divided into parametric and non-parametric methods.

2.1 Parametric Methods

Using parametric methods we assume that the distribution function of the observed sample is, at least asymptotically, identical to a known distribution function. Thus, we can fit the data to this function in order to obtain estimates of the parameters. Then, according to equation (1), the quantile can be calculated.

2.1.1 Normally distributed returns

If we assume the returns to be normally distributed, we can use the estimates of the mean and the standard deviation to calculate the quantile using the inverse of the normal distribution function Φ^{-1}, i.e. x_p = μ + σ Φ^{-1}(p). This can, of course, be applied to any other distribution function that might explain the behavior of the returns.
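As a brief illustration, a minimal R sketch of this parametric estimator could look as follows. The function name normal_var and the simulated data are hypothetical and serve only to show the calculation.

    # Parametric VaR under the normality assumption.
    # x: vector of negative returns (losses), p: confidence probability.
    normal_var <- function(x, p = 0.99) {
      mu    <- mean(x)
      sigma <- sd(x)
      mu + sigma * qnorm(p)   # p-th quantile of N(mu, sigma^2)
    }

    # Example with simulated data:
    set.seed(1)
    x <- rnorm(1000, mean = 0, sd = 0.01)
    normal_var(x, p = 0.99)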

2.1.2 Extreme Value Theory

Instead of fitting the data to the overall distribution function we could try to find a distribution function that describes the tail of the sample data more accurately. Extreme Value Theory shows that there are distribution functions that can be used to describe the extremal behavior of any distribution function. This will be discussed in the following sections.

2.2 Non-parametric Methods

Non-parametric methods try to find quantile estimates without assuming a specific distribution function. We refer to this method as the empirical quantile estimator. In the literature this method is also called historical simulation.

We apply order statistics to the series of negative returns x_1, ..., x_n, i.e. we sort them beginning with the smallest [12, p. 267]. Put formally, we have

x_(k) ≈ N( x_p , p(1 - p) / (n f(x_p)²) ),   k = np,    (2)

where x_(k) is the k-th order statistic used as estimator of the quantile, k = np, n is the number of observations, f is the density function and x_p is the actual quantile [3]. If k is not an integer value, we can use the next smaller integer or interpolate between the surrounding integers.

Thus, we can use x_(k) as an asymptotically unbiased estimator for the actual quantile.
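A minimal R sketch of the empirical quantile estimator (historical simulation) might look as follows; the function name empirical_var is hypothetical, and base R's quantile() offers the interpolation variant mentioned above.

    # Historical simulation: the ceiling(n*p)-th order statistic of the losses.
    # x: vector of negative returns, p: confidence probability.
    empirical_var <- function(x, p = 0.99) {
      n <- length(x)
      k <- ceiling(n * p)     # index of the order statistic
      sort(x)[k]
    }

    # Equivalently, quantile(x, probs = p) interpolates between order statistics.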

2.3 Discussion

The advantage of the non-parametric method is that it can be applied easily and does not need a distributional assumption. On the other hand, it assumes that the distribution does not change over time. Further, if we use it to estimate future VaR we possibly underestimate it, because the highest predicted loss cannot exceed the highest past loss. Finally, the quantile estimator is not applicable for the tails of the distribution, as can be seen from equation (2): the variance of the estimator grows as p approaches 1, because the density f(x_p) in the tail becomes small.

The normality assumption for returns is generally rejected in the financial literature. Even if we could assume the returns around the mean to be normally distributed, this does not hold for the extremes. Returns of financial instruments, e.g. stocks, exhibit heavy tails, i.e. the probability of extreme events is higher than under a normal distribution. This problem is often addressed by using heavy-tailed distributions such as the lognormal or t-distribution.

3 Extreme Value Theory

In the literature the dike-building example is often used to motivate extreme value theory (EVT). Consider an engineer constructing a dike. Obviously, one major question is the height of the dike. Suppose we have data on the sea level peaks of the last two decades. How can we use them efficiently to get an answer? Extreme Value Theory does not give miraculous insights, but it uses the data available to extract as much information as possible without requiring strong assumptions such as normally distributed sea levels.

EVT is frequently used in the insurance industry to calculate loss severities. McNeil [8] applies EVT to Danish fire loss data and shows that the fit at the tails provides strong evidence of the theory. Recently, EVT has been applied to financial problems, too.

EVT gives information about the distribution of extreme events, i.e. events from the tails of a distribution. Instead of describing the distribution on its whole domain, we try to find a distribution function that holds particularly for the tails. In the following sections I will use two useful theorems that give asymptotic results for the behavior of sample extremes. First, we have to define what is meant by extreme. One possibility is the maximum observation of a sample, which leads to the so-called Block Maxima approach. The other is to define a high threshold indicating extreme observations. The latter is referred to as the Peaks over Threshold (POT) method.

A more comprehensive discussion of EVT can be found in Embrechts et al. [4]. Coles [2] provides an introduction to the matter.

3.1 Block Maxima

Let x_1, ..., x_n be a series of negative returns with common distribution function F. We define the maximum of the sample as

M_n = max(X_1, ..., X_n).

If we knew the distribution function of the maxima M_n, we could calculate the quantile and Value at Risk. For independent observations,

P[M_n ≤ x] = P[X_1 ≤ x, ..., X_n ≤ x] = F(x)^n.

Thus, if we knew F we could easily calculate the distribution of M_n. But even if F were known, the distribution of the maxima would degenerate to a point as n goes to infinity. This is why we consider normalized maxima instead.

3.1.1 Fisher-Tippett Theorem

Theorem 1 (Fisher-Tippett). If the distribution of the normalized maxima converges to some limiting distribution H for increasing n, then H must be an extreme value distribution [5].²

Put formally, there must exist sequences of constants a_n > 0 and b_n such that

P[ (M_n - b_n) / a_n ≤ x ] = F^n(a_n x + b_n) → H(x)   as n → ∞.

This result is similar to the well-known Central Limit Theorem, which says that the standardized sum of random variables of any distribution with finite variance converges to a normally distributed random variable as the number of observations goes to infinity. Put formally,

( Σ_{i=1}^n X_i - nμ ) / (σ √n)  →  N(0, 1)   in distribution,

where N denotes the normal distribution.

3.1.2 Generalized Extreme Value Distribution

The Generalized Extreme Value Distribution (GEV) has distribution function

H_ξ(x) = exp( -(1 + ξx)^{-1/ξ} )   for ξ ≠ 0,
H_0(x) = exp( -e^{-x} )            for ξ = 0,

for 1 + ξx > 0.

The parameter ξ is called the shape parameter. The GEV is a generalization of three known classes of distributions:

ξ > 0: Fréchet distribution (Pareto, t)

ξ = 0: Gumbel distribution (normal, lognormal, gamma)

ξ < 0: Weibull distribution (uniform, beta)

If ξ > 0 the distribution has heavy tails, and thus we are mainly interested in Fréchet-type distributions. Figure 2 exhibits the Fréchet distribution for two different values of the shape parameter.

3.1.3 Parameter Estimation

A sample naturally has only one maximum. But there is no way to estimate parameters with only one observation available. Thus, we must circumvent this problem by building so-called block maxima. We define the block length as k and obtain n/k sub-samples; from each of these we take the maximum, which gives a sample of n/k block maxima.

Figure 2: The Generalized Extreme Value Distribution for two values of the shape parameter ξ (ξ = 1 shown as the thin line).

Now we can use the maximum likelihood method to estimate the parameters of the GEV distribution.³
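The following base-R sketch illustrates the construction of block maxima and a maximum likelihood fit of the GEV. It is only an illustrative alternative to the gevFit function mentioned in the footnote; the function names, starting values and the treatment of the shape parameter are assumptions.

    # Split the sample into blocks of length k and take the maximum of each block.
    block_maxima <- function(x, k) {
      m <- floor(length(x) / k)
      sapply(1:m, function(i) max(x[((i - 1) * k + 1):(i * k)]))
    }

    # Negative log-likelihood of the GEV (case xi != 0; the Gumbel limit is omitted).
    gev_nll <- function(par, z) {
      mu <- par[1]; sigma <- par[2]; xi <- par[3]
      if (sigma <= 0) return(Inf)
      t <- 1 + xi * (z - mu) / sigma
      if (any(t <= 0)) return(Inf)
      sum(log(sigma) + (1 + 1 / xi) * log(t) + t^(-1 / xi))
    }

    # Maximum likelihood estimation via numerical optimization.
    fit_gev <- function(maxima) {
      start <- c(mu = mean(maxima), sigma = sd(maxima), xi = 0.1)
      optim(start, gev_nll, z = maxima)$par
    }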

3.1.4 Calculating VaR

If we calculate the p-th quantile of the GEV distribution with the estimated parameters, we do not get the one-day-ahead VaR. We get the k-days-ahead estimate instead, because with the Block Maxima approach we estimate the distribution of the maxima occurring during k days. Thus, we must apply some adjustments.

Since H(x) ≈ F(x)^k for block length k, the p-th quantile of F corresponds to the p^k-th quantile of H. As stated above, VaR can therefore be calculated as

VaR_p = H^{-1}(p^k),

with the estimated GEV parameters. In other words: to obtain VaR we have to estimate the parameters of the GEV and then calculate its p^k-th quantile.
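Continuing the hypothetical sketch above, the Block Maxima VaR is obtained by evaluating the GEV quantile function at p^k; the default block length of 20 trading days is only an illustrative assumption.

    # VaR as the p^k-th quantile of the fitted GEV (formula for xi != 0).
    gev_var <- function(x, p = 0.99, k = 20) {
      par <- fit_gev(block_maxima(x, k))
      mu <- par[1]; sigma <- par[2]; xi <- par[3]
      q <- p^k                                      # adjustment for k-day maxima
      unname(mu + sigma / xi * ((-log(q))^(-xi) - 1))
    }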

3.1.5 Discussion

One important question is the block size k. The asymptotic result holds for a large number of maxima, which suggests choosing short blocks. But then we might fail to obtain a series of truly extreme observations.

One advantage of the Block Maxima method is that we can easily calculate multi period VaR by just setting the block size to the prediction interval (for sufficiently long periods).

Figure 3: Observations exceeding the threshold u are considered to be extreme values, denoted by y.

Unfortunately, there is no way to implement conditional forecasts comparable to ARMA models. Thus, it is impossible to build a Block Maxima model that reacts instantly to market dynamics.

3.2 Peaks over Threshold

As EVT deals with extreme events, we are still looking for an adequate way of defining these extreme events. The Peaks over Threshold (POT) method is another approach to doing this. The presentation in this section follows Embrechts et al. [4] and McNeil [9].

3.2.1 The Distribution of the Excesses

Suppose the negative returns x_1, ..., x_n are independently distributed with common distribution function F. We would like to find the distribution function of the excesses,

F_u(y) = P[ X - u ≤ y | X > u ].

We define a threshold u such that all observations that exceed u are considered to be extreme observations. The distribution function of these excesses can be written as

F_u(y) = ( F(y + u) - F(u) ) / ( 1 - F(u) ),   0 ≤ y < x_F - u,    (3)

where x_F denotes the right endpoint of F.⁴

3.2.2 Pickands-Balkema-de-Haan Theorem

The following theorem by Balkema and de Haan [1] and Pickands [11] provides an important asymptotic result.

Theorem 2 (Pickands-Balkema-de-Haan). If u → x_F, then the Generalized Pareto Distribution (GPD) is the limiting distribution function of the excesses over the threshold u. Put formally,

lim_{u → x_F}  sup_{0 ≤ y < x_F - u} | F_u(y) - G_{ξ,β(u)}(y) | = 0.    (4)

Thus, for high thresholds we can approximate the unknown excess distribution function F_u by the known Generalized Pareto Distribution G.

The optimal level of u is the result of a trade-off. If u is low there are many exceedances, but the asymptotic result does not hold any more. If u is high the number of exceedances may be too low. The mean excess function, introduced in section 3.2.7, can help in detecting an adequate threshold.

3.2.3 Generalized Pareto Distribution

The Generalized Pareto Distribution is a generalization of the ordinary Pareto distribution. Its distribution function is

G_{ξ,β}(y) = 1 - (1 + ξ y / β)^{-1/ξ}   for ξ ≠ 0,
G_{0,β}(y) = 1 - exp(-y / β)            for ξ = 0,

where β > 0 is the scaling parameter and ξ is the shape parameter. The domain of the GPD is y ≥ 0 if ξ ≥ 0, and 0 ≤ y ≤ -β/ξ if ξ < 0. For ξ > 0 this is the ordinary Pareto distribution with heavy tails.

Figure 4: The Generalized Pareto Distribution for several values of the shape parameter ξ.

3.2.4 Estimating Tails

From equation (4) it follows that F_u(y) ≈ G_{ξ,β}(y) for a sufficiently high threshold u, and therefore we can rewrite equation (3) as

F(x) = (1 - F(u)) G_{ξ,β}(x - u) + F(u)   for x > u.

Now the only unknown is the expression F(u). In section 2.3 we stated that we should not use empirical quantiles as estimators for quantiles lying in the tail of a distribution. However, if we choose u such that there is enough data below it, we can estimate F(u) using the empirical estimator. Denote the number of exceedances by N_u. Then F(u) can be estimated by (n - N_u)/n, and the tail estimator becomes

F(x) ≈ 1 - (N_u / n) (1 + ξ (x - u) / β)^{-1/ξ},    (7)

with the estimated GPD parameters ξ and β.

3.2.5 Estimating GPD Parameters

After having found a formula for the tail distribution, we must estimate the parameters of the GPD. They can be obtained by applying the maximum likelihood method to the excesses y. Therefore the nonlinear optimization problem max_{ξ,β} ℓ(y; ξ, β) must be solved, where ℓ(·) is the log-likelihood function.⁵
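A base-R maximum likelihood fit of the GPD to the excesses could be sketched as follows; the footnote's gpdFit() serves the same purpose, and the function names and starting values here are assumptions.

    # Negative log-likelihood of the GPD (case xi != 0).
    gpd_nll <- function(par, y) {
      xi <- par[1]; beta <- par[2]
      if (beta <= 0) return(Inf)
      t <- 1 + xi * y / beta
      if (any(t <= 0)) return(Inf)
      length(y) * log(beta) + (1 + 1 / xi) * sum(log(t))
    }

    # Fit the GPD to the excesses over the threshold u.
    fit_gpd <- function(x, u) {
      y <- x[x > u] - u                     # excesses over the threshold
      start <- c(xi = 0.1, beta = sd(y))
      optim(start, gpd_nll, y = y)$par
    }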

3.2.6 Estimating VaR

If we invert equation (7) we get an estimator for the quantile and for the VaR, respectively.

VaR_p = u + (β / ξ) [ ( (n / N_u) (1 - p) )^{-ξ} - 1 ],

with the estimated GPD parameters ξ and β.
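A short R sketch of this quantile inversion, building on the hypothetical fit_gpd() above (the default threshold at the 90% sample quantile is an illustrative assumption only):

    # POT VaR: invert the tail estimator of equation (7).
    pot_var <- function(x, p = 0.99, u = quantile(x, 0.90)) {
      par <- fit_gpd(x, u)
      xi <- par[1]; beta <- par[2]
      n  <- length(x)
      Nu <- sum(x > u)
      unname(u + beta / xi * ((n / Nu * (1 - p))^(-xi) - 1))
    }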

3.2.7 Mean Excess Function

The mean excess function is a useful tool for examining whether EVT can be applied to the data. It gives the expected excess over u, given that the threshold u is exceeded:

e(u) = E[ X - u | X > u ].

For the GPD the theoretical mean excess function is

e(u) = ( β + ξ u ) / ( 1 - ξ )   for ξ < 1.

This can help us in two respects. First, we notice that the mean excess function of the GPD must be a straight line in u. Thus, if we observe a linear empirical mean excess function we can assume the sample to stem from a GPD. Second, the slope ξ/(1 - ξ) is positive for 0 < ξ < 1. As we have stated above, the GPD has heavy tails for positive shape parameters. Thus, if we observe a positively sloped line above some threshold u, we can conclude that the excesses over u follow a GPD with heavy tails [4].


The empirical mean excess function is

e_n(u) = Σ_{x_i > u} (x_i - u)  /  #{ i : x_i > u },

where the denominator simply counts the number of excesses; thus the empirical mean excess function is just the average of the observed excesses over u.
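A small R sketch for computing and inspecting the empirical mean excess function (the grid of thresholds is an arbitrary illustrative choice):

    # Empirical mean excess function evaluated on a grid of thresholds.
    mean_excess <- function(x, thresholds) {
      sapply(thresholds, function(u) {
        exceed <- x[x > u]
        if (length(exceed) == 0) return(NA)
        mean(exceed - u)                  # average excess over u
      })
    }

    # Typical use: plot it and look for an upward-sloping, roughly linear region.
    # u_grid <- quantile(x, probs = seq(0.50, 0.99, by = 0.01))
    # plot(u_grid, mean_excess(x, u_grid), type = "l")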

Figure 7 shows an example of the empirical mean excess function.

3.3 Conditional Value at Risk

In this section we will develop a conditional Value at Risk estimator that enhances the AR-GARCH model by applying Extreme Value Theory, as described by McNeil and Frey [10] and McNeil [9].

In contrast to the unconditional VaR, we use the distribution function F_t, which depends on the information given at time t, to estimate the quantile. Therefore we need a model that provides an estimate of the conditional distribution function.

3.3.1 A Model for the Returns

Consider the following model for the returns:

X_t = μ_t + σ_t Z_t,    (9)

where Z_t has an unknown distribution function F_Z with zero mean and unit variance.

Assume that the negative returns follow an AR(1) process and that the variance can be described by a GARCH(1,1) process. The conditional mean and variance can then be expressed as

μ_t = φ X_{t-1},
σ_t² = α_0 + α_1 ε²_{t-1} + β_1 σ²_{t-1},   with ε_{t-1} = X_{t-1} - μ_{t-1}.
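Given estimated parameters, the filtering recursion of this model is simple to write down. The following R sketch computes the conditional means and variances and the adjusted residuals; the parameter names phi, a0, a1, b1 and the initialization are assumptions, and the estimation itself would normally be done with a dedicated GARCH package.

    # One-step filtering recursion for the AR(1)-GARCH(1,1) model.
    filter_ar_garch <- function(x, phi, a0, a1, b1) {
      n <- length(x)
      mu <- numeric(n + 1); sig2 <- numeric(n + 1)
      mu[1] <- 0; sig2[1] <- var(x)                    # crude initialization
      for (t in 1:n) {
        eps         <- x[t] - mu[t]                    # innovation at time t
        mu[t + 1]   <- phi * x[t]                      # conditional mean
        sig2[t + 1] <- a0 + a1 * eps^2 + b1 * sig2[t]  # conditional variance
      }
      list(mu_next    = mu[n + 1],
           sigma_next = sqrt(sig2[n + 1]),
           residuals  = (x - mu[1:n]) / sqrt(sig2[1:n]))   # adjusted residuals
    }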

3.3.2 One Day Value at Risk

We denote the conditional VaR by VaR_p^t, where the t indicates the information set on which VaR is calculated. As we restrict the analysis to one-day-ahead estimates, we do not explicitly include the time horizon.

VaR can be calculated using equation (9) as

VaR_p^t = μ_{t+1} + σ_{t+1} z_p,

where z_p is the p-th quantile of the innovation distribution F_Z. Thus we must first determine this quantile based on the innovations Z_t.

The renowned RiskMetrics approach takes the estimated values of μ_{t+1} and σ_{t+1} and uses the corresponding quantile of the normal distribution for z_p. Equivalently, one could use a t-distribution instead to take account of the heavy tails.

McNeil and Frey [10] propose a method for applying extreme value theory to the residuals of the AR-GARCH model in order to obtain more accurate VaR estimates. They suggest a two-step procedure:

1. Fit the AR(1)-GARCH(1,1) model to the return series by (quasi-) maximum likelihood and extract the adjusted (standardized) residuals.

2. Apply the POT method to the adjusted residuals to estimate the tail of F_Z and the quantile z_p.

As the theory only holds for serially uncorrelated observations we circumvent this problem by using the adjusted residuals instead of the original data.

The one-day-ahead predictions of μ_{t+1} and σ_{t+1} can be calculated using the estimated model.
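Putting the pieces together, a hypothetical conditional POT VaR could be sketched in R as follows, reusing the filter_ar_garch(), fit_gpd() and pot_var() sketches from above:

    # Conditional POT VaR: AR-GARCH forecast combined with the EVT quantile
    # of the adjusted residuals (two-step idea of McNeil and Frey).
    conditional_pot_var <- function(x, phi, a0, a1, b1, p = 0.99) {
      f  <- filter_ar_garch(x, phi, a0, a1, b1)
      zp <- pot_var(f$residuals, p = p)     # EVT quantile of the innovations
      f$mu_next + f$sigma_next * zp
    }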

4 Case Study

In this section I will firstly describe a method to compare different VaR estimators using backtesting. Then I will calculate VaR by applying the methods explained in the preceding sections. In particular I will calculate VaR using the following methods:

- Block Maxima method
- Conditional POT
- Unconditional POT
- Empirical Quantiles
- Conditional Normal

4.1 Backtesting

With each of the methods mentioned we calculate a VaR estimate for the next day. A violation takes place if the next day's return exceeds the VaR prediction, i.e. if x_{t+1} > VaR_p^t. Assume that the VaR model is correct, i.e. it provides an estimate such that the predicted VaR equals the actual VaR. We will refer to this as the null hypothesis. Under the null hypothesis the number of exceedances is binomially distributed with density function [7]

f(x) = C(T, x) q^x (1 - q)^{T - x},

where T is the number of VaR forecasts and q is the probability of a violation under the null hypothesis.

If the probability of observing x exceedances is very low, say below 5 per cent, we reject the null hypothesis that the model predicts the true VaR.
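In R, an exact binomial test is available out of the box; a sketch of the backtest evaluation could look as follows (the function name and the numbers in the example are made up, and binom.test performs a two-sided test):

    # p-value of the exact binomial test for the number of violations.
    backtest_var <- function(violations, n_forecasts, p_violation) {
      binom.test(violations, n_forecasts, p = p_violation)$p.value
    }

    # Example: 18 violations in 1000 forecasts at the 1% level.
    backtest_var(18, 1000, 0.01)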

4.2 Data Analysis

Figure 5 shows the Dax time series from 3-Jan-1973 to 10-Jan-2006.⁶ The horizontal lines indicate different segments that will be inspected in detail.

Before we proceed with the calculation of VaR estimates, we should test whether the assumption of Pareto-distributed tails holds. Figure 7 (left) shows the empirical mean excess function calculated for the overall series. In section 3.2.7 we stated that under Pareto-distributed tails the mean excess function must be a positively sloped line above a high threshold. We can see that this is approximately the case for returns exceeding a threshold of zero. The right picture of figure 7 shows the empirical distribution function of the Dax series (for negative returns only). The GPD distribution function with parameters estimated from the total sample is superimposed. We observe a good fit for the negative returns exceeding zero. Note that the abscissa has a log scale.

Although the GPD seems to fit the overall series very well, we might expect that there are periods with a less good fit. The Dax series is segmented into six periods, for which the empirical mean excess function is plotted in figure 8. For most periods we can assume the tails to follow a GPD. But the period from 02/2000 to 02/2003 clearly does not have GPD-distributed tails. Thus, we would expect the POT VaR estimate to fail in those periods where the GPD is estimated using these data.

4.3 Results

The VaR estimation is performed using the latest 1,000 observations, which is approximately equivalent to the latest four years. For each method I calculate VaR for three probabilities (5%, 1% and 0.5%). The threshold for the POT estimate is chosen such that the tail always comprises 10 per cent of the data, i.e. 100 observations. Finally, the number of VaR violations is evaluated and the binomial test is applied. The results are shown in table 1.

Before we analyze the results in more detail, we should check whether the model for the returns works. Using the common Box-Jenkins method to find an adequate model, we have to look at the autocorrelation (acf) and the partial autocorrelation (pacf) of the returns. The same applies to the squared residuals in order to find a model for the conditional variance. The analysis of the series suggests an AR(1)-GARCH(1,1) model. Whereas the negative returns of the Dax (see figure 6) exhibit strong evidence of heteroscedasticity, the adjusted residuals appear homoscedastic and serially uncorrelated. This result is confirmed by analyzing the acf/pacf: homoscedasticity cannot be rejected for the adjusted residuals.

Figure 9 exhibits the negative Dax returns with the calculated VaR estimates superimposed. The top picture shows results for the stress period around 2002, where daily losses frequently exceed two per cent. In contrast, the picture at the bottom shows a relatively calm period where daily losses rarely exceed two per cent.

Whereas both the conditional and the unconditional estimates are in line during the non-stress period, they deviate significantly during the stress period. This is not astonishing, as the unconditional estimates represent the average loss severity of the last 1,000 trading days. Their adjustment to changes in the market conditions takes a long time. Thus, the unconditional estimates are hit several times in a row.

In the stress period the conditional normal estimate seems to lie below the conditional POT estimate and is hit just two more times, as indicated by the symbols at the top of the picture. The underestimation of VaR due to the normality assumption is in line with the theory. To analyze the power of the estimators more exactly, we now turn to the tabulated results.

The three tables present the number of exceedances of VaR at the confidence levels 5, 1 and 0.5 per cent, with the corresponding p-values of the binomial test in parentheses. For the first table the overall data from 1976 to 2006 has been used. The following two tables use only data from specific periods: the second covers a period with little market dynamics, and the third contains results from the stress period around 2002.

In the overall sample, both the conditional POT and the conditional normal estimators show similar results for the 5 per cent quantile. But, in line with the theory, the conditional POT estimate dominates for the more extreme quantiles.

In the non-stress period the conditional POT shows good results even for the high quantiles. The unconditional estimators also provide relatively reasonable results, but they are far from being reliable.

The period of most interest is the stress period. First, the unconditional estimators fail. The high p-value of the quantile estimator at 0.5 per cent might be due to the specific sample and cannot be considered systematic. The conditional POT estimator is no longer superior to the conditional normal estimator. The reason for this can be seen in figure 8, where the period from 2000 to 2003 does not exhibit heavy tails. But with no heavy tails, the normality assumption does not lead to an underestimation of VaR, and thus both estimators are similar. The GPD estimate in the period itself is good, as the data used come mainly from the preceding period. But in the period from 2003 to 2006 the POT estimator fails, because the data then used have no heavy tails. Thus, in order to apply the GPD to the data we would have to set a very high threshold, but then too few exceedances remain, making estimation of the parameters impossible.

Table 1: Backtesting results of the Dax for three periods. The first table shows the results for the overall period. The second table contains the results for the non-stress period from 1994 to 1996, whereas the third table shows the results for the stress period from 2000 to 2003.

To sum up, the dominance of the conditional estimators over the unconditional ones is apparent. The most accurate estimator over all scenarios has been the conditional POT. But the conditional normal estimator also seems to be reliable, especially in non-stress periods, and it then requires less capital to be held.


Figure 5: Dax with horizontal lines indicating segments.


Figure 6: Negative returns of the Dax index from 3-Jan-73 to 10-Jan-06. The first series shows the actual returns whereas the second shows the normalized returns which are the residuals of the AR(1)-GARCH(1,1) model.


Figure 7: The left picture shows the mean excess function of the Dax index with all data used. The upward-sloping curve above zero implies a GPD for excesses over zero. The right picture shows the empirical distribution function of the negative Dax returns and the fitted GPD distribution for excesses over the threshold u = 0.


Figure 8: Mean excess function of negative Dax returns. The upward-sloping curve indicates heavy tails. The approximately linear parts imply a GPD. In the period from 2000 to 2003 we observe neither heavy tails nor an indication of a GPD.


Figure 9: The top picture shows negative returns of the Dax from 2001-09-25 to 2003-02-14. Superimposed are several VaR estimates. The second picture shows negative Dax returns for the period from 1996-01-01 to 1996-12-17, which we refer to as the non-stress period. The + (-) symbols at the top indicate violations of the conditional POT (conditional normal) estimate.

References

[1] A. Balkema and L. de Haan. Residual life time at great age. Annals of Probability, 2:792-804, 1974.

[2] S. Coles. An Introduction to Statistical Modeling of Extreme Values. Springer, 2001.

[3] D. R. Cox and D. V. Hinkley. Theoretical Statistics. Chapman and Hall, 1974.

[4] P. Embrechts, Th. Mikosch, and C. Klüppelberg. Modelling Extremal Events for Insurance and Finance. Springer, 1997.

[5] R. Fisher and L. Tippett. Limiting forms of the frequency distribution of the largest or smallest member of a sample. Proceedings of the Cambridge Philosophical Society, 24:180-190, 1928.

[6] R. V. Hogg and S. A. Klugman. Loss Distributions. Wiley, 1984.

[7] Ph. Jorion. Value at Risk. McGraw-Hill, 2001.

[8] A. J. McNeil. Estimating the tails of loss severity distributions using extreme value theory. ASTIN Bulletin, 27:117-137, 1997.

[9] A. J. McNeil. Extreme value theory for risk managers. Internal Modelling and CAD II, pages 93-113, 1999.

[10] A. J. McNeil and R. Frey. Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach. Journal of Empirical Finance, 7:271-300, 2000.

[11] J. Pickands. Statistical inference using extreme order statistics. The Annals of Statistics, 3:119-131, 1975.

[12] R. S. Tsay. Analysis of Financial Time Series. John Wiley & Sons, Inc., 2002.

[13] D. Würtz. fExtremes: Financial Software Collection. 2005. R package version 201.10060.

[...]


¹ In what follows I will use capital letters to indicate random variables and lower case letters for realizations of those random variables.

² See also Embrechts et al. [4] and Tsay [12, p. 270].

³ The package fExtremes for the statistical software R contains a function gevFit that provides maximum likelihood estimates of the parameters [13].

⁴ See Hogg and Klugman [6].

⁵ The fExtremes package for R contains the function gpdFit() that implements the maximum likelihood estimator [13].

⁶ The series containing data up to 1996 can be downloaded from McNeil's homepage at http://www.math.ethz.ch/˜mcneil/data.html. The prices from 1996 to 2006 are obtained from Yahoo Finance.

