Introduction
1) Continuous Dilemma
a) Transaction cost (if we want to hedge at high frequency to mimic the continuous/theoretical method, we have to face it)
b) Variance of the return in hand (different times, different situations)
2) Risk Management
a) Financial risk inherent to a non-financial business
b) Market risk incurred by the provider of a financial instrument
c) Micromanagement/macromanagement of hedging
d) Residual risks (legal risk, fraud risk, credit risk, ...)
3) Gap between trader and quant
a) Explain the main conclusion in a single sentence before discussing the subject matter
b) Explain the subject matter in a single sentence
c) Reject the whole project if unable to perform the above two steps.
Saturday, September 22, 2007
Note 05: Volatility Smile (II)
4) Universal Volatility Model
Interestingly, one can combine a local volatility model (which regards the volatility as a function of time and stock price) with a stochastic volatility (jump diffusion) model to construct a universal volatility model (Blacher/Lipton). More parameters, more accuracy...
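As a rough sketch of what such a combined model can look like (my own schematic form under the risk-neutral measure, not the exact Blacher or Lipton specification), one mixes a local volatility function, a stochastic variance factor and jumps in a single SDE:

dS_t / S_{t^-} = (r - \lambda m)\,dt + \sigma(S_t, t)\,\sqrt{v_t}\,dW^{(1)}_t + (e^{J} - 1)\,dN_t
dv_t = \kappa(\theta - v_t)\,dt + \varepsilon\,\sqrt{v_t}\,dW^{(2)}_t, \qquad d\langle W^{(1)}, W^{(2)} \rangle_t = \rho\,dt

Here \sigma(S, t) is the local volatility part, v_t the stochastic factor, N_t a Poisson process with intensity \lambda, J the jump size and m = E[e^{J} - 1] the jump compensator.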
5) Regime Switching Model
This model assumes that the stock price should be modeled by different regimes with different parameters. At first sight this is not very realistic: even if we allow a smooth (in time) switch from one regime to another, the market dynamics are not that simple, and there is no guarantee that the real dynamics are governed by regimes at all. However, one can implement the above models 1)-4) as the different regimes; with the introduction of more parameters, one can surely achieve higher precision.
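A minimal sketch of the regime idea, assuming just two regimes driven by a continuous-time Markov chain \alpha_t \in \{1, 2\}:

dS_t / S_t = \mu_{\alpha_t}\,dt + \sigma_{\alpha_t}\,dW_t

Each regime i carries its own parameters (\mu_i, \sigma_i), and the chain switches between regimes at given transition rates; in the same spirit, each regime could run one of the models 1)-4) above.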
6) Calibration with exotic options
As we have a lot of models in hand, and every one of them seems to fit the volatility surface well, what should we do? Of course, directly calibrating the model to exotic options is a good idea.
Then comes the question: the parameters vs. the prediction.
(to be continued)
Wednesday, September 19, 2007
How to analyze Panel data using SAS
Step 1: Sort the data properly
eg:
proc sort data=a;
by state date;
run;
Step 2: Invoke the TSCSREG procedure, specifying the cross-section and time-series variables in an ID statement, and specify the linear regression model with a MODEL statement: the dependent variable is listed first, followed by an equal sign and the list of regressor variables.
eg:
proc tscsreg data=a;
id state date;
model y = x1 x2;
run;
To aid model specification within this class of models, the procedure provides two specification test statistics:
a) an F statistic that tests the null hypothesis that the fixed effects parameters are all zero; rejection of this null hypothesis suggests that the fixed effects model is appropriate;
b) a Hausman m-statistic that provides information about the appropriateness of the random effects specification.
Fixed effects: the models are essentially regression models with dummy variables corresponding to the specified effects
Random effects:
We include a classical error term with zero mean and a homoscedastic covariance matrix.
One-way model: we additionally include a random error component indexed by cross section.
Two-way model: we additionally include a random error component indexed by time as well.
Usually you cannot explicitly specify all the explanatory variables that affect the dependent variable; the omitted or unobservable variables are summarized in the error disturbances. The TSCSREG procedure used with the Fuller-Battese method adds the individual and time-specific random effects to the error disturbances, and the parameters are estimated efficiently by GLS.
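In compact notation (written here from memory in the standard error-components form, not quoted from the SAS documentation), the model is:

y_{it} = x_{it}'\beta + u_{it}
One-way:  u_{it} = \nu_i + \epsilon_{it}
Two-way:  u_{it} = \nu_i + \lambda_t + \epsilon_{it}

In the fixed effects case the \nu_i (and \lambda_t) are parameters estimated via dummy variables; in the random effects case they are zero-mean random variables, and GLS (e.g., the Fuller-Battese method) estimates \beta efficiently.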
Step 3: The following statements are used with the TSCSREG procedure.
PROC TSCSREG options;
BY variables;
ID cross-section-id-variable time-series-id-variable;
MODEL dependent = regressor-variables / options;
label: TEST equation [,equation... ];
Tuesday, September 18, 2007
Note 04: My Understanding of the Volatility Smile and Its Models
It is well known to almost everyone in finance that the volatility surface is not flat.
In my understanding, the volatility smile is the result of multiple market interactions: hedging (fear of risk), stochastic volatility/jump diffusion, different opinions on the future trend, and so on. But the main reason is the emergence of a new job: the quant. The reason is simple. Before 1987 the surface was flat; after 1987, with the increasing use of complex and sophisticated financial tools, the surface became curved. In this sense, neither stochastic volatility/jump diffusion nor differing opinions on the future trend is the critical driver of this effect, since even before 1987 jumps occurred frequently and volatility was still not constant (e.g., volatility clustering).
In this sense, the evolution of the volatility surface from a flat plane to a curved one comes mainly from the hedging side, boosted by quants. Once some people (some firms) began to use sophisticated models (e.g., GARCH models, local volatility models, SV/JD models) instead of the Black-Scholes model to price options, the volatility surface appeared naturally. Then more and more people noticed the effect and began to use the new tools, and then... It is a feedback loop, like an electronic amplification circuit.
Within this philosophy, how can we model the volatility surface? The answer is that we need an adaptive model that can survive, because the volatility surface itself is an emergent effect of the market's feedback on human knowledge. But that does not solve anything: our models have no intelligence, and they cannot evolve dynamically...
So let me just analyze the basic models one by one; my thinking is still evolving...
1) Local volatility model
The biggest disadvantage of this model is that the asymptotic smile becomes flat when the maturity is long. The reason is simple: over a long time period, the stock can traverse almost every kind of path through different price levels, so there is little advantage in using a local volatility function. According to ito33.com, we cannot make "everything" local: interest rates, volatility and correlations cannot simply be parameterized, since they interact dynamically and change with the evolution of the market.
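For reference, the local volatility is usually backed out from vanilla call prices C(K, T) through Dupire's formula (written here without dividends):

\sigma_{loc}^2(K, T) = \frac{\partial C / \partial T + r K\, \partial C / \partial K}{\tfrac{1}{2} K^2\, \partial^2 C / \partial K^2}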
2) Stochastic volatility model
At first it looks promising, as it is adaptive to some degree. But first, and perhaps fatally, an option price is just the reflection of human predictions about the future market. It is not true that everyone uses an SV model, or simply calibrates to vanilla options, so people's predictions about an option need not agree with each other. Secondly, jumps happen. One cannot mimic a large jump with Brownian motion; in mathematical language, one can compensate for small jumps by correcting the model but cannot compensate for large jumps (that is why we need a compensated Poisson process to mimic a Levy process). It is true that stochastic volatility correctly reflects volatility clustering across economic periods, but the jumpy nature of markets means an SV model is not a perfect one. Third, it is not easy to hedge out the risk arising from the uncertainty of volatility.
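Taking the Heston model as the representative example of this class:

dS_t = \mu S_t\,dt + \sqrt{v_t}\, S_t\, dW^{(1)}_t
dv_t = \kappa(\theta - v_t)\,dt + \xi \sqrt{v_t}\, dW^{(2)}_t, \qquad d\langle W^{(1)}, W^{(2)} \rangle_t = \rho\,dt

The variance v_t mean-reverts to \theta, which is exactly what produces volatility clustering, but both drivers are still Brownian motions, so large sudden moves remain essentially impossible.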
3) Jump diffusion model
Although a jump model cannot predict any real jump, like the impact of 9/11, it models jumps statistically in a reasonable sense, correctly mimics the fat-tail effect, and fits the distribution as well as the other sophisticated models.
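The classic example is Merton's jump diffusion, where jumps arrive through a Poisson process N_t with intensity \lambda and i.i.d. sizes J:

dS_t / S_{t^-} = (\mu - \lambda m)\,dt + \sigma\, dW_t + (e^{J} - 1)\, dN_t, \qquad m = E[e^{J} - 1]

The extra mass contributed by the jump term is what produces the fat tails mentioned above.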
(to be continued...)
Monday, September 17, 2007
Note 03: How to add our own function to QuantLibAddin
1. Add a new function to an existing category
A. E.g., our function is implemented in
QuantLibAddin/qla/****.cpp
QuantLibAddin/qla/****.hpp
B. The file generalutils.hpp in the qla directory can be included to pick up additional utility functions.
C. QuantLibAddin is itself a C++ Addin which can be loaded directly into standalone C++ client applications, so it is best to test the new functionality in a standalone program before autogenerating the source for the spreadsheets. Then edit the file
QuantLibAddin/Clients/C++/instruments.cpp
adding some example code to demonstrate the use of the new function (a sketch of such a test program follows this list).
D. Edit QuantLibAddin/srcgen/instruments.xml to provide the definition of the new function. See the link for the details of the edit.
E. Rebuild the srcgen project to generate the source for the Addins
F. Rebuild the Addins.
G. Amend the client files ...
QuantLibAddin/Clients/Excel/instruments.xls QuantLibAddin/Clients/Calc/instruments.sxc QuantLibAddin/Clients/C/instruments.c
... to demonstrate the use of the new function.
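As a hedged sketch of what the standalone test in step C might look like: MyInstrument and its constructor arguments are hypothetical placeholders, not real QuantLibAddin names, and the logging and storage calls simply mirror the client example in Note 01 below (whose header names were lost, hence the comment).

// standalone C++ client test for a newly added QuantLibAddin function (sketch)
#include <sstream>
// plus the QuantLibAddin / ObjectHandler headers as in Note 01
using namespace std;
using namespace ObjHandler;
using namespace QuantLibAddin;

int main() {
    try {
        setLogFile("new_function_test.log");    // log to a file
        setConsole(1);                          // echo log messages to stdout
        logMessage("testing the new function");
        // construct the object exposed by the new function (hypothetical class)
        obj_ptr myObject(new MyInstrument(/* constructor arguments */));
        storeObject("MyInstrument", myObject);  // register it in the repository
        logObject("MyInstrument");              // write its properties to the log
        return 0;
    } catch (const std::exception &e) {
        std::ostringstream s;
        s << "Error: " << e.what();
        ObjectHandler::logMessage(s.str(), 1);
        return 1;
    }
}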
2. Add a new category
see link...
Note 02: Seven Easy Steps to debug Excel Add-ins in Visual Studio 2005
Before any step, you should have a program to debug!
Step 1: To build the Debug version, use Visual Studio's Build - Configuration Manager... menu option and set the active configuration to Debug.
Step 2: Select the project in the Solution Explorer and show its Properties page. Select the "Debugging" node in the left pane, and change the "Command" property to the preferred version of EXCEL.EXE. (You can omit this step if you do not need to change the default Excel version.)
Step 3: Press F5 (or use the Start - Debug menu option) to start Excel.
Step 4: As soon as Excel starts, it will attempt to open the Debug build of your add-in. (This is because the XLL+ AppWizard set the "Command Arguments" property of the configuration to "$(TargetDir)/Tutorial1.xll" when the project was created.)
Step 5: Use the add-in function in Excel
Step 6: Set a breakpoint in Visual Studio 2005 (use the F9 key or the right-mouse menu to add a breakpoint inside the add-in function).
Step 7: Make the formula cell recalculate. One way is to go to the cell, press F2, and then press Enter. (Alternatively, you can simply change the value of a cell it depends on, and Excel will recalculate automatically.)
After you press Enter, Excel will call the add-in function and the DevStudio debugger will break at the break-point. You can step through the lines of the function (with F10 and F11) or continue (with F5).
Close Excel and return to Visual Studio.
Note 01: Typical example of using the QuantLibAddin C++ client
#include <...>        // header names lost in the original post
using namespace std;
#include <...>        // header names lost in the original post
using namespace QuantLib;
using namespace ObjHandler;
using namespace QuantLibAddin;
int main() {
    // deal with errors
    try {
        // set up the log file
        setLogFile("*****.log");
        // direct log messages to stdout
        setConsole(1);
        // write a log message to the log file
        logMessage("************");
        // create the object (constructor elided in the original post)
        ObjHandler::obj_ptr MyObject(new *****(***)); // ...
        // store "MyObject"
        storeObject("MyObject", MyObject);
        // ObjectHandler::Repository::instance().storeObject("MyObject", MyObject);
        // write the object's ID to the log file
        logObject("MyObject");
        // retrieve the object with the given ID and recast it to the desired type
        // retrieveObject("MyObject", MyObject);
        return 0;
    } catch (const std::exception &e) {
        std::ostringstream s;
        s << "Error: " << e.what();
        ObjectHandler::logMessage(s.str(), 1);
        return 1;
    } catch (...) {
        ObjectHandler::logMessage("Error", 1);
        return 1;
    }
}
Sunday, September 16, 2007
Planning in this week
Get familiar with QuantLib, Boost, the way to export functions from QuantLib to Excel, and the way to debug from Excel into VC8.