BUILDING A CREDIT VALUE ADJUSTMENT INFRASTRUCTURE - DATA

This is the fifth in Risk in the Market's series on CVA, in
which we will look at the data requirements of a CVA solution.
Our previous post looked at the Strategies of Credit Value
Adjustment and how banks take different approaches when devising
their CVA strategies, according to their requirements (such as the
size of their derivatives operation).
The calculation of CVA is complex and involves portfolio-wide
Monte Carlo simulations of exposures and good credit risk data for
all the bank's OTC derivatives counterparties.
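To give a rough sense of what such a calculation involves, here is a minimal sketch of a unilateral CVA computed from simulated exposures. It assumes a single counterparty, a driftless Brownian mark-to-market, a flat hazard rate and flat discounting; every parameter is a made-up placeholder, not a realistic calibration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative setup (all values are placeholders): one portfolio whose
# mark-to-market follows driftless Brownian motion over a 5-year horizon.
n_paths, n_steps, horizon = 10_000, 20, 5.0
times = np.linspace(0.0, horizon, n_steps + 1)[1:]
sigma = 50_000.0                          # MtM volatility in currency units per sqrt(year)

# Simulate MtM paths; exposure is the positive part of the MtM.
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(horizon / n_steps)
mtm = sigma * np.cumsum(dW, axis=1)
epe = np.maximum(mtm, 0.0).mean(axis=0)   # expected positive exposure per date

# Flat hazard rate and discount curve stand in for real credit/market data.
hazard, r, recovery = 0.02, 0.03, 0.4
survival = np.exp(-hazard * times)
default_prob = -np.diff(np.concatenate(([1.0], survival)))  # PD in each interval
df = np.exp(-r * times)

# Unilateral CVA: loss-given-default times discounted EPE, weighted by
# the probability of default in each interval.
cva = (1.0 - recovery) * np.sum(df * epe * default_prob)
print(f"CVA \u2248 {cva:,.0f}")
```

A production calculation would replace the toy MtM model with full portfolio-wide simulations, netting and collateral, and would source the survival probabilities from the counterparty credit data discussed below.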
Due to previous investment in the capabilities necessary for
calculating economic and regulatory capital, many banks already
have in place all (or at least part) of the different elements
needed to build a CVA solution.
But therein lies the problem: many of these elements are
dispersed across different departments, and a more consolidated
approach is required for CVA.
So what are the elements? In broad terms, a CVA solution needs
access to the following types of data.
The challenge faced by banks isn't generating this data. Much of
it is standard input to current platforms used to calculate market
and counterparty credit risk, and therefore already available:
- Securities data - available from the
front-office trade capture and pricing systems
- Static data - generally the same as
that used by limit management solutions
- Market data - can be sourced from trading
and risk management systems
- Credit risk data - can be sourced from
systems that calculate economic or regulatory capital (especially
if the bank is already using an internal ratings-based approach)
In fact, the challenge lies in consolidating and normalizing the
data so it can be used by a centralised CVA solution.
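To make the consolidation point concrete, here is a minimal sketch of joining records from two hypothetical source systems under a normalised counterparty key. The field names and values are illustrative assumptions, not a real bank schema.

```python
# Hypothetical records as they might arrive from separate departments:
# the front-office trade capture system and the limit management system.
trade_capture = {"cpty": "ACME LTD ", "notional": "10,000,000", "ccy": "USD"}
limits_system = {"counterparty_id": "acme   ltd", "rating": "BBB", "sector": "Energy"}

def normalise_name(raw: str) -> str:
    # A common first step: canonicalise the counterparty key so records
    # from different systems can be joined reliably.
    return " ".join(raw.strip().upper().split())

# Merge both feeds into one record per counterparty for the CVA engine.
merged: dict[str, dict] = {}
for rec in (trade_capture, limits_system):
    key = normalise_name(rec.get("cpty") or rec.get("counterparty_id"))
    merged.setdefault(key, {}).update(rec)

print(merged["ACME LTD"])
```

Real consolidation layers go much further (currency and date conventions, identifier cross-references such as legal entity identifiers, conflict resolution between feeds), but the normalise-then-join pattern above is the core of it.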
The calibration of the market data simulation models will
determine whether the bank's current market data is sufficient, or
whether it needs to be supplemented by historical time series. This
in turn depends on how the CVA is going to be used by the bank, for
example, for risk management, regulatory purposes or derivatives
pricing.
We'll take a look at that particular aspect in more detail in
the next blog in this CVA series.
However, the sourcing and cleansing of the data is only one part
of the story.
Dealing with missing or unreliable data
What happens if the data simply doesn't exist, or is unreliable?
This can be the case for many smaller counterparties or in the
case of less liquid markets.
In these circumstances, it is up to the banks either to create
synthetic data or to apply more approximate methods where the data
is unreliable.
To illustrate this further, let's look at a scenario where a
credit spread curve is not available from market data for a smaller
counterparty.
In this situation, the credit risk could be approximated by
using the probability of default of an equivalent counterparty:
- With the same rating
- Operating in a similar business
- In the same geographical region
The data used would be determined by each individual case;
therefore, it is important that the systems in place can manage
these mappings in a flexible and transparent way.
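A flexible, transparent mapping of this kind might be sketched as a proxy lookup that falls back from the most specific bucket (rating, sector, region) to coarser ones, and reports which bucket it actually used. All curve values and bucket names below are hypothetical placeholders.

```python
# Hypothetical proxy spread curves (in basis points), keyed by
# (rating, sector, region); None marks a dimension left unspecified.
proxy_curves = {
    ("BBB", "Energy", "EMEA"): 180.0,   # full match
    ("BBB", "Energy", None): 170.0,     # rating + sector
    ("BBB", None, None): 150.0,         # rating only
}

def proxy_spread(rating: str, sector: str, region: str):
    """Return the most specific proxy spread available, together with
    the bucket used, so the mapping stays transparent and auditable."""
    for bucket in ((rating, sector, region),
                   (rating, sector, None),
                   (rating, None, None)):
        if bucket in proxy_curves:
            return proxy_curves[bucket], bucket
    raise KeyError(f"no proxy curve for {(rating, sector, region)}")

# A BBB energy counterparty in a region with no dedicated curve
# falls back to the rating + sector bucket.
spread, used = proxy_spread("BBB", "Energy", "APAC")
print(spread, used)
```

Returning the bucket alongside the spread is one way to satisfy the transparency requirement above: the CVA engine can log exactly which approximation was applied to each counterparty.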
As we saw in our earlier post about the strategies
of CVA, each bank will have different requirements in terms of
hedging or trading CVA depending on the size of the derivatives
operation and bank strategy.
Therefore, their analytics requirements will also be affected by
these factors.
The next blog in our CVA series will look at analytics as
part of a CVA solution.