This webinar provides practical insights, equipping viewers with the knowledge to enhance their climate risk management using …
Hello, everyone, and
welcome to today’s webinar on climate stress
testing for credit risk, where we’ll take a deep
dive into the evolving landscape of financial risk management
amidst the growing concern of climate change. My name is Akshay Paul, product
manager for the Quant Finance Products at MathWorks. Financial institutions worldwide
are facing a new frontier of challenges, with climate
risk at the forefront of emerging threats to credit
risk assessment and management. Our session today is
dedicated to addressing some of the challenges
associated with navigating these uncharted waters. I’m joined by two of my
colleagues, Elre Oldewage and Eduard Benet,
both application engineers at MathWorks
and experts in this field. All right. So today, they’ll be
going through the details of climate stress
testing by touching on four key challenges– data management,
model development, deployment and reporting,
and overall governance. Throughout this webinar,
we aim to provide you with practical
examples and tools for developing climate
stress tests, leveraging both open source and
commercial data sets. Whether you’re just
beginning to consider the implications of
climate risk or looking to enhance your existing
models, today’s session is designed to equip you
with the knowledge and tools necessary for success in this
new era of financial risk management. A couple of quick logistics
before we get started– firstly, please use the
Q&A panel on the Teams chat or on the Teams application
to ask any questions. We have a Q&A session at the end, where we’ll go through
all those questions and answer any new
ones you might have. Secondly, the webinar
is being recorded and will be available on
our website after the event. So I’d like to start
off with a quick poll to get a sense of where
folks are currently at in their assessment
of climate risks within their organization. I’m sorry, there is a small
technical problem with the poll, so we’re going to have
to skip the polls. OK, all right. So the polls,
we’ll skip for now, but we’ll be able to– if
the problem gets resolved, we’ll post them in the chat,
and then you can answer those a little later. All right. So before we get into the
details of the challenges involved, I’d like to set the
stage a little bit with a primer into the traditional financial
stress tests and climate stress tests, highlighting some of the
differences between the two. So as most of you all know,
traditional stress tests and climate stress tests, while
sharing the overarching goal of evaluating the resilience
of a financial institution under adverse conditions,
they diverge significantly in their scope, time horizon,
and the nature of risks they assess. Traditional stress
testing primarily focuses on short- to
medium-term economic scenarios, such as market crashes, interest
rate hikes, or recessions that could impact the
bank’s capital adequacy. These tests are grounded
in historical data and economic cycles,
aiming to ensure that institutions can withstand
acute financial shocks. Climate stress tests,
on the other hand, introduce a novel dimension by assessing the long-term financial risks associated with climate change, including both physical risks, like floods, hurricanes, wildfires, and cyclones, and transition risks as economies shift towards low-carbon alternatives. Unlike traditional stress tests, climate stress tests rely less on
historical data and more on predictive models
and scenarios, spanning multiple decades. It requires integrating
complex climate models with financial forecasting,
addressing uncertainties inherent in long-term
climate impacts, and assessing how
these risks cascade through economies, markets,
and specific portfolios. All right. So to summarize, the key
differences between the two are in the time horizon,
the data requirements, scenario design, and
assessment of exposure. So with that, I’d
like to hand it off to Elre, who’s going to go into the details, starting with data. Take it away, Elre. Thank you, Akshay. Hi, everyone. I’m Elre Oldewage, an application
engineer here at MathWorks. And I’m going to
be talking to you about the challenges involved
in climate stress testing. So there are many
different dimensions to consider when performing
climate stress tests, but the process will follow
this general pattern. First, you need to collect
relevant climate data. This can be information
like the risk of natural disasters, energy
demand, or policy requirements. Then you need to do
some kind of modeling to process that data into
a form that can be ingested by your financial models. For example, wind
speed in kilometers per hour for hurricanes
or megajoules of energy, if you’re considering
energy demand, is not directly consumable by
a probability of default model. This needs to be transformed
into financial shocks in some way so that
you can compute the impact on your portfolio. As we’ll see, the data
and modeling steps can be closely intertwined
because the kind of processing you do on the data depends on
what you want to use it for. We’ll discuss
strategies to do this in a way that is modular and
reusable so that you can easily apply different models
to the same data source or apply the same model
to different data sources. Once you’ve
developed your model, you need to share the outcome
of the stress test in some way. This may be by making the model
accessible to other members of your team via
deployment, or it might be by generating some
sort of climate impact report. Throughout, we’ll see
MathWorks Modelscape platform, which supports the full
climate risk modeling workflow. Modelscape is built on top
of our core capabilities in computational
finance and other tools, like our image processing
and mapping toolboxes. And it addresses many of the
challenges we talk about today. We will take a look at each
of these four key issues– data, models, deployment and
reporting, and governance, in turn. I will be discussing
the data portion, and my colleague Eduard will
discuss the modeling deployment and governance aspects. So before talking too much
about the nitty gritty details of climate data, it’s important
to understand that there are two kinds of climate risk. One is physical
risk, which refers to the risk caused by
the increasing frequency and severity of natural events. The other is
transition risk, which refers to the risk
caused by our response to climate change issues, like
requiring a minimum energy efficiency standard
for rental properties or developing new technology. These kinds of risks expose
you to economic losses, stranded assets, and issues
with inaccurate valuation, which can then, in turn,
have knock-on effects on downstream models. Another important concept when considering climate data is that of climate scenarios. So as Akshay mentioned,
where regular stress testing is based on historical
data, climate stress testing relies on scenario analysis. Scenario analysis is a
well-established method for developing
strategies that are robust to a whole range
of plausible future states when you don’t know what the future is going to hold. As an example, the
NGFS, or Network for Greening the
Financial System, formalized these possible
scenarios in 2019. The scenarios vary
along the two dimensions of physical and transition risk. And whether or not
climate targets are met is the physical axis, and the orderliness of the transition policies is the transition risk axis. So this gives rise to
four possible scenarios, each describing a
potential future with accompanying
predictions for variables like CO2 emissions,
carbon price development, changes in energy generation
technology, and so on. So this is just one example
of climate scenarios. There are other
versions of this, and there are other
frameworks as well, like MIT’s EPPA scenarios. The scenarios vary a little bit
in terms of their definitions and assumptions, and they might
track different variables and so forth. So armed with this knowledge, we
can now talk about climate data. It comes from many
different sources, both commercial and open source
and in many different forms. We just talked about the NGFS
and other climate scenarios. For physical risk, you often
end up working with maps. One example, on the left here, is showing average rainfall per six hours. Or you might work with map layers like this one from BRGM. BRGM is France’s reference
public institution for Earth science data. And what we’re
showing you here is exposure to different
kinds of flooding events and also the reliability
of that estimate. You might need to use government
policy data, like minimum energy efficiency requirements. And also you’d need the
corresponding efficiency ratings for all the properties
in your portfolio. You might need outputs of
natural catastrophe modeling frameworks, like
Climada and Oasis. And there are a whole host
of online data providers that you may need
to pull data from. Now what we’re hearing
from our customers and what we’re
seeing in our own projects is that dealing with all of
this data is a challenge. Specifically, there
are two issues here– acquiring the data and
preprocessing the data, especially if you want to
do this in an automated and repeatable way. Getting hold of data
requires some work. You don’t want to be downloading
zips manually from a website. It’s difficult to
track where you put it, and you end up with
multiple copies scattered across different machines. And you’d need to repeat
this process manually every time the data changes. If you’re leveraging
online data providers, you may not have the in-house
capacity or enthusiasm to build custom APIs
to interact with them. We have a suite of tools
to help you acquire open source and commercial data
in an automated and repeatable way. We partner with all
the data providers listed here and many more
to make it easy for you to get the data you need. We also have the
expertise and enthusiasm to build custom APIs
if that’s required. The other challenge here
is data preprocessing, partially because the data
formats can be so different. You may be comfortable
dealing with time series data from the climate scenarios, but
what if the data is categorical, like energy efficiency ratings? What if it’s a data
format that requires domain-specific expertise,
like maps or satellite images? Handling this kind
of data correctly can be especially
difficult if you don’t have in-house expertise
with techniques like image processing and geoprocessing. You might encounter
even more exotic, unstructured data,
like this example, where we needed to
incorporate commuting routes to compute scope 3 emissions. So here, we aren’t even
dealing with proper maps. The routes are just
a series of points that indicate commuter routes. And this sort of thing can
be even trickier to work with if you’re not a geo data expert. And this example is
not even too esoteric. We might encounter similar data
if we were considering supply chain problems or,
as we’ll see later, the path of a tropical cyclone. Let’s consider an example to
make all of this more concrete. So I’ll be using an
example that computes the impact of increasing flood
risk on a mortgage portfolio. This is based on
a bespoke solution we built for one
of our customers in France using components from
our Mapping Toolbox and Image Processing Toolbox. Suppose you have a portfolio
of mortgages located in an area that is increasingly
prone to flooding due to climate change. You want to know what impact
that flooding risk has on your mortgage portfolio. Now, if you’re considering
something like flooding risk, then that usually
simplifies quite easily to finding a trustworthy
data provider, potentially using one of our
nice data connectors, that can provide you with
regions or polygons, these blue shapes here, that
indicate different levels of flooding risk. All we need to do,
in theory, is check whether the latitude and
longitude of our property is located within a particularly
risky flooding region. But really, the story
can be more complicated. For example, the BRGM
API that we mentioned before provides data at
various precision levels, depending on the size
of the region or tile that you request
the information for. And it considers multiple kinds
of risks in a single tile. So for example here, red
is water table flooding. Orange indicates
cellar flooding. Gray indicates no flooding risk. And the hue of each color
indicates the reliability of the information. So really, we’re capturing
multiple dimensions of flooding risk on a single
tile, both the kind of flooding and the reliability of the data. So to be able to
use this API, you need to choose, firstly, an
appropriate region or tile size to get good precision. And then you need to
do further processing to map each of these risk
types to a different layer to get to your polygons
of risk on the right here.
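As a rough illustration of that layer-extraction step, here is a minimal MATLAB sketch. The tile, the class codes, and the block locations are all made-up stand-ins, and the real BRGM preprocessing would also need the color-to-class and reliability mapping described above, plus georeferencing of the pixels.

```matlab
% Minimal sketch: turning one risk class in a classified raster tile into polygons.
% tileClasses is a synthetic stand-in for a preprocessed tile where each pixel holds
% a risk code (hypothetical codes: 0 = no risk, 1 = water table flooding, 2 = cellar flooding).
tileClasses = zeros(100);            % stand-in tile
tileClasses(20:45, 30:70) = 1;       % a block of water table flooding
tileClasses(60:80, 10:25) = 2;       % a block of cellar flooding

waterTableMask = (tileClasses == 1);           % binary layer for one risk type
boundaries = bwboundaries(waterTableMask);     % Image Processing Toolbox: trace region outlines

riskPolygon = polyshape();                     % collect the outlines as one polyshape
for k = 1:numel(boundaries)
    b = boundaries{k};                         % boundary points as [row, col]
    riskPolygon = union(riskPolygon, polyshape(b(:,2), b(:,1)));
end
plot(riskPolygon)                              % pixel coordinates here; a real workflow would
                                               % georeference these to map coordinates
```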
So you can see the data processing can get quite complicated. And what we found in working
with customers on these projects is that the best way to
handle this is to treat the preprocessing as models. This makes the preprocessing
repeatable, for example, if you want to do the same
preprocessing on a new version of the underlying data. And it also makes it modular
so that you can reuse and tweak parts of the
preprocessing as you need. Now, these preprocessing steps
can happen one after the other. For example, you
start with your tile, capturing all the different
kinds of flooding. You need to extract
the kinds of flooding that you’re interested
in as a layer and then convert
that to a polygon. On the other hand, for each of your properties, you need to map the street address to a latitude and longitude, project that into the same coordinate system as your polygon, if you haven’t done that already. And only then can you compute the flooding risk for the property.
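To make that last step a bit more concrete, here is a minimal MATLAB sketch of the final check, assuming the street address has already been geocoded to a latitude and longitude and the risk region is available as a polygon in the same geographic coordinates. The coordinates and the polygon below are made up purely for illustration.

```matlab
% Minimal sketch: is a property inside a flood-risk polygon?
% Hypothetical flood-risk polygon, given as latitude/longitude vertices.
riskLat = [48.85 48.86 48.87 48.85];
riskLon = [ 2.29  2.31  2.30  2.29];

% Hypothetical property location (already geocoded from its street address).
propLat = 48.858;
propLon =  2.300;

% If the polygon came in a projected coordinate system, it would first be
% unprojected with projinv (Mapping Toolbox) so both sides use lat/lon.
atRisk = inpolygon(propLon, propLat, riskLon, riskLat);
fprintf("Property in flood-risk zone: %d\n", atRisk)
```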
So as you see here, we end up with a series of small modular data models
dependent on one another in a flow like this. And the smaller you can
break up these steps, the easier they are to reuse. So let’s take a step back. We’ve talked about the
benefits of modular data models to help with data acquisition
and preprocessing. In our example,
these steps amount to determining the flooding
risk of a particular property, in other words, going from our
BRGM tile and property address on the left to the actual
risk polygons, the probability of flooding on the right. But having this risk
measure doesn’t yet tell us anything about
its impact on the numbers we care about, like
the financial loss we expect to actually incur
over our portfolio. We need additional modeling
to go from our processed data
lifetime PD, and other models from our risk
management toolbox. And we’ll talk about
this in just a minute. But first, let’s take this
example one step further. So suppose that
our loan portfolio is global and also contains
properties in Florida in the US. To get an accurate impression of
the physical climate risk posed to these properties,
we’d also need to consider the impact of
tropical cyclones, which are a real risk here. In our case, the input data is
synthetic tropical cyclone paths and the maximum wind speed
along this path, which you can see visualized
here in the bottom left. We can use this,
along with information about the Floridian
properties in our portfolio, in much the same way as
the French flooding example to come up with a risk
measure about the likelihood of your properties
being hit by a cyclone.
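As a rough sketch of what such a risk measure could look like, the snippet below flags properties that sit within some radius of any simulated track point with a high maximum wind speed. The track points, the radius, and the wind threshold are all illustrative assumptions, not the actual project values.

```matlab
% Minimal sketch: flag properties close to strong simulated cyclone tracks.
% Hypothetical track points (lat, lon) and maximum wind speed (km/h) per point.
trackLat  = [25.1 25.8 26.4 27.0];
trackLon  = [-80.2 -80.6 -81.0 -81.5];
trackWind = [180 195 210 190];

propLat = [26.1; 27.5];          % hypothetical Floridian properties
propLon = [-81.1; -80.1];

radiusKm = 50;                   % illustrative impact radius
windKmh  = 178;                  % illustrative wind-speed threshold

hit = false(size(propLat));
for p = 1:numel(propLat)
    % Great-circle distance from this property to every track point (Mapping Toolbox).
    dKm = deg2km(distance(propLat(p), propLon(p), trackLat, trackLon));
    hit(p) = any(dKm <= radiusKm & trackWind >= windKmh);
end
disp(hit)
```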
We would need a bit more mapping expertise to handle the paths instead of 2D layers. But with our mapping
toolbox, that’s easy. Notice that we’re still
interested in exactly the same financial models as we
were for the flooding example. We still want to compute the
impact of the physical risk on our portfolio. We just changed
the physical risk. Here, we see the benefit
of modularity again. If our data processing
steps and models are built in a
modular way, then it becomes easy to consider
different risks. So to recap, data
acquisition and preprocessing can be quite complicated
because the data comes in many different forms,
from many different places and can be tricky to
work with if you don’t have domain-specific expertise. The best way to cope
with this complexity is to encapsulate these
steps in data models so that the preprocessing is
modular, repeatable, and can be done automatically. We at MathWorks have
experience in establishing processes like these. Modelscape, seen here, offers
a comprehensive platform and a suite of tools from
our core capabilities to help you with this. We also have expertise in
handling the trickier data formats that would
typically require domain-specific expertise. I’ll now hand over to
my colleague Eduard, who will talk more about
the models, deployment, and governance aspects. Take it away, Edu. Thank you, Elre. My name is Eduard. I’m another engineer
in Elre’s team. And what we’ll do
is we’ll take it a bit further down the path
of modeling, to show you– or to emphasize, if possible, a bit more– the importance of being very modular when approaching this climate stress testing problem. So let’s go back to the
example that Elre showed. So in the end, if we
recap everything we did, we have two separate
physical risks. One is cyclones. The other one is flooding. And then we have
an objective, which is the impact on our
property portfolio. Now, on the one side,
this requires data APIs, mapping tools,
image processing tools. On the other side, it requires
the standard risk processing tools, like expected
credit loss, probability of default
models, and whatnot. Now, the only chunk
that’s really new here is: how do you adjust your PD for your flooding risk? How do you adjust your PD for this transition risk, and so on? And that’s what we will
talk about a bit more– these sorts of steps in building these models. And they get more and more complicated the more you look at these climate risks. Now, Elre talked
about physical risks. Now, we’ll take a bit of a turn– the examples I’ll show will be more focused on the transition risk aspects, but the process and the concepts still apply. So let’s look at the
standard model development. You normally gather data
and then you build a model. And after you build the
model, you test the portfolio, and then eventually
you build a report. Now, if you try to do
that in the climate space, very soon, you’ll find that
this is a very inefficient way of doing things. Because when you start putting
all the data providers you need, when you start putting all the
models you want to look at, the process explodes,
and this is not scalable. This is, for example,
the current set of models that we have working
together in a single project. On the top right, you can see
the hurricane and the flooding models that we showed you,
alongside the different models that take care of the
post-processing steps of the data, which eventually
lead to the mortgage portfolio climate impact. So the first thing
that comes to mind is, like, why would you
ever want to do that, or why would you want to modularize
the problem so much? So if you look specifically
at just one of the problems, like the flooding problem,
we built this tool. And as Elre was saying,
it was a bespoke project we built for a
customer, specifically on a French data provider. But eventually, if you’re
building this capability, you don’t want to
stick with France. You want to look at flooding at
many, many different regions. So this model in the top
right, the flooding data model– eventually, what we’ll have is multiple versions
of the model, right? There’ll be a model for the US. There’ll be a model for the
UK, another one for Spain, for France, for every
single region where we can get flooding data. And each of these regions–
because this is normally governmental data, we’re
going to give us the data in different format, right? Some of them will
give us an API. Some of them you’ll be
able to download some zip files with some, like, I don’t
know, Geo mapping files inside. And some of them will give you
a client in its own language to manage the model. And you need to be able
to ingest all that. And eventually, you just pass
your event to your PD model. So what I’m going
to do is I’m going to move to a different
example, transitionry space. What I’m going to
try to emphasize, how important it
is to be modular when approaching these climate
stress testing problem. So for that, I’m going to look
first at some climate scenarios. The ones you see on screen, this
is the EPPA model from the MIT. For those of you
familiar with them, it’s similar to an
integrated assessment model, but there’s some key
differences in there, right? So this model gives you the
projection of certain variables in some years to come. So in this case, you
see from 2020 to 2100. But unlike the normal
integrated assessment model, this model gives you multiple
paths for each variable. So eventually, what you can do is compute a mean value and some confidence intervals on how that variable is actually going to evolve, according to the model.
So what do you see on screen? First of all, two separate regions. I pick the United States,
and I pick Europe. I’m looking at three
different variables. I’m looking at the emissions
coming from agriculture, coming from electricity, and
coming from transportation. And then you see
three different colors. One stands for the base scenario, which normally stands for “we do nothing and hope for the best.” And then there are two more restrictive scenarios, where the end policy goal is that the global temperature stays below a certain value. And as you can see, it then
to use this model. We want to make sure that we
incorporate this into a PD model and see how our
portfolio behaves under these different scenarios. So now that we have the
data coming from MIT, we need to build a model. So the first
problem we’ll see is that the data coming
from these models is not just something that
we can consume, right? In this case, for example,
it’s in tons of CO2. If you look at the
energy, it’s going to be in megajoules, right? So it’s not a value that
you can simply utilize. And then you have to start
looking at what models are available in the literature. So when we started
this process– so, to look at the breadth
of models available– one that came up
initially was this one. They said the change in
probability of default can be adjusted for climate
by multiplying this value by a shock, which they name u. And this shock is the
change between being on the baseline scenario B and moving to the more
restrictive scenario P. So then you need the
formula for that, right? And then if you look, for
example, at this paper– that’s from a very
interesting group– what they say is that,
OK, you can define the shock as the change between
the variables between scenario P and then the baseline
scenario, right? If you look at the change– sometimes in the variables, sometimes in the market share– you can define this shock, right? And then you eliminate this
problem of having the variable in a unit that you cannot consume. You actually have an actual magnitude, right? How much the emissions, or the market share of this energy source, is changing. And then you can apply this value to each asset, right? Of course, you have to map the sectors of the variable you’re looking at to your assets. You have to map the regions to your assets. But eventually, this is not a complicated problem. You can say, OK, let’s go and calculate the change in our portfolio.
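As a minimal sketch of one plausible reading of that shock approach, the snippet below computes a relative shock per sector from a baseline and a policy path, maps it to assets, and bumps the PDs. All numbers, the sector-to-asset mapping, and the adjustment formula itself are illustrative assumptions, not the exact formulas from the paper.

```matlab
% Minimal sketch: relative shock per sector from two scenario paths, applied to PDs.
sectors   = ["Agriculture"; "Electricity"; "Transportation"];
baselineX = [100; 200; 150];      % variable value under baseline scenario B (illustrative)
policyX   = [ 90; 120; 110];      % same variable under restrictive scenario P (illustrative)

u = (policyX - baselineX) ./ baselineX;          % relative shock per sector

% Hypothetical portfolio: each asset mapped to a sector, with a current PD.
assetSector = ["Electricity"; "Transportation"; "Agriculture"; "Electricity"];
assetPD     = [0.02; 0.015; 0.01; 0.03];

[~, idx]   = ismember(assetSector, sectors);
adjustedPD = assetPD .* (1 + abs(u(idx)));       % illustrative adjustment: larger shock, higher PD
table(assetSector, assetPD, adjustedPD)
```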
And you can see here, this has just been anonymized. But this is an example on a couple of large banks. And this distribution
you see for the value is because the scenarios
used are not a fixed path, but they give me a breadth, or a statistical distribution, of how these
variables evolve. Right. So let’s say you build this
model and you’re quite happy with it. Well, normally what happens
is that when we discuss this with customers is
that immediately, we see that this is not
scalable, because nobody wants this as a single model, right? And the reason for that
you’ll see very quickly. When you build this model,
immediately someone’s going to say, yeah,
but we decided that we want to compare
maybe these scenarios with some other scenarios. This is the immediate
thing that will come up. What if instead of
using the MIT model, I use some other
integrated assessment model, for example, the NGFS? So essentially,
what I want to do is I’m going to swap this thing. And immediately, you see that
this is actually a much trickier problem than it seems. So to show you that,
let’s look at what this NGFS-equivalent
data would look like. So, first of all,
this is the closest– it’s the closest equivalent to the scenarios that we saw before. Three scenarios: one below 2 degrees, one that’s a baseline, and one that’s a much delayed transition. And the first thing that you
notice is that in this case, there is no statistical
distribution. There’s one path per variable. That, however, is actually the
easiest part of the problem of dealing with this data. The second problem is that if
you look at the regions where these variables come from, one side is the United States. That’s the same as we had before. But the second one is 28 European countries– so probably the European Union plus the UK. It doesn’t include
Eastern Europe. And the MIT scenarios did. So if you want to
compare apples to apples to see how these scenarios
compare to the other ones, you probably have to do
some region adjustment to combine this
region here at 28 plus some other
additional regions. Usually, when they
provide these values, they also provide the
Eastern Europe counterpart. So you have to be able
to match those two. And this is not a
trivial problem anymore. This one, however,
is actually not bad compared to the
other problem, which is that the variables
that you’re looking at, they’re not the same. So in the previous
slide, we saw emissions. And you can see that in
the bottom two variables, we also have emissions. We have tons of emissions of
CO2 coming from the electricity and from the demand
of transportation. Maybe not exactly the same, but
that’s actually fairly close. For by these standards,
you will be quite happy. However, if you wanted to look
at agricultural production, that’s a different
story entirely. In this case, you can see how
the only variable that comes in this model is the energy. It’s not CO2 anymore. So that is unfortunately not
directly comparable to CO2 because there’s multiple
ways of generating energy. So if you want to use
this model and compare it to the previous one and
see if your portfolio was behaving the same way, you need
to do a lot of transformation. You need to
transform the regions. You need to transform the variables. And luckily, once you’re done, the final model is actually the same. So in that aspect, you’re pretty good. So what would that mean? Well, it means that if you
want to switch scenarios, what will happen is that the
simple model that you had to calculate
the shock will now become a collection of models
aimed to deal with the data switch that you unify
regions, unify variables, and eventually
compute the shock. So at this point,
you’re pretty good. You managed to
stress your portfolio
and finally get a value. But now we just look at the
very top left of the problem. We just looked at the data. So when we start
talking with more people about this problem, this thing just keeps growing. Someone’s just going to say, well, yeah, you can use either of these two, but we actually use the scenarios
provided by the Bank of Canada. Well, if you do
that, in this case, you’d see that you’re pretty
lucky because those scenarios are fairly close to the NGFS. So all this framework you produced actually can
the Bank of Canada. But they can even increase
the complexity of the problem and say, this
model you’re using, this is too simple for us. We decided to use a
different methodology. For example, the one provided
by the Bank of Canada. They published their methodology– or their approach to this– in a paper called Assessing
Climate-Related Financial Risk. So how does this work? So they start from
the scenarios, and you saw what they look like. And from the scenarios, they say, well, you can compute a shock on some of the variables provided in the scenarios, called the risk factor pathways. And when you look at that,
you’re like, pretty good. That’s actually the same
equation I was using before, right? So I can reuse the model. So again, this modularity
that we built is helping us. This process is fairly
straightforward. It’s just a time
series refactoring. But when you look
at the PD model, it’s actually more complicated. They said, no, look, the PD model that you need to use is an entirely different one. It’s this one here. And what this model does is take a very different approach, right? So they defined an adjusted model that depends on, obviously, the
original one you had, plus some factor values,
alpha, beta and S. Alpha is a value that needs
to be fitted according to the different
depends not on the sector, but on the segment
of the sectors. And you can see that some of
them are a one-to-one mapping, but some of those have
subsegments in there that need to be fitted. And finally, F is just
these risk factor pathways that come from the scenario itself, which they call RFPs. And they’re variables like the projected revenue, projected indirect costs, and so on.
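The exact specification is in the Bank of Canada paper; the snippet below is only a schematic of the ingredients just listed– the original PD, sector-level alpha and beta, segment-level sensitivities S, and the risk factor pathways F– combined through a made-up functional form purely for illustration.

```matlab
% Schematic only: how sector parameters, segment sensitivities, and RFPs could
% combine into an adjusted PD. The functional form below is a placeholder, not
% the Bank of Canada specification; all numbers are illustrative.
pd0   = 0.02;                     % original (unadjusted) PD of one asset
alpha = 0.8;  beta = 1.2;         % hypothetical fitted sector-level parameters
S     = [0.6 0.3 0.1];            % hypothetical segment-level sensitivities
F     = [-0.15 0.05 0.10];        % risk factor pathways from the scenario, e.g.
                                  % projected revenue, low-carbon revenue, indirect costs

climateFactor = alpha + beta * (S * F');                               % scalar climate adjustment
pdAdjusted    = 1 ./ (1 + exp(-(log(pd0/(1-pd0)) + climateFactor)));   % shift PD on the logit scale

fprintf("PD: %.4f -> %.4f\n", pd0, pdAdjusted)
```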
So what do you do with that? What you can do is actually fairly straightforward– well, straightforward as in, if you’re familiar with portfolio optimization, it’s fairly straightforward
to go and calibrate the values of alpha and
beta for the sector level. And once you have those, you
can actually go and calibrate the values of the sensitivities
at the segment level. So eventually, you
end up with a matrix that looks like this, right? On the bottom part, you have
indirect cost, and whatnot. And on the right, you
have the actual sectors of the economy like air
transportation or electric power transmission. And you can see how the
sensitivities seem to be sensible, in that air transportation is very direct– it’s very sensitive to climate. So eventually, after you
do this for all sectors and for all variables, you
end up with a much bigger matrix,
showing you was just a sharp bit, or an
extract a bit of this one. And having alpha
S and these RFPs, it’s easy to adjust the
probability of default for each of the assets
within the portfolio. From there, it just goes back
to the standard risk management problem, right? You can just pack
your portfolio. You can prove the loss given
default and expected shortfalls and whatnot. It’s an economics or a
risk management problem. Again, there’s no
self-complexity. But having explained
this example, if we now go back to our
overall overarching problem and how this looked
like, well, now we swap the bottom right part
of the problem, right? We use a different
set of scenarios, and we change the model. So all in all, if we
continued, somebody said, well, why
don’t use this model? But no, I don’t want to use
And I want to just go back to the MIT or the NTFS models. So you can see how
the problem needs to be split up in order to
be efficient in rerunning it. So the key concept we want to
emphasize is the modularity. If you’re not modular–
and it happened to us at the beginning–
it’s very difficult to scale up these problems. These data providers
change quite often. And it’s hard to deal with
them if you have to rebuild everything from scratch. So just to give you
a bit of a sense, all the models that we described
look more or less like that. And if you actually put
them on ModelScape, or on a real model management platform, these are the actual dependencies
see the ones I just showed you. There’s the integrated
assessment models like the NGFS or the Bank of Canada. There’s the EPPA model from MIT. There are certain sub-models to manage and unify this data. And there’s finally a climate shock model, which, for some reason, seems to be quite unified, and most approaches use it. But we expect that
at some point, somebody’s going to come
up with a different one. And then these are
the two PD models that we showed you at the end. And of course, for
each of them, you can have multiple versions,
multiple states of the code. But this is just the
overall top level view on how the models
depend to one another. And again, just for
this particular project. So now that we see how the
modeling process works end-to-end, let’s look at how we actually start deploying and start managing or running the process in practice. So we have our diagram of
want to perform the task, you have to pick a model
and stress your portfolio. A model means you have to
pick the entire path what you have to pick. What’s the final model
you’re going to use? What is the region where
you’re going to apply the model or map the region to each of
the assets in the portfolio? You have to pick what
climate scenarios you want to stress test to. And again, each
of the models will have their own set of
scenarios that are, again, similar, but not quite. And finally, you have
to assess the climate impact of the portfolio. You have to build a pipeline that concatenates all these models together and eventually spits out a final answer.
Now, the key thing here is what Elre was saying. You will have to
repeat this process because it’s rare that somebody
uses the same scenarios. Even if you do, to give you an
example, when we started looking at this problem three or four
years ago, from that point till now, the NGFS
scenarios, which is one of the most common
or well-known set of data, have changed three
or four times. I think they’re now
in version 4 or 5. And that means that
even though you would think that, because
it’s the same scenario, the values wouldn’t change
that much, they actually do. The regions change. The variables change. So repeating this process,
even though it seems simple, it’s actually not quite. And being able to just simply go
and swap a box in this diagram and just submit the pipeline
and get the results again is actually quite powerful. So how easy it is to deploy, how easy it is to rerun this process, actually comes down to how easy it is to concatenate these models– how easy it is to build a pipeline
from your inventory of models? And finally– I think I already
mentioned it at the beginning– your final goal is probably to build a report. Maybe, or maybe not, but this is a fairly common process. And again, the same problems apply. You are using various
different models. And that means you have to have
probably various different types of reports. And even if you use, for
example, different data sets, like, different
scenarios, you probably have to switch entire sections
of the model explaining why is one use or why
is the other one used. And different templates
are going to be necessary. Different requirements
are going to be needed to be linked to the report. So eventually, there
is a clear need for a tool that allows
you to automate and adapt to the rapid turnaround
that these models have. And that’s how an automated reporting tool really helps in this space. It lets you automate building these reports while easily switching
the models within the platform. So we saw the entire end-to-end
process, right from data to models and reporting. Now we’ll see what we propose as a solution, or as a platform, to run all these steps. And that’s something we call ModelScape. ModelScape is our own
show you ModelScape? So if we recap
all the challenges we saw– we saw that the
data had multiple APIs, had these regions. You needed to account for
different regions and variables. And you need to
account or you need to have different tools
that are maybe less standard in the finance space. You have models that
are rapidly evolving. You need to constantly
adapt these models or constantly allow these
models to ingest data coming from many different places. Because of how
broad the space is, the language where
these models is built on is probably going to be
different, especially in the data side because
the different providers will give you in different formats
and different languages. And finally, you want to build
pipelines on top of all that. And that’s where models helps. You saw it through all
the entire presentation. That’s where I was–
or we were using to show the inventory
of the models. This is the front page. It’s fully
customizable, but this is the one we’ll be
using as an inventory to store all of
our climate models. And you can see that most
of them– because it’s me and Elre usually managing
this process– have me listed as an owner
because I’m usually the one entering the model. And this platform, or
the advantage of it, is that you manage the
end-to-end workflow that we described today. So ModelScape is a
cross-language platform to manage the full
end-to-end workflow from development of the
model to the deployment and the monitoring. So the governance
piece or the inventory is what you’ve been seeing. It’s the list of models
or the database of models with all the
metadata and built-in to know who owns the
model, where’s the code, and how to run it and whatnot. And it also supports the
development of the model. Some of you already have parts
of this process built in-house. But some of these
parts, sometimes you need to build them or you
need help building them. That’s where we– and our development tools– pitch in. You don’t have to build
everything from scratch. We can just reuse
the ready-made tools that we have for a lot
of these processes. Like for example, the Mapping
Toolbox or the image processing bit. Then there’s the build environment. You’ll be building, and a lot of models will be changing quite often. Being able to quickly
gather the latest version of the code, get a quick review, see if it actually works and is giving you the expected results, and quickly submit an assessment of it– that’s quite necessary. You have to evaluate
built with a simple API when you can just pass
the inputs and outputs. But some others are
more complicated. They require a visualizing
app like the MIT scenarios, especially when you
still searching for data to be processed or what
fields you can use. You will then need
automated reporting. And again, most of these models
will have a set of requirements either from governmental
requirement– from government requirements
or internal company ones. And it’s usually very
easy to simply check whether the model fits the
requirements that were set and link those to the final
report that’s being built. And as we said, it’s
language agnostic. Each of the models can be
built in its own language with its own framework. And what ModelScape does, it
just brings everything together. And as you saw, especially on
the data side, we get models or we build models in
many different languages. And we simply put
them all together so that the various models can
consume the data in a combined way. There’s no need
to recode or unify everything in one single place. And you see here
just one dashboard is the one that we
use for the climate when we deal with
the climate space, but it’s fully customizable. So you would eventually have
your own dashboard adjusted to your own needs. But anyway, we just
give you a glimpse of how complex the climate stress testing problem is. I hope– or we hope–
that we made the point that the key point or
the key aspect of it is to be very modular and break
the problem into as many pieces as possible so that
you can actually reuse and swap the
bits as necessary and do it in a really– with a really quick turnaround. And finally, we want
to position ModelScape as a tool or the place
to do all of this process– the end-to-end workflow– regardless of what your current development language is for each of the models. So with that, I’m going to send
the presentation back to Akshay, who’s going to host
a bit of a Q&A. And I encourage any of you who
have questions to please ask them. Awesome. Thank you so much,
Elre and Edu, for going into the details of the different
aspects involved in climate stress testing and
specifically demonstrating how our platform addresses
some of the challenges faced throughout the process. So we have been
monitoring the questions. And before we
actually get to those, it looks like we do have
the polls up and running. So we wanted to run
a quick audience poll to see where your current
capacity building is at in your organization. So the question is around
climate-related expertise and capacity building
at your organization. Do you have a team of
climate experts in-house? Are you looking to
build a team in-house? Or are you either
working or looking to work with external
consultants on this? So, feel free to
make selections. It’s multiple choice. And once you’re done submitting,
just x out of the dialog, and it’ll stay around
in the chat window. If you want to make a change to
your selection, you can do that. But the results
come in live and it seems like it’s a little
weighted towards folks who already have experts in-house. But otherwise, pretty
close for the other two. So I’ll let that poll
run in the background. And we can switch over to the
Q&A part of this webinar now. So as I mentioned, we’ve
been looking at the questions and trying to compile similar
ones so that we don’t need to– don’t need to repeat a
question and also making sure that we get everyone’s
questions answered. So, feel free to keep using the
chat window or the Q&A panel to ask questions. And let’s get started. So the first one is
around an assumption of static balance sheet for
regulatory stress testing. And this was asked a
couple of times, actually. So I can get started
and then Elre and Edu can add their thoughts if they
have anything extra to add. So, yes, traditionally–
and regulatory stress tests have been conducted for the
last four, five years maybe now, and there’s been tens
of them, like high tens, I think, by now
across the world. For the most part, it has
been under the assumption of a static balance sheet, but
that’s a little unrealistic. And going forward,
there is more of a shift towards having a dynamic
balance sheet assumption in some of the regulatory stress tests. So, personally, at MathWorks,
what we’re doing is we have a number of
academic partnerships that we currently have in place. And we’re engaging with
modelers in academia to look into this area
and working with them to figure out what that
would mean in software. So, Elre, Edu, would you
like to add anything to that? Well, yeah, no, maybe
when we initially looked at this problem– I think it was around
three or four years ago– all the models that initially
included climate on them were assuming a
static balance sheet. But more and more,
we’re starting to see the new models do
assume a dynamic one, right? So I think the couple ones– definitely the
first one we showed was assuming a static balance sheet. I think that for the
second one, there’s an option already to assume a
dynamic one if I’m not mistaken. But anyway, yeah, the
original models, yeah, are definitely static. But more and more
nowadays, the new ones that are showing up do
assume a dynamic one. Awesome. The next question
is around data. And specifically, do you provide
the data to your customers? Do customers need a separate
relationship with data vendors? So we do not provide
our own data. We do, however, have connectors
to a number of leading climate data providers, which
can be accessed directly through our tools. So customers would license
directly with data providers either on the physical or
transition risk side. And then through
that partnership, we can integrate
the data directly in our tools with the connectors
that we already have built. Akshay– Sure, go ahead. –in this climate space,
though, a lot of the data comes from
governmental agencies. And oftentimes, the connect
that we provide is enough. There’s no need for an
additional agreement with because there’s no– data provided by the government. It’s usually free of access
or free to navigate, right? So especially on the physical
risk side, flooding data, European data is something that
is provided by public agencies, and it’s fairly straightforward
to access and manage. And we’ll help you with that. If it’s connecting to
a proprietary vendor, that’s a different story. But on the climate space
so far what we’re seeing is that a lot of the data is
actually on the public domain. That’s a good point. Thanks, Edu. All right. Do you have tools for managing
different geolocation coordinate systems? And I can let Elre
take this or Edu. Yeah, absolutely. So we have a whole
Mapping Toolbox to help you visualize your
data in a geographic context. You can build map displays
for more than 60 different map projections. So if you have a specific
map projection in mind, we’ve probably got
something for you. And it can also help you do
things like import raster data, vector data. We support a wide
range of data formats and can connect to
several web map servers that people typically use. And then, of course,
it’s a mapping toolbox, so it’s got some nice
complicated mapping functions as well– stuff like resampling,
interpolation, trimming, whatever your heart would
desire for geocoding and that sort of thing.
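As a tiny illustration of what working with projections looks like, here is a minimal Mapping Toolbox sketch. The EPSG code (Web Mercator) and the coordinates are just example choices, not values from the webinar.

```matlab
% Minimal sketch: projecting latitude/longitude into a projected CRS (Mapping Toolbox).
proj = projcrs(3857);                       % define the target projection from an EPSG code
lat = 48.858;  lon = 2.294;                 % example coordinates
[x, y] = projfwd(proj, lat, lon);           % forward projection: lat/lon -> map coordinates
[latBack, lonBack] = projinv(proj, x, y);   % inverse projection to go back
fprintf("x = %.1f m, y = %.1f m\n", x, y)
```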
Awesome. I think the next one’s also around one of the maps that you showed in
your slides, Elre. So it’s around what the map
of the cyclones represents. I’ll let you take that. Yeah, sure. So what we were seeing there
were simulated paths of tropical cyclones over a 1,000-year span. And then the color was
showing the maximum wind speed for that cyclone. So the reason that we’re
using simulated data here is that tropical cyclones
are firstly quite rare. And secondly, our record keeping
is pretty spotty until like the late 1800s. So if we were trying to do
risk prediction for cyclones, we’d only have about 100
years’ worth of good data, and not all of that
data is consistent. So to be able to do anything
useful with these sorts of models, what they
do is they simulate cyclones over a much
longer period of time. And obviously, the models
doing that simulation is trained on what
data we do have, and then we draw conclusions
from those simulated paths. Thanks, Elre. The next question is actually
very relevant for work that Edu recently did. So do you have tools for running
integrated assessment models? Edu, you want to talk a bit
about the dice 2023 model work that you just did? Yeah. so first of all, these
models are big, right? And when we started
looking at them, we realized that running
them might not be the most straightforward thing. So we recently started
with one of the smaller versions of these models,
which is called dice. It’s a model produced
by Professor Nordhaus. And– I mean, with the whole hype of the climate space in finance– it’s a model that was initially developed in the ’90s. And then it got revamped in 2016, and it got revamped again in 2023. And we just got a bunch
of people interested in us giving them an
implementation that can be quickly run and
can be quickly changed. So the interesting
thing about these models is that they combine
what’s basically the economy today in a country
with some sort of climate model in its simplest form. And then you just run
an optimization problem that tells you if you want to– depending on how much
you want to leave for your great
grandchildren, how much you need to consume today. So because these models,
they grow quite a bit, there’s a trend from
bigger institutions to rerun run these models
and then just provide you with the results. And that means that these models keep having different versions
where they add new variables. They add new regions. And it’s a bit harder to
keep track of them and manage them. And more and more,
people are asking us if we can just provide
them with a runnable version that they can just
tweak and run locally with their own
assumptions and with– yeah, basically with
their own assumptions. And it’s something
we’ve been doing. Most commonly, we get asked
about the smaller versions of these models because
they’re easier to absorb, and easier to understand
what’s going on. There’s less moving parts. And eventually, the
results, they do vary, but it’s not like you get
absolutely different results– they’re usually along the same lines. And for all these models,
which are just projections into the future, that’s
oftentimes good enough. Being able to run your own
model with your own assumptions in a reasonable time frame
is probably the best option. Perfect. All right, next
question is about whether the model
considers wildfires. So I can take this one. The platform itself is more
of a plug-and-play setup. So it really– we can
incorporate or develop whatever models are needed
for the particular risk. Physical risks are
very different, depending on what region
you’re talking about. So we work closely
with the customer and where their assets are
located in order to develop, validate the most appropriate
model for their risks. OK. Next, we have a question on validation of
climate risk models, given that there is
no history to go on. Elre, can you take that? Yeah. So I’d say in the
first instance, that’s a very good point. There is no historical data when
we’re doing these climate stress tests, but that’s
why it’s important that we use multiple
scenarios, and that we choose our scenarios carefully. Because if you’re considering
multiple possible scenarios, then that gives you– it
kind of hedges you against
various plausible futures that could happen. So you’re safe against
the worst-case scenario. You’re doing very well in the
best-case scenario and so on. And especially, if you’re
using something like MIT’s EPPA scenarios with multiple
paths per possible future, then that gives you
uncertainty balance on what you expect
to happen as well. So, that’s really quite useful. In a different sense,
ModelScape actually has a way for you to do model
validation, which is generally used for credit risk models. But we’re starting to
work with customers to understand the extent to
which these sorts of climate models can actually be
validated and what that means. Perfect. The next couple are more– well,
they should be pretty quick. So this one’s around IAM models. Says, “IAM models provide a
bunch of climate variables and economic data at
the macro level at most. Still, we need to convert the
results into financial risk variables with new tools such
as climate risk credit, right?” Yes, that is absolutely correct. And that was shown
in the slides. I think Edu, it was, who
went over that in the demo. So again, the recording
and the slides will be shared afterwards, so
feel free to review it again. And reach out if you have
any questions or feedback. It’s a good point, actually. But yeah, let me say
something about that, though. Yes, it’s actually something
that surprised us quite a bit when we initially
looked at these models. The variables that
come out from them, they’re usually unconsumable
in a finance space. But I think most people developing these models have realized that limitation and how hard it is to transform
megajoules per year for x sector to something you
can actually use. And nowadays, most
of the models, beyond the normal variables
that you would produce– like an energy consumption or
like CO2 emissions or whatnot– they also provide
derived variables like projected interest rates
or projected income value or projected unemployment
and stuff like that. So more and more, if you
look, for example, NGFS data, or you look at the Bank
of Canada scenarios, they do provide variables
of this type that are directly pluggable
in a much easier way because they’re the common
economic variables that can be used. If you look at the earlier
versions, definitely not, right? But the more you look
at the newest ones, there’s always a
subset of variables like the pure interest rates– the projected interest rate over the next 100 years. Here you go, according
to the scenario. And that actually makes
them more usable and more attractive to everybody. Yeah, that’s correct. All right, “Is this
platform-installed software? Or does it run as a
service in the cloud?” So the short answer is, both. Parts of it can be
usually hosted somewhere. And it’s pretty easy to
share models within Teams using our platform. And then, there’s
one last question, and that’s around modeling
available for chronic climate risks. So again, this goes back
to the data and the models that we develop. And it differs based on where
your geographic location is, what the most pertinent
physical risks are to you and your organization. But short answer is yes, chronic
climate risks can be modeled and are being modeled as well. Let me quickly do a scan– I know we have
three minutes left– to see if there are
any more questions. All right. I think we covered
almost all the questions. There have been questions
answered in the chat while the talk was going on. So I think we’ve gotten through
all the questions there. So thank you, everyone, again
for all your great questions. And yeah, so as
you’ve seen, we’ve– as we presented in
the examples today, our team has been
working in this space with a number of customers
for the last few years now. And to do this, we’re using
an existing library of APIs to connect to
commercial and open data or building new ones
to local data sources. We’re leveraging
our existing suite of models across the
computational finance suite of products
or building new ones as required by engaging
our network of leading academic partners in the
climate risk modeling space. And finally,
integrating the solution into your existing
enterprise risk stack. So if there’s a project that
you’re currently working on and have run into some
or all of these problems, we’d like to hear from you. If this is an area you’re
looking to do work in but not sure where
to start, again, feel free to reach out to us. You can either contact us on our
email, which is on the slide– [email protected],
or reach out to Elre, Edu, or me through LinkedIn. Our LinkedIn profiles have been
posted on this slide as well. So with that, I’d like to thank
everyone who attended today for your time and attention. And I look forward to
seeing your participation in one of our future webinars. Bye bye.