Actuarial Professionalism Considerations for Generative AI

Link: https://www.actuary.org/sites/default/files/2024-09/professionalism-paper-generative-ai.pdf

Excerpt:

This paper describes the use and professionalism considerations for actuaries using
generative artificial intelligence (GenAI) to provide actuarial services. GenAI generates text,
quantitative, or image content based on training data, typically using a large language model
(LLM). Examples of GenAI deployments include OpenAI GPT, Google Gemini, Claude,
and Meta. GenAI transforms information acquired from training data into entirely new
content. In contrast, predictive AI models analyze historical quantitative data to forecast
future outcomes, functioning like traditional predictive statistical models.


Actuaries have a wide range of understanding of AI. We assume the reader is broadly
familiar with AI and AI model capabilities, but not necessarily a designer or expert user. In
this paper, the terms “GenAI,” “AI,” “AI model(s),” and “AI tool(s)” are used interchangeably.
This paper covers the professionalism fundamentals of using GenAI and only briefly
discusses designing, building, and customizing GenAI systems. This paper focuses on
actuaries using GenAI to support actuarial conclusions, not on minor incidental use of AI
that duplicates the function of tools such as plug-ins, co-pilots, spreadsheets, internet search
engines, or writing aids.


GenAI is a recent development, but the actuarial professionalism framework helps actuaries
use GenAI appropriately: the Code of Professional Conduct, the Qualification Standards
for Actuaries Issuing Statements of Actuarial Opinion in the United States (USQS), and the
actuarial standards of practice (ASOPs). Although ASOP No. 23, Data Quality; No. 41,
Actuarial Communications; and No. 56, Modeling, were developed before GenAI was widely
available, each applies in situations when GenAI may now be used. The following discussion
comments on these topics, focusing extensively on the application of ASOP No. 56, which
provides guidance for actuaries when they are designing, developing, selecting, modifying,
using, reviewing, or evaluating models. GenAI is a model; thus ASOP No. 56 applies.


The paper explores use cases and addresses conventional applications, including quantitative
and qualitative analysis, as of mid-2024, rather than anticipating novel uses or combinations
of applications. AI tools change quickly, so the paper focuses on principles rather than
the technology. The scope of this paper does not include explaining how AI models are
structured or function, nor does it offer specific guidelines on AI tools or use by the actuary
in professional settings. Given the rapid rate of change within this space, the paper makes no
predictions about the rapidly evolving technology, nor does it speculate on future challenges
to professionalism.

Author(s): Committee on Professional Responsibility of the American Academy of Actuaries

Committee on Professional Responsibility
Geoffrey C. Sandler, Chairperson
Brian Donovan
Richard Goehring
Laura Maxwell
Shawn Parks
Matthew Wininger
Kathleen Wong
Yukki Yeung
Paul Zeisler
Melissa Zrelack

Artificial Intelligence Task Force
Prem Boinpally
Laura Maxwell
Shawn Parks
Fei Wang
Matt Wininger
Kathy Wong
Yukki Yeung

Publication Date: September 2024

Publication Site: American Academy of Actuaries

Coordinating VM-31 With ASOP No. 56 Modeling

Link: https://www.soa.org/sections/financial-reporting/financial-reporting-newsletter/2022/july/fr-2022-07-rudolph/

Excerpt:

In the PBRAR, VM-31 3.D.2.e.(iv) requires the actuary to discuss “which risks, if any, are not included in the model” and 3.D.2.e.(v) requires a discussion of “any limitations of the model that could materially impact the NPR [net premium reserve], DR [deterministic reserve] or SR [stochastic reserve].” ASOP No. 56 Section 3.2 states that, when expressing an opinion on or communicating results of the model, the actuary should understand: (a) important aspects of the model being used, including its basic operations, dependencies, and sensitivities; (b) known weaknesses in assumptions used as input and known weaknesses in methods or other known limitations of the model that have material implications; and (c) limitations of data or information, time constraints, or other practical considerations that could materially impact the model’s ability to meet its intended purpose.

Together, both VM-31 and ASOP No. 56 require the actuary (i.e., any actuary working with or responsible for the model and its output) to not only know and understand but communicate these limitations to stakeholders. An example of this may be reinsurance modeling. A common technique in modeling the many treaties of yearly renewable term (YRT) reinsurance of a given cohort of policies is to use a simplification, where YRT premium rates are blended according to a weighted average of net amounts at risk. That is to say, the treaties are not modeled seriatim but as an aggregate or blended treaty applicable to amounts in excess of retention. This approach assumes each third-party reinsurer is as solvent as the next. The actuary must ask, “Is there a risk that is ignored by the model because of the approach to modeling YRT reinsurance?” and “Does this simplification present a limitation that could materially impact the net premium reserve, deterministic reserve or stochastic reserve?”
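
The blending arithmetic behind this simplification is a net-amount-at-risk-weighted average of the treaty premium rates. A minimal Python sketch of that calculation, with invented treaty names, rates, and net amounts at risk (NAR), might look like this:

# Hypothetical illustration of the YRT blending simplification described above.
# Treaty names, premium rates, and net amounts at risk (NAR) are invented for the example.

treaties = [
    {"reinsurer": "Reinsurer A", "yrt_rate_per_1000": 1.10, "nar": 40_000_000},
    {"reinsurer": "Reinsurer B", "yrt_rate_per_1000": 1.35, "nar": 25_000_000},
    {"reinsurer": "Reinsurer C", "yrt_rate_per_1000": 0.95, "nar": 10_000_000},
]

total_nar = sum(t["nar"] for t in treaties)

# NAR-weighted average rate applied to the single aggregate ("blended") treaty
blended_rate = sum(t["yrt_rate_per_1000"] * t["nar"] for t in treaties) / total_nar

print(f"Blended YRT rate per 1,000 of NAR: {blended_rate:.4f}")

Because the blended rate discards per-reinsurer detail such as counterparty solvency, the questions quoted above about ignored risks and material limitations apply directly to this simplification.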

Understanding limitations of a model requires understanding the end-to-end process that moves from data and assumptions to results and analysis. The extract-transform-load (ETL) process actually fits well with the ASOP No. 56 definition of a model, which is: “A model consists of three components: an information input component, which delivers data and assumptions to the model; a processing component, which transforms input into output; and a results component, which translates the output into useful business information.” Many actuaries work with models on a daily basis, yet it helps to revisit this important definition. Many would not recognize the routine step of accessing the policy level data necessary to create an in-force file as part of the model itself. The actuary should ask, “Are there risks introduced by the frontend or backend processing in the ETL routine?” and “What mitigations has the company established over time to address these risks?”
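
As an illustration of that three-component definition, the following sketch (hypothetical function names and toy data, not any particular company's process) maps an ETL-style workflow onto the information input, processing, and results components; the point is that assembling the in-force file sits inside the model's information input component rather than outside the model:

# Hypothetical sketch mapping an ETL-style workflow onto the ASOP No. 56
# three-component definition of a model. Function names and data are illustrative only.

def information_input():
    # Extract: pull policy-level data to build the in-force file, plus assumptions.
    # Under the ASOP No. 56 definition, this step is part of the model itself.
    inforce = [{"policy_id": 1, "face_amount": 100_000, "issue_age": 45}]
    assumptions = {"mortality_table": "2015 VBT", "lapse_rate": 0.05}
    return inforce, assumptions

def processing(inforce, assumptions):
    # Transform: turn input into output (a deliberately trivial placeholder projection).
    return [{"policy_id": p["policy_id"],
             "reserve": 0.02 * p["face_amount"] * (1 - assumptions["lapse_rate"])}
            for p in inforce]

def results(output):
    # Results: translate the output into useful business information.
    return {"total_reserve": sum(r["reserve"] for r in output)}

inforce, assumptions = information_input()
print(results(processing(inforce, assumptions)))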

Author(s): Karen K. Rudolph

Publication Date: July 2022

Publication Site: SOA Financial Reporter

Morning with Meep – ASOP 56 – Modeling

Description:

I take a look at the Actuarial Standard of Practice 56: Modeling — in effect as of October 2020.

Links: Keep Up With the Standards: On ASOP 56, Modeling

https://www.soa.org/sections/modeling/modeling-newsletter/2021/april/mp-2021-04-campbell/

ASOP 56: Modeling

http://www.actuarialstandardsboard.org/asops/modeling-3/

Author(s): Mary Pat Campbell

Publication Date: 3 May 2021

Publication Site: Meep’s Math Matters at YouTube

Keep Up With the Standards: On ASOP 56, Modeling

Link: https://www.soa.org/sections/modeling/modeling-newsletter/2021/april/mp-2021-04-campbell/

Excerpt:


If nothing else, having a checklist to go through while working on modeling can help you make sure you don’t miss anything. Hey, ASB, make some handy-dandy sticky note checklists we can stick on our monitors to ask us:

3.1 Does our model meet the intended purpose?

3.2 Do we understand the model, especially any weaknesses and limitations?

3.3 Are we relying on data or other information supplied by others?

3.4 Are we relying on models developed by others?

3.5 Are we relying on experts in the development of the model?

3.6 Have we evaluated and mitigated model risk?

3.7 Have we appropriately documented the model?

Author(s): Mary Pat Campbell

Publication Date: April 2021

Publication Site: The Modeling Platform at the Society of Actuaries