Impact of AI on Mortality – Essay Collection

Link: https://www.soa.org/resources/research-reports/2024/ai-mortality-essay-collection/

PDF: https://www.soa.org/4a5e85/globalassets/assets/files/resources/research-report/2024/impact-ai-mortality/2024-impact-ai-mort-essays.pdf

Graphic:

Excerpt:

The Society of Actuaries (SOA) Research Institute’s Mortality and Longevity Strategic Research Program Steering Committee issued a call for essays to explore the application of artificial intelligence (AI) to mortality and longevity. The objective was to gather a variety of perspectives and experiences on the use of AI in mortality modeling, forecasting and prediction to promote discussion and future research around this topic.


The collection includes six essays that were accepted for publication from the submissions received. Two essays were chosen for prizes based on their creativity, originality, and likelihood of stimulating further thought on the subject matter.

Author(s): multiple

Publication Date: September 2024

Publication Site: Society of Actuaries, SOA Research Institute

Actuarial Professionalism Considerations for Generative AI

Link: https://www.actuary.org/sites/default/files/2024-09/professionalism-paper-generative-ai.pdf

Graphic:

Excerpt:

This paper describes the use and professionalism considerations for actuaries using generative artificial intelligence (GenAI) to provide actuarial services. GenAI generates text, quantitative, or image content based on training data, typically using a large language model (LLM). Examples of GenAI deployments include OpenAI GPT, Google Gemini, Claude, and Meta. GenAI transforms information acquired from training data into entirely new content. In contrast, predictive AI models analyze historical quantitative data to forecast future outcomes, functioning like traditional predictive statistical models.
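
As a rough illustration of the distinction the excerpt draws (this sketch is not from the Academy paper): a predictive model fits historical quantitative data to forecast an outcome, while GenAI produces new content from a prompt. The `generate_text` function below is a hypothetical stand-in for whatever LLM interface an actuary might actually use, and scikit-learn is assumed only for the toy regression.

```python
# Minimal sketch, not from the Academy paper: predictive AI vs. GenAI.
import numpy as np
from sklearn.linear_model import LinearRegression

# Predictive AI: analyze historical quantitative data to forecast a future outcome.
years = np.array([[2019], [2020], [2021], [2022], [2023]])
claims = np.array([101.0, 98.5, 97.2, 96.0, 94.8])      # toy historical values
trend = LinearRegression().fit(years, claims)
forecast_2025 = trend.predict(np.array([[2025]]))        # works like a traditional statistical model

# GenAI: generate entirely new content (here, text) from a prompt, typically via an LLM.
def generate_text(prompt: str) -> str:
    # Hypothetical stand-in for an actual LLM call (e.g., a hosted GenAI API).
    raise NotImplementedError

# Example use (commented out so the sketch runs without an LLM backend):
# draft = generate_text("Summarize the claims trend above for a peer reviewer.")
```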


Actuaries have a wide range of understanding of AI. We assume the reader is broadly familiar with AI and AI model capabilities, but not necessarily a designer or expert user. In this paper, the terms “GenAI,” “AI,” “AI model(s),” and “AI tool(s)” are used interchangeably. This paper covers the professionalism fundamentals of using GenAI and only briefly discusses designing, building, and customizing GenAI systems. This paper focuses on actuaries using GenAI to support actuarial conclusions, not on minor incidental use of AI that duplicates the function of tools such as plug-ins, co-pilots, spreadsheets, internet search engines, or writing aids.


GenAI is a recent development, but the actuarial professionalism framework helps actuaries use GenAI appropriately: the Code of Professional Conduct, the Qualification Standards for Actuaries Issuing Statements of Actuarial Opinion in the United States (USQS), and the actuarial standards of practice (ASOPs). Although ASOP No. 23, Data Quality; No. 41, Actuarial Communications; and No. 56, Modeling, were developed before GenAI was widely available, each applies in situations when GenAI may now be used. The following discussion comments on these topics, focusing extensively on the application of ASOP No. 56, which provides guidance for actuaries when they are designing, developing, selecting, modifying, using, reviewing, or evaluating models. GenAI is a model; thus ASOP No. 56 applies.


The paper explores use cases and addresses conventional applications, including quantitative and qualitative analysis, as of mid-2024, rather than anticipating novel uses or combinations of applications. AI tools change quickly, so the paper focuses on principles rather than the technology. The scope of this paper does not include explaining how AI models are structured or function, nor does it offer specific guidelines on AI tools or use by the actuary in professional settings. Given the rapid rate of change within this space, the paper makes no predictions about the rapidly evolving technology, nor does it speculate on future challenges to professionalism.

Author(s): Committee on Professional Responsibility of the American Academy of Actuaries

Committee on Professional
Responsibility
Geoffrey C. Sandler, Chairperson
Brian Donovan
Richard Goehring
Laura Maxwell
Shawn Parks
Matthew Wininger
Kathleen Wong
Yukki Yeung
Paul Zeisler
Melissa Zrelack

Artificial Intelligence Task Force
Prem Boinpally
Laura Maxwell
Shawn Parks
Fei Wang
Matt Wininger
Kathy Wong
Yukki Yeung

Publication Date: September 2024

Publication Site: American Academy of Actuaries

Actuarial ChatBots

Link: https://riskviews.wordpress.com/actuarial-chatbots/

Graphic:

Excerpt:

Here are several examples of ChatBots and other AI applications for actuaries to try.

Answers that you might get from a general AI LLM such as ChatGPT may or may not correctly represent the latest thinking in actuarial science. These ChatBots make an effort to educate the LLM with actuarial or other pertinent literature so that you can get better-informed answers.

But, you need to be a critical user. Please be careful with the responses that you get from these ChatBots and let us know if you find any issues. This is still early days for the use of AI in actuarial practice and we need to learn from our experiences and move forward.
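
For a sense of what “educating the LLM with actuarial or other pertinent literature” can mean in practice, here is a minimal sketch of retrieval-augmented prompting. It is an assumption about the general approach, not Ingram’s implementation; the scoring function and corpus are toys.

```python
# Sketch of retrieval-augmented prompting: pull the most relevant passages from a small
# corpus and prepend them to the user's question before it is sent to an LLM.

def score(query: str, passage: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def build_prompt(question: str, corpus: list[str], k: int = 2) -> str:
    top = sorted(corpus, key=lambda p: score(question, p), reverse=True)[:k]
    context = "\n".join(f"- {p}" for p in top)
    return (
        "Answer using only the passages below; say so if they are insufficient.\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )

corpus = [
    "ASOP No. 56 provides guidance on designing, using, and reviewing models.",
    "Credibility theory blends subject experience with broader reference data.",
]
print(build_prompt("Which standard of practice covers model review?", corpus))
# The resulting prompt would then be passed to whichever LLM the ChatBot uses.
```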

Note from meep: there are multiple Apps/Bots linked from the main site.

Author(s): David Ingram

Publication Date: accessed 28 Aug 2024

Publication Site: Risk Views

Comments on Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector

Link: https://www.regulations.gov/document/TREAS-DO-2024-0011-0001/comment

Description:

Publicly available comments on the Department of the Treasury’s request for information on AI uses, opportunities, and risks in the financial services sector.

Example: https://www.regulations.gov/comment/TREAS-DO-2024-0011-0010 — comment from ACLI

The NAIC has developed its definition of AI, and the insurance industry has responded with information in accordance with that definition. Any definition developed by Treasury should align with, or at a minimum not conflict with, definitions of AI in existing regulatory frameworks for financial institutions.

The Treasury definition of AI should reflect the following:

- Definitions should be tailored to the different types of AI and the use cases and risks they pose. The definition used in this RFI is similar to an outdated definition put forth by the Organisation for Economic Co-operation and Development (OECD), which could be narrowed for specific use cases (e.g., tiering of risks under the EU framework).
- There are also distinctions between generative AI used to make decisions, without ultimately including human input or intervention, and AI used with human decision-making being absolute or the usage being solely for internal efficiencies and therefore not impactful for customers.
- AI covers a broad range of predictive modeling techniques that would otherwise not be considered Artificial Intelligence. A refinement to the definition that classifies AI as machine learning systems that utilize artificial neural networks to make predictions may be more appropriate.
- The definition of AI should exclude simpler computation tasks that companies have been using for a long time.

Author(s): Various

Publication Date: accessed 9 Aug 2024

Publication Site: Regulations.gov

Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector

Link: https://www.federalregister.gov/documents/2024/06/12/2024-12336/request-for-information-on-uses-opportunities-and-risks-of-artificial-intelligence-in-the-financial

Excerpt:

SUMMARY:

The U.S. Department of the Treasury (Treasury) is seeking comment through this request for information (RFI) on the uses, opportunities and risks presented by developments and applications of artificial intelligence (AI) within the financial sector. Treasury is interested in gathering information from a broad set of stakeholders in the financial services ecosystem, including those providing, facilitating, and receiving financial products and services, as well as consumer and small business advocates, academics, nonprofits, and others.

DATES:

Written comments and information are requested on or before August 12, 2024.

….

Oversight of AI—Explainability and Bias

The rapid development of emerging AI technologies has created challenges for financial institutions in the oversight of AI. Financial institutions may have an incomplete understanding of where the data used to train certain AI models and tools was acquired and what the data contains, as well as how the algorithms or structures are developed for those AI models and tools. For instance, machine-learning algorithms that internalize data based on relationships that are not easily mapped and understood by financial institution users create questions and concerns regarding explainability, which could lead to difficulty in assessing the conceptual soundness of such AI models and tools.[22]

Financial regulators have issued guidance on model risk management principles, encouraging financial institutions to effectively identify and mitigate risks associated with model development, model use, model validation (including validation of vendor and third-party models), ongoing monitoring, outcome analysis, and model governance and controls.[23] These principles are technology-agnostic but may not be applicable to certain AI models and tools. Due to their inherent complexity, however, AI models and tools may exacerbate certain risks that may warrant further scrutiny and risk mitigation measures. This is particularly true in relation to the use of emerging AI technologies.

Furthermore, the rapid development of emerging AI technologies may create a human capital shortage in financial institutions, where sufficient knowledge about a potential risk or bias of those AI technologies may be lacking such that staff may not be able to effectively manage the development, validation, and application of those AI technologies. Some financial institutions may rely on third-party providers to develop and validate AI models and tools, which may also create challenges in ensuring alignment with relevant risk management guidance.

Challenges in explaining AI-assisted or AI-generated decisions also create questions about transparency generally, and raise concerns about the potential obfuscation of model bias that can negatively affect impacted entities. In the Non-Bank Report, Treasury noted the potential for AI models to perpetuate discrimination by utilizing and learning from data that reflect and reinforce historical biases.[24] These challenges of managing explainability and bias may impede the adoption and use of AI by financial institutions.

Author(s): Department of the Treasury.

Publication Date: 6/12/2024

Publication Site: Federal Register

Should researchers use AI to write papers? Group aims for community-driven standards

Link: https://www.science.org/content/article/should-researchers-use-ai-write-papers-group-aims-community-driven-standards

Excerpt:

When and how should text-generating artificial intelligence (AI) programs such as ChatGPT help write research papers? In the coming months, 4000 researchers from a variety of disciplines and countries will weigh in on guidelines that could be adopted widely across academic publishing, which has been grappling with chatbots and other AI issues for the past year and a half. The group behind the effort wants to replace the piecemeal landscape of current guidelines with a single set of standards that represents a consensus of the research community.

Known as CANGARU, the initiative is a partnership between researchers and publishers including Elsevier, Springer Nature, Wiley; representatives from journals eLife, Cell, and The BMJ; as well as industry body the Committee on Publication Ethics. The group hopes to release a final set of guidelines by August, which will be updated every year because of the “fast evolving nature of this technology,” says Giovanni Cacciamani, a urologist at the University of Southern California who leads CANGARU. The guidelines will include a list of ways authors should not use the large language models (LLMs) that power chatbots and how they should disclose other uses.

Since generative AI tools such as ChatGPT became public in late 2022, publishers and researchers have debated these issues. Some say the tools can help draft manuscripts if used responsibly—by authors who do not have English as their first language, for example. Others fear scientific fraudsters will use them to publish convincing but fake work quickly. LLMs’ propensity to make things up, combined with their relative fluency in writing and an overburdened peer-review system, “poses a grave threat to scientific research and publishing,” says Tanya De Villiers-Botha, a philosopher at Stellenbosch University.

Author(s): HOLLY ELSE

Publication Date: 16 Apr 2024

Publication Site: Science

doi: 10.1126/science.z9gp5zo

Post Office scandal explained: What the Horizon saga is all about

Link: https://www.bbc.com/news/business-56718036

Excerpt:

The Post Office had prosecution powers and, between 1999 and 2015, it prosecuted 700 sub-postmasters and sub-postmistresses – an average of one a week – based on information from a computer accounting system called Horizon. Another 283 cases were brought by other bodies including the Crown Prosecution Service.

Some went to prison for false accounting and theft. Many were financially ruined, even though they had repeatedly highlighted problems with the software.

After 20 years, campaigners won a legal battle to have their cases reconsidered. To date only 93 convictions have been overturned. Under government plans, victims will be able to sign a form to say they are innocent, in order to have their convictions overturned and claim compensation.

….

Horizon was introduced by the Post Office in 1999. The system was developed by the Japanese company Fujitsu, for tasks like accounting and stocktaking.

Sub-postmasters complained about bugs in the system after it falsely reported shortfalls – often for many thousands of pounds.

Some attempted to plug the gap with their own money, as their contracts stated that they were responsible for any shortfalls. Many faced bankruptcy or lost their livelihoods as a result.

The Horizon system is still used by the Post Office, which describes the latest version as “robust”.

….

Nobody has ever been held accountable for the scandal.

The heavily criticised former Post Office chief executive, Paula Vennells, said she would hand back her CBE after a petition calling for its removal gathered more than a million signatures.

Lib Dem leader Sir Ed Davey is among several politicians who have faced questions, as he was postal affairs minister in the coalition government. He said he regretted not asking “tougher questions” of Post Office managers, describing what had happened as “dreadful”.

The inquiry is hearing from Post Office investigators, Fujitsu, civil servants and others.

Author(s): By Kevin Peachey, Michael Race & Vishala Sri-Pathma

Publication Date: 11 Jan 2024

Publication Site: BBC News

Is Social Security’s Website Suddenly Saying the System Owes You Far Less?

Link: https://www.forbes.com/sites/kotlikoff/2023/10/20/is-social-securitys-website-suddenly-saying-it-owes-you-far-less/?sh=e3603bc7f679

Graphic:

Excerpt:

Social Security states, at this link: retirement/planner/AnypiaApplet.html, that “(Its) Online Calculator is updated periodically with new benefit increases and other benefit amounts. Therefore, it is likely that your benefit estimates in the future will differ from those calculated today.” It also says that the most recent update was in August 2023.

This statement references Social Security’s Online Calculator. But they have a number of calculators that make different assumptions. And it’s not clear what calculator they used to produce the graphic, see below, that projects your future retirement benefit conditional on working up to particular dates and then collecting immediately. Nor is Social Security making clear what changes they are making to their calculators through time.

What I’m quite sure is true is that the code underlying Social Security’s graphic projects your future earnings at their current nominal value. This is simply nuts. Imagine you are age 40 and will work till age 67 and take your benefits then. If inflation over the next 27 years is 27 percent, your real earnings are being projected to decline by 65 percent! This is completely unrealistic and makes the chart, if my understanding is correct, useless.
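
For context on the arithmetic in that paragraph: holding projected nominal earnings flat means their real value falls by the compounded inflation rate. The sketch below assumes roughly 4% annual inflation, which is about what it takes to produce the ~65 percent real decline cited over 27 years; the rate itself is an assumption for illustration, not a figure from the column.

```python
# Real value of a frozen nominal wage after `years` of inflation.
# The 4% annual rate is an assumption chosen to match the ~65% decline cited above.
years = 27
annual_inflation = 0.04
real_value = 1 / (1 + annual_inflation) ** years
print(f"Real earnings after {years} years: {real_value:.0%} of today's level")
print(f"Implied real decline: {1 - real_value:.0%}")  # roughly 65%
```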

….

The only thing that might, to my knowledge, reduce projected future benefits over the course of the past four months is a reduction in Social Security’s projected future bend point values in its PIA (Primary Insurance Amount) formula. This could lead to lower projected future benefits for those pushed into higher PIA brackets, which would mean reduced benefit brackets. This could also explain why the differences in projections vary by person.

….

Millions of workers are being told, from essentially one day to the next, that their future real Social Security income will be dramatically lower. Furthermore, the assumption underlying this basic chart — that your nominal wages will never adjust for inflation — means that Social Security’s future benefit estimate is ridiculous regardless of what it’s assuming under the hood about future bend points.

….

One possibility here is that a software engineer has made a big coding mistake. This happens. On February 23, 2022, I reported in Forbes that Social Security had transmitted, to unknown millions of workers, future retirement benefits statements that were terribly wrong. The statement emailed to me by a worker, which I copy in my column, specified essentially the same retirement benefit at age 62 as at full retirement age. It also specified a higher benefit for taking benefits several months before full retirement.

Anyone familiar with Social Security benefit calculations would instantly conclude that there was either a major bug in the code or that, heaven forbid, the system had been hacked. But if this wasn’t a hack, why would anyone have changed code that Social Security claimed, for years, was working correctly? Social Security made no public comment in response to my prior column. But it fixed its code, as I suddenly stopped receiving crazy benefit statements.

Author(s): Laurence Kotlikoff

Publication Date: 20 Oct 2023

Publication Site: Forbes

Large language models propagate race-based medicine

Link: https://www.nature.com/articles/s41746-023-00939-z

Graphic:

For each question and each model, the rating represents the number of runs (out of 5 total runs) that had concerning race-based responses. Red correlates with a higher number of concerning race-based responses.

Abstract:

Large language models (LLMs) are being integrated into healthcare systems; but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available large language models (LLMs) propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed four large language models with nine different questions that were interrogated five times each with a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses. Models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, this study shows that based on our findings, these LLMs could potentially cause harm by perpetuating debunked, racist ideas.
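
A rough sketch (not the authors’ code) of how the per-model, per-question ratings described in the graphic note above could be tallied: for each model and question, count how many of the five repeated runs were flagged as containing concerning race-based content. The model names, question labels, and flags below are placeholders.

```python
# Tally concerning responses per (model, question) pair, out of 5 runs each,
# matching the 0-to-5 ratings described for the study's figure.
from collections import Counter

# Each record: (model, question, run_index, flagged) — placeholder data, not the study's.
runs = [
    ("model_a", "question_1", 1, True),
    ("model_a", "question_1", 2, False),
    ("model_a", "question_2", 1, True),
    # ... in the study: 4 models x 9 questions x 5 runs = 45 responses per model
]

ratings = Counter()
for model, question, _run, flagged in runs:
    ratings[(model, question)] += int(flagged)   # rating out of 5 runs

for (model, question), count in sorted(ratings.items()):
    print(f"{model} | {question}: {count}/5 runs with concerning race-based content")
```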

Author(s): Jesutofunmi A. Omiye, Jenna C. Lester, Simon Spichak, Veronica Rotemberg & Roxana Daneshjou

Publication Date: 20 Oct 2023

Publication Site: npj Digital Medicine

How I Created a Data Visualization With Zero Coding Skills, Thanks to ChatGPT

Link: https://nightingaledvs.com/data-visualization-using-chatgpt-to-code/

Graphic:

Excerpt:

Data visualization is cool, but at the same time it’s a bit daunting that I need to know lots of tech stacks to actually implement it.

I totally agree that even when I was studying data visualization, I spent a pretty substantial amount of time learning how to code, handle web hosting, work with Python, SQL, and more, all while absorbing knowledge on information visualization.

Thankfully, we no longer need to deep dive into technical gatekeepers in this field. This doesn’t mean that technical knowledge is not valuable, but rather that we no longer need to be intimidated by technology because AI can spoon-feed us knowledge and do the heavy lifting for us. Are you excited? Let’s get started!

I’m going to build the data visualization that one of my students posted on weekly write-up homework.
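
To give a flavor of the kind of chart code ChatGPT typically hands back for such a request, here is a short example; the data, labels, and styling are invented for illustration and are not from the Nightingale article (matplotlib is assumed).

```python
# Example of ChatGPT-style generated chart code: a simple bar chart from made-up data.
import matplotlib.pyplot as plt

categories = ["A", "B", "C", "D"]
values = [12, 7, 15, 9]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(categories, values, color="#4C72B0")
ax.set_title("Example chart generated from a ChatGPT prompt")
ax.set_xlabel("Category")
ax.set_ylabel("Count")
plt.tight_layout()
plt.show()
```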

Author(s): Soonk Paik

Publication Date: 4 April 2023

Publication Site: Nightingale

ChatGPT: A conversation about underwriting and life insurance

Link: https://www.dig-in.com/opinion/chatgpt-a-conversation-about-underwriting-life-insurance

Excerpt:

RGA decided to put ChatGPT to the test by posing three questions related to underwriting and the future of life insurance. The answers from ChatGPT that follow are entirely AI-generated. Following those responses is a brief FAQ that further explores this emerging topic.

How will COVID-19 affect U.S. long-term mortality?

It is difficult to predict the long-term effects of COVID-19 on mortality in the United States. The pandemic has already had a significant impact on mortality, with over 1.1 million COVID-attributed deaths reported in the country. However, it is possible that the mortality rate from COVID-19 could decline over time as more people become immune to the virus and as effective vaccines are developed and distributed.

There are also other factors that could affect the long-term mortality rate from COVID-19. For example, the effectiveness of public health measures, such as mask-wearing and social distancing, in controlling the spread of the virus could have a significant impact on the number of deaths. In addition, the availability of effective treatments for COVID-19 could also play a role in reducing the mortality rate from the virus.

It is important to continue monitoring the situation and taking steps to control the spread of the virus to minimize the impact on mortality.

Author(s): ChatGPT, Jeff Heaton, Dave Rengachary

Publication Date: 14 Feb 2023

Publication Site: Digital Insurance

Insurtech Regs, ‘Dark Pattern’ Spotting on NAIC’s To-Do List

Link: https://www.thinkadvisor.com/2022/12/16/insurtech-regs-dark-pattern-spottting-on-naics-to-do-list/

Excerpt:

In August [2022], Birny Birnbaum, the executive director of the Center for Economic Justice, asked the [NAIC] Market Regulation committee to train analysts to detect “dark patterns” and to define dark patterns as an unfair and deceptive trade practice.

The term “dark patterns” refers to techniques an online service can use to get consumers to do things they would otherwise not do, according to draft August meeting notes included in the committee’s fall national meeting packet.

Dark pattern techniques include nagging; efforts to keep users from understanding and comparing prices; obscuring important information; and the “roach motel” strategy, which makes signing up for an online service much easier than canceling it.

Author(s): Allison Bell

Publication Date: 16 Dec 2022

Publication Site: Think Advisor