OPINION: AI and underwriting

8 December 2025

By: Tom Shinya, Senior Expert, Marine Underwriting Dept., Tokio Marine & Nichido Fire Insurance Co., Ltd., and an IUMI Executive Committee Member.

The rapid adoption of AI, especially large language models (LLMs) in generative AI, is changing economies and personal lives worldwide. While this growth delivers clear benefits, it also brings risks of misuse and error, which call for regulation. Yet, on balance, AI’s advantages far outweigh its drawbacks, and “non-use of AI” is no longer an option for business. Marine insurance is no exception, and AI tools are advancing quickly across markets.

AI could be described as a miraculous external brain, but looked at closely, this type of alternative brain has been with us since the internet became dominant: we have long been able to search for advice and information rather than think a problem through ourselves. Younger generations often view extended contemplation as inefficient and turn to the internet immediately. With these habits becoming the norm, it is natural that most people believe AI should handle knowledge-intensive business tasks. That belief is only partly valid: AI can solve tasks only if humans retain the capacity to judge the output. AI is not perfect, and insurance professionals need to be able to evaluate its output, especially when the request is complex; the harder the problem, the greater the need for human verification.

Drafting an insurance wording is a good example. AI is not yet able to consistently produce wordings that satisfy the underwriter’s intention with appropriate legal expressions and exact commercial terms, even with a very precise prompt. Policy language demands strict, unambiguous expression, grounded in the legal meanings of terms under the applicable law. Marine insurance often compounds the complexity by using foreign-language wordings and, at times, by splitting the governing law. Marine insurance wordings require terms specific to the cargo concerned, correct use of international trade terminology and language that makes sense from a maritime perspective, and this is not always what AI produces. The problem is that an inexperienced underwriter cannot always identify what is wrong with an AI-produced wording, and this can lead to trouble when a loss occurs.

What makes this even more problematic is that AI output is generally well-structured, authoritative and polished. Logical gaps, definitional inconsistencies or misapplied terms are therefore harder to detect than in documents drafted by humans. As a result, subtle mistakes, sometimes fatal ones, can go unnoticed even by skilled underwriters. AI errors arise for many reasons, including mis-referenced materials and flawed sources. Detecting them requires human verification; researching authoritative books or consulting experts is often the solution.

I also believe that insurance practice relies heavily on human insight. Underwriting, for example, integrates an understanding of insurance theory, knowledge of the subject-matter insured, knowledge of maritime and trade practice, law, geopolitics, regulation and more. Most importantly, the underwriter must be able to “imagine” how perils may affect the insured interest and, accordingly, how losses may occur. Technical knowledge and experience of utilising reinsurance is another element of underwriting insight. On top of all this, competition can make prudent underwriting a difficult balancing act, and in such situations the experienced practitioner will develop a strategy to adjust conditions or pricing. Given that underwriting is built on such elements, generative AI is, at best, an assistant.

Another critical limit that AI cannot overcome is access to data. The more specialised the subject matter, the scarcer high-quality information is on the internet. Much of it remains in non-digitised books or paywalled sources, or simply inside our heads. This will remain a constraint even in a future of artificial general or super intelligence.

Having said all the above, I still cannot deny that AI is proving to be an incredible problem-solving tool, making it invaluable to insurers. But insurers must be able to judge whether an AI-generated answer is sound. Our future is likely to be defined by two parallel dynamics: operational efficiency gains driven by AI, and a heightened demand for professionals who can properly evaluate AI’s outputs. This, in turn, suggests a new strategy for insurers: deliberately developing specialists with deep domain knowledge. That strategy will reshape our industry, especially in education and training. Paradoxically, the easier it becomes to obtain answers from AI, the more vital rigorous learning becomes. Studying might seem less relevant when a tool with instant answers is at hand, and learning from books might look like a backward step. But it is the practitioner’s expertise that will evaluate the responses given by AI, and there is no shortcut to acquiring that capacity.