2026 CLFP Recertification

Educational Content: Using Artificial Intelligence in Equipment Finance

The Code of Hammurabi, dating back to approximately 1750 BC, contains some of the earliest recorded references to leasing.  Fast forward to the 1800s AD, when leasing became prevalent with the rise of the railroad industry.  With the development of the personal computer in the early 1980s and the launch of the first website in the early 1990s, the way business was conducted changed dramatically.  And so did the risk of fraud.

According to various definitions, Artificial Intelligence (AI) refers to computer systems performing tasks that have historically required human intelligence.  AI first surfaced in the 1950s, when British mathematician Alan Turing proposed that machines could think like humans.  In the decades since, AI has advanced continuously and rapidly.  Companies in many industries are adopting AI for several reasons, including increased efficiency, greater speed, improved accuracy, and enhanced decision-making capabilities.


Companies within the finance industry are using, or investigating the adoption of, AI for a variety of purposes, including fraud detection.  AI algorithms can rapidly analyze vast amounts of data to identify patterns and detect anomalies, potentially including fraudulent activities by applicants, vendors/equipment sellers, and existing customers.  Additionally, AI algorithms may be able to quickly identify customers who are more likely to become delinquent with payments and/or default.
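As an illustration of the anomaly detection described above, the sketch below flags lease payments that deviate sharply from the historical pattern. It uses a simple statistical z-score as a stand-in for the machine-learning models a production fraud-detection system would employ; the function name, sample data, and threshold are all illustrative assumptions, not part of any specific product.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Flag payment amounts that deviate sharply from the historical norm.

    A z-score above `threshold` marks a transaction as anomalous.
    This is a simple statistical stand-in for the pattern-recognition
    models a production fraud-detection system would actually use.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Example: routine monthly lease payments with one outlier.
payments = [1200, 1180, 1220, 1195, 1210, 1205, 9800]
print(flag_anomalies(payments, threshold=2.0))  # → [9800]
```

A real system would learn patterns across many variables (payment timing, vendor relationships, application data) rather than a single amount, but the principle is the same: quantify how far an observation sits from the expected pattern and surface the outliers for review.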


While AI offers many benefits and options, it also carries risks and potential unintended consequences that need to be carefully considered.  Automation bias, in which humans trust computer-generated outputs even when their own judgment or other data is telling them something different, is among the most common risks.  Other risks include (but are not limited to) privacy, cybersecurity, misinformation, incorrect decisions, lack of accountability, and lack of transparency/explainability.


A key component of successfully using AI within one’s organization is ensuring that it supports, rather than replaces, human decision-making.  One way to achieve this is through manual audits, which look for errors or anomalies that the AI may have missed or classified incorrectly.  A second method, known as “red teaming,” uses ethical hackers to simulate attacks so that vulnerabilities can be identified.  Finally, regular training should be conducted for those who use the AI system, so that they understand where it might fail and how to verify its outputs.
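The manual-audit approach above can be sketched as a simple sample-and-compare routine: pull a random sample of the AI's decisions, have a human review each one, and measure how often the reviewer disagrees. The code below is illustrative only; `sample_for_audit` and `disagreement_rate` are hypothetical helper names, and a real audit program would capture far richer review outcomes than a single label.

```python
import random

def sample_for_audit(decisions, sample_size, seed=42):
    """Randomly select AI decisions for manual human review.

    A fixed seed keeps the sample reproducible for audit records.
    """
    rng = random.Random(seed)
    return rng.sample(decisions, min(sample_size, len(decisions)))

def disagreement_rate(ai_labels, human_labels):
    """Share of audited cases where the human reviewer overrode the AI."""
    disagreements = sum(1 for a, h in zip(ai_labels, human_labels) if a != h)
    return disagreements / len(ai_labels)

# Example: four audited credit decisions, one human override.
ai = ["approve", "approve", "decline", "approve"]
human = ["approve", "decline", "decline", "approve"]
print(disagreement_rate(ai, human))  # → 0.25
```

A rising disagreement rate over successive audits is exactly the kind of signal that keeps humans in the loop: it tells the organization the model needs retraining or tighter oversight before its outputs are trusted further.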


AI is relatively new, but like many new products and services developed over the centuries, its acceptance and usage are growing rapidly.  Users seeking to leverage the numerous benefits of AI should be aware of the associated risks and ensure that adequate controls are in place.

SOURCES:

https://www.linkedin.com/pulse/responsible-use-ai-checks-balances-william-klinkowsky-un2lc/

https://www.coursera.org/articles/history-of-ai?msockid=09452bee20cc63d435ce3ff3217b62ef