Unauthorized AI at Work and Cybersecurity Threats


As we’ve written before, new data shows that some of the biggest risks to a firm’s cybersecurity can come from inside the firm.[1] Insider threats used to appear mostly as communication failures between departments that undermined cross-functional work. Now, however, there may be a new source of concern: unauthorized use of artificial intelligence (AI) tools at work.

According to a report on a survey conducted by Microsoft and LinkedIn in May 2024, “75% of people in desk jobs are using AI to perform some tasks without the knowledge of the company.”[2] The report also noted that use of generative AI, tools that produce text, images, and other content from user prompts, doubled in the six months preceding the study. This explosion in the use of AI creates a significant risk when it occurs at firms that handle highly sensitive client data, such as financial advisors and certified financial planners. According to the same report, “More than three-quarters of employees using AI admit to bringing their own AI tools to work, straining enterprise cybersecurity and data privacy standards.” Unauthorized use of AI alone should be cause for concern among information managers, but AI tools and programs can also “expose critical data” unless a firm has clear policies in place.
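
The quoted risk of exposing critical data is easier to picture with a small example. The Python sketch below shows a hypothetical gate a firm might place between employees and outside AI tools; the endpoint list, regular-expression patterns, and function name are assumptions for illustration, not something drawn from the survey or any particular product.

```python
import re

# Hypothetical example: one approved internal AI endpoint and a few crude
# patterns for data that should never leave the firm. The endpoint, patterns,
# and function are assumptions for illustration, not a real product or standard.
APPROVED_AI_ENDPOINTS = {"https://ai.internal.example.com"}
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. Social Security number format
    re.compile(r"\b\d{13,16}\b"),          # long digit runs that may be account numbers
]


def may_send_to_ai(destination: str, payload: str) -> bool:
    """Allow the request only if the tool is approved and the text contains no
    obviously sensitive data; real firms would use dedicated DLP tooling."""
    if destination not in APPROVED_AI_ENDPOINTS:
        return False
    return not any(pattern.search(payload) for pattern in SENSITIVE_PATTERNS)


# An unapproved chatbot, or a prompt containing an SSN-like string, is blocked.
print(may_send_to_ai("https://chat.unapproved.example.com", "Summarize this plan"))  # False
print(may_send_to_ai("https://ai.internal.example.com", "SSN 123-45-6789"))          # False
print(may_send_to_ai("https://ai.internal.example.com", "Draft a meeting agenda"))   # True
```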

This threat of inadvertent data exposure is more than hypothetical: in May 2024, JP Morgan Chase, which administers retirement plans for large companies, admitted to a data breach triggered by a “software issue that caused certain reports run by three authorized system users to include plan participant information that they were not entitled to see….”[3] That software glitch exposed the personal information of nearly half a million participants.
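
To illustrate the kind of control at issue (and not JP Morgan Chase’s actual systems, into which we have no visibility), the Python sketch below shows the entitlement filter that report-generation code is normally expected to apply; the breach described in the lawsuit is consistent with a check like this being skipped or misapplied. All names and structures here are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Participant:
    """Minimal stand-in for a plan participant record (illustrative only)."""
    name: str
    plan_id: str


def build_report(user_plan_ids: set, participants: list) -> list:
    """Return only the participants in plans the requesting user administers.
    A report path that skips or misapplies this filter is the kind of
    'software issue' that can show authorized users records they are not
    entitled to see."""
    return [p for p in participants if p.plan_id in user_plan_ids]


records = [Participant("Ann", "PLAN-A"), Participant("Bob", "PLAN-B")]
# A user entitled only to PLAN-A should never see PLAN-B participants.
print(build_report({"PLAN-A"}, records))  # [Participant(name='Ann', plan_id='PLAN-A')]
```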

In 2021, the Department of Labor issued new guidance to plan sponsors concerning cybersecurity threats, on the recommendation of the Government Accountability Office. Most of that guidance is aimed at helping sponsors evaluate how their service providers protect and maintain participant privacy. The guidance also applies to service providers, including those with recordkeeping or fiduciary responsibilities. Among the recommendations made by the DOL, through its Employee Benefits Security Administration (EBSA), is that service providers conduct annual risk assessments and maintain strong protocols limiting access to IT systems.[4]

Unauthorized use of AI tools by employees at financial advisory or other service-providing firms may conflict with this EBSA guidance, especially the protocols that limit access to IT systems and the annual risk assessments. If a data breach occurred because of unauthorized use of AI, it could result in a lawsuit like the one filed against JP Morgan Chase over its software glitch. A firm may want to consult with compliance counsel about how to update its annual risk assessments and protocols to capture potential risks from AI tools.

Earlier this year, we wrote about a trend of using AI to maintain retirement plan policy libraries as a way of documenting how a plan’s fiduciaries fulfilled their duty of prudence. In that article, we suggested that a policy concerning the use of AI might include the four T’s: transparency, making clear who created each AI system and how; testing, continuously testing and revising AI systems to guard against inadvertent breaches of privacy; tools, specifically identifying approved AI tools and their purposes; and training, steering whichever tools are approved away from potential privacy breaches.
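
As a purely illustrative sketch, a four T’s policy could also be kept in a machine-readable form so that an annual risk assessment can check it automatically. The structure, field names, and values below are assumptions for this example, not an established standard or anything from our earlier article.

```python
# Hypothetical, machine-readable record of the four T's for one approved tool.
# The field names and values are assumptions for illustration, not a standard.
AI_POLICY = {
    "transparency": {"built_by": "Vendor X", "model_documented": True},
    "testing": {"last_privacy_test": "2024-04-30", "retest_interval_days": 90},
    "tools": {"approved": ["InternalSummarizer"], "prohibited_data": ["SSNs", "account numbers"]},
    "training": {"tuned_to_refuse_personal_data": True},
}


def policy_gaps(policy: dict) -> list:
    """List the four T's that the policy record leaves unaddressed, so an
    annual risk assessment can flag them."""
    gaps = []
    if not policy.get("transparency", {}).get("model_documented"):
        gaps.append("transparency: document who built the tool and how")
    if not policy.get("testing", {}).get("last_privacy_test"):
        gaps.append("testing: no recorded privacy test")
    if not policy.get("tools", {}).get("approved"):
        gaps.append("tools: no approved-tool list")
    if not policy.get("training", {}).get("tuned_to_refuse_personal_data"):
        gaps.append("training: tool not trained away from personal data")
    return gaps


print(policy_gaps(AI_POLICY))  # [] when all four T's are covered
```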

Additionally, advisors working with public entities may need to take note of changes in state laws. For example, Arkansas has two new laws that require public entities to have clear cybersecurity policies and to meet certain requirements to obtain cybersecurity insurance.[5] State legislatures may focus increasingly on cybersecurity regulation in the coming years.

[1] https://www.bcgbenefits.com/blog/cybersecurity-inside-and-out

[2] https://www.ciodive.com/news/enterprise-generative-ai-governance-lacks-microsoft-linkedin/715699

[3] https://www.plansponsor.com/participant-sues-j-p-morgan-over-data-breach

[4] https://www.plantemoran.com/explore-our-thinking/insight/2021/05/new-dol-guidance-for-cybersecurity-risks-for-employee-benefit-plans

[5] https://www.ncsl.org/technology-and-communication/cybersecurity-2023-legislation

These articles are prepared for general purposes and are not intended to provide advice or encourage specific behavior. Before taking any action, Advisors and Plan Sponsors should consult with their compliance, finance and legal teams.
