Artificial intelligence is already helping commercial lenders streamline intake, review financial information, detect fraud, summarize documents, and speed routine communications. For SBA lending teams, those efficiencies are attractive, but they come with legal risk if sensitive borrower, deal, or counsel-related information is entered into the wrong AI tool.
What the law currently says
The law on AI and privilege is still developing, and the reported authority remains limited. The clearest current decision is United States v. Heppner, where a federal court in the Southern District of New York held that a client’s use of a public generative AI tool did not create attorney-client privilege or work-product protection for the resulting materials under the facts presented there.
That decision appears to have turned on traditional privilege principles, not a broad anti-AI rule. Commentators describing Heppner emphasize that the user was acting on his own, not at counsel’s direction, and was using a public tool whose terms undermined any reasonable expectation of confidentiality.
Where uncertainty remains
The important point for lenders is that Heppner does not appear to resolve every AI-privilege question. Available commentary notes that the court’s reasoning was closely tied to the use of a public, unsecured platform, and it leaves open the possibility that attorney-directed use of an enterprise tool with stronger confidentiality protections could be analyzed differently.
Why this matters in SBA lending
Your team handles highly sensitive material every day: borrower financials, credit memos, guarantor information, SBA file documentation, and communications with legal counsel. If your loan officers or compliance staff are feeding any of that material into a consumer AI tool, even for something as routine as drafting an email or summarizing a credit package, you may be:
- Waiving attorney-client privilege on prior legal advice embedded in those documents
- Exposing confidential borrower data to third-party AI vendors in potential violation of privacy obligations and your institution’s data governance policies
- Creating discoverable records that could surface in an adversarial action with a borrower or regulatory body
Practical recommendations
- Do not input borrower-specific, deal-specific, or counsel-related information into public consumer AI tools. That is the clearest current legal risk area under the emerging case law.
- If your institution wants to use AI, work through approved enterprise tools with contractual confidentiality, data segregation, and retention controls, while recognizing those steps reduce risk but do not automatically guarantee privilege.
- Keep a human in the loop for underwriting, adverse-action, servicing, and exception decisions, especially where judgment, fairness, or policy interpretation matters.
- Update AI policies and training so lending staff understand that convenience does not override confidentiality, privacy, or privilege concerns.
- Involve counsel before using AI for litigation-sensitive matters, SBA enforcement issues, guarantee disputes, or internal investigations, because those are the situations where privilege questions are most likely to matter.
For now, the safest takeaway is simple: AI can be a useful tool in commercial lending, but lenders should assume that public AI platforms are the wrong place for confidential legal or borrower information unless counsel has specifically approved the workflow. Deploying AI responsibly means understanding where its use could undermine the very legal protections your institution depends on. When in doubt, pause before you prompt. For more information, contact Ethan at 267-470-1186 or esmith@starfieldsmith.com.