Client Alert: AI: A Powerful Tool with Significant Risks

March 27, 2026

          AI tools like ChatGPT, Claude, and Gemini have rapidly become indispensable resources for all of us. We understand that charter school operators, who often run lean organizations with limited administrative staff, may find AI especially appealing. Need a first draft of a parent handbook section? A summary of new state reporting requirements? A template for a Governing Board policy? AI can produce polished, professional-sounding content in seconds, saving valuable time and resources.

          But AI has a significant blind spot. AI models are trained on massive datasets that overwhelmingly reflect the policies, regulations, and governance structures of traditional public school districts. When you ask an AI tool to “draft a student discipline policy” or “summarize our obligations under federal law,” AI does not know (or care!) whether you operate a district school, a charter school, or a private school. The result is often a polished, legal-sounding policy that reflects the requirements for school districts, not charter schools. It may impose obligations your charter school is not actually subject to, or omit requirements you do need to meet. And AI does not know to flag these issues for you!

          We have seen a recent rise in charter schools adopting AI-drafted policies that incorporate inapplicable statutes, regulations, and requirements. Unfortunately, once your school officially adopts a policy, you need to follow that policy, even if it imposes requirements the law does not. Before adopting any AI-generated policy, have it reviewed by legal counsel experienced in charter school law.

          The risks of AI are not limited to policy drafting. A recent ruling out of the Southern District of New York illustrates a different but equally important danger: the assumption that your conversations with AI are private.

          In United States v. Heppner, a criminal defendant input case-strategy materials he had received from his attorneys into the AI platform Claude. Heppner did so without his counsel’s direction or request. While executing a search warrant, the government recovered multiple electronic devices containing Heppner’s AI prompts and outputs. Heppner’s criminal defense lawyer argued that the government could not view the content of Heppner’s AI chats because that content was protected by the attorney-client privilege. The court disagreed, ruling that no privilege attached to Heppner’s use of the AI platform: Claude is not an attorney, and the documents were not created by or at the direction of Heppner’s lawyer.

          This ruling could mean that if you feed details of anything that may eventually result in litigation into a public AI platform – a pending parent complaint, employee grievances or termination documents, special education disputes, communications with ASBCS, contracts, or investigations – those AI conversations may not be privileged and could later be discoverable. And if the AI-generated output contains information you learned from your attorneys, sharing that information with a public AI platform could waive the privilege that attached to the original attorney-client communication.

          It is not yet clear whether an Arizona court would reach the same conclusion. But for now, we recommend that you not treat AI as legal counsel, and instead focus on using AI tools wisely, with an understanding of their limitations and risks. The safest course is to treat every AI conversation as if it could one day be read by someone it was never intended for.