Stacie H. Rosenzweig is an attorney with Halling & Cayo S.C. She focuses her practice on the representation of lawyers and other credentialed professionals.

Where does AI go from here?

Remember when you were a kid and you were told not to copy directly from the encyclopedia? (I guess that’s an age test.) And then at some point you were told not to rely on Wikipedia for research, because any random yahoo could edit it? And then, regardless of whether you graduated last year or 50 years ago, the consequences for plagiarism, which were supposed to be swift and severe, were drilled into your head at the start of every school year?

Now, the focus isn’t on the World Book but artificial intelligence (AI). I asked my 12-year-old whether his teacher had any warnings about using ChatGPT to complete assignments, and he said she hadn’t discussed it—but, that said, he knew there were websites where teachers could upload homework and determine whether it was written by AI, and he knew that sometimes ChatGPT just gets things wrong, so it’s not a good idea. Still, people use it.

We’ve already heard about DoNotPay’s now-defunct offer to use ChatGPT to “represent” a client in a Supreme Court argument (and the aborted attempt to do so in a much lower stakes proceeding in traffic court). As with just about any new tech, commenters are divided—either AI like ChatGPT represents the inescapable future, and lawyers should “be afraid” of their impending obsolescence; or ChatGPT may be a fun toy but that’s about it.

Even though I’m posting this on the Saturday of a three-day weekend, you’re probably not actually reading this until Tuesday or later. But if you’re as terminally online as I am, you’ve already heard about the lawyer who used ChatGPT to perhaps accelerate his own obsolescence—allegedly, he used it to generate a whole brief, which was then filed in the Southern District of New York. (The docket and most of the pleadings are available on Court Listener.)

When opposing counsel got to work on a response, they apparently searched for some of the cases cited in the brief, couldn’t find several of them, and alerted the court. Now, there may be a non-nefarious reason why someone can’t find a case—typically human error, some editing mistake or typo. But the reason here is, almost certainly, that ChatGPT just made things up.

Now, once the responsible lawyer was called on it, the right thing to do would have been to fess up and, at the very least, withdraw the pleading. But that’s not what this lawyer did—he doubled down. The court ordered him to file copies of the disputed cases, with an affidavit. And he purported to do so.

Even my readers who have not heard about this and do not know how legal research works know where this is going.

Readers, the text of these cases was, as ChatGPT helpfully told me this morning, “nonsense” or “rubbish,” which “convey a similar meaning” to the word I would have liked to use “without using explicit or potentially offensive language.” The court, in a scathing order to show cause, has ordered the lawyers to show up in person on June 8 and show cause for why they shouldn’t be sanctioned (read: beg for forgiveness). The responsible attorney, Steven Schwartz, has blamed ignorance of exactly how ChatGPT works for these “bogus” (the judge’s word) filings.

(Attorney Schwartz, if you’re reading this, I know some good New York ethics attorneys with whom you might like a word between now and June 8.)

Let’s assume, for the sake of argument, that this whole episode was due to horribly misunderstanding ChatGPT and, particularly, its limitations. That may be an explanation, but it is not an excuse—Rule 1.1 requires lawyers to provide “competent representation to a client.” If a lawyer doesn’t know how, whether, or when to use a particular tool, the lawyer should either find someone who does or use a different tool. (Remember when Marge Simpson was leaving for a spa getaway and Homer asked her urgently how to use the pressure cooker, and she just yelled, “Don’t”—and then he didn’t? Homer Simpson understands this on some level.)

What we don’t know yet is whether these “bogus” cases were generated before or after the judge demanded to see them, which here is a material difference. The screenshots attached to Schwartz’s affidavit aren’t date-stamped.

If these cases were generated, again by ChatGPT, after the opposing counsel and the judge called them out, that’s an even bigger problem, because there aren’t really any benign explanations. I am not sure “are the other cases you provided fake”/“no, the other cases I provided are real” (from the screenshots of Schwartz’s AI chat) is at all helpful—lawyers know that “just trust me” isn’t a substitute for due diligence. Doubling down on a technology you already know you don’t understand isn’t great. Using it to cover your tracks after you know you messed up is worse—that gets us into dishonesty and misrepresentation territory.

While we wait to find out what happens on June 8 (S.D.N.Y. is sadly not part of the audio streaming pilot, not that I would expect the responsible lawyers to consent to the streaming in this case), there are some takeaways:

·      Generative AI still isn’t ready for prime time in a legal context. I’ve found it helpful for translating legalese into regular English (when I do know what the term means but my lawyer brain can’t find a normal word) and for finding alternatives to cursing in blog entries, but it’s not there yet for substantive legal research.

·      If opposing counsel calls you out for incorrect information in a court pleading, take that seriously. If the court asks for an explanation or you otherwise need to respond, either explain (with citations to actual law from reliable, established sources) why you’re right, or acknowledge the error, confer with your client, and correct or withdraw the pleading. Mistakes happen. Sometimes very competent and experienced people still get things wrong. How you handle errors is important.

·      There is also a message in here for local counsel—the person signing the pleading is responsible for ensuring that the facts have or are likely to have evidentiary support and that the claims and defenses are warranted by existing law, or by a good-faith argument to modify, extend, or reverse existing law. If that’s you, as local counsel, you need to make sure whatever you’re signing is solid. Historically, it has been reasonable for lawyers to trust other lawyers to actually cite and print real cases. I would hate to see the (highly publicized) bad acts of one lawyer make life miserable for all local counsel going forward. We’ll see what the judge does here.

Why I Didn't Say "Hi" at the Bar Conference

“When did you feel like you really ‘became’ a lawyer?”
