This Blog Title Was Not Written By AI

I confess that I enjoy some online things that Internet privacy and safety experts say I shouldn’t, like those AI-generated portraits and social media quizzes (I mean, how else am I going to know what kind of a potato I am if not through disclosing my mother’s maiden name and my high school mascot?).

I am also a master procrastinator. So, of course, when ChatGPT, an artificial intelligence chatbot under the OpenAI umbrella, went viral last month, I quickly signed up and started asking it to write a letter from George Carlin complaining about a bad experience at a Denny’s in Fresno. (I didn’t keep the results, but it was definitely a serviceable complaint letter, just not in Carlin’s voice. Baby steps.)

Shortly after that, Joshua Browder, CEO of a company called “DoNotPay” (which bills itself as “the World’s First Robot Lawyer”*) announced on Twitter that the company would pay any person $1,000,000 (and later, $5,000,000) to cede control of their Supreme Court argument to its OpenAI-based “robot lawyer.” The lawyer or pro se party arguing the case would wear AirPods and “let our robot lawyer argue the case by repeating exactly what it says.”

This announcement, of course, generated a bunch of social media subpoenas to the terminally online members of the ethics bar, myself included; the whole scheme is, once again, an issue spotter for an EPR exam. My nerd friend Brian Faughnan did a great blog post on the subject, pointing out the numerous rules of professional conduct that would at least arguably be violated by this scheme. Go read that and come back.

(Waiting patiently.)

Okay, now that you’re primed, I do want to add a couple of points to Brian’s excellent take.

I do think DoNotPay’s experiment would be interesting in, say, traffic or small claims court, with a client who could easily afford the worst-case outcome of the case and gave informed consent. That said, “interesting” doesn’t resolve the other problems and conflicts that can’t be waived, such as Model Rule 5.4’s requirement of professional independence from one who recommends, employs, or pays the lawyer, even if no one in their right mind would pay millions to test their chatbot in a proceeding involving a $300 personal loan.

Beyond that, AI, machine learning, and that whole universe are only going to be as good as their data. Garbage in, garbage out. And, unfortunately, a lot of that garbage in is racist and sexist, with predictable results.

And hey, guess what? This week we learned that OpenAI did take steps to dial down the level of toxicity spewed by ChatGPT, which is great; but they did so by paying Kenyan laborers a buck or two an hour to review horribly graphic and violent text (which I will not detail here).

As always, this is something to watch. Assuming we can get to a less-toxic AI interface (preferably one that doesn’t exploit overseas workers), it might be fun to watch a mock trial or CLE conducted through AI. Maybe a law-themed improv program? (I would totally go, though I’m not sure there would be many takers.) AI is here, for better or worse, even if it’s not quite ready to take over the mic.