Welcome to Tuesday, March 31, 2026. Today is National Crayon Day. If you're in or near Easton, Pa., Crayola no longer gives factory tours there, but there is what they call the Crayola Experience, which I'm told is worthwhile for the younger set. It's just off I-78, near the New Jersey border. There's one in Orlando and at the Mall of America, too. Today is also Eiffel Tower Day, but I'll get to that.
1657: English Parliament presents the Humble Petition and Advice to Lord Protector Oliver Cromwell, offering him the crown, but he declines.
1736: Bellevue Hospital is founded in a New York City almshouse, becoming the first public hospital in the United States.
1770: Immanuel Kant is appointed Professor of Logic and Metaphysics at the University of Königsberg.
1932: Ford publicly unveils its flathead V-8 engine. Some 25 million copies were made.
1939: The Hound of the Baskervilles, the first of 14 films starring Basil Rathbone as Sherlock Holmes and Nigel Bruce as Dr. Watson, is released. Here's a playlist of all those films.
1948: U.S. Congress passes the Economic Cooperation Act, funding the Marshall Plan to rebuild war-torn Europe.
1949: RCA Victor of Camden, N.J., introduces the 45 RPM record player and the 7-inch single.
Birthdays today include: Philosopher René Descartes; composers Johann Sebastian Bach and Joseph Haydn; Sinn Féin founder Arthur Griffith; labor leader Cesar Chavez; actor William Daniels (voice of KITT in Knight Rider, St. Elsewhere, 1776); singer Lefty Frizzell; hockey great Gordie Howe; fashion designer Liz Claiborne; singer-songwriter John D. Loudermilk; actor Richard Chamberlain; actress Shirley Jones (Elmer Gantry, Oklahoma!, The Partridge Family); trumpeter Herb Alpert (amazingly, at 91, he's still touring; he's playing the Ryman Auditorium in Nashville tonight at 7:30); talk-show host Michael Savage; The Turtles' Al Nichol; Gabe Kaplan (Welcome Back, Kotter); Al Gore; actress Rhea Perlman; and AC/DC's Angus Young.
If today's your day, too, here's wishing you a happy one.
* * *
One of the problems associated with AI is the misuse of it. What started me down this proverbial road this morning is an interesting case cited by Power Line’s Bill Glahn:
The case involves a Polish national, Danuta Dec, who overstayed her visa. She sought a waiver to remain in America. U.S. Citizenship & Immigration Services (USCIS) said no. Dec then appealed to federal district court, which dismissed the case for lack of subject matter jurisdiction. The appeals court upheld the district court’s dismissal. All of that was handled in a rather straightforward manner.
But it turns out that Dec’s attorney had included some nonexistent cases and a fake quote in her brief. These transgressions did not affect the outcome of the case.
Look, whatever else might be said about this case, it is emblematic of the kind of day-to-day abuse of AI that is becoming more prevalent. How much of what we see online is AI-generated? Well, that’s the problem; it’s nearly impossible to tell.
Let me preface the remainder of my remarks by telling you I’ve always been something of a tech geek. I was building computers in my basement. When the neighbors found out, they often viewed me as some kind of mad scientist. ("You're building what in your basement?") So, technology has been a larger part of my life than it has been for most folks. I certainly don't claim to have been on the cutting edge of any of it, merely closer than some. I wasn't after inventing anything; I was merely trying to understand, to the degree possible, what already existed. I also spent a couple of decades making my living with computers.
That said, while I have no problem with AI use, I do have a problem with failing to check the program's output, which is clearly what happened in the case Glahn cites in his piece.
I’ll admit to using AI to research some of my articles. As a couple of overly simple examples, I will toss a question to Alexa when my memory isn’t filling in the gaps, such as “Alexa, did Peter White play on any of Al Stewart’s albums?” or “Alexa, did George Burns play on any Star Trek episodes?" (Obviously, the answers are yes and no, though the latter would have been amusing.)
I will occasionally use Claude to look at polling statistics, vote counts, or more technical items, since it has the ability to sort through mountains of data and often gives me useful analysis. For example (and I doubt many of you know this), I write a blog on the subject of ham radio, and often I need to do some research on radio or antenna specs, sometimes delving into the theoretical. I’ve also used Claude to do circuit design for an audio processing project that a friend was working on. I’ve even used it to give me an overview of a bill being argued in Congress. I have found that, when I use AI in that manner, about 75% of the time, there are no accuracy issues. But that leaves an error rate of 25%, which I find unacceptable.
The key difference, though, is that, unlike the great legal mind mentioned in the Power Line piece, I actually go back through and confirm AI’s findings.
I have found that the inaccuracy issue springs mostly from the sources the AI leans on. In short, garbage in, garbage out. In those cases, the sources AI has used are places I generally try to stay away from. Wikipedia is one glaring example. Far too often, Wikipedia and other such online sources draw on mainstream media as well as the informational cesspool that is social media, often without telling you that’s what they’re doing.
Another example is YouTube. Anyone who has sifted through the mountainous pile of AI-generated horse stable carpeting posted there these days understands the problems involved.
One of the first indications that the video you are watching is AI-generated is that the person on your screen never actually moves anything other than their lips and perhaps their arms occasionally, often repeating the same gesture over and over with no real connection to what is being said.
The next and bigger indication is that the synthetic speaker has problems with contextual pronunciations. Example: "read" is a verb meaning to interpret written text, pronounced "reed" in the present tense and "red" in the past tense, while "red," the color, is always pronounced "red." A lot of content creators these days use AI that is, as often as not, clueless about the contextual use of these. Someone may (or may not) have actually written the text you’re hearing rendered as speech, but the pronunciations are often butchered. Here again, however, the apparent issue is that in a lot of these cases, nobody cares enough to check the work they’re posting.
Mispronunciations are an issue with Alexa, as well. I've written fairly recently about using the Alexa infrastructure for home automation, so I have some experience here. As an example, there’s a town near me here in western New York called Nunda. The proper pronunciation is “Nun-DAY,” which Alexa pronounces “Noon-duh,” no matter how often I try to correct it. And don’t even get me started on its mangling of the name of the town of Chili, which is pronounced “Chy-Lye” (not to be confused with my favorite cold-weather dish). In both cases, I’ve corrected the system, but it goes right back to mispronouncing them the next day. Eventually, I simply gave up.
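For what it's worth, there is a documented way to force a pronunciation, but only for developers writing Alexa skills: the SSML phoneme tag, which spells out the sounds in the International Phonetic Alphabet. Ordinary users correcting the device by voice get no such persistent override, which may be why the fix never sticks. A minimal sketch, with the IPA strings being my own approximations of "Nun-DAY" and "Chy-Lye":

```xml
<speak>
    The next stop is
    <!-- "ph" gives the exact sounds; the enclosed text is what's displayed -->
    <phoneme alphabet="ipa" ph="nʌnˈdeɪ">Nunda</phoneme>,
    not far from
    <phoneme alphabet="ipa" ph="ˈtʃaɪ.laɪ">Chili</phoneme>.
</speak>
```

That markup only applies inside the skill that emits it; there's no equivalent a regular Echo owner can set once for the whole household.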
The case cited by Power Line’s Glahn is one of many cautionary tales of relying too much on AI. This case, in my view, borders on fraud. Glahn points out that a verbal slap on the wrist from the judge (who obviously and laudably did his homework on the motion submitted) was all the mouthpiece got for her misdeed. And therein, I think, lies a problem that leads to more of this kind of thing, in much the same way as an "enlightened" DA causes more crime by not coming down on the perp.
But perhaps the real issue is the sheer volume of legal documents these days, which pretty much requires any legal interaction to be driven by computer. Nobody can read all this junk that we’ve got our lives dependent on.
Wrap your minds around this: The average bill during the 80th Congress (1947-1948) was two and a half pages. Of course, that was the day when bills before Congress had one specific purpose, and were what we would call “clean” bills. Those days have passed, and what we have now is a tendency to write and vote on these huge omnibus monsters. I recall writing about the so-called “Affordable Care Act,” which by comparison was on the order of 2,500 pages. Who has the time or the patience to read through all of that and decide on its validity? That situation pretty much requires AI to sort out all the relevant legal implications. Until just recently, the average Congresscritter didn’t have that kind of power, much less the average judge, and certainly not the average citizen trying to decide whether a proposed law is worthwhile. Remember Nancy Pelosi suggesting we need to pass the bill to find out what’s in it? Yeah, it’s like that.
I cannot imagine that criminal cases are much different.
Folks, we’re in a brave new world, one far beyond the innocent wonder of Miranda's "brave new world" in Shakespeare's The Tempest, and also way beyond what Huxley foresaw. That should frighten us.
For his part, Huxley was decrying the deeply dehumanizing aspect of the technically “perfected” world, which was presented as an exciting and bold future, but was, in his view, a barely imaginable horror. The questions Huxley asks run along the lines of: Is this highly technological future something to greet with wonder or with wariness?
I can’t help but wonder what he would say about the AI of today. Probably, "I told you so."
Yes, we are operating in new and uncharted territory, in which it’s impossible to foresee all the implications. The sad part is, we find ourselves trapped by our own inventiveness. Our legal system has become so complex, and we rely so heavily on it, that we need artificial help to decide on its validity. Thing is, the systems we have started relying on to understand our own legal and technical complexities are as flawed as we ourselves are.
What could possibly go wrong?
Go wrong?
Go wrong?
Go — slap!
Thought of the day: Computers make mistakes much faster and with more accuracy.
Investigative reporting and honest analysis are becoming rare these days. You get it here. That's why it's important to become a PJ Media VIP member. Not only do you support the reporters and writers who support YOU, but you also get 60% off the regular price by going to this link and using the promo code FIGHT.






