Attention, trust and GPT3

When AI is smart enough to write an essay, then what happens?

Just “stole” this article from SETH’S BLOG. Worth a read. The point made by the professor in the news story below nails it.

GPT3 is back in the news because, as expected, it’s getting better and better. Using a simple chat interface, you can ask it a wide range of questions (“write a 1,000-word essay about Clara Barton”) and get a response that certainly feels like a diligent high school student wrote it.

Of course, this changes things, just as the camera, the typewriter and the internet changed things.

It means that creating huge amounts of mediocre material is easier than ever before. You can write a bad Seinfeld script in about six minutes.

It means that assigning rudimentary essays in school or average copywriting at work is now a waste of time.

But mostly it reminds us that attention and trust don’t scale.

If your work isn’t more useful or insightful or urgent than GPT can create in 12 seconds, don’t interrupt people with it.

Technology begins by making old work easier, but then it requires that new work be better.

The picture is taken from the News18 article “Student Gets Caught For Cheating In Test Using ChatGPT.”

While checking the essays after submission, the professor flagged AI usage in one student’s rudimentary answer. He said ChatGPT’s writing is very smart, like a 12th grader’s. “It’s a clean style. But it’s recognizable,” he added. “If you were teaching somebody how to write an essay, this is how you tell them to write it before they figure out their own style.”

He mentioned that he plugged the suspect text into detection software made by the producers of ChatGPT, which rated it a 99.9% likely match for AI-generated writing. He then asked ChatGPT a set of questions the student might have asked; it generated similar answers but no direct matches, since it produces a unique response each time.

When the professor confronted him, the student admitted to using AI on the test and, as a result, failed the class. The undergrad was also turned over to the school principal.

I would love to hear from you.