

This bot will do your homework for $9.95 a month. Does it actually work?

According to one 10th-grade history teacher, it’s unlikely to get you an A.

EssayBot is a homework AI that was originally designed to generate branding copy.

“EssayBot is the highly acclaimed online platform giving essay writing assistance to students and subject authors. As the program has been produced with the most sophisticated tools and technologies, it is extremely automated and individualized. This US-based corporation works with the only purpose to give honest and convincing aid to authors for creating superior volumes that will get rewards and praises.”

That’s what EssayBot said when I asked it to describe itself. The service aims to be the holy grail for the world’s burned-out 11th-graders. Type in your prompt — any prompt, from your history assignment to the question “what is EssayBot?” — and the machines get to work.

Your opening paragraph is pulled wholesale from a database of scholastic material. Then the diction is gently rephrased, with synonyms swapped in for non-essential words, until it can fly under the radar of the average plagiarism detector. From there, you can import a laundry list of additional paragraphs related to the subject of your essay, or you can use a drop-down menu called a “sentence creator,” perched patiently next to your blinking cursor. Write a word and EssayBot does its best to think up a sensible follow-up clause, based on the contours and language of what you’ve already got written down. All this for only $9.95 a month, or $49.95 a year. If you’ve ever spent a sleepless school night staring at an empty Word doc, you know what it’s like to be desperate enough to pay up.
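EssayBot hasn’t published its internals, but the synonym-swapping step described above is easy to caricature. Here’s a minimal Python sketch of the general idea, with a made-up, hard-coded synonym table standing in for whatever thesaurus or language model the real service uses:

```python
import random

# Toy synonym table. A real paraphraser would lean on a thesaurus or a
# trained language model; EssayBot's actual approach isn't public.
SYNONYMS = {
    "important": ["significant", "crucial", "notable"],
    "show": ["demonstrate", "reveal", "indicate"],
    "big": ["large", "substantial", "considerable"],
    "ruling": ["decision", "judgment"],
}

def light_paraphrase(text: str, swap_rate: float = 0.7) -> str:
    """Swap non-essential words for synonyms, leaving everything else alone."""
    out = []
    for token in text.split():
        key = token.lower().strip(".,")
        if key in SYNONYMS and random.random() < swap_rate:
            replacement = random.choice(SYNONYMS[key])
            if token[-1] in ".,":  # keep the original trailing punctuation
                replacement += token[-1]
            out.append(replacement)
        else:
            out.append(token)
    return " ".join(out)

print(light_paraphrase("The ruling was important because it would show the big cost of segregation."))
```

The output says roughly the same thing in slightly different words, which is exactly why it can slip past a plagiarism checker that only matches exact strings.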

I discovered EssayBot via a YouTube ad, and when I put the site’s name into Google, I found hundreds of cautiously hopeful students taking to forums and review sites over the past year, asking if EssayBot is too good to be true. Procrastinating teens are an underserved market.

Aaron Yin, the proprietor of EssayBot, has been trying to sell AI text generation for years with limited success. His first attempt came in 2017 with a service that automatically constructed résumés, and the tech infrastructure of EssayBot was initially intended to help small businesses generate branding copy. But that angle never took off. Instead, Yin needed to find a hungrier demographic, and the millions of young men and women on a humanities deadline were a match made in heaven. “We use the same technology [from the business writing] for EssayBot,” he says. “To help students write essays.”

Yin considers EssayBot to be a streamlined version of what kids are already doing with their papers. He tells me he held focus groups full of college kids during EssayBot’s initial development and found that they all used similar tactics to write their essays. They would research and copy down the finer points of the arguments they wanted to use, reword those passages, and turn to Google Scholar to find citations. If you’re extremely generous in your interpretation, you can argue that EssayBot is essentially a harmless mechanization of the academic process — rather than, you know, cheating. “The technology is actually a little similar to translation,” says Yin. “You’re putting something in a different way.”

There’s reason to believe what Yin is selling. In 2019, AI text generation is closer to the mainstream than ever. In February, there was a brief mania over the Elon Musk-backed company OpenAI and its silver-tongued text generator. Journalists from Wired, the Guardian, The Verge, and Vox were all invited to play with the fancy new algorithm that could generate cohesive short stories with reasonably consistent clarity. The generator has yet to be released to the public, with OpenAI claiming that it was “too dangerous” in our current Facebook-poisoned news culture. No matter how hyperbolic that warning might be, it seemed we were fast approaching a world where machines could demand column space.

It’s a reality echoed by Neil Yager, the chief scientist and co-founder of Phrasee, an AI platform that formulates ideal, scientifically precise email headlines for press releases and marketing campaigns. He says that whether we realize it or not, we’re already reading a fair amount of computer-generated text as part of our media diet. “In things like weather reports, it’s called data to text. You take some numbers, like the humidity and temperature, and use an algorithm to automatically spin that into a story,” he explains. “You have some simple logic in there. ‘If the temperature is above this, then say that it’s going to be a warm day.’ Robo-journalism is quite a big field.”
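Yager’s weather example is the simplest form of the technique: a handful of hand-written rules that turn numbers into sentences. A toy version in Python (the function name and thresholds here are invented for illustration) might look like this:

```python
def weather_report(city: str, temp_c: float, humidity: float) -> str:
    """Turn raw weather numbers into a sentence using simple hand-written rules."""
    if temp_c >= 25:
        feel = "a warm day"
    elif temp_c >= 10:
        feel = "a mild day"
    else:
        feel = "a cold day"
    sky = "muggy" if humidity >= 80 else "dry"
    return f"Expect {feel} in {city}: around {temp_c:.0f}°C with {sky} air."

print(weather_report("Minneapolis", 27, 85))
# Expect a warm day in Minneapolis: around 27°C with muggy air.
```

No understanding is required; the logic is just a set of if-statements over the input data, which is why this kind of robo-journalism works best for formulaic stories like weather reports and sports scores.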

Still, it was difficult to believe that technology could adequately replicate a standard five-paragraph high school essay. Sure, EssayBot was able to introduce itself in its own uncanny syntax, but that was easy. How would it hold up in the eyes of a wary teacher? So I got my hands dirty in the EssayBot module and resolved to craft an essay about Brown v. Board of Education, a Supreme Court case any American student will inevitably write about at least once during their academic career.

EssayBot gave me a rock-solid opening paragraph, after which I was presented with a suite of additional paragraphs I could plug into the copy. As before, each of those paragraphs was plucked from the web and rephrased into something less plagiaristic by the site’s algorithm. I continued that process until I had about 700 words that tracked the basics of the trial and some light analysis about segregation in the public school system today. The results were uneven. The language and the facts were mostly reasonable, but the overall narrative was jumbled. The essay wasn’t tethered to a concrete thesis and read like a loose distillation dreamed up by an entity that knew all the information but wasn’t able to synthesize it into an authentic argument.

I decided to use the automatic sentence creator to fill out the conclusion, where things got funnier, and more dire. The sentences themselves were grammatically correct, but they’d often contradict each other within the text. At one point, EssayBot wanted to add “the solution is to change the way schools are run,” exactly one sentence after it added “the solution isn’t to simply change the way schools are run.” It figures that when you ask something non-sentient to write for you, you can expect something non-sentient in return.

So, naturally, when I emailed the essay to my 10th-grade history teacher Mr. Lourey, he gave it an easy F.

“The paper would probably earn a very low score in most classes, because it doesn’t seem to be clearly answering a prompt,” he wrote. “I guess if a teacher assigned a short essay that asked students to simply summarize an event, then maybe this type of paper could fly under the ‘teacher radar.’ But most properly designed writing prompts on civil rights would ask students to make some sort of original claim … even if I did not identify the paper as a creation of AI, it would earn a failing grade.”

His reaction didn’t surprise me, nor did it surprise Yager. An AI text generator like EssayBot is simply incapable of responding to a multifaceted essay proposal with a human point of view. Your best bet is a simulacrum, and the simulacrum can break down very, very quickly. In fact, Yager says Phrasee’s AI model starts to degenerate after about 150 words.

Algorithms “don’t write like how you or I would write an essay. It doesn’t think, ‘Okay, here’s my idea, and here’s how I’m going to argue this point.’ Instead, it’s writing one word at a time with no idea where it’s going,” he explains. “There’s no understanding there. It’s not trying to get any point across; any point it makes is purely random and accidental. That’s the limitation of the technology today. … It studies the statistical properties of the language and can generate new text that shares those properties.”
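Neither Phrasee nor EssayBot has published its model, but the “one word at a time” behavior Yager describes shows up in even the crudest statistical generator, a bigram Markov chain. This toy Python version just learns which word tends to follow which and then stumbles forward with no plan:

```python
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Record which words follow which (the 'statistical properties' of the text)."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 12) -> str:
    """Emit one word at a time, looking back only at the previous word."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("the court ruled that segregation in public schools was unconstitutional "
          "and the court ordered that the schools desegregate with all deliberate speed")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every run produces fragments that look grammatical but go nowhere in particular, which is the same failure mode, writ small, as EssayBot contradicting itself between one sentence and the next.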

That said, Yager is a little concerned about the future. As tools like EssayBot get better and more AI software hits the market, there will eventually come a moment, he says, when text generation will be a major concern for academia. “Technology is going to help people cheat. It’ll be a bit of an arms race. Things will improve over time, and so will the detection methods,” he says. “Even now, though it’s not great quality, I bet people are getting away with it.”

Yin, of course, would never call EssayBot software for cheaters, and he says that over the past year, he’s only ever gotten one angry email from a teacher. He points to a service called Chegg, which provides specific answers to classroom textbook questions for $15 a month. EssayBot, in his comparison, is a research tool rather than a flat, rote cheat sheet. A shortcut rather than misconduct.

“A student could use Chegg [to answer a problem], and after graduation, if they saw a similar question, they still couldn’t do it,” says Yin. “With EssayBot, after graduation, if a student became a marketing specialist and [had to] write marketing material, they could still use EssayBot.”

Perhaps one day we might need to formally establish the parameters for how much a robot is allowed to assist you in the writing process. Until then, be careful with the machines. They might just flunk you.
