What do you think about AI-generated writing?

We want to hear your thoughts on it — and how Medium should approach it on the platform

Scott Lamb
The Medium Blog
3 min read · Dec 8, 2022


At the recent Medium company offsite, one of the most interesting conversations was about AI and writing. A small group of us from across several teams talked about what was coming and debated how we might approach it. We disagreed on some points but agreed on a lot, including the main one: AI-generated content is here to stay, so the question has to be, how should we engage with it?

Image via Midjourney (prompt: “a robot sitting down and writing a literary novel”)

If you’ve been following along, it’s been a big moment for this topic. Last week, OpenAI released ChatGPT (GPT stands for “Generative Pre-trained Transformer”), a natural language AI chatbot, which launched thousands of social media threads. But it’s not just ChatGPT. OpenAI’s GPT-3 language model came out in 2020 and generated a lot of concern about the direction of the written word. Earlier this year, Google engineer Blake Lemoine claimed that LaMDA — a Google natural language chatbot — had become sentient (you can read his interview with the bot here). These are all examples of language models, which use probabilities learned from massive data sets of text to generate new text.
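If you’re curious what “probabilities learned from massive data sets” actually means in practice, here’s a deliberately tiny sketch in Python. The probability table is made up purely for illustration; a real model like GPT-3 learns distributions over tens of thousands of sub-word tokens with billions of parameters. But the core loop — score the candidate next words, sample one, repeat — looks like this:

```python
import random

# Toy stand-in for a trained language model: a lookup table mapping the
# text so far to a probability for each candidate next word. (Entirely
# made up; a real model computes these probabilities with a neural net.)
next_word_probs = {
    "a robot sitting down and writing a": {
        "novel": 0.4, "poem": 0.3, "memoir": 0.2, "tweet": 0.1,
    },
}

def sample_next_word(context: str) -> str:
    """Pick the next word at random, weighted by the model's probabilities."""
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word("a robot sitting down and writing a"))
# Prints "novel" about 40% of the time, "poem" about 30%, and so on.
```

Chain that sampling step over and over and you get the fluent, paragraph-length text these chatbots are now known for.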

The programs using these language models are getting really, really good at quickly generating a lot of very readable text from prompts and example writing, and that’s where the issue arises for us: As an open platform, what’s the right policy or approach to this new kind of content?

We know you’ve been thinking about it, too — there have been nearly 200 new stories added to the ChatGPT tag page in the last week. Alberto Romero thinks ChatGPT is the world’s best chatbot, whereas Clive Thompson sees a problem: “The bot always sounds confident, even when it’s talking out of its ass.” We’ve seen writers and publications using it in creative ways, from UX Collective using it to write a version of their 2023 UX Trends Report (though for what it’s worth, Fabricio Teixeira, we like the human-generated version better) to writers using ChatGPT to program apps on AWS and create AI-generated art prompts. On the tools side, Zulie Rane tried a bunch of services (Jasper.ai, Copy.ai, Article Forge, and Rytr) to see which could actually create copy that sounded like it came from a human.

We’ve been talking about how all of this impacts Medium and what our policy on AI-generated writing should be. While peers like Stack Overflow have implemented temporary bans on ChatGPT content, on the grounds that AI-generated answers are “substantially harmful” to the site, we’re still exploring our options (hence this post). Some of the central questions, as we see them:

  • We’ve seen AI be confidently wrong — this Ben Thompson piece on homework is a great example. People can be confidently wrong, too. But what’s the impact on a platform like Medium that is so centrally driven by human knowledge?
  • What does it mean for art? Does a piece of art lose meaning without a human being behind it? Does a poem about the human experience lose its impact if it’s not actually written by a human?
  • Do you expect to know when what you’re reading is written by a human vs. a machine?
  • Are you concerned about the implications of AI-generated writing for the livelihoods of writers? How might we create a system where AI supports human creativity instead of competing with it?

We’re curious what you think. How do you think Medium should approach AI-generated content? What are good and bad examples of AI content? What are you concerned about? What are you excited about?

Share your thoughts in the responses, and help us shape our path forward.
