Why AI Isn't as Good at Writing as You Think

Is AI-generated writing *real*? My last video was written by a machine learning algorithm trained on my previous video scripts, and while I was making that video, I kept thinking about one question: what is the difference between that AI's script and the scripts I write myself?

Let's find out!

00:00 - Intro
02:15 - How AI Writes
05:45 - The Context Problem
10:09 - The Structure Problem
15:15 - So What?
23:09 - Conclusion

Comments

I once prompted an AI with two words: "Eating" and "makeup". It wrote a paragraph containing the phrase "I once ate an entire tube of lipstick and I have no regrets". I took that paragraph over to another AI and got it to write several more paragraphs, which focused entirely on the makeup and not at all on the eating.

ikarikid

Earlier this week, I had a teacher accuse me of having an AI write my story for creative writing, even though I didn't. So, multiple teachers looked at it, and they all said the same thing. It doesn't help either that I'm the first at my school to deny that sort of allegation. Looking at it, I can see where they are coming from, but I know that my writing is genuinely mine.

theunpopularcuber

When people say that AI can't do something, a response I see a lot is that it can't do it "yet" and that we just need to wait for technology to catch up. But that frames "technology" as some natural force rather than the product of human effort. It also assumes that technology can just advance through anything and has no limits. But just because we can put a man on the moon doesn't mean we can put a man on the sun.

notarabbit

sometimes ai delivers the funniest and most unintentionally brilliant lines. one time i was messing with it and it had two characters appear and one was like ‘i am (insert long ass roman emperor name with like 200 different titles), and this is my sister Druba’ and i laughed at that for a solid 40 minutes

CorpseTongji

I'm not a writer but a visual artist, and it was quite interesting to hear a writer's perspective. What you say about forming thought during writing was so fascinating to me, because the same happens to us as we draw and paint, but I'd never heard anyone make that point as concisely for visual art as you did for writing.

DoomBloomArt

In software, this introspection caused by communication is called "rubber ducking" (or "rubber duck debugging"), where you explain the problem in great detail to an inanimate object (or a colleague who isn't really expected to offer insight), simply for the clarity you gain from doing so.

RyanZerby

The biggest issue I have with creative AIs (i.e. ones that create art or text) is that whatever model comes out of training is a representation of that training data, and it can ultimately only be changed by adding different data.
In other words, such an AI represents what exists, and only what exists. If AIs like these were to replace humans, human creativity would be frozen in place.
The AI cannot generate anything new, only remix what already exists.

yuvalne

This is exactly the reason why I'm against fully automated programming tests. I don't JUST want to know whether they happen to know the optimal solution to FizzBuzz, or whether they know by heart exactly where a pair of curly braces needs to go. I need to be able to pick someone's brain: I want to know why they chose one solution over another, I want to see how they interpret intentionally ambiguous requirements, and so on. In essence, I want to know how someone THINKS.
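For context, FizzBuzz is a deliberately trivial screening exercise: print the numbers 1 to 100, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz". A minimal sketch in Python (the commenter names no language, so the choice here is illustrative):

```python
# Minimal FizzBuzz sketch (language choice is illustrative; the
# commenter names none). Multiples of 3 print "Fizz", multiples
# of 5 print "Buzz", multiples of both print "FizzBuzz".
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```

The code is trivial by design; the interview value, as the commenter says, is in the conversation around it.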

StillGamingTM

I've been playing around with interactive AI storytelling recently. What I find myself appreciating about it is that random spontaneity factor which can break me out of my own particular tunnel vision. Like, I might be trying to take the story in one particular direction, editing the AI responses as I go to keep everything on track, but then the AI will sometimes throw me a curveball I'd never even considered, and upon reading it I'm just like "YES!" and immediately shift gears to follow that new train of thought.

As this video concludes, AI is just a tool. It's the proverbial infinite monkeys with infinite keyboards. What matters is how we use that tool.

EmeralBookwise

My favourite news story from last year is that members of the Finnish parliament had a conversation with a GPT-3 bot and asked it how to combat poverty. The bot essentially said that greedy rich people should give their money to the poor, or there should be a socialist revolution.

pulliss

"Writing generates knowledge." Interestingly, this section relates to why some people talk to themselves. I don't have an inner monologue; or I do but I can't hear it without externalizing it, either by writing or talking. It's embarassing & complicated when you share living space and suddenly learn you aren't alone when you thought you were 😆

silversam

As a data scientist working with language in industry, I feel I ought to weigh in on some of these points.

If AI or ML don't work as terms, I'd recommend Large Language Models (LLMs) for this. These are the models now dominating the field, GPT-3 being one of many.

With regards to GPT-3 being trained on the whole internet: this is worth going into in more depth. As you rightfully point out, the predictions that come out of a language model are reflective of the data that goes in. If you feed in biased input, you're going to see biased output. You joke about cancelling GPT-3, but I contend that as data scientists, we are responsible for the outputs of our models. We need to be very aware of what data we train on, and work to reduce the biases it shows. With the largest training sets we're seeing today, all we're learning is that these datasets are far too large to truly know what's in them, and knowing your data is literally lesson one of data science. Filtering is rarely done to balance out the demographics of those who created the data. The focus is on getting as much data as possible, and if that means the vast majority comes from white men, so be it.

To me, language models in their current form are incredibly strong at analysing existing text. Not only are they a massive step up in what we can do with context, but I would contend they are the most in tune with the way humans learn text. Whilst this is absolutely a debated question, my personal inclination is towards Michael Hoey's theory of Lexical Priming, which, at its most basic, treats language as pattern matching. Language models use training tasks that seem fairly optimal by this theory: BERT's masked-token prediction, for example, which is only improved on by SpanBERT's masked-span prediction. Of course, there is a limit on the amount of context that can be taken in, so I'll not claim that we'll never make anything better, but I do feel like we're very much on the right track.
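To make that masked-token objective concrete, here is a minimal sketch using the Hugging Face transformers library (the library and model are illustrative choices, not something specified above): the model ranks candidate words for a masked slot by probability, given the surrounding context.

```python
# Sketch of BERT-style masked-token prediction via the Hugging Face
# `transformers` pipeline (library/model choice is illustrative).
# The model ranks candidate fillers for the [MASK] slot by
# probability given the surrounding context.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill_mask("Language is largely [MASK] matching."):
    print(candidate["token_str"], round(candidate["score"], 3))
```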
At the same time, they're really not much good at language generation. Sure, it's a step up from what we could do previously, but it's a step up in syntax only. The semantics aren't there, and they aren't going to be there without a large change in methodology. All a language model is doing, when generating text, is predicting which word is most likely to come next. The cleverest thing it does with semantics is working out which words are contextually similar to each other, and that is only the first layer of these models. Cross-modal embeddings are a step up, but I can't see much meaningful improvement in text generation without a radically new way of injecting real-world knowledge.

Structure is, I think, a surmountable issue. Currently, models use positional encodings to provide information about where a token appears in a sentence. I could see the introduction of a similar encoding to show where a sentence appears in a text. This would be domain-specific, but domain-specific models can and will be made.

Intent is harder, but I think some exploration with secondary training objectives and sentiment will lead to more progress there. I remember a paper on spinning models to always write positive or negative sentences when specific people or companies were prompted; that in itself is a very basic form of intent.

The major problem remains, though, that any embedding is understood only in terms of what it is similar to and what it contextually appears with, and is completely unconnected to the real-world thing it signifies.
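To give a concrete picture of the claim above that generation is just predicting the next most likely word, here is a hedged sketch of greedy next-token decoding (GPT-2 via transformers; both are illustrative choices): there is no plan and no intent, just one local prediction appended after another.

```python
# Sketch of greedy next-token decoding (model choice is illustrative).
# At each step the model scores every vocabulary token for the next
# position and we append the single most likely one: no global plan,
# just repeated local prediction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The meaning of a word is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(12):
        logits = model(ids).logits        # scores for every vocab token
        next_id = logits[0, -1].argmax()  # greedily take the top one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```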

To steal a turn of phrase from a very well-regarded paper: when it comes to text generation, a large language model is naught but a stochastic parrot.

ProjectSeventy

I know you didn't mention AI art for this video but everything you've talked about here applies so much to that field as well. Most people in support of AI just see art as a means of producing artworks, when in truth it's much like what you've said about writing. The process, the meaning, the understanding that is generated by the act of creating is so important to artists too and it's something that, at least as of now, AI can't replicate. It's no wonder that so much AI stuff looks so corporate.

spagettysylph

"What we have here is a failure to communicate." - I think that's the most interesting lesson I took from this latest essay. The reason you could describe AI writing as 'not real' is that an AI has no ideas of its own. It can't really think for itself and it has no ideas if wants to communicate (something I would argue is an essential part of being human). An AI model can 'read' the entire internet but it can't understand the thought processes of the people who generated that content - their hopes and fears, the things they love and the things they hate. As you demonstrated with your list of biased terms, people choose words for a reason (good and bad) but an AI won't do that; for now, all it knows is 'this usually comes next'.

johnbell

Written words are not thought; a mind must read words to create thought. Our thoughts are impossible to communicate directly -- I cannot give another person direct access to my brain so they can know my thoughts. Instead, I depend upon words to communicate my thoughts. The words are not important, but the thoughts behind the words are what I seek to transmit. My thoughts have better form and coherence because of words. I use words in my mind to lend better structure to my thoughts, then write or speak words so that other people can know my thoughts. If I am successful, then I feel satisfaction because I have touched other people that now know a part of me.

The Machine Learning Algorithm can create words that form technically correct sentences, but did its words come from thoughts? Do we know another entity by reading those words? If I allow it to write for me, I deputize it to represent me without it actually knowing my thoughts. Not only is that useless to my expression, it also deprives me of an important human need: I need to be heard. Writing offers me the opportunity to understand my own thoughts better, to express them, and to present them to others, as I have done with these two paragraphs.

LaCafedora

Basically:
1. AI doesn't know what the hell it's writing about
2. AI doesn't know WHY it's writing
3. AI improvises its structure as it writes, rather than planning it

lynnclaywood

I feel as though students who use AI to create essays miss the point of the essays themselves. The point isn't the thousand words of text on a given topic, but rather the experience of producing them. In order to write a proper essay of a given length about a given topic, you need to do research (the amount of which grows with your word count), form arguments and theses, and structure those components. And by doing so, you are more knowledgeable and capable on the topic by the time you are through.

AI text generation *may* have valid applications, but when it comes to academic study, it most certainly does not. It's just as much of a cheat as having that kid down the road write your essay for you.

schemage

Very well put :)

The simple fact is: AI lacks intention. You can especially see the pitfalls of AI when you task it with creative writing rather than the straightforward, fact-heavy prompts you'll see in school essays; and even with the latter, it only maintains basic structure.

I asked an AI to write an essay about Azula from ATLA and why she's a great character. It generated a few paragraphs exploring the different facets of Azula's character in clearly defined sections, or at least that's what it attempted. The same phrases ('cautionary tale', 'vulnerable', etc.) kept repeating where a human author would normally take a different angle. The essay itself wasn't some horrible abomination, just a lukewarm, surface-level exploration of an easy target without much backing for any point it presented: more of an outline you could make in 15 minutes than an essay. Looked at from a technical perspective, the AI's word-by-word algorithm, short context window and limited understanding of structure become very obvious.

That's not to say that AI is all bad: as you mentioned, it's a very valuable tool and a good jumping-off point. The whole debate is really interesting, and I'm curious to see how these programs develop in the future, especially with the growing interest in AI spreading to the general public... people like me, haha.

saltyputses

Loved this!

Feels like if students are using AI for assignments, perhaps the motivation and the assignment itself should be questioned as well. Like, why don't they want to write it in the first place?

QuestingRefuge

Our standard for deciding whether a machine is "intelligent" is to see if it can pass for a human.
Our standard for deciding whether a human is "intelligent" is to see whether s/he can pass for a machine.

whycantiremainanonymous