[Transcript 12/20] [35:22-39:09]
That causes misunderstandings. If AI doesn't fully get the context, it can totally botch a response. Like that classic joke in advertising: a client says, "I want a colorful black." Even humans can't figure that out, so how would AI know what to do?
Yeah, I wonder how. Language itself, whichever language it is, carries multiple meanings. It's like those English-listening and Chinese-listening clips people always share online, the ones everyone jokes about.
The memes.
Right, those memes. A single word or a short phrase can have multiple meanings. So if AI can't fully understand the context, it can get things completely wrong.
Exactly, that's the most famous joke in advertising circles: the client says, "I want a colorful black." I think it's a very classic example. A request like that doesn't really make sense in any language or any scenario.
Right, I still haven't figured out what it's actually supposed to mean.
Maybe he just felt that plain black wasn't what he wanted; it's more of a feeling. A lot of the time we describe things as a feeling. When we feed AI a prompt that's just a feeling, something overly subjective, it very likely won't be able to understand it.
Right, because sometimes, say, when I ask AI to generate a passage for me, it's very logical but also very cold, so I tell it, "Please add a human touch." But the result is never what I want; it just has no human touch. It has no way to understand what I mean by "human touch."
That's when you need to be more specific; just saying "human touch" isn't enough.
Right, right.
In other words, tell it which words I like to use, or say, "Please follow the style of this sample text." Then it's much easier for it to understand.
Yeah. Good, lesson learned; I'll do it that way next time.
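The advice above, replacing a fuzzy "feeling" with concrete, observable style constraints, can be sketched as two hypothetical prompt strings. Both strings are invented for illustration; they are not wording from the episode:

```python
# A vague prompt asks for a feeling the model has to guess at.
vague_prompt = "Please rewrite this with a human touch."

# A specific prompt names properties the model can actually act on:
# tone, sentence length, person, and a reference style to imitate.
specific_prompt = (
    "Rewrite this in a warm, conversational tone. "
    "Use short sentences, contractions, and the first person. "
    "Match the style of the sample paragraph provided below."
)

print(specific_prompt)
```

The second prompt works better for the same reason "follow the style of this sample text" works: it swaps a subjective label for checkable instructions.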
Okay, so what's number three?
Well, number three is lack of grounding. Sometimes AI doesn't have access to real-time or accurate info, so it makes stuff up to fill the gaps, like when an essay's footnotes cite sources that don't even exist.
Yeah, exactly what I mentioned just now: I asked it for some links, and the links it gave me had already expired, or probably never existed at all.
Yep.
So sometimes AI doesn't have real-time or accurate information. I remember some models' training data only goes up to a year or two ago, right? So when we ask it real-time questions, it just makes things up to fill the gap, as if it knows everything.
Yep.
Like the example I gave just now: I asked it to find me some web pages, and every page it gave me no longer exists. I don't even know how it "found" pages that don't exist. That's ridiculous.
Yep, that's called making stuff up: spouting nonsense with a perfectly straight face.
Yeah. It does a very convincing job of it, huh?
Yeah. Although a lot of models now come with internet search, when I try it, sometimes it still makes things up. Yeah, sometimes.
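Since models can hand back expired or invented URLs, one practical habit is to verify every link before trusting it. A minimal sketch using only the Python standard library; the function name `link_is_live` is an assumption for illustration, not anything mentioned in the episode:

```python
import urllib.error
import urllib.request

def link_is_live(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the URL answers with a non-error HTTP status."""
    try:
        # HEAD avoids downloading the page body; we only need the status.
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError, OSError):
        # Dead hosts, malformed URLs, timeouts: treat them all as not live.
        return False

# A malformed "URL" from a hallucinated citation is rejected, not trusted.
print(link_is_live("not a real url"))  # False
```

Running this over a model's cited links before passing them along would have caught the dead pages described above.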
Okay, so what's next?
The last one is overfitting, which means if AI is trained too heavily on specific data, it starts to stick to patterns that don't always apply. Say it's trained to recognize faces as two eyes, one nose, and one mouth.
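The face example can be caricatured in code: the extreme case of overfitting is a model that just memorizes its training examples and has nothing sensible to say about anything it hasn't seen. This toy sketch, with all names and data invented for illustration, makes that failure visible:

```python
# Toy training set: (eyes, noses, mouths) feature tuples -> labels.
TRAIN = {
    (2, 1, 1): "face",
    (0, 0, 1): "not_face",
}

def memorizing_classifier(features):
    """An extreme overfit: perfect on training data, clueless elsewhere."""
    return TRAIN.get(features, "unknown")

print(memorizing_classifier((2, 1, 1)))  # "face": seen during training
print(memorizing_classifier((2, 1, 2)))  # "unknown": pattern not memorized
```

A model that had actually learned the concept of a face, rather than memorized these exact tuples, would still give a useful answer on the slightly different input.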