I haven’t read the whole thread yet, but I do agree with Nathan that Gary was off-base about ChatGPT not being able to make implicit connections (the “the cow died” example).
In one of my first ChatGPT sessions I asked it an old philosophy question that AI would supposedly always have an extremely hard time with (mentioned in this old article):
Explain what happened in the following situation:
A husband turns to his wife and says “I’m leaving you”; the wife responds “Who is she?”
ChatGPT: In this situation, the husband has told the wife that he is leaving her. The wife’s response of “Who is she?” suggests that she believes the husband is leaving her for another woman. This could imply that the wife suspects her husband of being unfaithful and wants to know who the other woman is. It is possible that the husband’s decision to leave has come as a surprise to the wife, and she is trying to understand the reason behind it. The conversation between the husband and wife suggests that their relationship is in trouble and that the husband has decided to end it.
Just in case it had seen references to that old chestnut before, I asked it a number of similar questions (e.g., a wife cries after finding Grindr on her husband’s phone), and it responded similarly well (e.g., the wife was worried that the man was secretly gay and cheating on her with men, etc.).
Frankly, I was floored by how well it did at inferring implied context from very limited information.