I have a problem with ChatGPT: I asked it to calculate the output of the following JavaScript code.
The correct answer for this code is 243, but ChatGPT tells me the answer is 60. Can someone explain why ChatGPT can't produce the correct answer?
I tried to trace the control flow of the code, but I'm not sure my trace matches the actual control flow; I think there might be some issue in this code.
const h = (a) => {
  const g = (b) => {
    if (b <= 1) {
      return a;
    } else {
      return a * g(b - 1);
    }
  };
  return g;
};
const f = (a) => h(a);
const c = f(3);
console.log(c(5));
3 Answers
Your ChatGPT may be fake; the result I calculated using ChatGPT is 243.
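For what it's worth, you can check this without ChatGPT at all. Below is the same snippet with one console.log added (my addition, not part of the original code) so the recursion is visible: g closes over a = 3 and multiplies it in once per level, i.e. it computes 3 ** 5. Running it in Node.js prints 243.

// The question's code, with a trace line added so the recursion is visible.
const h = (a) => {
  const g = (b) => {
    if (b <= 1) {
      return a; // base case: return the captured `a`
    }
    const result = a * g(b - 1);
    console.log(`g(${b}) = ${a} * g(${b - 1}) = ${result}`); // trace line (added)
    return result;
  };
  return g;
};

const f = (a) => h(a); // f just forwards to h
const c = f(3);        // c is g with a = 3 captured in its closure
console.log(c(5));

// Output:
// g(2) = 3 * g(1) = 9
// g(3) = 3 * g(2) = 27
// g(4) = 3 * g(3) = 81
// g(5) = 3 * g(4) = 243
// 243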
Because basic ChatGPT (or any other large language model) is not a computer that deterministically executes code.
The Code Interpreter plugin for paid ChatGPT enables ChatGPT to interact with a Python interpreter within the session, to actually run code.
Since generative language models are non-deterministic by their very nature, the same input can give various outputs due to how sampling works, for example:
3
729
Finally, using the ChatGPT API and turning the random sampling temperature high enough, we get the perfectly sensible answer. Turning it low enough also gives an incorrect answer.
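If you want to reproduce that temperature experiment yourself, a minimal sketch of such an API call looks roughly like the following (Node 18+ with built-in fetch, run as an ES module so top-level await works; the model name, prompt text, and temperature value are placeholders, the only point is that temperature is a request parameter you control when calling the API directly):

// Rough sketch, not a drop-in script: set OPENAI_API_KEY in the environment,
// and swap in whatever model and prompt you are testing.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",   // placeholder model name
    temperature: 1.0,         // vary this and compare the answers you get back
    messages: [
      { role: "user", content: "What does the JavaScript code above print?" },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);

Running the same request several times at different temperature values makes the sampling effect described above easy to see.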
Using the paid service with the code interpreter (beta) active, ChatGPT claims an answer as well, BUT you can always confuse ChatGPT and should never rely on its answers without thorough double-checking.

That said, on the same paid service with standard settings, the same prompt gives yet another answer.