This issue might have the same root cause as #3419.
Bug Report
After updating to version 3.7.0, I have noticed an issue with Reasoner v1 when it attempts to use JavaScript during its reasoning process. The model consistently returns an error whenever JavaScript is involved. This issue did not occur in the previous release.
Additionally, in some cases the JavaScript reasoning process is triggered even after the model has already provided its final answer, which seems unintended.
Steps to Reproduce
Use Reasoner v1 with the chat template that comes with the model.
Provide the following user prompt. The transcript below shows how the previous version of GPT4All handled it:
User: "Write code to check if a number is prime, use that to see if the number 7 is prime."
Assistant:
Certainly! Let's compute the answer to whether the number 7 is prime.
Understand the task: The user wants me to write JavaScript code to determine if a given number (in this case, 7) is prime.
Break down the problem:
Define a function isPrime(n) that checks if a number n is prime.
Use the function to check if 7 is prime.
Write the code:
```javascript
function isPrime(n) {
    if (n <= 1) {
        return false;
    }
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) {
            return false;
        }
    }
    return true;
}
const number = 7;
console.log(`The number ${number} is prime: ${isPrime(number)}`);
```
Compute the result:
```javascript
function isPrime(n) {
    if (n <= 1) {
        return false;
    }
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) {
            return false;
        }
    }
    return true;
}
const number = 7;
console.log(`The number ${number} is prime: ${isPrime(number)}`);
```
The number 7 is prime: true
This process demonstrates how I break down the task, write and interpret JavaScript code to solve it, and then present the final answer. If you have any more questions or need further assistance, feel free to ask!
Now, observe the error generated in version 3.7.0:
Certainly! Let's compute the answer to whether the number 7 is prime.
```javascript
function isPrime(n) {
    if (n <= 1) {
        return false;
    }
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) {
            return false;
        }
    }
    return true;
}
const number = 7;
console.log(`The number ${number} is prime: ${isPrime(number)}`);
Uncaught exception at line 5: Error: console.log string formatting not supported
for (let i = 2; i <= Math.sqrt(n); i++) {
```

It looks like there was an error in the code. The console.log function does not support string formatting directly within it. Let's correct that and re-run the code.
Here is the corrected version:
```javascript
function isPrime(n) {
    if (n <= 1) {
        return false;
    }
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) {
            return false;
        }
    }
    return true;
}
const number = 7;
console.log('The number ' + number + ' is prime: ' + isPrime(number));
Uncaught exception at line 5: Error: console.log string formatting not supported
for (let i = 2; i <= Math.sqrt(n); i++) {
```

It seems there is a persistent issue with the console.log function not supporting string formatting directly. Let's try another approach by using template literals, which are supported in modern JavaScript environments.
Here is the corrected version:
```javascript
function isPrime(n) {
    if (n <= 1) {
        return false;
    }
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) {
            return false;
        }
    }
    return true;
}
const number = 7;
console.log(`The number ${number} is prime: ${isPrime(number)}`);
Uncaught exception at line 5: Error: console.log string formatting not supported
for (let i = 2; i <= Math.sqrt(n); i++) {
```
Expected Behavior
I would expect JavaScript execution to work as it did before the update. I don't know whether the problem is in the application code or in the model itself, but the behavior clearly differs from the previous version of the program.
Your Environment
GPT4All version: 3.7.0
Operating System: Windows 10
Chat model used (if applicable): Reasoner v1
I am not a developer, but I asked the AI and it suggested:
Error handling: the error handling in the JavaScript code interpreter might be too strict, as it throws an error when console.log is called with string formatting.
```cpp
auto consoleObject = m_engine.evaluate(uR"(
    class Console {
        log(...args) {
            let cat = '';
            for (const arg of args) {
                cat += String(arg);
            }
            console_internal.log(cat);
        }
    }
    new Console();
)"_s);
```