An error occurred: Field missing. Details: {'conversation_id': '[CENSORED]', 'message_id': '[CENSORED]', 'is_completion': False, 'moderation_response': {'flagged': False, 'blocked': False, 'moderation_id': '[CENSORED]'}} #1469
Replies: 2 comments 1 reply
-
It seems like the exception gets thrown at the end of the for loop, once it stops receiving data back from ChatGPT. If you just take out the exception handling and ignore the exception, you can still use the message. If you print the contents of the buffer on each pass through the .ask() loop, it sometimes shows more text and other times the exact same text, but one thing is certain: once the full text has been sent, it still tries to loop again and ends up throwing the exception instead of exiting gracefully the way it did earlier today, before this started happening (a minimal sketch of what I mean is below).
I do have a workaround for the time being, so this isn't the worst thing in the world. It's just odd that this happened out of nowhere with the same unchanged code I'd been using for two days. I've now modified the code to work past it by dismissing that specific field exception, and I tried it both with the V4 proxy and without; it works fine either way. The response does seem to come back a little faster with the proxy, though I'm not sure why, or whether that's just in my head. I also verified that I have access to the GPT-4 browsing model, which is funny because it's currently disabled on the website, so I'm not sure why it still works through this method 🤷♂️
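Roughly what I'm doing to watch the buffer, assuming the V1 Chatbot from revChatGPT and placeholder values for the token and prompt:

from revChatGPT.V1 import Chatbot

access_token = "YOUR_ACCESS_TOKEN"  # placeholder
prompt = "Hello there"              # placeholder

chatbot = Chatbot(config={"access_token": access_token})

last_message = ""
try:
    for data in chatbot.ask(prompt):
        # Each chunk holds the full message so far; sometimes it grows,
        # sometimes it repeats the previous chunk verbatim.
        print(repr(data["message"]))
        last_message = data["message"]
except Exception as e:
    # The final iteration raises instead of ending the stream cleanly,
    # but last_message already holds the complete answer.
    print(f"Ignored: {e}")

print(last_message)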
-
This is my workaround for this issue until it's fully understood. I just wait for the exception to get thrown, then set response back to the last known good value returned from revChatGPT's .ask(), and that's the full answer every time. I do it in the finally block, after passing on the ValueError that occurs when the message comes back empty on the last iteration through the for loop. It's a really wonky workaround, but it gets the job done. The gist starts from prev_text = "" and a rough sketch of the idea is below. Hope this helps some people out 🙏
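Roughly, the idea looks like this (chatbot and prompt are set up the same way as in the post below; this is a reconstruction of the approach, not my exact snippet):

prev_text = ""
response = ""
try:
    for data in chatbot.ask(prompt):
        response = data["message"]
        if response:
            prev_text = response  # remember the last known good value
except ValueError:
    # Thrown when the message comes back empty on the last pass through the loop
    pass
finally:
    if not response:
        response = prev_text  # fall back to the last complete answer

print(response)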
-
I had everything working great today, and then out of the blue I started getting this error and I can't figure out why. I even tried deploying the V4 proxy and routing through it, and I get the same exact error message.
An error occurred: Field missing. Details: {'conversation_id': '[CENSORED]', 'message_id': '[CENSORED]', 'is_completion': False, 'moderation_response': {'flagged': False, 'blocked': False, 'moderation_id': '[CENSORED]'}}
But what makes this really strange is that when I go to the ChatGPT page in the browser and look at the conversation with the same ID, it shows the prompt was sent successfully and the answer shows up as you would expect. The answer just never gets sent back to the client; instead that error is raised saying "Field missing", without saying which field is missing. And if I send multiple requests one after another without waiting, all of the following requests fail with the "one at a time" rate limit, which is to be expected, but once the rate limit clears I still get the field-missing error and the message never shows up locally.
I'm trying to figure out what could possibly have changed, since this was all working perfectly earlier today and just started doing this out of the blue. Everything still works fine through the webpage, and I even refreshed my session and access tokens after logging in again just to rule that out, but I still get "Field missing".
This is the chunk of code I'm using that worked fine before; it also worked without session_token, but I included it anyway:

from revChatGPT.V1 import Chatbot

chatbot = Chatbot(config={
    "session_token": session_token,
    "access_token": access_token,
    "paid": True,
    "conversation_id": conversation_id
})

response = ""
retry_limit = 5
retry_count = 0

while retry_count < retry_limit:
    try:
        # Ask the chatbot using the prompt; .ask() streams back partial messages
        print(f"Sending Prompt to ChatGPT: {prompt}")
        for data in chatbot.ask(prompt):
            response = data["message"]  # Seems to throw the exception here, claiming a field is missing
        print(response)
        break  # Break out of the retry loop if successful
    except Exception as e:
        retry_count += 1
        print(f"An error occurred: {str(e)}")
        print(f"Retrying... ({retry_count}/{retry_limit})")
Any help would be appreciated, since I was quite enjoying having this wired into VoiceAttack so I could use speech-to-text to ask it questions and get a response back in a nice, natural Azure voice, which was absolutely killer until it stopped working 🙏 I've tried different VPNs, I've tried without a VPN at all, and I've tried the V4 proxy you link to in the project (though I assume that proxy is already embedded in this, since I see the "PUID" printout every time I initialize the chatbot object). I'm just genuinely curious what changed to cause this, since everything is working fine through the web browser and the prompts are showing up under the conversations; they're just not being sent back via the message field like they were a few hours ago.
Also, when this is working again, do I need to use the V4 proxy going forward to get rid of the restrictive rate limit, since I have a Plus account, or does that happen automatically when I set "paid": True in the config? And does the proxy have to be HTTPS, or can it be plain HTTP? For some reason it says my TLS version is out of whack and won't connect to the local proxy over HTTPS when I set it up. It also only seems to connect when I use HTTP and set the proxy via the environment variable rather than via the proxy field passed into the chatbot object. I'm not sure if their formats are different or what the deal is, but when I set the proxy field on the chatbot object it just logs CONNECT "", whereas with the environment variable it connects properly and sends the prompt to ChatGPT correctly, but then says the other side terminated the connection and I get the missing-field error again. The two ways I'm setting the proxy are shown below.
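For reference, these are the two ways I've been pointing it at the proxy; the local URL and token are placeholders, and the "proxy" config key is how I understood it from the README, so treat that part as an assumption:

import os
from revChatGPT.V1 import Chatbot

PROXY_URL = "http://127.0.0.1:9090"      # placeholder for the local V4 proxy address
access_token = "YOUR_ACCESS_TOKEN"       # placeholder

# Variant 1: environment variables (the only way it actually connects for me)
os.environ["http_proxy"] = PROXY_URL
os.environ["https_proxy"] = PROXY_URL

# Variant 2: the proxy field in the config dict (this one just logs CONNECT "")
chatbot = Chatbot(config={
    "access_token": access_token,
    "paid": True,
    "proxy": PROXY_URL,
})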
I hope this can get figured out, because having Jarvis respond to me in Windows with VoiceAttack and a little Python scripting was an amazing experience, honestly very cool, especially when you pair it with SSML and Azure Text to Speech to give it a very natural voice with some emotion.
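For anyone curious about that last piece, this is roughly the Azure side using the Speech SDK; the key, region, voice name, and style are just example values, not what I actually run:

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)  # default speaker output

# Wrap the ChatGPT reply in SSML so the neural voice gets some emotion
reply = "All systems are online, sir."
ssml = f"""
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
       xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'>
  <voice name='en-US-GuyNeural'>
    <mstts:express-as style='cheerful'>{reply}</mstts:express-as>
  </voice>
</speak>
"""

result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)  # ResultReason.SynthesizingAudioCompleted when it works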