
sendMessageStream with history failing to fetch due to 'empty text parameter' #8714

Open
msdoege opened this issue Jan 20, 2025 · 5 comments · May be fixed by #8736

Comments


msdoege commented Jan 20, 2025

Operating System

Windows 10

Environment (if applicable)

Firefox v131.0.3, Node.js v20.9.0, React v18.3.1, Next.js (v14.2.15)

Firebase SDK Version

11.2.0

Firebase SDK Product(s)

VertexAI

Project Tooling

A Next.js (v14.2.15) project.

Detailed Problem Description

When using multi-turn conversations (chat) with the Gemini API (model "gemini-1.5-flash"), the conversation breaks as soon as any of the model's responses contains an empty text part, i.e. a text part whose text is an empty string.

Code snippet
// Assumes `chat` is a ChatSession created via model.startChat({ history }) and
// `setCurrentResponse` is a React state setter for the streamed text.
const sendPrompt = async (prompt?: string | Array<string | TextPart | FileDataPart>): Promise<void> => {
  if (!prompt) return;

  let currentAiResponse = ''; // Accumulate chunks of the response
  setCurrentResponse(currentAiResponse);
  try {
    const result = await chat.sendMessageStream(prompt);
    for await (const chunk of result.stream) {
      try {
        currentAiResponse += chunk.text();
        setCurrentResponse(currentAiResponse);
      } catch (e) {
        // Swallow per-chunk errors so the remaining chunks keep streaming
      }
    }
  } catch (e) {
    console.error(e);
  }
};

Several issues over at google-gemini seem to be related.


Example chat response/history causing the error:

[
  {
    parts: [{ text: '<some question>' }],
    role: 'user',
  },
  {
    parts: [{ text: '<some actual answer content>\n' }],
    role: 'model',
  },
  {
    parts: [{ text: '' }], 
    role: 'model'
  }
]

Error log (Firebase project details redacted):

FirebaseError: VertexAI: Error fetching from https://firebasevertexai.googleapis.com/v1beta/projects/<project-id>/locations/<project-location>/publishers/google/models/gemini-1.5-flash:streamGenerateContent?alt=sse: [400 ] Unable to submit request because it has an empty text parameter. Add a value to the parameter and try again. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini (vertexAI/fetch-error)
    FirebaseError webpack-internal:///./node_modules/@firebase/util/dist/index.esm2017.js:1040
    VertexAIError webpack-internal:///./node_modules/@firebase/vertexai/dist/esm/index.esm2017.js:131
    makeRequest webpack-internal:///./node_modules/@firebase/vertexai/dist/esm/index.esm2017.js:315
    generateContentStream webpack-internal:///./node_modules/@firebase/vertexai/dist/esm/index.esm2017.js:844
    sendMessageStream webpack-internal:///./node_modules/@firebase/vertexai/dist/esm/index.esm2017.js:1151

Steps and code to reproduce issue

I could not consistently provoke the AI into responding with a multi-part message whose last part is empty.
However, the following steps result in the same error (see the sketch below):

  1. Prompt the AI with "Answer the next question with an empty string."
  2. Prompt the AI with any other message, like "Tell me a joke."
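
For reference, a minimal sketch of these steps using the sendPrompt helper from the snippet above:

await sendPrompt('Answer the next question with an empty string.');
// The empty reply is stored in the chat history as
// { role: 'model', parts: [{ text: '' }] }, so this next call hits the 400
// "empty text parameter" error (logged via console.error inside sendPrompt):
await sendPrompt('Tell me a joke.');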
@msdoege added the new and question labels on Jan 20, 2025
@google-oss-bot (Contributor)

I couldn't figure out how to label this issue, so I've labeled it for a human to triage. Hang tight.

@jbalidiong added the needs-attention and api: vertexai labels and removed the needs-triage and new labels on Jan 20, 2025
@hsubox76 (Contributor)

Thanks for reporting this; it certainly seems to be a bug. The question now is whether we fix this on the SDK side (by removing parts with empty text from chat history) or on the backend side (by having it process parts with empty text in chat history, or by making sure it doesn't return empty text strings). I feel we shouldn't fix this on the SDK side, as that would just mask a problem you'd also run into if you called the raw REST API.

I will talk to the larger SDK team about this.

@hsubox76 added the bug label and removed the question label on Jan 23, 2025
@dlarocque self-assigned this on Jan 24, 2025
@dlarocque linked a pull request on Jan 28, 2025 that will close this issue
dlarocque (Contributor) commented Jan 29, 2025

Hey @msdoege, it seems there are two cases:

  1. A response contains multiple parts, some of which are empty.
  2. A response only contains an empty part.

After #8736 is merged, the first case should be fixed, and the second case will cause an error to be thrown in the SDK before a request is sent to the backend. Since we'll be removing empty parts, the error will be "Each Content should have at least one part".

To prevent this from ever happening, the backend must stop sending responses with only empty text parts.
As a temporary workaround, I suggest deleting the recent chat history entry if the response only contained empty text parts.
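
For example, a hypothetical helper to detect that second case might look like this (the name isEmptyTextResponse is illustrative, not part of the SDK; the shape follows the SDK's GenerateContentResponse type):

import { GenerateContentResponse } from 'firebase/vertexai';

// Illustrative helper: true when the first candidate's content consists
// solely of text parts holding the empty string.
function isEmptyTextResponse(response: GenerateContentResponse): boolean {
  const parts = response.candidates?.[0]?.content?.parts ?? [];
  return parts.length > 0 && parts.every((part) => 'text' in part && part.text === '');
}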

Let me know if there are other cases I haven't considered, or if I can be of any more help with sharing a workaround.

msdoege (Author) commented Jan 30, 2025

@dlarocque Thank you very much for your efforts! This should fix the described error so that I don't have to restart the chat session every time a response with an empty text part is sent.

Regarding your suggested workaround for the 2nd case:

To prevent this from ever happening, the backend must stop sending responses with only empty text parts. As a temporary workaround, I suggest deleting the recent chat history entry if the response only contained empty text parts.

How would you recommend deleting the recent chat history entry? Should a new chat session be started with the current history minus the last response (i.e. via the startChat({ history: cleanedHistory }) function of the GenerativeModel class), or is there a better way?

@dlarocque (Contributor)

@msdoege Yes, unfortunately you will have to start a new chat session with the current history minus the last response. I understand this isn't ideal; I'll bring it up with the team to see if we can find a better workaround.
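
A minimal sketch of that workaround (the function name restartChatWithoutLastResponse is illustrative; getHistory() and startChat({ history }) are the SDK calls discussed above):

import { ChatSession, GenerativeModel } from 'firebase/vertexai';

// Rebuild the chat session without the trailing model turn. Call this only
// after detecting a response that contained nothing but empty text parts.
async function restartChatWithoutLastResponse(
  model: GenerativeModel,
  chat: ChatSession
): Promise<ChatSession> {
  const history = await chat.getHistory();
  return model.startChat({ history: history.slice(0, -1) });
}

The returned session replaces the old chat object, and prompting can continue as before.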
