Google integrations: change answers to replies (#216)
anakin87 authored Mar 27, 2024
1 parent 48b68c4 commit c0a9c59
Showing 2 changed files with 10 additions and 10 deletions.
6 changes: 3 additions & 3 deletions integrations/google-ai.md
@@ -63,7 +63,7 @@ os.environ["GOOGLE_API_KEY"] = "YOUR-GOOGLE-API-KEY"

gemini_generator = GoogleAIGeminiGenerator(model="gemini-pro")
result = gemini_generator.run(parts = ["What is assemblage in art?"])
-print(result["answers"][0])
+print(result["replies"][0])
```

Output:
@@ -103,7 +103,7 @@ os.environ["GOOGLE_API_KEY"] = "YOUR-GOOGLE-API-KEY"

gemini_generator = GoogleAIGeminiGenerator(model="gemini-pro-vision")
result = gemini_generator.run(parts = ["What can you tell me about these robots?", *images])
-for answer in result["answers"]:
+for answer in result["replies"]:
print(answer)
```

@@ -189,7 +189,7 @@ os.environ["GOOGLE_API_KEY"] = "YOUR-GOOGLE-API-KEY"
gemini_generator = GoogleAIGeminiGenerator(model="gemini-pro")
result = gemini_generator.run("Write a code for calculating fibonacci numbers in JavaScript")
-print(result["answers"][0])
+print(result["replies"][0])
```

Output:
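The rename above is mechanical: each generator returns a dictionary, and the list of generated texts moved from the `answers` key to the `replies` key. For code that must run against both pre- and post-rename versions, a small compatibility shim is one option. A minimal sketch (`get_replies` is a hypothetical helper, not part of Haystack):

```python
# Hypothetical helper, NOT part of Haystack: read a generator's output
# across versions where the result key was renamed "answers" -> "replies".
def get_replies(result: dict) -> list:
    if "replies" in result:
        return result["replies"]
    # Fall back to the pre-rename key; default to an empty list.
    return result.get("answers", [])

# Simulated result dicts in the old and new shapes:
old_style = {"answers": ["Assemblage combines found objects into art."]}
new_style = {"replies": ["Assemblage combines found objects into art."]}

print(get_replies(old_style)[0])  # same access path for both shapes
print(get_replies(new_style)[0])
```

Once the installed integration is at or past this commit, the shim can be dropped and `result["replies"]` used directly.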
14 changes: 7 additions & 7 deletions integrations/google-vertex-ai.md
@@ -73,7 +73,7 @@ from haystack_integrations.components.generators.google_vertex import VertexAIGe

gemini_generator = VertexAIGeminiGenerator(model="gemini-pro", project_id=project_id)
result = gemini_generator.run(parts = ["What is assemblage in art?"])
-print(result["answers"][0])
+print(result["replies"][0])
```
Output:
```shell
@@ -101,7 +101,7 @@ images = [
]
gemini_generator = VertexAIGeminiGenerator(model="gemini-pro-vision", project_id=project_id)
result = gemini_generator.run(parts = ["What can you tell me about these robots?", *images])
-for answer in result["answers"]:
+for answer in result["replies"]:
print(answer)
```
Output:
@@ -130,7 +130,7 @@ palm_llm_result = palm_llm.run(
Text: Google Pixel 7, 5G network, 8GB RAM, Tensor G2 processor, 128GB of storage, Lemongrass
JSON:
""")
-print(palm_llm_result["answers"][0])
+print(palm_llm_result["replies"][0])
```

### Codey API Models
@@ -144,7 +144,7 @@ from haystack_integrations.components.generators.google_vertex import VertexAICo
codey_llm = VertexAICodeGenerator(model="code-bison", project_id=project_id)
codey_llm_result = codey_llm.run("Write a code for calculating fibonacci numbers in JavaScript")
-print(codey_llm_result["answers"][0])
+print(codey_llm_result["replies"][0])
```
Here's an example of using the `code-gecko` model for **code completion**:
@@ -159,7 +159,7 @@ codey_llm_result = codey_llm.run("""function fibonacci(n) {
return n;
}
""")
-print(codey_llm_result["answers"][0])
+print(codey_llm_result["replies"][0])
```

### Imagen API models
@@ -201,7 +201,7 @@ print(image_captioner_result["captions"])

**Visual Question Answering (VQA) with `imagetext`**

-To answers questions about an image, initialize a VertexAIImageQA with the `imagetext` model and `project_id`. Then, you can run it with the `image` and the `question`:
+To answer questions about an image, initialize a VertexAIImageQA with the `imagetext` model and `project_id`. Then, you can run it with the `image` and the `question`:

```python
from haystack.dataclasses.byte_stream import ByteStream
@@ -213,5 +213,5 @@ image = ByteStream.from_file_path("output.png") # you can use the generated imag
question = "what's the color of the furniture?"
visual_qa_result = visual_qa.run(image=image,question=question)
-print(visual_qa_result["answers"])
+print(visual_qa_result["replies"])
```
