Enhancing Report Generation by Adding Story-teller AI based on AI flag (#2816)

This PR enhances the report generation functionality by adding a Story-teller AI based on the AI flag. It is a continuation and enhancement of the previous PRs:

- Adding a button to generate a report for a workflow: #2770
- Enhancing Report Generation by adding Operator Results: #2792
- Enhancing Report Generation by Adding Operator Json and Comments Section: #2807
- AI Flag: #2818 and #2808

**New Methods:**
checkAiAssistantEnabled():
Validates whether the AI Assistant feature is enabled by checking the
availability of the required API key. This function ensures that
subsequent AI-based functionalities are executed only when the AI
Assistant is available.
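
A minimal sketch of how this check might sit on top of the new `AiAnalystService` (the report-generation file is part of this PR but not shown in the diff below; the function shape and import path here are illustrative assumptions):

```typescript
import { firstValueFrom } from "rxjs";
import { AiAnalystService } from "../service/ai-analyst/ai-analyst.service";

// Illustrative sketch only, not the exact code in the PR.
// Resolves to true only when the backend reports that an OpenAI key is configured.
export function checkAiAssistantEnabled(aiAnalystService: AiAnalystService): Promise<boolean> {
  return firstValueFrom(aiAnalystService.isOpenAIEnabled());
}
```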

generateComment(operatorInfo: any):
Generates insightful comments for each operator using OpenAI’s GPT
model, tailored for a highly educated audience that may not have deep
statistical knowledge. The comments are plain text, which enhances the
overall readability and value of the report.
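
As a hedged sketch, generateComment could build a plain-text prompt from the operator info and delegate to `AiAnalystService.sendPromptToOpenAI` (the prompt wording and the explicit service parameter are illustrative; in the PR these are presumably methods of the report component):

```typescript
import { Observable } from "rxjs";
import { AiAnalystService } from "../service/ai-analyst/ai-analyst.service";

// Illustrative sketch: the prompt wording is an assumption.
export function generateComment(aiAnalystService: AiAnalystService, operatorInfo: any): Observable<string> {
  const prompt =
    "Explain the following operator and its results for a highly educated reader " +
    "who may not have deep statistical knowledge. Respond in plain text only.\n" +
    JSON.stringify(operatorInfo);
  // The service emits "" when the AI flag is off or the request fails.
  return aiAnalystService.sendPromptToOpenAI(prompt);
}
```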

generateSummaryComment(operatorInfo: any):
Produces a concise, insightful summary comment that highlights key
findings, trends, and areas of improvement across the workflow, focusing
particularly on UDFs. This function is crucial for providing a
comprehensive understanding of the workflow to users.
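
A matching sketch for the summary (same illustrative imports and caveats as above; the prompt wording and parameter shape are assumptions):

```typescript
// Illustrative sketch: one summary prompt over the collected operator info, emphasizing UDFs.
export function generateSummaryComment(aiAnalystService: AiAnalystService, operatorInfo: any): Observable<string> {
  const prompt =
    "Write a concise summary of the key findings, trends, and areas of improvement " +
    "across this workflow, focusing on the UDF operators. Respond in plain text only.\n" +
    JSON.stringify(operatorInfo);
  return aiAnalystService.sendPromptToOpenAI(prompt);
}
```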

**Enhanced Methods:**
retrieveOperatorInfoReport(operatorId: string, allResults: { operatorId:
string; html: string }[]):
Uses checkAiAssistantEnabled, generateComment, and
generateSummaryComment to enrich the operator information section of the
generated reports. The function now adds a "Toggle Detail" button beneath
each operator, allowing users to expand and view the operator’s
corresponding JSON, formatted with a JSON viewer. A comments section is
also added below each operator, enabling users to leave or view
AI-generated comments.
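
To illustrate the structure (not the exact markup produced by the PR), the per-operator fragment might look roughly like this; element IDs, classes, and the inline toggle handler are assumptions:

```typescript
// Illustrative sketch of the per-operator detail and comment markup appended to the report.
function buildOperatorDetailHtml(operatorId: string, operatorJson: object, aiComment: string): string {
  return `
    <button onclick="var d = document.getElementById('detail-${operatorId}'); d.hidden = !d.hidden;">
      Toggle Detail
    </button>
    <div id="detail-${operatorId}" hidden>
      <pre class="json-viewer">${JSON.stringify(operatorJson, null, 2)}</pre>
    </div>
    <div class="comments" id="comments-${operatorId}">
      <p>${aiComment}</p>
    </div>`;
}
```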

generateReportAsHtml(workflowSnapshot: string, allResults: string[],
workflowName: string):
Generates a comprehensive HTML file containing the workflow snapshot,
all operator results, operator details, and comments. This method
integrates AI-generated comments and a summary section at the end of the
report. It also introduces a "Download Workflow JSON" button, allowing
users to download the entire workflow JSON file directly from the
report.
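
A rough sketch of how the pieces might be assembled, with the "Download Workflow JSON" button implemented here as a data: URL; the section layout and helper names are assumptions, not the PR's exact implementation:

```typescript
// Illustrative sketch: assemble the report HTML and a "Download Workflow JSON" link.
function assembleReportHtml(
  workflowName: string,
  snapshotHtml: string,
  operatorSections: string[],
  summaryComment: string,
  workflowJson: string
): string {
  const downloadHref = "data:application/json;charset=utf-8," + encodeURIComponent(workflowJson);
  return `<!DOCTYPE html>
<html>
  <head><title>${workflowName} Report</title></head>
  <body>
    <h1>${workflowName}</h1>
    ${snapshotHtml}
    ${operatorSections.join("\n")}
    <h2>Summary</h2>
    <p>${summaryComment}</p>
    <a href="${downloadHref}" download="${workflowName}.json">Download Workflow JSON</a>
  </body>
</html>`;
}
```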

**Operation Process:** 
Click the button below to generate the report with detailed operator
results and the workflow snapshot.

![image](https://github.com/user-attachments/assets/4c2fccf7-c931-4e9c-a2d2-a6bcc9d640a1)

To turn on the AI feature, you need to modify the following sections in
application.udf:

![image](https://github.com/user-attachments/assets/d88755af-c86e-4eb3-a717-59a69b74200e)

For example, to turn on OpenAI:

![image](https://github.com/user-attachments/assets/588de609-806e-46f8-abea-6a08030243fb)


Here is part of an example report.

![image](https://github.com/user-attachments/assets/3911a558-497f-45bc-a948-1a7a3a671a85)

![image](https://github.com/user-attachments/assets/f8fa5284-5a5b-4d70-a18b-283a2b3c612f)
xudongwu-0 authored and PurelyBlank committed Dec 4, 2024
1 parent f4fee34 commit be41260
Showing 3 changed files with 487 additions and 250 deletions.
@@ -4,9 +4,8 @@ import edu.uci.ics.texera.web.resource.aiassistant.AiAssistantManager
import io.dropwizard.auth.Auth
import javax.annotation.security.RolesAllowed
import javax.ws.rs._
import javax.ws.rs.core.Response
import javax.ws.rs.core.{MediaType, Response}
import javax.ws.rs.Consumes
import javax.ws.rs.core.MediaType
import play.api.libs.json.Json
import kong.unirest.Unirest
import java.util.Base64
@@ -39,6 +38,52 @@ class AIAssistantResource {
@Path("/isenabled")
def isAIAssistantEnable: String = isEnabled

/**
* A way to send prompts to open ai
* @param prompt The input prompt for the OpenAI model.
* @param user The authenticated session user.
* @return A response containing the generated comment from OpenAI or an error message.
*/
@POST
@Path("/openai")
@Consumes(Array(MediaType.APPLICATION_JSON))
def sendPromptToOpenAIApi(prompt: String, @Auth user: SessionUser): Response = {
// Prepare the final prompt by escaping necessary characters
// Escape backslashes and double quotes in the prompt to prevent breaking the JSON format
val finalPrompt = prompt.replace("\\", "\\\\").replace("\"", "\\\"")

// Create the JSON request body
val requestBody =
s"""
|{
| "model": "gpt-4o",
| "messages": [{"role": "user", "content": "$finalPrompt"}],
| "max_tokens": 1000
|}
""".stripMargin

try {
// Send the request to the OpenAI API using Unirest
val response = Unirest
.post("https://api.openai.com/v1/chat/completions")
.header("Authorization", s"Bearer ${AiAssistantManager.accountKey}")
.header("Content-Type", "application/json")
.body(requestBody)
.asJson()

// Return the response from the API
Response.status(response.getStatus).entity(response.getBody.toString).build()
} catch {
// Handle exceptions and return an error response
case e: Exception =>
e.printStackTrace()
Response
.status(Response.Status.INTERNAL_SERVER_ERROR)
.entity("Error occur when requesting the OpenAI API")
.build()
}
}

/**
* To get the type annotation suggestion from OpenAI
*/
100 changes: 100 additions & 0 deletions core/gui/src/app/workspace/service/ai-analyst/ai-analyst.service.ts
@@ -0,0 +1,100 @@
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { firstValueFrom, of, catchError, Observable } from "rxjs";
import { map } from "rxjs/operators";
import { WorkflowActionService } from "../workflow-graph/model/workflow-action.service";
import { AppSettings } from "../../../common/app-setting";

// Define a response type for the OpenAI API
interface OpenAIResponse {
  choices: {
    message: {
      content: string;
    };
  }[];
}

const AI_ASSISTANT_API_BASE_URL = `${AppSettings.getApiEndpoint()}`;
const api_Url_Is_Enabled = `${AI_ASSISTANT_API_BASE_URL}/aiassistant/isenabled`;
const api_Url_Openai = `${AI_ASSISTANT_API_BASE_URL}/aiassistant/openai`;

/**
 * `AiAnalystService` is responsible for integrating with the AI Assistant feature to generate insightful comments
 * based on the provided prompts. It is mainly used for generating automated feedback or explanations for workflow components.
 */
@Injectable({
  providedIn: "root",
})
export class AiAnalystService {
  private isAIAssistantEnabled: boolean | null = null;

  constructor(
    private http: HttpClient,
    public workflowActionService: WorkflowActionService
  ) {}

  /**
   * Checks if the AI Assistant feature is enabled by sending a request to the API.
   *
   * @returns {Observable<boolean>} An observable that emits a boolean indicating whether the AI Assistant is enabled.
   * Emits `false` if the request fails or the response is undefined.
   */
  public isOpenAIEnabled(): Observable<boolean> {
    if (this.isAIAssistantEnabled !== null) {
      return of(this.isAIAssistantEnabled);
    }

    return this.http.get(api_Url_Is_Enabled, { responseType: "text" }).pipe(
      map(response => {
        const isEnabled = response === "OpenAI";
        return isEnabled;
      }),
      catchError(() => of(false))
    );
  }

  /**
   * Generates insightful feedback for the given input prompt by utilizing the AI Assistant service.
   *
   * @param {string} inputPrompt - The operator information in JSON format, which will be used to generate the comment.
   * @returns {Observable<string>} An observable that emits the generated comment, or an empty string
   * if the generation fails or the AI Assistant is not enabled.
   */
  public sendPromptToOpenAI(inputPrompt: string): Observable<string> {
    const prompt = inputPrompt;

    // Create an observable to handle the single request
    return new Observable<string>(observer => {
      this.isOpenAIEnabled().subscribe(
        (AIEnabled: boolean) => {
          if (!AIEnabled) {
            observer.next(""); // If AI Assistant is not enabled, return an empty string
            observer.complete();
          } else {
            // Perform the HTTP request without retries
            this.http
              .post<OpenAIResponse>(api_Url_Openai, { prompt })
              .pipe(
                map(response => {
                  const content = response.choices[0]?.message?.content.trim() || "";
                  return content;
                })
              )
              .subscribe({
                next: content => {
                  observer.next(content); // Return the response content if successful
                  observer.complete();
                },
                error: () => {
                  observer.next(""); // If there's an error, return an empty string
                  observer.complete();
                },
              });
          }
        },
        () => {
          observer.next(""); // If AI Assistant status check fails, return an empty string
          observer.complete();
        }
      );
    });
  }
}
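
For context, a hypothetical caller (a component or service that injected `AiAnalystService` as `this.aiAnalystService`) might consume the new method like this; the helper `appendCommentToReport` is illustrative, not part of the PR:

```typescript
// Usage sketch: if the AI flag is off or the request fails, the service emits "",
// so the report can simply be generated without AI comments.
this.aiAnalystService
  .sendPromptToOpenAI("Summarize this operator: " + JSON.stringify(operatorInfo))
  .subscribe(comment => {
    if (comment.length > 0) {
      this.appendCommentToReport(operatorInfo.operatorId, comment); // hypothetical helper
    }
  });
```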
