I recently wrote a Python script that leverages GPT-4o via Argonne's OpenAI service, Argo, to improve our documentation. The script focuses on fixing grammar and formatting issues while also improving the overall clarity of the content. Using this script, I have submitted PRs (#504, #562, #564) with updated documentation for review. I am creating this issue so the more general discussion can happen here rather than in the individual PRs.
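For reference, the core of the script is roughly the following (a minimal sketch, assuming Argo exposes an OpenAI-compatible chat completions endpoint; the base URL and model name below are placeholders, not the actual service values):

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint. The base_url
# and model name are placeholders, not the real Argo values.
from openai import OpenAI

client = OpenAI(
    base_url="https://argo.example.anl.gov/v1",  # placeholder endpoint
    api_key="YOUR_ARGO_KEY",                     # placeholder key
)

def improve_markdown(page_text: str, prompt: str) -> str:
    """Send one documentation page through the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": page_text},
        ],
        temperature=0,  # keep the edits as deterministic as possible
    )
    return response.choices[0].message.content
```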
Some questions for discussion:

1. Are there any concerns about the use of LLMs?
2. Are there any specific improvements or changes to the script and the prompts you'd suggest?
3. Is it worth automating this process further (e.g., integrating it into a GitHub Action for automated checks)?
4. Should I continue submitting PRs in this way for the rest of the documentation?
5. Are the PR sizes appropriate?
For question 3, I already wrote a script, but it requires an OpenAI API key. Would ALCF provide one for this service, or should we try free models that can run on GitHub CI runners? An alternative would be to maintain a mirror of the repo on GitLab and run a GitLab CI pipeline on ALCF machines with Argo access. Is this a viable approach?
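If we did wire this into CI, the check itself could be simple: run each changed page through the model and fail the job when edits are suggested. A sketch (the module name in the import is hypothetical, and the sentinel string matches item 9 of the prompt below):

```python
# Sketch of a CI-style check: exit non-zero if any page needs edits.
import sys
from pathlib import Path

from improve_docs import improve_markdown  # hypothetical module holding the sketch above

# Sentinel phrase from item 9 of the prompt below.
NO_CHANGES = "The page reads great, no changes required."

def check_pages(paths: list[Path], prompt: str) -> int:
    """Return 1 if any page would be modified by the model, else 0."""
    dirty = []
    for path in paths:
        reply = improve_markdown(path.read_text(), prompt)
        if not reply.strip().startswith(NO_CHANGES):
            dirty.append(path)
    for path in dirty:
        print(f"needs edits: {path}", file=sys.stderr)
    return 1 if dirty else 0

if __name__ == "__main__":
    prompt = Path("prompt.txt").read_text()  # hypothetical prompt location
    sys.exit(check_pages([Path(p) for p in sys.argv[1:]], prompt))
```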
Prompt
Your task is to:
1. Identify and correct any grammatical errors.
2. Check for and fix any broken links.
3. Address any formatting issues.
7. Do not modify anchors within headers.
8. Provide a brief explanation of the changes made.
9. If no changes are necessary, respond with "The page reads great, no changes required."
10. If any change is required, your response should include:
    1. The revised content of the markdown file.
    2. An explanation of your changes, after adding this separator: {separator}.
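For context on the {separator} placeholder: the script fills it in before sending the prompt, then splits the model's reply on it to recover the revised page and the explanation separately. A minimal sketch (the separator value itself is arbitrary):

```python
# Sketch of the separator handling. The SEPARATOR value is arbitrary;
# it just needs to be unlikely to appear in real documentation.
SEPARATOR = "=== EXPLANATION ==="

def render_prompt(template: str) -> str:
    """Fill in the {separator} placeholder in the prompt template."""
    return template.replace("{separator}", SEPARATOR)

def split_reply(reply: str) -> tuple[str, str]:
    """Split a model reply into (revised_markdown, explanation)."""
    revised, _, explanation = reply.partition(SEPARATOR)
    return revised.strip(), explanation.strip()
```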