Writing to text #191
I think you would need to feed the text file into the agent as well - right now it doesn't know what the text file contains. Can you try loading the text file & adding it to the agent in the input_string section? This may also be something where just pure python would work best for taking a text file, breaking it up into sections & only inserting the new prompt if it doesn't already exist... for deterministic stuff like that LLMs may not be the best tool. I really gotta get on that CodeExecution node. :)
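For reference, a minimal pure-Python sketch of that deterministic approach (the archive file name is a placeholder, and the separator follows the 100-underscore end mark described in the issue): split the file into sections and append the new prompt only if it isn't already there.

```python
# Minimal pure-Python sketch (archive file name is a placeholder): split the
# archive on the 100-underscore end mark and only append prompts that are new.
from pathlib import Path

END_MARK = "_" * 100

def archive_prompt(new_prompt: str, archive_path: str = "prompts.txt") -> None:
    path = Path(archive_path)
    existing = path.read_text(encoding="utf-8") if path.exists() else ""
    # Break the file into sections and check whether the prompt is already there.
    sections = [section.strip() for section in existing.split(END_MARK)]
    if new_prompt.strip() in sections:
        return  # already archived, do not add it again
    # Append only -- never truncate the file -- and finish with the end mark.
    with path.open("a", encoding="utf-8") as f:
        f.write(new_prompt.strip() + "\n\n" + END_MARK + "\n\n")
```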
I loaded the txt into the agent. Now I have to wait until my prompts are done; I'll tell you as soon as I queue new ones.
If you are going to make a code execution node, a .csv file would be handy as an output as well.
It works; it's logging all the prompts one after another.
Still a small problem though: when I hit the limit where the LLM loses quota (Gemini 1.5), it overwrites my prompt file with a blank file containing only the error "429 Resource has been exhausted (e.g. check quota).". Luckily I backed most of them up, and I can do that again before I hit the limit, but letting it run overnight unsupervised made me lose a few prompts. Glad I still had the old way up, so they are still in another txt file. I am going to try it now with my local Llama_3.2_vision, which doesn't require a quota, to see if it understands what the agents need to do and whether it keeps working without overwriting. (I would not mind it giving me the error, but don't erase all that previous work.)
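For what it's worth, one way to avoid the file being wiped on a quota error is a small defensive write step (a sketch only; the error-string check is an assumption based on the 429 message quoted above): skip the write when the output looks like an API error, and append rather than overwrite.

```python
# Sketch of a defensive write step (the error-string check is an assumption
# based on the 429 message quoted above): skip the write if the output looks
# like an API error, and append instead of overwriting the archive.
def safe_append(output: str, archive_path: str = "prompts.txt") -> None:
    if "429" in output or "Resource has been exhausted" in output:
        print(f"Skipping write, model returned an error: {output!r}")
        return
    with open(archive_path, "a", encoding="utf-8") as f:
        f.write(output.rstrip() + "\n")
```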
To write the metadata to a PNG? Possibly! @griptapeOsipa, would your metadata tool be able to do that? Write the prompt that generated it into an image?
*Can* and *effectively* might be different answers here. It can most definitely write text into PNG image data. That said, the current version can only write to existing fields, not create new ones, and fields seem to have built-in character limits that are not all consistent. A long prompt stored in the wrong place might get truncated. But the best thing to do is try!
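For comparison, writing a prompt into a PNG text chunk with Pillow looks roughly like this (a sketch, not the Griptape metadata tool; the chunk key "prompt" is an arbitrary choice), and chunks created this way are not restricted to pre-existing fields:

```python
# Not the Griptape metadata tool -- a Pillow sketch of embedding a prompt
# into a PNG text chunk (the chunk key "prompt" is an arbitrary choice).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_prompt(image_path: str, prompt: str, out_path: str) -> None:
    img = Image.open(image_path)
    meta = PngInfo()
    meta.add_text("prompt", prompt)  # adds a new tEXt chunk holding the full prompt
    img.save(out_path, pnginfo=meta)
```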
Okay - so just to clarify what we want to do here: are you wanting a node to add a line to a CSV?
BTW - with the new Python run nodes, you can do something like this, where you pass it a JSON text with "name" and "prompt" as the two fields and it'll save it to a file. This particular script will overwrite an existing row with the same name, or create a new row with a new name. Here's the workflow:
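The script itself isn't reproduced here, but the behavior described is roughly the following (a sketch; the field names follow the comment above, the CSV file name is a placeholder): parse the JSON, then upsert the row into a CSV keyed on "name".

```python
# Sketch of the described behavior: take JSON with "name" and "prompt" and
# upsert that row into a CSV (overwrite an existing row with the same name,
# or add a new row). File name is a placeholder.
import csv
import json
from pathlib import Path

def upsert_prompt(json_text: str, csv_path: str = "prompts.csv") -> None:
    data = json.loads(json_text)
    name, prompt = data["name"], data["prompt"]

    rows = {}
    path = Path(csv_path)
    if path.exists():
        with path.open(newline="", encoding="utf-8") as f:
            rows = {row["name"]: row["prompt"] for row in csv.DictReader(f)}

    rows[name] = prompt  # same name overwrites, new name adds a row

    with path.open("w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "prompt"])
        writer.writeheader()
        writer.writerows({"name": n, "prompt": p} for n, p in rows.items())
```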
Hi, it's me again.
I tried to make an archiver for image prompts.
I gave it these rules to follow:
"Only add new prompts
Clearly mark the end of the prompts with a empty line, followed
by a single line of 100 Underscores then another empty line
Do not overwrite or delete the previous prompts only add to the list.
If the prompt is new Add it after the previous prompt and end mark, in the same txt file
If the prompt already exist, do not add it again."
But it keeps overwriting the prompt that it had, instead of adding it to the next free line.
Am I doing something wrong? Did I forget a rule? Or is it not meant to work this way?