These are tests of how well different generative AI models perform when given access to short- and long-term memory, a reflection function, and a user/assistant profile. This is just an experiment. It is quite resource-intensive to run, because each prompt actually needs to generate five answers. Be careful if you modify this and run it with a non-local model.
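As a rough illustration of the pieces mentioned above (short/long-term memory, reflection, profiles), here is a minimal sketch in Python. All names here (`AgentState`, `remember`, `reflect`) are hypothetical and not from this repo; the real implementation may differ substantially.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Hypothetical container for the agent state described above."""
    short_term: list = field(default_factory=list)   # most recent messages
    long_term: list = field(default_factory=list)    # older messages, persisted
    user_profile: dict = field(default_factory=dict)
    assistant_profile: dict = field(default_factory=dict)

    def remember(self, message: str, max_short: int = 5) -> None:
        # New messages go to short-term memory; when it overflows,
        # the oldest entries spill over into long-term memory.
        self.short_term.append(message)
        while len(self.short_term) > max_short:
            self.long_term.append(self.short_term.pop(0))

    def reflect(self) -> str:
        # Toy "reflection": condense long-term memory into a single
        # summary string (a real system would call the model here).
        return " | ".join(self.long_term)
```

This only shows the data flow between the memory tiers; the actual reflection step in the experiment presumably prompts the model itself.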
Currently this project only runs in the terminal. Maybe I'll implement a WebUI someday.
This project makes use of the text-generation-webui from ooba.