As the prototype of the Auxiliary Neural Engine slowly moves toward completion and the stream/launch date draws closer, this repository was created. It contains all stream videos and logs produced by Energy-Chan, the prototype character test, during streams, as well as written papers.
[A 3D model of Energy-Chan being made in Blender by mzen17; subject to change to a more detailed one]
Energy-Chan serves as the public-facing front and testing metric for ATHR Lab's AXNE. ATHR holds 6 core principles for all human-recreation models.
We establish hard-coded metrics to ensure our model is improving. These are custom-designed tests/benchmarks, used both on previous models and on models from other vendors. YouTube is one of Energy-Chan's metrics, providing ATHR with access to feedback/reviews.
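For a rough idea of what this looks like in practice, here is a minimal sketch of comparing benchmark scores between model versions. The benchmark names and scoring interface are placeholders, not the actual ATHR test suite:

```python
# Hypothetical sketch: tracking benchmark scores across model versions.
# Benchmark names and the scoring interface are illustrative only.
from dataclasses import dataclass, field


@dataclass
class BenchmarkResult:
    model_version: str
    scores: dict[str, float] = field(default_factory=dict)


def compare(prev: BenchmarkResult, curr: BenchmarkResult) -> dict[str, float]:
    """Return per-benchmark deltas so regressions are easy to spot."""
    return {
        name: curr.scores.get(name, 0.0) - prev.scores.get(name, 0.0)
        for name in prev.scores
    }


if __name__ == "__main__":
    prev = BenchmarkResult("axne-0.1", {"recall": 0.62, "persona": 0.71})
    curr = BenchmarkResult("axne-0.2", {"recall": 0.68, "persona": 0.70})
    print(compare(prev, curr))  # e.g. {'recall': 0.06, 'persona': -0.01}
```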
Our model must be able to store and retrieve data in a human-like manner. To do this, we developed HLGraphRAG, a system inspired by Microsoft's GraphRAG that uses weighted retrieval graphs to allow for high-performance recall.
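Since HLGraphRAG itself is not public, the snippet below is only a loose illustration of the weighted-retrieval-graph idea; the structure and greedy scoring are simplified placeholders, not the engine's actual code:

```python
# Illustrative sketch of a weighted retrieval graph; HLGraphRAG is
# proprietary, so this structure and scoring are assumptions.
import heapq
from collections import defaultdict


class RetrievalGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(weight, neighbor)]

    def add_edge(self, a: str, b: str, weight: float) -> None:
        self.edges[a].append((weight, b))
        self.edges[b].append((weight, a))

    def recall(self, seed: str, k: int = 5) -> list[str]:
        """Greedy best-first walk from a seed memory, following the
        heaviest edges first and returning up to k related memories."""
        visited, results = {seed}, []
        frontier = [(-w, n) for w, n in self.edges[seed]]
        heapq.heapify(frontier)
        while frontier and len(results) < k:
            _, node = heapq.heappop(frontier)
            if node in visited:
                continue
            visited.add(node)
            results.append(node)
            for w, nxt in self.edges[node]:
                if nxt not in visited:
                    heapq.heappush(frontier, (-w, nxt))
        return results


if __name__ == "__main__":
    g = RetrievalGraph()
    g.add_edge("stream_2025_01", "clash_of_clans", 0.9)
    g.add_edge("stream_2025_01", "viewer_feedback", 0.4)
    g.add_edge("clash_of_clans", "attack_strategy", 0.8)
    print(g.recall("stream_2025_01"))
```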
Our model is capable of movement, performing actions such as waving a hand when it wants to or executing commands on a computer.
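A minimal sketch of how such action dispatch could look is below; the action names, handlers, and allow-list are illustrative assumptions, not the real control path:

```python
# Hypothetical sketch of routing model intents to actions.
import subprocess
from typing import Callable


def wave_hand() -> None:
    print("avatar: waving hand")  # stand-in for a 3D-rig animation trigger


def run_command(cmd: list[str]) -> str:
    """Execute an allow-listed command on the host machine."""
    allowed = {"echo", "date"}  # illustrative safety allow-list
    if cmd[0] not in allowed:
        raise PermissionError(f"{cmd[0]} is not allow-listed")
    return subprocess.run(cmd, capture_output=True, text=True).stdout


ACTIONS: dict[str, Callable[[], None]] = {"wave": wave_hand}


def dispatch(intent: str) -> None:
    handler = ACTIONS.get(intent)
    if handler is not None:
        handler()
```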
Our model is capable of keeping track of a world, receiving real-time updates to its information so it can make the best, personalized choices with low response times.
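Here is a small sketch of what timestamped world-state tracking could look like; the schema and staleness check are illustrative assumptions, not the system's actual update path:

```python
# Illustrative world-state store with timestamped, real-time updates.
import time
from typing import Optional


class WorldState:
    def __init__(self):
        self._facts: dict[str, tuple[float, object]] = {}

    def update(self, key: str, value: object) -> None:
        """Record the latest value of a fact along with when it changed."""
        self._facts[key] = (time.time(), value)

    def get(self, key: str, max_age_s: Optional[float] = None):
        """Return a fact, optionally rejecting it if it is too stale."""
        if key not in self._facts:
            return None
        ts, value = self._facts[key]
        if max_age_s is not None and time.time() - ts > max_age_s:
            return None
        return value


if __name__ == "__main__":
    state = WorldState()
    state.update("chat_topic", "Clash of Clans attack")
    print(state.get("chat_topic", max_age_s=60))
```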
With every event/stream, our model gains new knowledge from the world and learns something new. Improving its score on tests is expected; intuition is not.
The model has low rates of erroneous data. Its output matches the personality of the character and makes sense in a real-world context.
The first stream (YT) is planned for sometime in January 2025. This may be pushed back depending on the development of the model. I intend to stream biweekly, subject to my availability. If there is a busy week or month, there may not be a stream.
The stream is estimated to follow this breakdown:
- 30 minutes of implementation/work
- 20 minutes of testing
- 10 minutes of just driving the model around through daily life (random content like a Clash of Clans attack, a CS lecture, an MLBB match, etc.)
As the model progresses, implementation time will likely decrease and model-driving time will increase.
I will update this repo with a link sometime later.
What is the difference between this model and other current-day AI YouTube content such as Vedal & Neuro?
The difference is that Energy-Chan is primarily a research prototype that has entertainment as a metric. In addition, there are a few components of Energy-Chan not found in Neuro, such as 3D spatial movement and the memory engine (HLGraphRAG) used for high-performance recall.
Unfortunately, I have opted not to disclose the code for the Auxiliary Neural Engine. Energy-Chan is essentially proprietary, although this may change in the next few years.