- Project Description
- Project Structure
- Requirements
- Installation
- Running
- Known issues and future works
- Contributors
The goal of this project is to create a planner using Prolog logic inference and the probabilistic reasoning offered by ProbLog.
The main folder is `prolog_project`. It contains the ROS nodes (the motion node and the planner node):
- `scripts`: contains the two nodes plus the utilities
- `msg`: contains the `.msg` file for ROS communication

`block_world.pl` is the Prolog file, the core of this project. `python_node_poc.py` is a simple proof of concept for the pyswip wrapper for Prolog.
To install the requirements, follow the Installation section. The requirements can be found in the `requirements.txt` file inside the `llm_kb_generation` folder. For the Prolog-only version you will only need the SWI-Prolog interpreter. I recommend using Ubuntu 20.04 (I used it for developing the project).
Run pip3 to install the requirements:

```shell
python3 -m pip install -U -r llm_kb_gen/requirements.txt
```
- For testing the Prolog-only version, first install the SWI-Prolog interpreter:

```shell
sudo apt-add-repository ppa:swi-prolog/stable
sudo apt update
sudo apt install swi-prolog
```

- Clone the project wherever you want:

```shell
git clone https://github.com/davidedema/prolog_planner.git
```

- Load the file with the swipl interpreter:

```shell
cd ~/prolog_planner
swipl block_world.pl
```
YAML to JSONL Conversion for GPT Fine-Tuning

Conversation data between a user and an assistant is stored in YAML files. Fine-tuning large language models like GPT requires well-formatted data: YAML is a human-readable, structured format, but GPT models expect the JSONL format for fine-tuning. The YAML files therefore need to be converted into JSONL to ensure compatibility with the fine-tuning pipeline.
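For reference, each line of a JSONL file is one standalone JSON object. Assuming the OpenAI chat fine-tuning schema (the exact field names produced by the converter script may differ), a converted line would look like:

```json
{"messages": [{"role": "system", "content": "You are a planning assistant."}, {"role": "user", "content": "How do I build a pillar?"}, {"role": "assistant", "content": "Use the pillar/7 rule."}]}
```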
Execution: run the converter script, specifying the input YAML file(s) and desired output directory:

```shell
python3 dataset_generator.py -y <path_to_yaml_file_1> <path_to_yaml_file_2> <path_to_yaml_file_3>
```

To shuffle the data during conversion:

```shell
python3 dataset_generator.py -y <path_to_yaml_file_1> <path_to_yaml_file_2> <path_to_yaml_file_3> -s true
```
You can run the knowledge creation by calling the Python script `gpt_convo.py`. It uses few-shot learning to teach the LLM how to respond. The examples are in the `few-shots.yaml` file, but other files can be added with the `-y`/`--yaml-files` argument:

```shell
python3 llm_kb_gen/gpt_convo.py -y <path_to_yaml_file_1> <path_to_yaml_file_2> <path_to_yaml_file_3>
```

If no YAML file is passed, the default one is used.
Notice that the structure of the YAML file should be:

```yaml
entries:
  system_msg:
    role:
    content:
  convo:
    0:
      Q:
        role:
        content:
      A:
        role:
        content:
    1:
      Q:
        role:
        content:
      A:
        role:
        content:
```
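The conversion of one such entry can be sketched as follows. This is a minimal sketch, assuming the YAML above has already been parsed into a Python dict (e.g. with PyYAML's `yaml.safe_load`) and that the target is the `{"messages": [...]}` chat fine-tuning schema; the actual `dataset_generator.py` may differ in both respects.

```python
import json

# A parsed YAML entry as a Python dict (assumed structure, mirroring the
# entries/system_msg/convo layout shown above).
entry = {
    "system_msg": {"role": "system", "content": "You are a planning assistant."},
    "convo": {
        0: {
            "Q": {"role": "user", "content": "How do I build a pillar?"},
            "A": {"role": "assistant", "content": "Use the pillar/7 rule."},
        },
    },
}

def entry_to_jsonl_line(entry):
    """Flatten one YAML entry into a single JSONL line of chat messages."""
    messages = [entry["system_msg"]]
    for idx in sorted(entry["convo"]):   # keep the conversation turns in order
        turn = entry["convo"][idx]
        messages.append(turn["Q"])       # user message
        messages.append(turn["A"])       # assistant message
    return json.dumps({"messages": messages})

line = entry_to_jsonl_line(entry)
print(line)
```

Writing one such line per entry, newline-separated, yields the JSONL dataset.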
In order to create a pillar, use the `pillar/7` rule. It takes 7 parameters:
- x: x coordinate for the pillar generation
- y: y coordinate for the pillar generation
- z: z coordinate for the pillar generation
- High: pillar height
- Width: pillar width
- Depth: pillar depth
- Actions: our "output" variable

It returns in the output variable the plan that the robot has to execute in order to build the pillar.

For example, let's create a pillar with height = 0.1 at (1, 0, 0):

```prolog
pillar(1,0,0,0.1,0.05,0.05,A).
```

NB: in Prolog every instruction ends with a dot '.'.
After the instruction we will see an output like this:

```prolog
?- pillar(1,0,0,0.1,0.05,0.05,A).
A = [rotate(b1, 0.27, -0.26, 0.685, 1), move(b1, 0.27, -0.26, 0.685, 1, 0, 0), move(b2, 0.41, -0.26, 0.685, 1, 0, -0.05), link(b2, b1)] .
```

We can inspect the freshly created pillar with the instruction `listing(block/13).`
- The blocks do not stack in simulation (they jitter) -> Solved
- Get the blocks' info with machine learning methods (e.g. neuro problog)
- Optimize the makespan by selecting the blocks that are faster to build
- Enrico Saccon: [email protected]
- Ahmet Tikna: [email protected]
- Syed Ali Usama: [email protected]
- Davide De Martini: [email protected]
- Edoardo Lamon: [email protected]
- Marco Roveri: [email protected]
- Luigi Palopoli: [email protected]