Using Transformer-based models for solving the HP protein folding problem.
SakuraHoshizaki/Transformer_HP_3d
Enhancing Reinforcement Learning in 3-Dimensional Hydrophobic-Polar Protein Folding Model with Attention-Based Layers

This repository contains the implementation and experimental results for the paper "Enhancing Reinforcement Learning in 3-Dimensional Hydrophobic-Polar Protein Folding Model with Attention-Based Layers". The study explores the use of attention-based mechanisms to improve reinforcement learning strategies for solving the HP protein folding problem in 3D.
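For background, the 3D HP model places a sequence of hydrophobic (H) and polar (P) residues on a cubic lattice and scores a fold by its H-H contacts. The repository's exact energy convention is not shown here, but a common formulation assigns -1 for every pair of H residues that are lattice neighbours without being consecutive in the chain; a minimal sketch:

```python
from itertools import combinations

def hp_energy(sequence, coords):
    """Energy of a 3D HP conformation: -1 for each pair of
    hydrophobic (H) residues that are adjacent on the cubic
    lattice but not consecutive in the chain."""
    energy = 0
    for i, j in combinations(range(len(sequence)), 2):
        if sequence[i] == 'H' and sequence[j] == 'H' and j - i > 1:
            # Manhattan distance of 1 means the residues are lattice neighbours
            if sum(abs(a - b) for a, b in zip(coords[i], coords[j])) == 1:
                energy -= 1
    return energy

# Tiny illustrative example: the 4-mer "HPPH" folded into a square,
# so the two H end-residues become lattice neighbours
seq = "HPPH"
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(hp_energy(seq, square))  # -> -1
```

Lower (more negative) energies correspond to better folds, which is why the sample results below report values such as -11 and -49.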


Repository Structure

  • Attn_DQN_3d_hp.py
    This file contains the implementation of the attention-enhanced Deep Q-Network (DQN) used in the experiments.
    Note: Parameters, including the protein sequence, need to be adjusted directly within the script as there is currently no command-line interface.

  • Best_Runs/
    This folder contains:

    • Training Logs: Detailed logs for all training sessions.
    • Best Results: The best (lowest-energy) solutions found. All best solutions with identical energies but differing structures are stored in the best_result/ subfolder; their coordinates are saved in best_result/best_results_log.csv.
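The attention-enhanced DQN in Attn_DQN_3d_hp.py is presumably built with a deep-learning framework; the core operation an attention-based layer adds is scaled dot-product attention, which can be sketched framework-free (names here are illustrative, not taken from the script):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V.
    Q, K: arrays of shape (seq_len, d_k); V: (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Self-attention over a toy "protein state": 5 residues, 8-dim embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (5, 8)
```

Letting each residue attend to every other residue is what gives such a layer a global view of the partial fold, in contrast to the purely local state summaries a plain feed-forward DQN sees.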

Sample Best Results

Here are some examples of the best results obtained during the experiments:

Length: 20, Energy = -11

Length 20 Structure

Length: 24, Energy = -13

Length 24 Structure

Length: 60, Energy = -49

Length 60 Structure

Length: 60, Energy = -49 (Alternative Structure)

Length 60 Structure
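The exact column layout of best_results_log.csv is not documented here; assuming one row per residue with x, y, z columns, a minimal loader might look like this (the column names are an assumption, adjust them to match the actual file):

```python
import csv
import io

def load_coordinates(csv_text, cols=("x", "y", "z")):
    """Parse residue coordinates from CSV text into a list of
    (x, y, z) integer tuples. The column names are assumed,
    not taken from the repository."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [tuple(int(row[c]) for c in cols) for row in reader]

# Hypothetical sample matching the assumed format
sample = "x,y,z\n0,0,0\n1,0,0\n1,1,0\n"
print(load_coordinates(sample))  # [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
```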


Installation

Install the dependencies using:

  pip install -r requirements.txt
