My update:
Since I don't have the complete dataset, my guess is that the original issue comes from the following mismatch:
npy_file_new(human_dataset).npy contains 22217 entries
The currently available human data amounts to only 4444 + 1111 = 5555 samples
This mismatch causes the problem described below. Please feel free to correct me. Thanks.
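A small check like the sketch below could confirm this (the directory path and the way the protein IDs are collected are my assumptions, not the project's API): it counts how many of the referenced .pt files actually exist in the processed directory.

import os

def report_missing(required_ids, processed_dir):
    # Compare the protein IDs referenced by the dataset against the .pt files on disk.
    available = {f[:-3] for f in os.listdir(processed_dir) if f.endswith(".pt")}
    missing = sorted(set(required_ids) - available)
    print(f"{len(available)} .pt files present, {len(missing)} referenced IDs missing")
    return missing

# Hypothetical usage, taking the ID lists from the dataset object built in data_prepare.py:
# missing = report_missing(set(dataset.protein_1) | set(dataset.protein_2),
#                          "human_features/processed")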
Original issue:
I am running this project on Google Colab. This might not be an issue with the project itself, but I don't know how to solve it.
Training fails with: IndexError: list index out of range.
Part of the output:

GCNN Loaded
Training on 4444 samples.....
15657
first prot is /content/gdrive/MyDrive/PPI_GNN/PPI_GNN/human_features/processed/3AIH.pt
[]
15657
Second prot is /content/gdrive/MyDrive/PPI_GNN/PPI_GNN/human_features/processed/1DEV.pt
Traceback (most recent call last):
  File "train.py", line 97, in <module>
    train(model, device, trainloader, optimizer, epoch+1)
  File "train.py", line 45, in train
    for count,(prot_1, prot_2, label) in enumerate(trainloader):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 530, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 570, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataset.py", line 471, in __getitem__
    return self.dataset[self.indices[idx]]
  File "/content/gdrive/MyDrive/PPI_GNN/PPI_GNN/data_prepare.py", line 41, in __getitem__
    prot_1 = torch.load(glob.glob(prot_1)[0])
IndexError: list index out of range
The error comes from this code in data_prepare.py:

def __getitem__(self, index):
    prot_1 = os.path.join(self.processed_dir, self.protein_1[index] + ".pt")
    print(index)
    print(f'first prot is {prot_1}')
    print(glob.glob('prot_1'))
    prot_2 = os.path.join(self.processed_dir, self.protein_2[index] + ".pt")
    print(index)
    print(f'Second prot is {prot_2}')
    prot_1 = torch.load(glob.glob(prot_1)[0])  # line 41: raises IndexError when the .pt file is missing
    print(f'Here lies {glob.glob(prot_2)}')
    prot_2 = torch.load(glob.glob(prot_2)[0])
    print(torch.tensor(self.label[index]))
    return prot_1, prot_2, torch.tensor(self.label[index])
It seems that glob.glob(prot_1) returns an empty list, apparently because 3AIH.pt does not exist in the processed directory (note that the debug line print(glob.glob('prot_1')) globs the literal string 'prot_1', so it always prints []). How can I solve this problem?
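Until the complete dataset is available, one possible workaround is to fail with a clearer message, or to filter out pairs whose .pt files are missing before building the DataLoader. Below is a minimal sketch of my own (load_pair is a hypothetical helper, not part of the repository; it only reuses the path layout from data_prepare.py):

import os
import torch

def load_pair(processed_dir, id_1, id_2):
    # Load both protein graphs; raise a readable error instead of the
    # IndexError that glob.glob(...)[0] produces when a file is missing.
    paths = [os.path.join(processed_dir, pid + ".pt") for pid in (id_1, id_2)]
    for path in paths:
        if not os.path.isfile(path):
            raise FileNotFoundError(f"Missing processed feature file: {path}")
    return torch.load(paths[0]), torch.load(paths[1])

Since the constructed paths contain no wildcards, os.path.isfile is enough here and glob is not actually needed.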
Thanks in advance.