
sqlalchemy: unable to open database file #339

Open
seelikat opened this issue Mar 5, 2021 · 4 comments

Comments

seelikat commented Mar 5, 2021

Describe the bug
I've solved a number of other issues while trying to work with this module, but now I'm stuck on an sqlalchemy error (unable to open database file) that I can't get past, because I'm not even sure at what point it happens.

Could you let me know what I am doing wrong, please? (I had to use nibetaseries from Docker, since the pip version tried to install a number of dependencies, of which pandas failed to build.)

The stacktrace is attached. I would really like to use nibetaseries.

To Reproduce

docker run -it --rm -v /my/data/dir/fmriprep:/bids_dir \
                     -v /my/out/dir/betaseries:/out_dir  \
                     -v /my/out/dir/betaseries/tmp:/work_dir \
                     hbclab/nibetaseries:v0.6.0 \
                     nibs -c white_matter csf cosine01 cosine02 cosine03 cosine04 cosine05 cosine06 cosine07 \
                           --participant-label 01 \
                           -w /work_dir \
                           /bids_dir \
                           fmriprep \
                           /out_dir \
                           --nthreads 32 \
                           --estimator lss \
                           --hrf-model 'glover' \
                           participant

OS
Ubuntu

nibetaseries version
v0.6.0

stacktrace.txt

@PeerHerholz (Contributor)

Ahoi hoi @kateiyas,

Thanks a lot for reporting this issue, and sorry for the late response.

Hm, I've never seen this one before; it looks wild. The only thing I can think of ATM is that it's permissions-related. Do you have read/write permissions within the local paths you mount? Also looping in the grandmaster @jdkent.
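
If it helps, a quick check on the host (purely hypothetical; paths taken from your reproduce command, adjust to your setup) would show whether the current user can read/write the directories that get mounted into the container:

import os

# Hypothetical sanity check run on the host: can the current user read/write
# the directories that get mounted into the container?
for p in ["/my/data/dir/fmriprep", "/my/out/dir/betaseries", "/my/out/dir/betaseries/tmp"]:
    print(p, "read:", os.access(p, os.R_OK), "write:", os.access(p, os.W_OK))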

seelikat (Author) commented Mar 10, 2021

@PeerHerholz

Thank you for your response. I have rechecked whether it could be related to permissions, but I have rw rights, so that can't be the case.

Did you see that the error seems to occur when creating a BIDSLayout via pybids? It might be related to pybids after all, maybe to BIDS validity. I am working with fmriprep data inside bids/derivatives/fmriprep (where derivatives/fmriprep is appended to the bids directory path by nibetaseries). This derivatives/fmriprep directory has the same structure as the raw directory (which should be BIDS-valid), but does not contain all the metadata (which I would expect to be read from bids/raw). Could the problem reside here? (Just guessing.)
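
For what it's worth, something along these lines might reproduce the failure outside of nibetaseries (a rough sketch only; the derivatives discovery and the cache location are my guesses, not what nibetaseries does internally):

from bids import BIDSLayout

# Rough sketch: index the dataset roughly the way nibs would, to see whether
# the sqlite error already appears while pybids builds its layout cache.
# Paths are based on my docker mounts; adjust as needed.
layout = BIDSLayout(
    "/my/data/dir/fmriprep",                                   # raw BIDS root (mounted as /bids_dir)
    derivatives=True,                                          # also index <root>/derivatives
    database_path="/my/out/dir/betaseries/tmp/pybids_cache",   # directory for the sqlite cache
)
print(layout)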

@PeerHerholz (Contributor)

Hi @kateiyas,

As last time: sorry for the late response.

Ah yeah, that sounds like something we should definitely check. Sorry, I completely missed this before. Could you maybe elaborate a bit more on your data and the structure it's in? nibetaseries expects a raw BIDS dataset to be mapped to bids_dir and, as you mentioned, builds the path to the preprocessed data via the respective identifier, here fmriprep. Additionally, as you also mentioned, everything should be BIDS-valid. What's gathered from the raw data is meta-data from the json sidecar files and paradigm information (onsets, trial type, duration). Together with the preprocessed func files from derivatives/, this information is used to define and estimate the model(s). So, assuming your data looks like this (for one participant only):

my_data/
  dataset_description.json
  derivatives/
    fmriprep/
      pipeline_description.json
      sub-01/
        anat/
        func/
  participants.json
  participants.tsv
  sub-01/
    anat/
    func/

then the following nibetaseries command should work:

docker run -it --rm  -v my_data:/bids_dir \
                     -v my_data/derivatives:/out_dir  \
                     -v my_data/tmp:/work_dir \
                     hbclab/nibetaseries:v0.6.0 \
                     nibs -c white_matter csf cosine01 cosine02 cosine03 cosine04 cosine05 cosine06 cosine07 \
                           --participant-label 01 \
                           -w /work_dir \
                           /bids_dir \
                           fmriprep \
                           /out_dir \
                           --nthreads 32 \
                           --estimator lss \
                           --hrf-model 'glover' \
                           participant
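
To double-check that those pieces are actually picked up, a quick look with pybids directly might also help (a rough sketch against the example layout above, not what nibs runs internally):

from bids import BIDSLayout

# Rough sketch: with the example layout above, confirm that the events files
# from the raw data and the preprocessed bold files from derivatives/fmriprep
# are actually indexed for sub-01.
layout = BIDSLayout("my_data", derivatives=True)
print(layout.get(subject="01", suffix="events"))                     # onsets / trial_type / duration
print(layout.get(subject="01", suffix="bold", scope="derivatives"))  # preprocessed func files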

Overall, I'm not super sure it's really BIDS/pybids related, as those should lead to more informative errors than the one you're receiving, IIRC. However, I've been wrong before, hehe... I hope I understood you correctly and explained my pointers sufficiently; if not, I'm very sorry, and please don't hesitate to ask.

HTH, cheers, Peer

@Tsjitsjikow

Hi @kateiyas and @PeerHerholz,

Recently I ran into the same sqlalchemy error (unable to open database file). Though the reasons it occurred may be different for me, I figured that sharing my experience and what ended up working might bring you closer to a solution.

My setup is an HPC environment (Ubuntu) in which I have my own VM to process data on a shared data drive. I have been using the Python module of nibetaseries v0.6.0.

One cause of the error seemed to be related to repeated troubleshooting/failed attempts, which left the database files work_dir/dbcache.sqlite and fmriprep/fMRIprep.sqlite in a messy state. Another issue was that sometimes nibetaseries couldn't read/write to the database, as I would get the following error: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) disk I/O error. I thought this could be related to the data drive being busy reading/writing the actual data.

The following steps ended up working for me:

  1. Run nibetaseries on a new bids folder (no previous databases)
  2. Move the newly created databases to a nibs_work folder in the home folder of the VM (i.e. no longer on the data drive)
  3. Create symlinks to the moved databases in the original locations (work_dir/dbcache.sqlite and fmriprep/fMRIprep.sqlite)
  4. Clear the databases (remove all entries)
  5. Re-run nibetaseries

After this, I could successfully run nibetaseries on the entire dataset.
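
For reference, steps 2 and 3 boiled down to roughly the following (a sketch only; the paths are illustrative, not the exact ones from my setup):

import shutil
from pathlib import Path

# Rough sketch of steps 2-3: move the sqlite caches off the shared data drive
# into a nibs_work folder in the home directory, then symlink them back to
# their original locations so nibetaseries still finds them.
nibs_work = Path.home() / "nibs_work"
nibs_work.mkdir(exist_ok=True)

for db in [Path("/work_dir/dbcache.sqlite"),
           Path("/bids_dir/derivatives/fmriprep/fMRIprep.sqlite")]:
    target = nibs_work / db.name
    shutil.move(str(db), str(target))  # step 2: relocate the cache database
    db.symlink_to(target)              # step 3: leave a symlink in the original spot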

Not sure if the above is helpful; hopefully it gives a clue as to what the issue could be here.

Cheers,

Leo
