automatic vectorization and new plots #7
Open
dowdlelt wants to merge 16 commits into cvnlab:master from dowdlelt:master
Conversation
Adding a comment so I can learn a bit about pull requests
Update GLMdenoisedata.m
getting up to date
If the 4D data is provided in an XYZ x Time format with no zeros, that is, the data were masked and all the zero voxels removed to dramatically reduce memory usage: data1_vec = reshape(data1, [], t_dim); mask_ind = mask(:,:,:,1) ~= 0; data1_vec_reduced = data1_vec(mask_ind,:); and the user provides the original mask image in opt.reconmask, this will now reconstruct the 3D images from the reduced vector at the very end, generating all of the pretty plots for quick data evaluation. Also, this will now plot the timecourses of the PCs beneath the PC maps, though it is not as elegant as Kendrick's method.
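The masking-and-reconstruction round trip described in that commit can be sketched as follows (a sketch only, not the exact PR code; the names data1, mask, t_dim, and mask_ind follow the commit message):

```matlab
% Sketch of the opt.reconmask workflow (variable names from the commit
% message; this is illustrative, not the code as committed).
t_dim = size(data1, 4);                      % number of time points
data1_vec = reshape(data1, [], t_dim);       % voxels x time
mask_ind = mask(:,:,:,1) ~= 0;               % logical index of brain voxels
data1_vec_reduced = data1_vec(mask_ind, :);  % keep only brain voxels

% ... heavy lifting happens on data1_vec_reduced ...

% At the very end, scatter the reduced rows back into a full volume
% so the usual 3D figures can be generated:
recon = zeros(numel(mask_ind), t_dim);
recon(mask_ind, :) = data1_vec_reduced;
recon = reshape(recon, [size(mask_ind) t_dim]);  % back to X x Y x Z x T
```

MATLAB's logical indexing accepts a 3D logical array as a row index into the voxels-by-time matrix, which is what makes the one-line reduction and the one-line reconstruction work.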
Added a bit about the opt.reconmask to the help section and changed the subplot dimensions so that the PC weight map is larger relative to the PC regressor plot.
If a logical mask is provided in the opt.reconmask field, the code will now vectorize the data prior to the heavy lifting and then, using the earlier commit, put things back together. This is cheaper than buying more RAM.
Made the PC figures slightly prettier, and tested the auto vectorizer on my own data. Appears to work, but further validation is needed.
This reverts commit 687fccc.
***Will break other options that use masks (HRFfitmask, etc.)*** Also, uploaded example data by mistake; removed.
If the user provides other masks in opt (hrffitmask, brainexclude, pcR2cutoffmask), the code should now vectorize those as well, but I haven't tested it extensively.
Git isn't hard... it's just that I don't know what I'm doing. This now adds in a few more loops to vectorize masks if they are provided. As I mentioned previously, not extensively tested.
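The extra loops for the auxiliary masks might look something like this (a hedged sketch: the field names come from the commit messages, and I'm assuming the same mask_ind logical index used for the data; the actual PR code may differ):

```matlab
% Sketch: vectorize any auxiliary mask fields the same way as the data.
% Field names are from the commit messages; mask_ind is the logical
% brain-voxel index derived from opt.reconmask.
maskfields = {'hrffitmask', 'brainexclude', 'pcR2cutoffmask'};
for p = 1:length(maskfields)
  f = maskfields{p};
  % skip fields that are absent, empty, or scalar flags (e.g. 1 = "use all")
  if isfield(opt, f) && ~isempty(opt.(f)) && ~isscalar(opt.(f))
    m = opt.(f);
    opt.(f) = m(mask_ind);   % keep only voxels inside opt.reconmask
  end
end
```

Using dynamic field names (opt.(f)) keeps this to one loop instead of three copies of the same reduction.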
Auto vec merge
Been meaning to commit this for a while. I'm not familiar with the whole pull request process, but I figure I need to start somewhere.
I've added a new option, opt.reconmask, which allows the user to provide a binary mask (with the same dimensions as their 3D data) that includes brain-only voxels (i.e. excludes air, and perhaps some skull). Depending on the volume reduction, which can be substantial for whole-brain data, this can provide large speed-ups and also conserves RAM.
In addition, I added plots of the PC timecourses below the PC figures, because I just like looking at more of my data. The recon mask is also used to convert the data back to X x Y x Z format for figure creation.
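A call using the new option might look like the sketch below. This is hypothetical: the argument order follows GLMdenoisedata's documented signature as I understand it, and brainvol, design, data, stimdur, and tr are placeholder names for the user's own inputs.

```matlab
% Hypothetical usage of the opt.reconmask field added by this PR.
% brainvol, design, data, stimdur, tr are the user's own variables.
mask = brainvol > 0;            % logical X x Y x Z brain-only mask
opt = struct();
opt.reconmask = mask;           % new option: vectorize, then reconstruct
results = GLMdenoisedata(design, data, stimdur, tr, [], [], opt, 'figures');
```

With opt.reconmask set, the heavy lifting runs on the reduced voxels-by-time matrix, and the mask is reused at the end to rebuild full 3D volumes for the figures.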
I am not at all a proficient MATLAB user - so I imagine my commits are inelegant, but as far as my testing goes, I don't believe I have broken anything (fingers crossed).