extract brain tissue from gtex gene count #5
Comments
Hi, even though I am not the author of this package, I think this is a more general question. You have to download the sample attribute manifest file, find which sample IDs belong to the brain tissues, and then extract those samples from your gene expression file.
Hi,

I don't know if you've looked into the netZooR package that @marouenbg mentioned (I didn't know that support had transitioned to netZooR until I read this comment today), but yes, @Xenophong is correct that you need the sample attribute data. For version 8 it would be: You can then look at column SMTSD and grepl for "Brain" (at least that is what I did), and then match the SAMPIDs in the phenodata to the column names of the gene reads data. I just downloaded the files from GTEx to local and read them in that way. Most of this is directly from yarn::downloadGTEx, but I had to modify some things.
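The steps described above (grepl for "Brain" in SMTSD, then match SAMPIDs against the gene-reads column names) can be sketched in R on toy data. The real GTEx v8 file names (the SampleAttributesDS phenodata text file and the gene_reads .gct.gz) and the toy SAMPIDs below are assumptions on my part, not taken from the thread; check the GTEx portal for the actual files.

```r
# Toy phenodata standing in for the GTEx sample attributes file:
# one row per sample, SMTSD holds the detailed tissue site.
pheno <- data.frame(
  SAMPID = c("GTEX-1-0001", "GTEX-1-0002", "GTEX-2-0001"),
  SMTSD  = c("Brain - Cortex", "Liver", "Brain - Cerebellum"),
  stringsAsFactors = FALSE
)

# Toy gene-reads table standing in for the GCT file: after the two
# header lines, a GCT has Name, Description, then one column per SAMPID.
reads <- data.frame(
  Name          = c("ENSG000001", "ENSG000002"),
  Description   = c("GENE1", "GENE2"),
  `GTEX-1-0001` = c(10L, 20L),
  `GTEX-1-0002` = c(5L, 8L),
  `GTEX-2-0001` = c(7L, 3L),
  check.names   = FALSE
)

# Step 1: find the brain samples via the SMTSD column.
brain_ids <- pheno$SAMPID[grepl("Brain", pheno$SMTSD)]

# Step 2: keep the gene annotation columns plus only those samples
# that actually appear in the expression matrix.
keep <- c("Name", "Description", intersect(brain_ids, colnames(reads)))
brain_reads <- reads[, keep]
```

For the real files you would replace the toy data.frames with e.g. `read.delim()` (or `data.table::fread()`, skipping the two GCT header lines) on the downloaded phenodata and gene-reads files; the subsetting logic stays the same.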
Hi @gmillerscripps,
Hi @gmillerscripps and all,
Marouen
Hello,
I have downloaded GTEx RNA-seq data from this site:
wget https://storage.googleapis.com/gtex_analysis_v8/rna_seq_data/GTEx_Analysis_2017-06-05_v8_RNASeQCv1.1.9_gene_reads.gct.gz
I want to extract only brain tissue gene expression data from this file.
I checked the yarn package; does it have a function that can do this? I found:

```r
checkTissuesToMerge(obj, majorGroups, minorGroups, filterFun = NULL, plotFlag = TRUE, ...)
```

But this function merges tissues based on the gene expression file, whereas I specifically want to extract the brain tissue gene expression data only.
Can you please provide me some guidance on how to proceed with this?
Thank you