[Bug report] - ArrowNotImplementedError: Support for codec 'snappy' not built #212
I tried the following code snippet and it ran fine:

Here's the output I obtain:

Could you please confirm you are using the same version?
Thanks, Gaelle. When I run the piece of code you shared above, with the eid you have, I get the following error:
This is the exact code I copy and pasted:
Now, if I immediately run just the last line again:
I just tried the following, but still ran into the same problem; perhaps it may provide some additional insight. I removed the FlatIron\danlab\ folder from my local computer and I ran:
After some analysis, this turns out to be an environment issue with the latest pyarrow conda package: ContinuumIO/anaconda-issues#12164

The error message was cryptic because of an `except` clause in brainbox that swallowed the printout; this has been fixed. The package seems to be installed from PyPI in the default channel. So far we have tried forcing the conda-forge channel as follows. Awaiting feedback.
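The exact channel-forcing command did not survive the copy above; a plausible sketch of what "forcing the conda-forge channel" looks like (package name and flags are my assumptions, not taken from the thread):

```shell
# Sketch: replace the defaults-channel pyarrow with the conda-forge build.
# Guarded so it degrades gracefully when conda is not on PATH.
if command -v conda >/dev/null 2>&1; then
    conda remove -y pyarrow
    conda install -y -c conda-forge pyarrow
else
    echo "conda not found; activate the affected environment first"
fi
```

Run this inside the environment that raises the snappy error, not the base environment.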
So just uninstalling and re-installing pyarrow works.
I am having a similar problem. I followed the suggestion of uninstalling and re-installing pyarrow, but the issue persists. Below is the error message I received. Any suggestions? Thanks.

```
OSError                                   Traceback (most recent call last)
~\anaconda3\lib\site-packages\pandas\io\parquet.py in read_parquet(path, engine, columns, use_nullable_dtypes, **kwargs)
~\anaconda3\lib\site-packages\pandas\io\parquet.py in read(self, path, columns, use_nullable_dtypes, storage_options, **kwargs)
~\anaconda3\lib\site-packages\pyarrow\parquet.py in read_table(source, columns, use_threads, metadata, use_pandas_metadata, memory_map, read_dictionary, filesystem, filters, buffer_size, partitioning, use_legacy_dataset, ignore_prefixes)
~\anaconda3\lib\site-packages\pyarrow\parquet.py in read(self, columns, use_threads, use_pandas_metadata)
~\anaconda3\lib\site-packages\pyarrow\_dataset.pyx in pyarrow._dataset.Dataset.to_table()
~\anaconda3\lib\site-packages\pyarrow\_dataset.pyx in pyarrow._dataset.Scanner.to_table()
~\anaconda3\lib\site-packages\pyarrow\error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~\anaconda3\lib\site-packages\pyarrow\error.pxi in pyarrow.lib.check_status()

OSError: NotImplemented: Support for codec 'snappy' not built
```
I can confirm, uninstalling and re-installing didn't help unfortunately. Same stack trace as oluwafemi2016. pyarrow version after re-install: 4.0.1
Did you two install via conda, conda-forge, or pip?
It's the re-install via conda that didn't resolve the problem.
@oluwafemi2016 Did this work for you too?
I had the same issue; I also tried `conda install pyarrow` and the conda-forge channel, and both failed. But with `python -m pip install pyarrow` it works fine now, thanks.
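After reinstalling, a quick sanity check can confirm whether the installed build actually includes the snappy codec before retrying the parquet load. This is a sketch, assuming `pyarrow.Codec.is_available` is present (it exists in recent pyarrow releases); the helper name is mine:

```python
def snappy_status():
    """Report whether this environment's pyarrow build ships the snappy codec."""
    try:
        import pyarrow as pa
        # Codec.is_available returns False when the build omitted the codec,
        # which is exactly the condition behind the OSError above.
        ok = pa.Codec.is_available("snappy")
        return f"pyarrow {pa.__version__}: snappy available = {ok}"
    except ImportError:
        return "pyarrow is not installed in this environment"

print(snappy_status())
```

If this prints `snappy available = False` even after reinstalling, the broken defaults-channel build is still being picked up and the pip wheel route above is the workaround.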
It seems that I'm not able to load clusters for any of the repeated site recordings. Perhaps this may be related to another issue posted here (#175), but I'm not sure.
Specifically, when I try to run the brain_region_ephys_variability_between_labs.py script in the paper-reproducible-ephys repository, I get a key error, because the DataFrame "rep_site" is empty. After looking into this, it seems the problem is in the following line:
```python
spikes, clusters, channels = bbone.load_spike_sorting_with_channel(eid, one=one)
```
When I run this line for any of the repeated site recordings, I get messages such as this one, resulting in an empty `clusters` variable:
I appreciate any help on what's causing this issue and how to resolve it. Thank you.