Install on Linux #1
There are no differences in the installation process except for the CUDA DLLs; you may want to install the Linux versions of CUDA by following https://www.tensorflow.org/versions/r1.2/install/install_linux, under the "NVIDIA requirements to run TensorFlow with GPU support" section. If you decide to use the CPU version, you can skip this. Not everything in requirements.txt is actually required: I just ran "pip freeze", which included tons of my other installed libraries, then cleaned up some lines manually. All the rest should be in the pip repository and easily installable on Linux or Mac with "pip install". If some library still says it's unavailable, try skipping it; maybe it's not really needed. Only a few key libraries are truly required, such as numpy/tensorflow/rtlsdr/keras, and those exist for the whole zoo of operating systems.
I have tensorflow-gpu and a complete set of typically used libraries on Ubuntu 16.04, which works for pretty much anything machine-learning related that needs TF. It did not work on this occasion. I rebuilt a new virtual environment from the requirements.txt list. It still does not work and throws the same error, so two different set-ups throw the same error. I don't think it is the RTL-SDR in my case either; it seems to work. TensorFlow does not like reading the 1,200,000-byte length and throws error code -8. So I just used the sample rate as is; that has now led to new errors with the numpy array reshape function. I am just going to need to plod through, but predict_scan.py is not working on Ubuntu as of today. I have a 12 GB GPU, so it's not out of memory. If anyone has any fresh insights, they would be welcome, but at a guess I suspect some major rework of predict_scan.py is required to satisfy my particular case. FYI on requirements.txt: many of the pinned versions are ahead of the Linux release numbers, so it asks for versions not yet released; just remove the version requirements where it breaks (all the ==x.x.x) and it will pull the latest available Linux release. Some packages are also Windows-only; just comment them out.
Nightmare on Elm Street! DO NOT install requirements.txt if you already have Anaconda installed! Because I was in a virtual environment after installing requirements.txt, it wasn't apparent that it had completely 'smoked' Anaconda. The conda command refused to work and gave errors about packages being installed by pip. It was, as I discovered, irreversible. I had to destroy all my virtual environments and completely delete Anaconda, owing to a bug with openssl that meant I could not write over the old install at all. I had a war with tensorflow-gpu and CUDA, which I eventually resolved. I now have tensorflow-gpu running on 1.5.0 and CUDA 9.0. Still, it produced the same error. Apart from scipy and pyrtlsdr, I don't think it needed much else to get back to where I was previously. What a day!
I have tried to install it on Ubuntu 16.04 x64 and got stuck with the same error -8 while reading from the rtlsdr. It probably works somewhere, as discussed in the pyrtlsdr bug tracker, but not with my dongle.
It's not a dongle thing as far as I can see. I am going through the code to see where it breaks. Before this, it failed after the arguments. The dongle itself is a one-shot thing: it must be closed and opened again from what I have seen, which slows things down. It could be pyrtlsdr; it's on the list. I will read the issues. Thanks for getting back; at least we see the same issue, that's a start :)
Did the same with latest Kali Linux, just with some modifications:
And got the same result. So I'm pretty sure it's a pyrtlsdr/my-dongle error, because gqrx/gnuradio works well on both Ubuntu and Kali.
OK. I have a HackRF unit as well, but I only got the gear this week, so I have a steep learning curve ahead. I could try that unit, but bending the code might take some time. I note that the error we have is an 'OS' error, not the 'IO' error reported in the pyrtlsdr issues, at least from what I've read so far. I ran rtl_test and it seems fine. I even got it to produce audio, but when it comes to using Python and the RTL, that's where the wheels fall off on Linux.
Finally I figured it out. The rtl-sdr driver under Linux won't return from read_bytes(value) if the value does not divide into powers of 2 (or a multiple of the read buffer size) or so. Rewriting read_samples() a bit solves the problem, and then it works well.
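The fix described above can be sketched in Python: pad the requested sample count up to a size the driver accepts, then truncate the result. This is a hypothetical reconstruction, not the repository's exact patch; the helper names (`next_valid_length`, `read_samples_safe`) and the 512-sample block size are assumptions.

```python
import numpy as np

def next_valid_length(n, block=512):
    # librtlsdr delivers data in fixed-size USB buffers, so requesting an
    # arbitrary sample count can fail (error -8) under Linux. Round the
    # request up to the nearest multiple of the block size.
    return ((n + block - 1) // block) * block

def read_samples_safe(sdr, num_samples, block=512):
    # sdr is assumed to be a pyrtlsdr RtlSdr instance. Read a padded,
    # driver-friendly number of samples, then truncate to what the
    # caller actually asked for.
    padded = next_valid_length(num_samples, block)
    samples = sdr.read_samples(padded)
    return np.asarray(samples[:num_samples])
```

With a helper like this, `read_samples_safe(sdr, 12500)` should succeed where a bare `sdr.read_samples(12500)` can error out on Linux.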
I have noticed that the predicted signal class at some frequencies was shown as 'tv', not the correct 'wfm'. That is expected here :), due to the folder structure. I recommend not using the outdated first version; prefer the keras one. It should also run well, just do not forget:
Fabulous! I have mine working on the GPU at last. Thanks for the update; it had to be a data thing. I got nothing at first, so I changed the antenna for something better. My high-gain antenna will need moving; the number cruncher is too far from it, it's raining, so it's probably not happening today. Running predict, it's absolutely convinced that every station it has received on the FM band is DMR, which I interpret as Digital Microwave Radio; if that's what you mean, it will not be at ~100 MHz. I have some 'real' receivers, so I will take a listen and see what it has got mixed up on. But I suspect I will need to train this for local conditions; let's see how that goes. Keras is installed!
But it's still finding wfm stations on their correct frequencies, right? :) Yes, it's better to train a new model using your local antenna etc. OK, here are some quick steps to run the keras version now. First, edit the read_samples() function inside prepare_data.py (to avoid driver errors under Linux): replace
with
Then scroll down prepare_data.py to find
lines. Edit all of them, setting frequencies and labels according to your local ether, save the file, and then run it. It should collect some training samples. Sample .npy files should be 200,080 bytes each. Before running train_keras.py you also need to edit it a bit: edit the read_samples() function the same way we did in prepare_data.py :). Then scroll down to the end of the file to find
lines. Edit all of them as well, entering frequencies and correct labels, but use DIFFERENT frequencies from the ones stored in prepare_data.py: the trained model should be evaluated using new data from frequencies the neural network has never seen before.
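For reference, the lines being edited are collect_samples(frequency, "label") calls (one appears later in this thread as collect_samples(940000000, "tetra")). Below is a minimal sketch of the kind of frequency/label table you end up typing in. Every frequency here is an illustrative placeholder, and `collection_plan` is a hypothetical helper, not part of the repository:

```python
# Illustrative (frequency, label) pairs; replace the frequencies with
# real stations from your local ether. Labels must match the class
# folder names under training_data/.
STATIONS = {
    "wfm":   [104_500_000, 89_300_000],  # placeholder broadcast FM carriers
    "tetra": [392_000_000],              # placeholder TETRA carrier
}

def collection_plan(stations):
    # Expand the dict into the argument pairs that a call like
    # collect_samples(freq, label) would receive, one tuple per capture.
    return [(freq, label)
            for label, freqs in stations.items()
            for freq in freqs]

for freq, label in collection_plan(STATIONS):
    print(label, freq)
```

Keeping the plan in one structure makes it harder to accidentally reuse a training frequency in the evaluation list.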
I more or less did that and got it to create the .npy files, but I will follow your method verbatim. I hit a snag. Given the time it takes to preprocess, could you tell me what len(*.npy) should look like, and the np.shape of *.npy? I had the training blow a gasket trying to reshape it to (128, 2). It was saying it was a list of 54468 or something like that size, not a numpy array, during training. I took a look at the *.npy files; they were complex-number arrays, while the iq_samples generated by preprocess were floats. But the *.npy files were only half the size, around 25xxx bytes. I don't have a baseline to compare against for what size and shape I should have, but I know it will be 2D into keras. I will groom this again tomorrow with the above; we are out of sync in time zones. I suspect it's on the second round now, so I won't disrupt it. ....Cheers
I have been going around in circles. I thought I had an SNR issue when I started training my models; it was all over the place. I have ripped out all the training and testing files I made and put yours back in. It seems there was only 1 *.npy file in each folder, while I was creating dozens of them in my training for some reason. So now I am studying your original output again, to see what I can do about SNR, or whether it's an FM format issue: I think our RDS carrier is different from the U.S. model. In any case there must be minor differences, or prepare.py has placed the WFM files in the DMR folder on your end, as it has located all the local FM stations but labeled them as DMR. There are stations on each frequency I checked, so it seems the label is wrong. Still checking to see what is going on.
OK, this is what I have, before I forget:
step-by-step
after fit().
The Dataset.py:94 error tells me that you have a very small number of training samples (fewer than 16). I repeat: you need a lot, for example 0.5-1 gigabyte per class, which equals 2000-4000 .npy files per class. Each .npy file contains 12,500 I and 12,500 Q values interleaved, stored as 64-bit floats. Also, the neural network does not care about RDS or anything modulated inside, as we are not extracting such features; the network just learns the "projected shape of the cut signal". And yes, SNR is very important, especially when working with signal from rtl-sdr dongles. I recommend always setting the dongle gain to 'auto' until you completely understand what's going on in the software.
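A quick way to sanity-check a training file against the format described above (12,500 I plus 12,500 Q values interleaved as 64-bit floats) is to load it with NumPy. The snippet builds a dummy file for demonstration; on a real setup you would point np.load at one of your own training_data/*/*.npy files instead:

```python
import os
import tempfile

import numpy as np

# Build a dummy file with the shape the training samples should have:
# 12,500 I + 12,500 Q values interleaved as float64.
iq = np.zeros(25_000, dtype=np.float64)
path = os.path.join(tempfile.gettempdir(), "sample_check.npy")
np.save(path, iq)

loaded = np.load(path)
assert loaded.dtype == np.float64
assert loaded.size == 25_000     # 12,500 I + 12,500 Q
assert loaded.nbytes == 200_000  # payload only; the .npy header adds a
                                 # few more bytes on disk, which matches
                                 # the ~200,080-byte files mentioned above
print(os.path.getsize(path))
```

If your files are half that size or hold complex values, the preprocessing step is writing a different format than the training script expects.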
Ah right, I see. That makes sense: I have the model, but I assumed I had all the original *.npy files too. So the missing ingredient is all the *.npy files. At no time has the GPU broken a sweat doing this stuff; it's super quick. OK, I will run preprocess tomorrow, blow out the folders, and checkpoint. I just need to find some more interesting stuff to feed it. Cheers
Oh, that's a great deal of work. After the whole thing runs successfully: shrink the bandwidth and classify different FSK/BPSKs. Classify multiple signals present at the same time. Feed CW samples to an LSTM network and decode Morse. Decode voice from samples. :) Decode patterns from stars. Automatically choose the best modulation mode while communicating digitally...
My 40 m dipole is upstairs; it does well on WWV and HF, but I am not lugging all of this upstairs. In any case, I want to bend the code so I can do more QAM demodulation with the HackRF unit in the ISM bands. I could use the RSA306, but Tektronix never produced much info or a driver for the IQ; too-hard basket. So once I am comfortable with this RTL, I will try to get the other one working. I did voice using WaveNet; I was feeding DTMF into it as an MNIST-style test. Strange results, not what I would have expected. But this is more predictable.
I modified train_keras here, in def read_samples(freq), and here, with only check(88700000, "wfm"), and it just tests everything as 'tetra': 88.7 tetra 99.95344877243042. Still no checkpoint files generated; I must have missed something. I changed prepare_data and it created all the *.npy files OK, and train_keras trained OK for 50 iterations; it looked fine. Not sure what I missed, as it's not throwing errors. I am not sure what the role of data2 is, by the way; there may be an issue in there. It references iq_samples, but nothing from the RTL_SDR specifically that I could see.
This is from prepare_data line 44: collect_samples(940000000, "tetra"). These are the directories in training_data and testing_data; there are 750 files in each directory, e.g. ~/cnn-rtlsdr/training_data/wfm. Now clearly that's not 2000; that's all it created, so I must need to change collect_samples?
The testing_data folder is not really needed for the keras version, as it takes 30% of training_data to use as testing data. What are the keras statistics after training: loss, acc, val_loss and val_acc? Did you set the ppm value correctly? I am sharing my training data, which should give: loss: 0.01, acc: 0.9971, val_loss: 0.0193 and val_acc: 0.9961. I just did a fresh train (on Windows) and it correctly predicted fresh signals from the ether. My TV carrier is SECAM, so that would differ, but the network should predict tetra & wfm correctly on your ether.
.....and the answer is... h5py! A casualty of the requirements.txt disaster, where I had to rebuild the system from scratch, thus losing many libraries I had built up over time. So where does that leave us?
So it's a mystery that during training keras_train came up with half-decent answers but train.py said it's all tetra. But I can't run predict on the h5py model as-is without changes to truly run keras on its own.
I removed compile and fit and ran it; it's a bit all over the place with predictions. SECAM is an old analog European standard; we have digital TV (DVB) here. Also, GSM is gone here, I think; it's wideband 3G or 4G, so I am sampling multi-carrier or CDMA wideband QAM systems, not analog. So I will hit it with more samples today, which takes hours to do. I am not sure I have the gain right. I have a preamp on the rtl-sdr; it may be causing intermod on some signals. Hard to say with digital; it sounds OK on FM. So if that fails to predict accurately, I will just concentrate on getting FM right as a baseline, then attempt to adjust for QAM-based signals. The decimation may be too harsh for signals spanning more than 5 MHz; I may need more sample bandwidth.
Hi
Those are just info and deprecation warnings. It may not have detected a signal at all. First I would do a sanity check as follows on the command line: rtl_fm -f 92.7e6 -M wbfm -s 240000 -r 48000 - | aplay -r 48k -f S16_LE. Change 92.7e6 to a local station you have. Do you hear sound? If so, proceed. I can see it has PLL warnings, so it should be OK. If there is no sound, sound-card issues are not our problem with raw samples, but you should get the radio working; it's essential. Check that you have 'aplay'; check Stack Overflow for more details. I would ensure you have h5py installed: use 'pip freeze' on the command line and look for h5py==2.7.0. Are you using Python 3.6? I can't speak for 2.7; I can't say whether it will or won't work, but given the number of print statements in the code, you would hit errors very quickly. Did you install TensorFlow for Python 3+? I see it's CPU-only; no issue with that, certainly easier. You could be running Py 3.6 on the wrong TensorFlow. Fire up the Python interpreter and issue 'import tensorflow'. Did that work? If not, sort that out. I would do a 'pip freeze' check against requirements.txt. Also, you may want to use 'conda list' and see what Anaconda installed. That should keep you busy for a while. Also, I have a weak signal: I would take the RTL stick off 'auto' in the prepare_data.py code and replace it with 60, which as far as I can tell is max gain (line 42 in my code). Try that!
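The import checks suggested above can be automated with a small script; run it under the same interpreter you use for predict_scan.py. `check_env` is a hypothetical helper for illustration, not part of the repository:

```python
import importlib

def check_env(names):
    # Try to import each module; report its version string, or None if
    # the import fails. A None entry means that package needs to be
    # installed into the environment you are actually running.
    report = {}
    for name in names:
        try:
            module = importlib.import_module(name)
            report[name] = getattr(module, "__version__", "installed")
        except ImportError:
            report[name] = None
    return report

# On a full setup you would check the whole stack, e.g.:
#   check_env(["numpy", "scipy", "tensorflow", "keras", "h5py", "rtlsdr"])
print(check_env(["numpy"]))
```

This catches the "wrong interpreter" problem quickly: if a module shows None here but appears in 'pip freeze', the pip and python commands point at different environments.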
I see Python 3.6 being used; signal.decimate() tells me this. After the check, having ensured that the rtl-sdr works well with other applications, if cnn-rtlsdr still exits silently, try one of my ways to install the environment on a clean Ubuntu or Kali, edit the function, and check again; it really should detect at least wfm in its places. Otherwise, I don't know what more could help, other than maybe inserting a debug print() after each block of code to see what's happening line by line.
So I have followed all the steps; when I run predict_scan.py, this is where I am stuck: ~/cnn-rtlsdr$ sudo python3 predict_scan.py --start 850000000 --stop 860000000 --threshold 0.9955. The SDR works, as I have run it through GQRX, but I can't seem to get past this. Any help please?
It has just found nothing in the ether. Commercial FM is usually at 88-108 MHz; try with --start 88000000 --stop 108000000. It is much better to use the fresh keras version of the neural network.
Awesome, thank you for your help. So with keras, will it be able to recognize DMR? Also, when I run it I get this error: Detached kernel driver. Thank you for your help.
Commands for a freshly installed Ubuntu 18.04.1 LTS, desktop version:
The "Detached kernel driver" message is pretty much OK; it's just showing dmr instead of the correct wfm label due to the training_data/ folder contents.
And to continue to keras version:
To use developer data:
or, to use your own data:
Now we are ready to run keras version:
I followed the instructions but it doesn't work; I'm getting "Illegal instruction". Please help. Thanks
Hi,
root@udd-K73E:/home/udd/cnn-rtlsdr# python3 predict_scan.py
I googled the issue and found that I should install an older version, so I installed tensorflow 1.4, but I'm still getting errors:
root@udd-K73E:/home/udd/cnn-rtlsdr# python3 predict_scan.py
Would appreciate some help. Thanks, newbee
Hi, what remains to be sorted out for it to work under Ubuntu 16.04/18.04?
It would be great to have some instructions on how to install this on Linux.
There are some DLLs mentioned in the instructions, and I also think some entries in requirements.txt might not apply to Linux.