quick start on a local machine with CPU instead of GPU? #1
I've added a Dockerfile to the repo for running Marian with CPUs only. To use it with TransIns, you have to adapt some things:
I can't say whether TransIns runs on a laptop. Feel free to try.
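For readers without access to the repo's Dockerfile, a minimal sketch of what a CPU-only Marian build stage might look like is shown below. This is an assumption, not the actual file: the base image, package list, and repository URL are placeholders, and Marian's CPU back end additionally needs an MKL/BLAS install that is omitted here. The `-DCOMPILE_CUDA=off -DCOMPILE_CPU=on` CMake flags are Marian's documented switches for a GPU-free build.

```dockerfile
# Hypothetical sketch of a CPU-only Marian build; the repo's real Dockerfile may differ.
FROM ubuntu:20.04

RUN apt-get update && apt-get install -y \
    git cmake build-essential libboost-all-dev zlib1g-dev

# NOTE (assumption): Marian's CPU back end requires Intel MKL or another BLAS
# library; that installation step is intentionally left out of this sketch.

RUN git clone https://github.com/marian-nmt/marian-dev /marian \
 && cd /marian && mkdir build && cd build \
 && cmake .. -DCOMPILE_CUDA=off -DCOMPILE_CPU=on \
 && make -j4
```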
Hi @jsteffen, thank you so much for putting the Dockerfile together. I can confirm that I was able to run it and deploy it on a MacBook Air 2015 (4 GB RAM). It worked with very simple files containing one line of text. However, I ran into issues when I wanted to translate larger files; the reason was simply that the machine's resources were exhausted. I was, however, able to find a solution and run it on a Slurm cluster node with Enroot. If you think it could be helpful as a quick-start guide for others who happen to use a container runtime other than Docker, such as Enroot, I would gladly publish it as a pull request. Thank you very much again!
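The Enroot route mentioned above usually boils down to converting the Docker image into Enroot's squashfs format and starting it inside a Slurm allocation. A hedged sketch of that workflow follows; the image name `transins-marian-cpu` and the registry are placeholders, and the exact `.sqsh` filename Enroot produces depends on the image tag:

```shell
# Hypothetical workflow; image and registry names are assumptions.
# 1. Convert the Docker image into Enroot's squashfs format:
enroot import docker://myregistry/transins-marian-cpu:latest

# 2. Create a container filesystem from the resulting .sqsh file
#    (the filename is derived from the image name and tag):
enroot create --name transins transins-marian-cpu+latest.sqsh

# 3. Start the container on the cluster node, e.g. inside a Slurm job:
enroot start transins
```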
Hi @MoeNeuron,
Hi :)
First of all, thank you very much for the useful tool.
Since the README.md states in several places that (powerful) GPUs are required, I am wondering whether it would be possible to configure Marian NMT to run on CPU instead for translation purposes. I think this is generally possible with Marian NMT, but I want to ask whether it can be achieved with only a few changes in this repository. If not, would it still be feasible to attempt that and deploy TransIns on a personal laptop (i.e. a MacBook Air 20XX)? I have to ask, since I still face a few issues in the compilation step, which in turn takes quite a few hours. Thanks again :)
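On the question of CPU decoding itself: Marian exposes a `cpu-threads` option, and setting it to a value greater than 0 switches decoding from GPU devices to CPU threads. A sketch of a decoder configuration fragment is below; the model and vocabulary paths are placeholders, not files from this repository:

```yaml
# Hypothetical Marian decoder config fragment; paths are placeholders.
models:
  - model.npz
vocabs:
  - vocab.src.spm
  - vocab.trg.spm
cpu-threads: 4   # >0 selects CPU decoding instead of GPU devices
```

Whether TransIns passes such a config through to Marian unchanged is something to verify against this repository's own setup.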