
quick start on a local machine with CPU instead of GPU? #1

Open
MoeBensu opened this issue Apr 28, 2022 · 3 comments

Comments

@MoeBensu

Hi :)

First of all, thank you very much for the useful tool.

Since the README.md states in several places that (powerful) GPUs are required, I am wondering whether it would be possible to configure Marian NMT to run on CPU instead for translation purposes. I think this is generally possible with Marian NMT, but I want to ask whether it can be achieved with only a few changes to this repository. If not, would it still be feasible to attempt this and deploy TransIns on a personal laptop (e.g. a MacBook Air 20XX)? I have to ask because I still face a few issues in the compilation step, which in turn takes quite a few hours.

Thanks again :)

@jsteffen
Collaborator

jsteffen commented May 3, 2022

I've added a Dockerfile to the repo for running Marian with CPUs only. In order to use it with TransIns, you have to adapt some things:

  • For each translation direction in docker-compose.yml, change the image to the CPU-only version. Also remove the deploy section so that no GPUs are requested for the Docker containers.

  • To increase translation speed, make sure that Marian uses more than a single CPU core by adding cpu-threads: 4 to each Marian config.yml in src/main/resources/<lang-dir>. Note that in my tests, I could not make use of more than 4 threads.

  • If you plan on using the web interface, increase the number of polling retries to avoid timeouts. See here; you might want to change the value from 10 to 100.
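For illustration, the first two adaptations might look roughly like this. This is only a sketch: the service name, image name, and the original `deploy` block shown in the comment are assumptions, not the repository's actual values.

```yaml
# docker-compose.yml (sketch): one translation direction, adapted for CPU-only use.
# Service and image names below are placeholders.
services:
  marian-de-en:
    image: transins/marian-cpu:latest   # point to the CPU-only image built from the new Dockerfile
    # The 'deploy' section that requested GPUs for the container has been removed.
    # It would typically have looked something like:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - capabilities: [gpu]
```

And the corresponding Marian decoder config change:

```yaml
# src/main/resources/<lang-dir>/config.yml (sketch)
cpu-threads: 4   # use 4 CPU cores; per the comment above, more than 4 did not help
```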

I can't say if TransIns runs on a laptop. Feel free to try.

@MoeBensu
Author

MoeBensu commented Jun 8, 2022

Hi @jsteffen,

Thank you so much for putting the Dockerfile together. I can confirm that I was able to run and deploy it on a MacBook Air 2015 (4 GB RAM). It worked with very simple files containing one line of text. However, I ran into issues when I wanted to translate larger files: the machine's resources were simply exhausted.

I was, however, able to find a solution and run it on a Slurm cluster node with Enroot. If you think it could be helpful to have a quick-start guide for others who want to run it with a container runtime other than Docker, such as Enroot, I would gladly publish it as a pull request.
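For reference, a minimal Enroot workflow on a cluster node might look like the following. This is a sketch under assumptions: the image name and registry are placeholders, and the generated .sqsh filename can differ depending on the image path and tag.

```shell
# Import the Docker image as a squashfs file (image name is a placeholder)
enroot import docker://registry.example.com/transins-marian-cpu:latest

# Create a container root filesystem from the imported image
# (the .sqsh filename produced by 'enroot import' may differ on your system)
enroot create --name transins-marian transins-marian-cpu+latest.sqsh

# Start the container on the node
enroot start transins-marian
```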

Thank you very much again!

@jsteffen
Collaborator

jsteffen commented Jun 9, 2022

Hi @MoeBensu,
good to know that the CPU Dockerfile works.
You're welcome to do a pull request for your Slurm/enroot solution.
