Language Detection Library for Java
```xml
<dependency>
    <groupId>com.optimaize.languagedetector</groupId>
    <artifactId>language-detector</artifactId>
    <version>0.5</version>
</dependency>
```
- af Afrikaans
- an Aragonese
- ar Arabic
- ast Asturian
- be Belarusian
- bg Bulgarian
- bn Bengali
- br Breton
- ca Catalan
- cs Czech
- cy Welsh
- da Danish
- de German
- el Greek
- en English
- es Spanish
- et Estonian
- eu Basque
- fa Persian
- fi Finnish
- fr French
- ga Irish
- gl Galician
- gu Gujarati
- he Hebrew
- hi Hindi
- hr Croatian
- ht Haitian
- hu Hungarian
- id Indonesian
- is Icelandic
- it Italian
- ja Japanese
- km Khmer
- kn Kannada
- ko Korean
- lt Lithuanian
- lv Latvian
- mk Macedonian
- ml Malayalam
- mr Marathi
- ms Malay
- mt Maltese
- ne Nepali
- nl Dutch
- no Norwegian
- oc Occitan
- pa Punjabi
- pl Polish
- pt Portuguese
- ro Romanian
- ru Russian
- sk Slovak
- sl Slovene
- so Somali
- sq Albanian
- sr Serbian
- sv Swedish
- sw Swahili
- ta Tamil
- te Telugu
- th Thai
- tl Tagalog
- tr Turkish
- uk Ukrainian
- ur Urdu
- vi Vietnamese
- yi Yiddish
- zh-cn Simplified Chinese
- zh-tw Traditional Chinese
User danielnaber has made available a profile for Esperanto on his website, see open tasks.
You can create a language profile for your own language easily. See https://github.com/optimaize/language-detector/blob/master/src/main/resources/README.md
The software uses language profiles which were created based on common text for each language. N-grams http://en.wikipedia.org/wiki/N-gram were then extracted from that text, and that's what is stored in the profiles.
When trying to figure out in what language a certain text is written, the program goes through the same process: It creates the same kind of n-grams of the input text. Then it compares the relative frequency of them, and finds the language that matches best.
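As a toy illustration of the process described above (this is not the library's actual implementation; the class name, method names, and example texts are made up for this sketch), extracting 1- to 3-grams and comparing their relative frequencies could look like this:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of n-gram based language guessing: build a relative-frequency
// "profile" per language, then score the input against each profile.
public class NgramSketch {

    // Extract all 1- to 3-grams of the text and return their relative frequencies.
    public static Map<String, Double> relativeNgramFrequencies(String text) {
        Map<String, Integer> counts = new HashMap<>();
        int total = 0;
        for (int n = 1; n <= 3; n++) {
            for (int i = 0; i + n <= text.length(); i++) {
                counts.merge(text.substring(i, i + n), 1, Integer::sum);
                total++;
            }
        }
        Map<String, Double> freqs = new HashMap<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            freqs.put(e.getKey(), e.getValue() / (double) total);
        }
        return freqs;
    }

    // Naive score: sum the profile's frequencies over the input's distinct n-grams.
    public static double score(Map<String, Double> profile, String input) {
        double s = 0;
        for (String gram : relativeNgramFrequencies(input).keySet()) {
            s += profile.getOrDefault(gram, 0.0);
        }
        return s;
    }

    public static void main(String[] args) {
        // Tiny made-up "training texts"; real profiles are built from much more text.
        Map<String, Double> en = relativeNgramFrequencies("the quick brown fox jumps over the lazy dog");
        Map<String, Double> de = relativeNgramFrequencies("der schnelle braune fuchs springt ueber den faulen hund");
        String input = "the fox";
        String best = score(en, input) > score(de, input) ? "en" : "de";
        System.out.println(best); // prints "en"
    }
}
```

The real library refines this in many ways (smoothing, space padding, n-gram filtering), but the overall shape is the same: same extraction for training and detection, then pick the best-matching profile.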
This software works less well when the input text to analyze is short or unclean, for example tweets.
When a text is written in multiple languages, the default algorithm of this software is not appropriate. You can try to split the text (by sentence or paragraph) and detect the individual parts. Running the language guesser on the whole text will, at best, tell you the most dominant language.
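For the split-then-detect approach, a minimal paragraph splitter using only the JDK might look like the following (the `ParagraphSplitter` class is a hypothetical helper, not part of the library; the per-part detection call is only indicated in a comment):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: split mixed-language input into paragraphs so each
// part can be handed to the language detector separately.
public class ParagraphSplitter {

    // Split on blank lines; trim the parts and drop empty ones.
    public static List<String> paragraphs(String text) {
        List<String> parts = new ArrayList<>();
        for (String part : text.split("\\R\\s*\\R")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                parts.add(trimmed);
            }
        }
        return parts;
    }

    public static void main(String[] args) {
        String mixed = "This paragraph is English.\n\nDieser Absatz ist deutsch.";
        for (String p : paragraphs(mixed)) {
            // Detect each part individually, e.g. (using the library's API shown below):
            // languageDetector.detect(textObjectFactory.forText(p));
            System.out.println(p);
        }
    }
}
```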
This software cannot handle it well when the input text is in none of the expected (and supported) languages. For example, if you only load the language profiles for English and German, but the text is written in French, the program may pick the more likely of the two, or say it doesn't know. (An improvement would be to clearly detect that the text is unlikely to be in any of the supported languages.)
If you are looking for a language detector / language guesser library in Java, this seems to be the best open source library you can get at this time. If it doesn't need to be Java, you may want to take a look at https://code.google.com/p/cld2/
```java
//load all languages:
List<LanguageProfile> languageProfiles = new LanguageProfileReader().readAllBuiltIn();

//build language detector:
LanguageDetector languageDetector = LanguageDetectorBuilder.create(NgramExtractors.standard())
        .withProfiles(languageProfiles)
        .build();

//create a text object factory:
TextObjectFactory textObjectFactory = CommonTextObjectFactories.forDetectingOnLargeText();

//query:
TextObject textObject = textObjectFactory.forText("my text");
Optional<String> lang = languageDetector.detect(textObject);
```
```java
//create text object factory:
TextObjectFactory textObjectFactory = CommonTextObjectFactories.forIndexingCleanText();

//load your training text:
TextObject inputText = textObjectFactory.create()
        .append("this is my")
        .append("training text");

//create the profile:
LanguageProfile languageProfile = new LanguageProfileBuilder("en")
        .ngramExtractor(NgramExtractors.standard())
        .minimalFrequency(5) //adjust please
        .addText(inputText)
        .build();

//store it to disk if you like:
new LanguageProfileWriter().writeToDirectory(languageProfile, "c:/foo/bar");
```
For the profile name, use the ISO 639-1 language code if there is one, otherwise the ISO 639-3 code.
The training text should be rather clean; it is a good idea to remove parts written in other languages (English phrases, or Latin-script content in a Cyrillic text, for example). Some also like to remove proper nouns such as (international) place names in case there are too many. It's up to you how far you go. As a general rule, the cleaner the text, the better its profile. If you scrape text from Wikipedia, please use only the main content, without the left-side navigation etc.
For practical reasons, the profile size should be similar to that of the existing profiles. When computing the likelihood of a detected language, the profile size is taken into account, so a language with a larger profile does not get a higher probability of being chosen.
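A small arithmetic example of why this normalization matters (all numbers are invented for illustration):

```java
// Toy illustration (not the library's code): comparing raw n-gram counts
// would favor the larger profile, comparing relative frequencies does not.
public class ProfileNormalization {
    public static void main(String[] args) {
        // Hypothetical counts for one n-gram in two profiles of different size:
        int smallCount = 50;  int smallTotal = 10_000;   // small profile
        int bigCount   = 300; int bigTotal   = 100_000;  // ten-times-larger profile

        double smallFreq = smallCount / (double) smallTotal; // 0.005
        double bigFreq   = bigCount / (double) bigTotal;     // 0.003

        // Raw counts say the big profile matches better (300 > 50);
        // relative frequencies say the small one does (0.005 > 0.003).
        System.out.println(smallFreq > bigFreq); // prints "true"
    }
}
```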
Please contribute your new language profile to this project. The file can be added to the languages folder, and then referenced in the BuiltInLanguages class. Or else open a ticket, and provide a download link.
Also, it's a good idea to put the original text along with the modifying (cleaning) code into a new project on GitHub. This gives others the possibility to improve on your work. Or maybe even use the training text in other, non-Java software.
If your language is not supported yet, then you can provide clean "training text", that is, common text written in your language. The text should be fairly long (a couple of pages at the very least). If you can provide that, please open a ticket.
If your language is supported already, but not identified clearly all the time, you can still provide such training text. We might then be able to improve detection for your language.
If you're a programmer, dig into the source and see what you can improve. Check the open tasks.
This is a fork from https://code.google.com/p/lang-guess/ (forked on 2014-02-27) which itself is a fork of the original project https://code.google.com/p/language-detection/
- Made results for short text consistent, no random n-gram selection for short text (configurable).
- Configurable removal of foreign-script content. The old code removed ASCII only when ASCII made up less than 1/3 of the content, and it handled only ASCII.
- New n-gram generation. Faster, and flexible (filter, space-padding). Previously it was hardcoded to 1, 2 and 3-grams, and it had hardcoded which n-grams were ignored.
- LanguageDetector is now safe to use multi-threaded.
- Clear code to safely load profiles and use them, no state in static fields.
- Easier to generate your own language profiles based on training text, and to load and store them.
- Feature to weight prefix and suffix n-grams higher.
- Updated to use Java 7 for compilation and syntax. It's 2015, and 7/8 are the only versions officially supported by Oracle.
- Code quality improvements:
- Returning interfaces instead of implementations (List instead of ArrayList etc)
- String .equals instead of ==
- Replaced StringBuffer with StringBuilder
- Renamed classes for clarity
- Made classes immutable, and thus thread safe
- Made fields private, using accessors
- Clear null reference concept:
- using IntelliJ's @Nullable and @NotNull annotations
- using Guava's Optional
- Added JavaDoc, fixed typos
- Added interfaces
- More tests. Thanks to the refactorings, code is now testable that was too much embedded before.
- Removed the "seed" completely (for the Random() number generator; I didn't see the use). UPDATE: now I do, and there's an open task to re-add it.
- Updated all Maven dependency versions
- Replaced last lib dependency with Maven (jsonic)
Apache2 license, just like the work from which this is derived. (I had temporarily changed it to LGPLv3, but that change was invalid and therefore reverted.)
The software works well, there are things that can be improved. Check the Issues list.
The original project hasn't seen any commits in a while, and the issue list is growing. A 2012 entry on the news page says it now has Maven support, but there is no pom in git. There is a release in Maven (see http://mvnrepository.com/artifact/com.cybozu.labs/langdetect/1.1-20120112 for version 1.1-20120112), but it's not in git. So I don't know what's going on there.
The lang-guess fork saw quite a few commits in 2011 and up to March 2012, then nothing. It uses Maven.
The two projects are not in sync; it looks like they stopped integrating each other's changes.
Both are on Google Code; I believe GitHub is a much better place for contributing.
My goals were to bring the code up to current standards and to update it for Java 7. I quickly noticed that I would have to touch pretty much all of the code, and given the status of the other two projects, I figured I had better start my own fork. This ensures that my work is published to the public.
An adapted version of this is used by the http://www.NameAPI.org server.
https://www.languagetool.org/ is proofreading software for LibreOffice/OpenOffice, for the desktop, and for Firefox.
Apache 2 (business friendly)
Nakatani Shuyo (original author):
- Started the project and built most of the functionality. Provided the language profiles.
- Project is at https://code.google.com/p/language-detection/

This fork (optimaize):
- Forked to https://github.com/optimaize/language-detector from Francois' fork on 2014-02-27
- Rewrote most of the code
- Added JavaDoc
- See changes above, or check the GitHub commit history

Francois (lang-guess):
- Forked to https://code.google.com/p/lang-guess/ from Shuyo's original project
- Maven integration

rmtheis:
- Forked to https://github.com/rmtheis/language-detection from Shuyo's original project
- Added 16 more language profiles
- Features not (yet) integrated here:
  - profiles stored as Java code
  - Maven multi-module project to reduce size for Android apps
The project is in Maven Central (see http://search.maven.org/#artifactdetails%7Ccom.optimaize.languagedetector%7Clanguage-detector%7C0.4%7Cjar); this is the latest version:

```xml
<dependency>
    <groupId>com.optimaize.languagedetector</groupId>
    <artifactId>language-detector</artifactId>
    <version>0.5</version>
</dependency>
```