Human Skill × Artificial Intelligence

At our core
We work with freelancers from all around the globe, all sharing our passion for the work:

  • we go the extra mile for low-density languages, and another mile for profanity;
  • we value human wisdom more than machine intelligence;
  • we assure our deliveries through human evaluation, metadata and statistics;
  • we create, validate and fix training data and test data for NLP projects.

How we do it

Our customers are inventing, building and improving self-learning machines. We help them fuel these AI engines, and measure how intelligent they actually are.

We turn crowd know-how into datasets NLP researchers can use. We work with our customers to tailor the tools we develop for them and that our freelance team uses.

Typically we help to improve sentence segmentation, domain-specific terminology, and monolingual and bilingual training sets. We create feature sets and measure progress in the accuracy and fluency of the translation output of NMT systems. We build tools that allow humans to verify and fix what happens inside the neural networks producing translations or spoken text.

Human Alignment Annotation

Linguists evaluate the validity of sentence alignments from your automated alignment algorithms.

Domain Root Inventory

List websites that match the description of a given (narrow) vertical domain.

Landing Page Labeling

Detect the pages in websites that are frequently updated.

Bilingual Segmentation Tagging

Map a set of tagged source sentences to the segmentation breakers/non-breakers in your mother tongue.

Contrastive Translation Evaluation

Evaluate the source term and label two to four contrastive translations. Occasionally we also have a non-contrastive evaluation job. Translators are invited to add their own translation variant if needed.

Fails Quality Tagging

Evaluators analyze a translated sentence, identify and classify all errors in the translation, and mark up the issues in the source (missing tokens) or the target (quality fails).

Binary Translation Evaluation

Two or more freelancers score the same dataset using a binary scale. This job is used to measure the quality of competing engines and the evolution of engines over time. The definition of "good" is simple and straightforward; for "bad", the instructions contain clear examples.
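
By way of illustration, pooled binary scores can be aggregated per engine like this. This is a minimal sketch, not our actual data model: the record layout, engine labels and evaluator names are invented for the example.

```python
from collections import defaultdict

# Each record: (engine, sentence_id, evaluator, score), score 1 = good, 0 = bad.
# All names and values below are hypothetical toy data.
scores = [
    ("engine_a", 1, "eval1", 1), ("engine_a", 1, "eval2", 1),
    ("engine_a", 2, "eval1", 0), ("engine_a", 2, "eval2", 1),
    ("engine_b", 1, "eval1", 0), ("engine_b", 1, "eval2", 0),
    ("engine_b", 2, "eval1", 1), ("engine_b", 2, "eval2", 0),
]

def good_rate(records):
    """Fraction of 'good' judgments per engine, pooled over evaluators."""
    totals = defaultdict(lambda: [0, 0])  # engine -> [good count, total count]
    for engine, _sid, _evaluator, score in records:
        totals[engine][0] += score
        totals[engine][1] += 1
    return {engine: good / n for engine, (good, n) in totals.items()}

print(good_rate(scores))  # → {'engine_a': 0.75, 'engine_b': 0.25}
```

Running the same aggregation on successive snapshots of an engine's output gives the evolution-over-time measurement.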

Collective Dictionary Translation

Many freelancers work together on translating the same dictionary; there is some artificial overlap so we can see how much variation and disagreement there is.
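
The overlapping items make disagreement measurable. As one common way to quantify it, here is a sketch of Cohen's kappa for two annotators; the labels and toy data are invented for illustration.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two annotators
    who labeled the same items in the same order."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[l] * counts_b[l] for l in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two translators labeling the same overlapping dictionary entries (toy data).
ann1 = ["bank", "bank", "shore", "bank", "shore", "shore"]
ann2 = ["bank", "shore", "shore", "bank", "shore", "bank"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.333
```

A kappa near 1 means the overlapping translations agree almost perfectly; a value near 0 means the agreement is no better than chance.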

What people say

Excellent agency to work with; interesting projects, competent staff, extremely quick payments. I also feel that this agency really values my expertise as a native speaker of the target language.
Rachel James
Iulia, Raluca and Gert are very diligent and answer any demands and feedbacks about their tools and projects. Our partnership is years long already and hopefully will endure for many more to come!
Germano Matias
Raluca is very diligent and easy to work with. She is the best project manager and I can't wait to work again with Raluca and other Datamundi staff Gert, and Iulia!
Gomeju Taye
Great employer, wonderfully classified and clear jobs. Looking forward to work again.
Ahmad Suhaib
Datamundi is a professional, ethical linguistic platform, and it has great project managers like Gert, Iulia, Florina who help and guide without fail. Working with this company is a great experience.
Gujjula Dattathreya

If you want to join us ...

... this is what you should know:

All our jobs are done on an online Job Portal, through which our freelancers can generate their invoices. They need a good internet connection and a screen with a resolution of at least 1024 × 800 pixels. They should use Google Chrome and work on Windows, macOS or Linux. They cannot work on a mobile phone (Android or iOS). We take pride in fair and fast payments, and our Job Portal facilitates this. We pay via SEPA bank wire, PayPal, XOOM, Skrill and Neteller.

We work with freelancers only. Together we create data that researchers can use. To enrich the customer data, we measure in detail the time our freelancers spend on each job. For some work types we also measure mouse miles and the corrections made to given and created text. This metadata is used:

  • to compare the effort done by the evaluators;
  • to verify that the predicted value of the job is fair;
  • to trigger quality control;
  • to verify how stable the evaluations are over time.
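
As a sketch of how such metadata can trigger quality control, an evaluator whose effort deviates strongly from the group can be flagged for review. The field names, numbers and threshold below are hypothetical, not our production rules.

```python
from statistics import mean, stdev

# Average seconds spent per item by each evaluator (hypothetical numbers).
effort = {
    "eval1": 40.0, "eval2": 42.0, "eval3": 39.0, "eval4": 41.0,
    "eval5": 38.0, "eval6": 43.0, "eval7": 8.0,
}

def flag_outliers(times, z_threshold=2.0):
    """Flag evaluators whose mean time per item deviates from the group
    by more than z_threshold standard deviations."""
    mu, sigma = mean(times.values()), stdev(times.values())
    return [name for name, t in times.items() if abs(t - mu) / sigma > z_threshold]

print(flag_outliers(effort))  # → ['eval7']
```

A flag does not mean the work is bad; it only routes that evaluator's batch to human review.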

For all jobs, a good to perfect level of written and spoken English is mandatory.

We are always looking for people with a good knowledge of insulting language: sexist and racist terms, idiomatic expressions, slurs and the like.

Professional translators and other language professionals are welcome to join our team. We’ll interview and test you so we know what work types you can handle.

Join our team

If you would like to be part of the international team of language specialists taking paid NLP research jobs, please register here.

When we have a job in a language you support, we’ll contact you. If you’re only looking for a translation job, please don’t fill in the form. We rarely have regular translation jobs.