Human Skill × Artificial Intelligence
At our core
We work with freelancers from all around the globe, all sharing our passion for:
Language
we go the extra mile for low-density languages, and another mile for profanity
Technology
we value human wisdom more than machine intelligence
Quality
we assure our deliveries through human evaluation, metadata and statistics
Data
we create, validate and fix training data and test data for NLP projects
How we do it
Our customers are inventing, building and improving self-learning machines. We help them fuel these AI engines and measure how intelligent they actually are.
We turn crowd know-how into datasets NLP researchers can use. We work with our customers to tailor-fit the tools we develop for them and that our freelance team uses.
Typically we help to improve sentence segmentation, domain-specific terminology, and monolingual and bilingual training sets. We create feature sets, and we measure progress in the accuracy and fluency of the translation output of NMT systems. We build tools that let humans verify and fix what happens inside the neural networks that produce translations or spoken text.
Human Alignment Annotation
Linguists evaluate the validity of sentence alignments from your automated alignment algorithms.
Domain Root Inventory
Landing Page Labeling
Detect the pages on websites that are frequently updated
Bilingual Segmentation Tagging
Map a set of tagged source sentences to the segmentation breakers/non-breakers in your mother tongue
Contrastive Translation Evaluation
Fails Quality Tagging
Evaluators analyze a translated sentence, identify and classify all errors in the translation, and mark up the issues in the source (missing tokens) or the target (quality fails).
Binary Translation Evaluation
Two or more freelancers score the same dataset using a binary scale. This job is used to measure the quality of competing engines and to track how engines evolve over time. The definition of good is simple and straightforward; for bad, the instructions contain clear examples.
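When several freelancers score the same sentences on a binary scale, their labels can be combined per sentence and the level of agreement tracked. The sketch below is illustrative only (the function name and the input format, a dict of per-annotator 0/1 score lists, are assumptions, not the actual Job Portal pipeline):

```python
from collections import Counter

def aggregate_binary_scores(scores_by_annotator):
    """Majority-vote each sentence's good/bad labels and report raw agreement.

    scores_by_annotator: hypothetical format, a dict mapping annotator id to a
    list of 0/1 scores, one per sentence, all lists the same length.
    Returns (per-sentence majority verdicts, fraction of unanimous sentences).
    """
    # Transpose so each tuple holds all annotators' labels for one sentence.
    per_sentence = list(zip(*scores_by_annotator.values()))
    verdicts = []
    unanimous = 0
    for labels in per_sentence:
        label, votes = Counter(labels).most_common(1)[0]
        verdicts.append(label)
        if votes == len(labels):  # every annotator gave the same score
            unanimous += 1
    return verdicts, unanimous / len(per_sentence)
```

A low unanimity rate would suggest the instructions or the "bad" examples need refinement; a more robust pipeline would use a chance-corrected statistic such as Cohen's or Fleiss' kappa.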
Collective Dictionary Translation
What people say
If you want to join us ...
... this is what you should know:
All our jobs are done on an online Job Portal, through which our freelancers can generate their invoices. They need a good internet connection and a screen with a resolution of at least 1024 × 800 pixels. They should use Google Chrome and work on Windows, macOS or Linux. They cannot work on a mobile phone (Android or iOS). We take pride in fair and fast payments, and our Job Portal facilitates this. We pay via SEPA bank wire, PayPal, XOOM, Skrill & Neteller.
We work with freelancers only. Together we create data that researchers can use. To enrich the customer data, we measure in detail the time our freelancers spend on the job. For some work types we also measure mouse miles and the corrections made to given and created text. This metadata is used:
- to compare the effort done by the evaluators;
- to verify the predicted value of the job is fair;
- to trigger quality control;
- to verify how stable the evaluations are over time.
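One way such timing metadata can trigger quality control is to flag annotators whose typical time per item deviates sharply from their peers. This is a minimal illustrative sketch, not the actual portal logic; the function name, input format and threshold are assumptions:

```python
import statistics

def flag_outlier_efforts(seconds_per_item, threshold=3.0):
    """Flag annotators whose median time per item is far from the group's.

    seconds_per_item: hypothetical format, a dict mapping annotator id to a
    list of per-item timings in seconds. Uses a robust z-score based on the
    median absolute deviation (MAD) so a few slow items don't skew the check.
    """
    medians = {a: statistics.median(t) for a, t in seconds_per_item.items()}
    center = statistics.median(medians.values())
    mad = statistics.median(abs(m - center) for m in medians.values()) or 1.0
    return [a for a, m in medians.items() if abs(m - center) / mad > threshold]
```

Annotators flagged this way would not be penalized automatically; the signal would only prompt a human review of their work.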
For all jobs, a good to perfect level of written and spoken English is mandatory.
We are always looking for people with good knowledge of insulting language, sexist and racist terms, idiomatic expressions, slurs…
Professional translators and other language professionals are welcome to join our team. We’ll interview and test you so we know what work types you can handle.
Join our team
If you would like to be part of the international team of language specialists taking paid NLP research jobs, please register here.
When we have a job in a language you support, we’ll contact you. If you’re only looking for a translation job, please don’t fill in the form. We rarely have regular translation jobs.