- src - source language to translate from
- dest - destination language to translate to
- text - the piece of text you want to translate
- email - Frengly account email, entered during registration (older accounts should use their username)
- password - Frengly account password
- outformat - format of the response [xml/json]; optional - default is xml
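A minimal sketch of how a request using these parameters could be assembled; the endpoint URL below is a placeholder, not the real Frengly address - substitute the actual API URL from the site.

```python
from urllib.parse import urlencode

def build_translate_url(base_url, src, dest, text, email, password, outformat="json"):
    """Assemble the query string from the documented parameters."""
    params = {
        "src": src,              # source language code
        "dest": dest,            # destination language code
        "text": text,            # text to translate
        "email": email,          # Frengly account email
        "password": password,    # Frengly account password
        "outformat": outformat,  # optional: xml (default) or json
    }
    return base_url + "?" + urlencode(params)

# "http://example.com/translate" is a placeholder endpoint
url = build_translate_url("http://example.com/translate",
                          "en", "fr", "hello", "user@example.com", "secret")
print(url)
```

The returned URL can then be fetched with any HTTP client; with outformat=json the response parses directly into a dictionary.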
We are proud to announce a new feel-the-engine contest.
With your help we would like to enrich our engine with human translations.
Among the users with the highest number of quality translations, we will select candidates for top prizes
- $200 for 1st position
- $100 for 2nd position
- $50 for 3rd position
The winners will be announced and notified by 24 March 2014.
Do I have to translate all the input texts?
No. If you are not sure about a translation, skip it using the 'next' button. You should also skip texts with low usability: the algorithm rates highly translations that statistically occur more frequently in common use.
I've submitted the largest number of translations; can I be sure to win?
No. Although the number of translations is the most important factor, our algorithm also evaluates other factors: the quality, community feedback and usability of your translations.
How can I check my progress?
In order to monitor your progress and compare your current status with others,
you can visit the Rank page, where you will find statistics, translation counters and other useful information.
How will the prizes be delivered?
They will be paid to your PayPal account. After receiving the notification email, you will have 7 days to send us your PayPal account details.
We reserve the right to make changes and updates to this contest at any time. All such updates and changes take effect as soon as they are posted on this web page. By participating in the contest you consent to any modification or termination of the contest.
MTQI - Machine Translation Quality Index
The Frengly.com engine uses its own method to measure translation quality, called the Machine Translation Quality Index.
The calculation is simple:

MTQI = (sum of segment evaluations) / (number of words)

The formula is a sum of segment evaluations divided by the number of words.
Segments are blocks of words (usually no longer than 4) in the source text.
Let's look at an example (EN→FR):
So how exactly did we come up with the score?
Hover your mouse over the output translation and you will notice that every segment has been rated with a score.
This brings us to the most important part: segment rating. Here it is:
- 1 for every word inside a human-translated segment
- 1 for every word inside a machine-translated segment of size > 1
- 0.80 for a machine-translated segment of size 1 (a single word) whose length is < 4
- 0.75 for a machine-translated segment of size 1 (a single word) whose length is ≥ 4
- 0 for a missing translation
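The scoring grid and the MTQI formula can be sketched together in a few lines; the tuple representation of segments below is our own illustration, not Frengly's internal data structure.

```python
def segment_score(words, human, translated=True):
    """Score one source segment according to the grid above."""
    if not translated:
        return 0.0                 # missing translation
    if human:
        return float(len(words))   # 1 per word in a human-translated segment
    if len(words) > 1:
        return float(len(words))   # 1 per word in a multi-word machine segment
    # single-word machine segment: score depends on word length
    return 0.80 if len(words[0]) < 4 else 0.75

def mtqi(segments):
    """segments: list of (words, human, translated) tuples covering the source text."""
    total_words = sum(len(words) for (words, _, _) in segments)
    total_score = sum(segment_score(w, h, t) for (w, h, t) in segments)
    return total_score / total_words

# Hypothetical example: a human-translated 2-word segment plus a
# machine-translated single short word ("sat", length 3 < 4).
score = mtqi([(["the", "cat"], True, True), (["sat"], False, True)])
print(round(score, 3))  # (2 + 0.80) / 3 ≈ 0.933
```

Note how the single short word drags the score below 1.0, matching the grid's assumption that short isolated words are risky to translate alone.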
What is the logic behind this scoring?
The marks in the scoring grid above are defined as average values common to all languages.
This implies a certain accuracy margin, usually ranging from 2% to 10%.
The margin is an acceptable compromise between accuracy and process simplification.
Two remarks follow from the scoring grid: first, MTQI favours human translations over machine translations
(which is natural for hybrid translation engines); second, when evaluating single-word translations (segments of size 1),
longer words score better, as the engine assumes that short words are usually pronouns and prepositions,
which should be translated together with other words.
The prime idea behind the MTQI methodology is to ensure that top scores are reserved for accurate translations only,
so that the translation engine can monitor its quality and progressively improve by eliminating weak points.