DeepL translator

Has anyone tried it?

https://www.deepl.com/translator

Google Translate is really bad sometimes…

It isn’t a browser plug-in, and like Google Translate it sends the text to a remote server for the translation. To use it you highlight the text and press Ctrl-C twice.

Not sure I would have chosen that key combination (Ctrl-C being the classic “abort” keystroke), but you need to be of a certain age to appreciate the funny side.

Administrator
Shoreham EGKA, United Kingdom

We use it extensively at work, where we have frequently updated content which needs to be presented in 5 languages, fast. We used to have it done by humans, then by humans correcting a Google Translate draft, and now by DeepL. We’ve found the mistakes are usually no worse than those a non-native but reasonably fluent writer would make, especially if you’re careful about the initial text. It has cut the time for that stage of our process from about 5 hours to 1 second.

EGTF, LFTF

Peter wrote:

Has anyone tried it?

https://www.deepl.com/translator

I just did, with passages from a newspaper article about politics and from an air accident investigation, both in German. It is amazingly good, but occasionally makes basic mistakes, like translating the German “von der Kanzlerin persönlich” as “by the Chancellor himself” rather than “herself”. There are also some very minor mistakes in nuance, like translating “zu sinken” as “to sink” rather than “to descend”.

But I’ll certainly use this web site in the future rather than Google translate when translating one of the languages it supports. Thanks for the tip!

ESKC (Uppsala/Sundbro), Sweden

They also have an API, which you can connect to the MS Office suite.
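
Since the API is a plain REST interface, you can call it from almost anything. Here is a minimal sketch in Python using the `requests` library – the endpoint and parameter names follow DeepL’s documented v2 API, but the key is a placeholder and the `translate` helper is just my own wrapper, so check the current docs before relying on the details:

```
# Minimal sketch of a DeepL v2 API call via Python's `requests` library.
# The API key is a placeholder; free-tier accounts use the
# api-free.deepl.com host, Pro accounts use api.deepl.com.
import requests

API_URL = "https://api-free.deepl.com/v2/translate"
API_KEY = "your-deepl-api-key"  # placeholder - set your own key

def translate(text: str, target_lang: str = "EN-GB") -> str:
    """Send one string to DeepL and return the translated text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"DeepL-Auth-Key {API_KEY}"},
        data={"text": text, "target_lang": target_lang},
    )
    resp.raise_for_status()
    return resp.json()["translations"][0]["text"]

print(translate("von der Kanzlerin persönlich"))
```

From there it is a short step to wiring it into Office, e.g. a macro that sends the selected text to the same endpoint and pastes back the result.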

EGTF, LFTF

As a professional translator with 35 years’ experience, I am critical of all machine translation – Google, DeepL, whatever. It has all but supplanted cheap human translation, especially when you only need the gist of the source text, but it is prone to making dangerous mistakes from time to time. Worse yet, these mistakes are often counterintuitive from the human point of view, and the better machine translation becomes on average, the more it lulls the reader into trusting it and dropping their guard against possible mistranslations. It’s fine if you use it for the daily news (and in fact, such translations usually turn out very good indeed), but if you are translating something like clinical trial documentation, it becomes risky.

The problem with machine translation is that it cannot understand the source text; it can only fake this understanding. Like a mediocre student, it would solve standard problems easily but fail on unusual ones. Also, its quality is inherently limited by the quality of the samples it is trained on, and most parallel texts available for training are far from perfect.

From time to time, I am asked to edit machine translation output allegedly generated by dedicated translation engines trained on the specific subject. I take a look… and refuse, because it is indeed fairly easy to make such texts look nice, but fixing them properly takes more time than translating the source text from scratch.

One exception in favour of machine translation is highly standardised texts – for example, many answers to a questionnaire, or a set of manuals for MS Windows. However, even for such texts, you need a huge training base – my educated guess is on the order of 1 million words of high-quality translation, which is quite expensive, so it only makes sense for huge projects.

Last Edited by Ultranomad at 27 Feb 19:55
LKBU (near Prague), Czech Republic

Ultranomad wrote:

The problem with machine translation is that it cannot understand the source text; it can only fake this understanding.

When work on machine translation began in the 1960s and 70s, the idea was indeed that the machine should – in some sense – understand the source text. This was considered necessary for correct translations. It turned out to be too difficult, and eventually statistical methods with no understanding took over.

ESKC (Uppsala/Sundbro), Sweden

In the 1960s, in the pioneering days of “AI” – Winograd and so on – lots of things were “just around the corner”… but never arrived, and probably most of them are still nowhere near being solved. The stuff that involves an “understanding of the world” seems to be more or less totally elusive. For a practical example, nobody has any idea how to make a totally self-driving car, and they could not do it even with all the computing power in the world.

There used to be another professional translator on EuroGA and she said that any good translator will only ever translate into their mother tongue.

In the early 70s, when I was c. 15, i.e. 3 years after arriving in England, I got a part-time home job translating some HP plotter service manuals into Czech. They were component-level descriptions of the hardware and really intricate. Even though I could see exactly what the circuit did, and knew the Czech words for all the bits, it would take me hours just to do one page. Having to type on a manual typewriter didn’t help. Accurate technical translation can be really hard work, and I am sure a machine translator of today would make a complete hash of it.

Google Translate has been good for reading foreign-language websites, and its integration into Chrome is great – when it works. Presumably DeepL are not able to integrate theirs in the same way, otherwise they would have done it.

Administrator
Shoreham EGKA, United Kingdom

Airborne_Again wrote:

When work on machine translation began in the 1960s and 70s, the idea was indeed that the machine should – in some sense – understand the source text. This was considered necessary for correct translations. It turned out to be too difficult, and eventually statistical methods with no understanding took over.

Yes, true, but with one caveat: at the time, the statistical methods did not work either, even with all the money that was thrown at automatic translation in the Cold War era – it just did not go like Saturn V to the Moon. The analytics are the same today (Deep Learning = Neural Network = Multi-Layer Linear Regression); what has changed is the amount of data and compute. By scanning 6 billion translated articles on Wikipedia, the machine can translate about as well as an average human does (and you can restrict this to specific topics, as long as there is enough data left).
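
To make the “multi-layer” remark concrete, here is a toy sketch in Python/NumPy – my illustration, not DeepL’s actual model. Each layer is just a weighted sum of its inputs (a regression), with a non-linearity in between; without that non-linearity, the whole stack would collapse back into a single linear regression:

```
# Toy sketch of "stacked regression layers" - an illustration only,
# nothing like the scale or structure of a real translation model.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: a weighted sum (regression) plus a non-linearity."""
    return np.tanh(x @ w + b)

# Random weights stand in for parameters learned from parallel text.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(1, 4))   # stand-in for an encoded source sentence
h = layer(x, w1, b1)          # hidden representation
y = h @ w2 + b2               # output scores (e.g. over candidate words)
print(y)
```

What changed between the Cold War era and now is not this arithmetic but the billions of sentence pairs, and the hardware, used to fit the weights.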

Now the funny bit: the machine can understand how teenagers speak online better than their grandmother can (you can blame this on 40 billion comments and 6 billion pieces of tagged content and pictures per day).

The same story happened in cryptography: the best techniques don’t require an understanding of the text.
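
A toy illustration of that point (my example, not from the post): the classic frequency-analysis attack on a Caesar cipher recovers the text purely from letter statistics, without “understanding” a word of it:

```
# Breaking a Caesar cipher by letter frequency alone - no
# "understanding" of the text is needed, only statistics.
from collections import Counter
import string

def shift_text(text: str, shift: int) -> str:
    """Shift each letter back by `shift` positions (decryption)."""
    out = []
    for ch in text.lower():
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord("a") - shift) % 26 + ord("a")))
        else:
            out.append(ch)
    return "".join(out)

def crack(ciphertext: str) -> str:
    """Guess the shift by assuming the most common letter is 'e'."""
    most_common = Counter(
        c for c in ciphertext.lower() if c.isalpha()
    ).most_common(1)[0][0]
    shift = (ord(most_common) - ord("e")) % 26
    return shift_text(ciphertext, shift)

print(crack("phhw ph qhdu wkh juhhq wuhh"))  # -> "meet me near the green tree"
```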

Paris/Essex, France/UK, United Kingdom