Filmgsindl invented EVE after experiencing first-hand the difficulties of live captioning at customer events.
The objective was not only to reduce the high costs of travel and external expenses for interpreters, stenographers and hardware, but also to find a better digital solution, since human-produced live captions often show limitations in quality.
Thus, EVE not only helps organizations and companies such as Microsoft meet accessibility standards and lower costs; it also serves as an additional medium. The digital service captures every spoken word and shares a transcript (as a PDF) directly after the speech for further use, such as articles, event film subtitles and SEO.
That content makes events, speeches and video libraries completely searchable and can improve reputation and image as part of the digital footprint. Nowadays everybody posts every thought on Twitter and shares pictures on Instagram; now it is time to digitalize the spoken word as well.
To guarantee the quality of the text output, one or more online correctors can be used. These editors can improve the quality even further, as the text can be corrected live, from anywhere. EVE learns constantly through machine learning: the underlying language model is continuously optimized, and its results improve accordingly. EVE also memorizes corrections, and individual dictionaries can be uploaded to teach EVE specialized vocabulary.
Thomas Papadhimas: “It is 2019, and thus long overdue to offer a digital service that automatically generates live captions for videos, events, lectures and more. The service is easy to use with common platforms and devices, independent of the operating system and cost-efficient. Globally, many people rely on captions, but subtitles are rare. EVE will change that and make the world a better place, as inclusion is not negotiable.”
So far EVE works in English and German, but live machine translation into other languages is already available in a beta version. An upcoming feature on the roadmap will further improve recognition based on the human corrector's input as well. More details will be shared soon.