From: Ivan V. <iv...@gm...> - 2014-11-13 13:37:16
Thank you very much Nick, but we must work with Transcriber 1.5.1, no choice. Would it be possible for a developer to add the same feature to Transcriber 1.5.1? Thank you.

On Thu, Nov 13, 2014 at 3:33 PM, Nick Thieberger <th...@un...> wrote:

> Elan has a silence recogniser and can also be linked to WebMAUS to do
> forced alignment of text and audio.
>
> http://tla.mpi.nl/tools/tla-tools/elan/download/
>
> On 13 November 2014 14:26, Ivan Vanney <iv...@gm...> wrote:
>
>> Hi Claude, thank you very much for your answer. I understand, I don't
>> expect automatic audio processing, but automatic help. For example, if a
>> transcriber cuts a segment while there is voice, instead of during a
>> silence period, cutting off part of the voice, would it not be possible
>> for the program to move the segment boundary a few millimeters
>> automatically, leaving the separation in a silence period rather than
>> while speakers are talking?
>> I don't mean recognizing the words, just helping to make the
>> transcription better by detecting sound and silence, or two different
>> channels of voice. Are you one of the Transcriber 1.5.1 developers?
>> Thank you again, regards.
>>
>> On Fri, Nov 7, 2014 at 11:08 AM, Claude Barras <Cla...@li...>
>> wrote:
>>
>>> On 06/11/2014 17:25, Ivan Vanney wrote:
>>>
>>>> Hi, I'm new to this list. I work for a company which uses
>>>> Transcriber 1.5.1 for transcription. We use specific rules; for
>>>> example, we don't transcribe overlapping speech but apply a [tag]
>>>> instead. But 100% accuracy is impossible with human transcribers,
>>>> and sometimes a transcriber doesn't apply the tag and transcribes
>>>> instead.
>>>> Is there a way the program could automatically recognize when there
>>>> are two noises or voices? We would hire a developer to carry this
>>>> out.
>>>> Thank you, kindest regards.
>>>>
>>> Hi,
>>>
>>> Transcriber wasn't designed to perform automatic audio processing, but
>>> rather to help human annotation (in order to develop automatic analysis
>>> systems using statistical machine learning). However, nothing prevents
>>> you from running an automatic pre-processing step and feeding its
>>> output in as the initial annotation.
>>>
>>> Concerning overlapping speech or noises, it is an interesting but quite
>>> difficult problem, especially if you can't rely on multiple microphones
>>> for beamforming and source separation. It is far from a solved research
>>> problem, even if some solutions do exist - look e.g. at
>>> http://bass-db.gforge.inria.fr/fasst/ . I doubt that currently an
>>> automatic detection would perform better than a human one, but it may
>>> help focusing on some sections: I am interested if you ever have
>>> feedback on that!
>>>
>>> Best regards,
>>> Claude Barras
>>
>> _______________________________________________
>> Trans-devel mailing list
>> Tra...@li...
>> https://lists.sourceforge.net/lists/listinfo/trans-devel
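[Editor's note] The boundary-snapping idea Ivan describes (move a segment cut out of a voiced region into the nearest silence) can be sketched with plain energy-based silence detection. This is a minimal illustration, not code from Transcriber or ELAN; all function names (`frame_energies`, `find_silences`, `snap_boundary`) and the fixed threshold are hypothetical, and a real pre-processor would add smoothing and an adaptive threshold before feeding results back in as initial annotation.

```python
def frame_energies(samples, frame_len):
    """Mean squared energy per non-overlapping frame of the signal."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def find_silences(energies, threshold):
    """Indices of frames whose energy falls below the silence threshold."""
    return [i for i, e in enumerate(energies) if e < threshold]

def snap_boundary(boundary_frame, silent_frames):
    """Move a proposed segment boundary to the nearest silent frame.

    If no silence was detected, leave the boundary where the
    transcriber put it.
    """
    if not silent_frames:
        return boundary_frame
    return min(silent_frames, key=lambda f: abs(f - boundary_frame))

# Toy usage: loud speech, a short silence, then loud speech again.
samples = [1.0, -1.0] * 4 + [0.0] * 4 + [1.0, -1.0] * 4
energies = frame_energies(samples, frame_len=4)
silent = find_silences(energies, threshold=0.01)
# A boundary placed mid-speech at frame 4 snaps back to the silent frame 2.
print(snap_boundary(4, silent))
```

The same per-frame energy comparison, run on each channel of a two-channel recording, could flag frames where both channels are active at once as candidate overlap regions for a human to review, which matches Claude's suggestion of using automatic detection only to focus attention rather than to replace the transcriber.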