User Activity

  • Modified a comment on a wiki page on JoBimText

    Hi Amrit, sorry for the late response. Which commands did you execute? And did you try to run the software using the VM, or do you have a Hadoop cluster? And which input data did you use? Best, Martin

  • Posted a comment on a wiki page on JoBimText

    Hi Amrit, sorry for the late response. Which commands did you execute? And did you try to run the software using the VM, or do you have a Hadoop cluster? Best, Martin

  • Posted a comment on a wiki page on JoBimText

    Dear Amrit, for the computation you will need a Hadoop cluster. I would also advise using the more recent documentation from the KONVENS tutorial: https://sites.google.com/site/konvens2016jobimtexttutorial/ For Hadoop computations you do not need the virtual machine as described (that is just for testing), only the Hadoop cluster, and you might also use the recent jobimtext version: https://sourceforge.net/projects/jobimtext/files/jobimtext_pipeline_0.1.2.tar.gz/download Best,...

  • Posted a comment on a wiki page on JoBimText

    Dear Amrit, I assume you are getting this error because the dt-file is compressed. You need to decompress the wikipedia_stanford*.gz file (gunzip wikipedia...) and then start the command again. This will generate the different senses for each word. As for the purpose of "generating clustered file" for a normal set of sentences: if you want to compute the senses for a document collection, you have to compute a DT and then use this DT for the sense computation with Chinese Whispers. Best, Martin
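    The decompression step above can be sketched as follows; the file name and contents here are stand-ins for demonstration, not the real wikipedia_stanford DT data:

    ```shell
    # Illustrative only: a throwaway stand-in for the wikipedia_stanford*.gz DT file.
    printf 'mouse#NN\trat#NN\t42\n' > dt_demo.txt
    gzip -f dt_demo.txt            # simulates the compressed download (dt_demo.txt.gz)

    # The actual fix: decompress before running the sense computation.
    gunzip dt_demo.txt.gz
    head dt_demo.txt               # plain text again, ready for the clustering step
    ```

    Against the real data, this would be `gunzip wikipedia_stanford*.gz`, after which the sense-generation command can be re-run unchanged.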

  • Posted a comment on a wiki page on JoBimText

    That's great news! For the pattern file I would use one of the English General Domain ones, e.g.: http://panchenko.me/data/joint/taxi/res/resources/en_pm.csv.gz Of course it will not contain ALL types of patterns, but I guess it contains enough to give generally good coverage. Please also check that the format is correct (see the post above). Best, Martin

  • Posted a comment on a wiki page on JoBimText

    Hi Amrit, more recent documentation can be found in the slide decks of our tutorial: https://sites.google.com/site/jobimtexttutorial/resources There is a full example of all steps (with some Hadoop VM). You can execute most commands if you have a Hadoop cluster with the most recent source code on SourceForge. Regarding your issues: there seems to be a problem with your patterns.txt and senses.txt files. Check the following: senses.txt: the information is separated by tab...

  • Posted a comment on a wiki page on JoBimText

    Hi Amrit, the problem in the command is the asterisk (*) without quotes. Running the command as follows should work: sh holing_operation.sh ../splitted/ "*" output.txt extractor_relation.xml MaltParser Best, Martin
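    A quick illustration of the quoting issue, using a throwaway directory (the file names are made up for the demo): unquoted, the shell expands the glob before the script runs, so holing_operation.sh never sees the literal "*".

    ```shell
    # Set up a scratch directory with two files.
    mkdir -p glob_demo && cd glob_demo
    touch a.txt b.txt

    echo *        # expanded by the shell: prints "a.txt b.txt"
    echo "*"      # quoted: the literal "*" is passed through to the command
    ```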

  • Modified a wiki page on TopicTiling

    Home


Personal Data

Username:
riedlma
Joined:
2013-02-22 08:44:34

Projects

This is a list of open source software projects that Martin Riedl is associated with:

  • CoocViewer: Viewer for co-occurrences and positional co-occurrences
  • JoBimText: Linking Language to Knowledge with Distributional Semantics
  • LexSub: A Lexical Substitution Framework
  • TopicTiling: A text segmentation algorithm using LDA
