

GrammarScope SyntaxNet

What it does

Parses sentences and analyses their syntactic structure in the form of labelled dependencies. Based on the SyntaxNet machine-learning framework.

Key words

SyntaxNet, TensorFlow, dependency parser, ML, machine learning, NLU, natural language understanding, AI, neural network, Universal Dependencies.

How to load your own models

Scripts are provided here that save, freeze, format, and pack the data from the SyntaxNet-provided ParseySaurus models.

  • tools.tar.xz (the scripts)
  • syntaxnet_with_tensorflow-0.2-cp27-cp27mu-linux_x86_64.whl (the tensorflow+syntaxnet runtime)

WARNING: the standard pip package syntaxnet-with-tensorflow is an older version and will not work.

What the scripts do:

  • strip unnecessary training data
  • rename variables
  • transform the model into a standard TensorFlow saved model
  • freeze this model
  • tweak it to add entry points pointing to the assets directory
  • package it

Layout

  • prepare.sh
  • reset_models.sh
  • make_models.sh
  • dist_make.sh
  • support python files
  • reference-models (copy the training models here)
    • Chinese
      • checkpoint
      • parser_spec.textproto
      • tables + maps
      • segmenter
        • checkpoint.meta
        • checkpoint.index
        • checkpoint.data-00000-of-00001
        • spec.textproto
        • tables + maps
    • English-LinES
    • English-ParTUT
    • ...
  • sentences (copy your samples here, one sentence per line, one file per language)
    • Chinese
    • English
    • French
    • ...
  • models (created by reset_models)
    • Chinese
      • checkpoint.meta
      • checkpoint.index
      • checkpoint.data-00000-of-00001
      • parser_spec.textproto
      • tables + maps
      • segmenter
        • checkpoint.meta
        • checkpoint.index
        • checkpoint.data-00000-of-00001
        • spec.textproto
        • tables + maps
    • English-LinES
    • English-ParTUT
    • ...
  • out-models (created by make_models)
    • Chinese (created, the unzipped content)
      • export_conll2017
      • frozen
      • tweaked
      • zipped
    • English-LinES
    • ...
  • download (created)
    • content (created as a TOC of downloadable files)
    • Chinese.zip (created, packed model)
    • Chinese.zip.md5 (created, checksum of previous)
    • English-LinES.zip
    • English-LinES.zip.md5
    • ...

The steps are as follows.

  1. Prepare. This installs (a) the system packages needed by TensorFlow and (b) SyntaxNet with a matching TensorFlow version, as user packages in ~/.local/lib/python2.7/site-packages:

    ./prepare.sh

  2. Reset. Copy your models into reference-models; then, to make a working copy in the models directory, run

    ./reset_models.sh

  3. Make. Make and package: this renames the variables, strips the training data, freezes, and tweaks the model:

    ./make_models.sh

  4. Distrib. Make a downloadable distribution:

    ./dist_make.sh

Zipped pack

The packed model file should contain:

  • parser
    • graph.pb (actual frozen model)
  • segmenter
    • graph.pb (actual frozen model)
  • assets.extra (map and table assets)
    • resources
      • component_0_char_lstm
        • resource_0_word-map
          • part_0
        • resource_1_tag-map
          • part_0
        • resource_2_tag-to-category
          • part_0
        • resource_3_lcword-map
          • part_0
        • resource_4_category-map
          • part_0
        • resource_5_char-map
          • part_0
        • resource_6_char-ngram-map
          • part_0
        • resource_7_label-map
          • part_0
        • resource_8_prefix-table
          • part_0
        • resource_9_suffix-table
          • part_0
        • resource_10_known-word-map
          • part_0
      • component_0_lookahead
        • resource_0_word-map
          • part_0
        • resource_1_tag-map
          • part_0
        • resource_2_tag-to-category
          • part_0
        • resource_3_lcword-map
          • part_0
        • resource_4_category-map
          • part_0
        • resource_5_char-map
          • part_0
        • resource_6_char-ngram-map
          • part_0
        • resource_7_label-map
          • part_0
        • resource_8_prefix-table
          • part_0
        • resource_9_suffix-table
          • part_0
        • resource_10_known-word-map
          • part_0
  • samples (sample sentences)
  • model (metadata)
  • Urdu (language, name varies)
  • Urdu-udtb (language model, name varies)
  • md5sum.txt (md5 check sum of files)

How to download

Run a local web server in the download directory that contains the packages and the content file:

python3 -m http.server 1313

or use ./webserver.sh

In the app, set Settings | Download | Model source to http://somehost:1313, where somehost is the name of the host that stores the packages.

In the app menu, choose Model | Download.

The list of available files appears in a dialog box. Choose one and proceed to download.

When the download completes successfully, press Deploy.

Source: README.md, updated 2019-02-20