g2p-seq2seq error when building model from a dictionary

  • Bryan Adaare - 2018-02-03

    I get the following error when I try to train g2p-seq2seq using a dictionary I wrote for a language. However, it works fine when I use a French or English dictionary.

    g2p-seq2seq --train '/home/yinx/Desktop/Thoth/Dict stuff/twiDict' --model '/home/yinx/Desktop/Thoth/Dict stuff/model'
    Preparing G2P data
    Loading vocabularies from /home/yinx/Desktop/Thoth/Dict stuff/model
    Reading development and training data.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
    Creating model with parameters:
    Learning rate: 0.5
    LR decay factor: 0.99
    Max gradient norm: 5.0
    Batch size: 64
    Size of layer: 64
    Number of layers: 2
    Steps per checkpoint: 200
    Max steps: 0
    Optimizer: sgd

    Created model with fresh parameters.
    global step 200 learning rate 0.5000 step-time 0.26 perplexity 4.92
    eval: perplexity 2.51
    global step 400 learning rate 0.5000 step-time 0.21 perplexity 1.81
    eval: perplexity 2.78
    No improvement over last 1 times. Training will stop after -1iterations if no improvement was seen.
    Training done.
    Traceback (most recent call last):
    File "/usr/local/bin/g2p-seq2seq", line 11, in <module>
    load_entry_point('g2p-seq2seq==5.0.0a0', 'console_scripts', 'g2p-seq2seq')()
    File "build/bdist.linux-x86_64/egg/g2p_seq2seq/app.py", line 80, in main
    File "build/bdist.linux-x86_64/egg/g2p_seq2seq/g2p.py", line 278, in train
    File "build/bdist.linux-x86_64/egg/g2p_seq2seq/g2p.py", line 72, in load_decode_model
    RuntimeError: Model not found in /home/yinx/Desktop/Thoth/Dict stuff/model
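
    For context, the traceback appears to come from g2p-seq2seq reloading the model for decoding right after training and not finding a saved checkpoint in the --model directory. A quick way to check whether a checkpoint was actually written (a minimal sketch, assuming TensorFlow 1.x's standard checkpoint layout; the path is the one from the command above):

    import tensorflow as tf

    # Path passed via --model in the command above.
    model_dir = '/home/yinx/Desktop/Thoth/Dict stuff/model'

    # load_decode_model raises "Model not found" when nothing was saved
    # here; latest_checkpoint returns None in that case.
    ckpt = tf.train.latest_checkpoint(model_dir)
    if ckpt is None:
        print('No checkpoint in %s' % model_dir)
    else:
        print('Found checkpoint: %s' % ckpt)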

  • Bryan Adaare - 2018-02-03

    Fixed the issue: the dictionary was too small for that particular language.
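
    For anyone hitting the same thing: g2p-seq2seq expects a CMUdict-style dictionary, one entry per line with the word followed by its space-separated phonemes. With too few entries, training stops improving after a couple hundred steps (as in the log above) and the decode step then fails to find a model. A minimal sanity check for the dictionary file (the size threshold below is only an illustrative guess, not a documented limit):

    # Sanity-check a pronunciation dictionary before training.
    # Expected format (CMUdict style), one entry per line, e.g.:
    #   HELLO HH AH L OW
    dict_path = '/home/yinx/Desktop/Thoth/Dict stuff/twiDict'

    with open(dict_path) as f:
        entries = [line.split() for line in f if line.strip()]

    malformed = [e for e in entries if len(e) < 2]  # word with no phonemes
    print('%d entries, %d malformed' % (len(entries), len(malformed)))

    # Illustrative threshold only, not a documented limit.
    if len(entries) < 1000:
        print('Warning: the dictionary may be too small to train on')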

     
  • roya - 2018-06-06

    Are you working in Ubuntu?
    Does g2p_seq2seq work on Windows too?
