
Raku DSL::Bulgarian

In brief

This Raku package facilitates the specification of computational workflows using
natural language commands in Bulgarian.

Using Domain Specific Languages (DSLs), executable code is generated for different
programming languages: Julia, Python, R, Raku, Wolfram Language.

Translation to other natural languages is also supported: English, Korean, Russian, Spanish.


Data query (wrangling) workflows

use DSL::English::DataQueryWorkflows;

my $command = '
зареди данните iris;
вземи елементите от 1 до 120;
групирай с колоната Species;
покажи размерите
';
for <English Python::pandas Raku::Reshapers Russian> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToDataQueryWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
# ============================================================
# English
# ------------------------------------------------------------
# load the data table: "iris"
# take elements from 1 to 120
# group by the columns: Species
# show the count(s)
# ============================================================
# Python::pandas
# ------------------------------------------------------------
# obj = example_dataset('iris')
# obj.iloc[1-1:120]
# obj = obj.groupby(["Species"])
# print(obj.size())
# ============================================================
# Raku::Reshapers
# ------------------------------------------------------------
# my $obj = example-dataset('iris') ;
# $obj = $obj[ (1 - 1) ... (120 - 1 ) ] ;
# $obj = group-by( $obj, "Species") ;
# say "counts: ", $obj>>.elems
# ============================================================
# Russian
# ------------------------------------------------------------
# загрузить таблицу: "iris"
# взять элементы с 1 по 120
# групировать с колонками: Species
# показать число

use DSL::English::RecommenderWorkflows;

my $command = '
създай чрез dfTitanic;
препоръчай със профила "male" и "died";
покажи текущата лентова стойност
';

for <English Python::SMRMon R::SMRMon Russian> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToRecommenderWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
# ============================================================
# English
# ------------------------------------------------------------
# create with data table: dfTitanic
# recommend with the profile: ["male", "died"]
# echo the pipeline value
# ============================================================
# Python::SMRMon
# ------------------------------------------------------------
# obj = SparseMatrixRecommender().create_from_wide_form( data = dfTitanic).recommend_by_profile( profile = ["male", "died"]).echo_value()
# ============================================================
# R::SMRMon
# ------------------------------------------------------------
# SMRMonCreate( data = dfTitanic) %>%
# SMRMonRecommendByProfile( profile = c("male", "died")) %>%
# SMRMonEchoValue()
# ============================================================
# Russian
# ------------------------------------------------------------
# создать с таблицу: dfTitanic
# рекомендуй с профилю: ["male", "died"]
# показать текущее значение ленту

Latent Semantic Analysis workflows

use DSL::English::LatentSemanticAnalysisWorkflows;

my $command = '
създай със textHamlet;
направи документ-термин матрица със автоматични стоп думи;
приложи LSI функциите IDF, TermFrequency, и Cosine;
извади 12 теми чрез NNMF и максимален брой стъпки 12;
покажи таблица  на темите с 12 термина;
покажи текущата лентова стойност
';

for <English Python::LSAMon R::LSAMon Russian> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToLatentSemanticAnalysisWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
# ============================================================
# English
# ------------------------------------------------------------
# create LSA object with the data: textHamlet
# make the document-term matrix with the parameters: use the stop words: NULL
# apply the latent semantic analysis (LSI) functions: global weight function : "IDF", local weight function : "None", normalizer function : "Cosine"
# extract 12 topics using the parameters: method : Non-Negative Matrix Factorization (NNMF), max number of steps : 12
# show topics table using the parameters: numberOfTerms = 12)
# show the pipeline value
# ============================================================
# Python::LSAMon
# ------------------------------------------------------------
# (LatentSemanticAnalyzer(textHamlet)
#    .make_document_term_matrix( stop_words = None)
#    .apply_term_weight_functions(global_weight_func = "IDF", local_weight_func = "None", normalizer_func = "Cosine")
#    .extract_topics(number_of_topics = 12, method = "NNMF", max_steps = 12)
#    .echo_topics_table(numberOfTerms = 12)
#    .echo_value())
# ============================================================
# R::LSAMon
# ------------------------------------------------------------
# LSAMonUnit(textHamlet) %>%
# LSAMonMakeDocumentTermMatrix( stopWords = NULL) %>%
# LSAMonApplyTermWeightFunctions(globalWeightFunction = "IDF", localWeightFunction = "None", normalizerFunction = "Cosine") %>%
# LSAMonExtractTopics( numberOfTopics = 12, method = "NNMF",  maxSteps = 12) %>%
# LSAMonEchoTopicsTable(numberOfTerms = 12) %>%
# LSAMonEchoValue()
# ============================================================
# Russian
# ------------------------------------------------------------
# создать латентный семантический анализатор с данных: textHamlet
# сделать матрицу документов-терминов с параметрами: стоп-слова: null
# применять функции латентного семантического индексирования (LSI): глобальная весовая функция: "IDF", локальная весовая функция: "None", нормализующая функция: "Cosine"
# извлечь 12 тем с параметрами: метод: Разложение Неотрицательных Матричных Факторов (NNMF), максимальное число шагов: 12
# показать таблицу темы по параметрам: numberOfTerms = 12
# показать текущее значение конвейера

Quantile Regression Workflows

use DSL::English::QuantileRegressionWorkflows;

my $command = '
създай с dfTemperatureData;
премахни липсващите стойности;
покажи данново обобщение;
премащабирай двете оси;
изчисли квантилна регресия с 20 възела и вероятности от 0.1 до 0.9 със стъпка 0.1;
покажи диаграма с дати;
покажи чертеж на абсолютните грешки;
покажи текущата лентова стойност
';

for <English R::QRMon Russian WL::QRMon> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToQuantileRegressionWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
# ============================================================
# English
# ------------------------------------------------------------
# create quantile regression object with the data: dfTemperatureData
# delete missing values
# show data summary
# rescale: over both regressor and value axes
# compute quantile regression with parameters: degrees of freedom (knots): 20, automatic probabilities
# show plot with parameters: use date axis
# show plot of relative errors
# show the pipeline value
# ============================================================
# R::QRMon
# ------------------------------------------------------------
# QRMonUnit( data = dfTemperatureData) %>%
# QRMonDeleteMissing() %>%
# QRMonEchoDataSummary() %>%
# QRMonRescale(regressorAxisQ = TRUE, valueAxisQ = TRUE) %>%
# QRMonQuantileRegression(df = 20, probabilities = seq(0.1, 0.9, 0.1)) %>%
# QRMonPlot( datePlotQ = TRUE) %>%
# QRMonErrorsPlot( relativeErrorsQ = TRUE) %>%
# QRMonEchoValue()
# ============================================================
# Russian
# ------------------------------------------------------------
# создать объект квантильной регрессии с данными: dfTemperatureData
# удалить пропущенные значения
# показать сводку данных
# перемасштабировать: по осям регрессии и значений
# рассчитать квантильную регрессию с параметрами: степени свободы (узлы): 20, автоматическими вероятностями
# показать диаграмму с параметрами: использованием оси дат
# показать диаграму на относительных ошибок
# показать текущее значение конвейера
# ============================================================
# WL::QRMon
# ------------------------------------------------------------
# QRMonUnit[dfTemperatureData] \[DoubleLongRightArrow]
# QRMonDeleteMissing[] \[DoubleLongRightArrow]
# QRMonEchoDataSummary[] \[DoubleLongRightArrow]
# QRMonRescale["Axes"->{True, True}] \[DoubleLongRightArrow]
# QRMonQuantileRegression["Knots" -> 20, "Probabilities" -> Range[0.1, 0.9, 0.1]] \[DoubleLongRightArrow]
# QRMonDateListPlot[] \[DoubleLongRightArrow]
# QRMonErrorPlots[ "RelativeErrors" -> True] \[DoubleLongRightArrow]
# QRMonEchoValue[]

Classification workflows

use DSL::English::ClassificationWorkflows;

my $command = '
използвай dfTitanic;
раздели данните с цепещо съотношение 0.82;
направи gradient boosted trees класификатор;
';

for <English Russian WL::ClCon> -> $t {
    say '=' x 60, "\n", $t, "\n", '-' x 60;
    say ToClassificationWorkflowCode($command, $t, language => 'Bulgarian', format => 'code');
}
# ============================================================
# English
# ------------------------------------------------------------
# use the data: dfTitanic 
# split into training and testing data with the proportion 0.82 
# train classifier with method: gradient boosted trees
# ============================================================
# Russian
# ------------------------------------------------------------
# использовать данные: dfTitanic 
# разделить данные на пропорцию 0.82 
# обучить классификатор методом: gradient boosted trees
# ============================================================
# WL::ClCon
# ------------------------------------------------------------
# ClConUnit[ dfTitanic ] \[DoubleLongRightArrow]
# ClConSplitData[ 0.82 ] \[DoubleLongRightArrow]
# ClConMakeClassifier[  ]

Implementation notes

The rules in the file
"DataQueryPhrases.rakumod"
are derived from the file
"DataQueryPhrases-template"
using the package
"Grammar::TokenProcessing", [AAp3].

In order to have Bulgarian commands parsed and interpreted into code, the steps taken were
split into four phases:

  1. Utilities preparation
  2. Bulgarian words and phrases addition and preparation
  3. Preliminary functionality experiments
  4. Packages code refactoring

Utilities preparation

Since the beginning of the work on translation of the computational DSLs into programming code
it was clear that some of the required code transformations have to be automated.

While doing the preparation work -- and in general, while the DSL-translation work matured --
it became clear that there are several directives to follow:

  1. Make and use Command Line Interface (CLI) scripts that do code transformation or generation.

  2. Adhere to two of Eric Raymond's 17 Unix rules, [Wk1]:

      • Make data complicated when required, not the program.

      • Write abstract programs that generate code instead of writing code by hand.

In order to facilitate the "from Bulgarian" project the package "Grammar::TokenProcessing", [AAp3],
was "finalized." The initial versions of that package were used from the very beginning of the
DSLs grammar development in order to facilitate handling of misspellings.
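To illustrate the kind of misspelling handling involved, here is a minimal, hypothetical sketch in Raku (the grammar, token names, and variant lists are made up for illustration; this is not code from the actual packages). The idea is that a token accepts automatically generated misspelling variants as alternatives:

```raku
# Hypothetical sketch: a grammar whose word tokens accept a few
# generated misspelling variants as alternatives.
# All names and variant lists here are illustrative.
grammar LoadCommand {
    token load-word { 'зареди' | 'зареддй' | 'зереди' }   # generated variants
    token data-word { 'данните' | 'даните' }
    rule  TOP       { <load-word> <data-word> $<id>=[\w+] }
}

# A correctly spelled command parses:
my $m = LoadCommand.parse('зареди данните iris');
say ~$m<id>;                                          # iris

# A misspelled command still parses:
say so LoadCommand.parse('зереди даните iris');       # True
```

In practice the variant alternatives are not written by hand; they are generated over the grammar files by CLI scripts, in line with the "write programs that generate code" directive above.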

(Current) recipe

This sub-section lists the steps for endowing an already developed workflows DSL package
with Bulgarian translations.

Denote the DSL workflows we focus on as DOMAIN (workflows).
For example, DOMAIN can stand for DataQueryWorkflows or RecommenderWorkflows.

Remark: In the recipe steps below DOMAIN stands for
DataQueryWorkflows.

It is assumed that:

  • The DOMAIN workflows in English are already developed.

  • Since both English and Bulgarian are analytical, non-agglutinative languages, "just" replacing
    English words with Bulgarian words in DOMAIN would produce good enough parsers of Bulgarian.

Here are the steps:

  1. Add global Bulgarian words (optional)

  2. Add Bulgarian words and phrases in the
    DSL::Shared file
    "Roles/Bulgarian/CommonSpeechParts-template".

  3. Generate the file
    Roles/Bulgarian/CommonSpeechParts.rakumod
    using the CLI script
    AddFuzzyMatching

  4. Consider translating, changing, or refactoring global files, like,
    Roles/English/TimeIntervalSpec.rakumod

  5. Translate DOMAIN English words and phrases into Bulgarian

  6. Take the file
    DOMAIN/Grammar/DOMAIN-template
    and translate its words into Bulgarian

  7. Add the corresponding files into DSL::Bulgarian, [AAp1].

  8. Use the DOMAIN/Grammarish.rakumod role.

    • The English DOMAIN package should have such a role. If it does not, do the corresponding code refactoring.
  9. Test with implemented DOMAIN languages.

  10. See the example grammar and role in
    DataQueryWorkflows in DSL::Bulgarian.
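
The role-based reuse in steps 6-8 can be sketched as follows. This is a hypothetical, minimal illustration of the pattern only; the names (Grammarish, load-word, etc.) are made up and the actual package roles are far richer:

```raku
# Hypothetical sketch of the "Grammarish" pattern: the language-independent
# sentence structure lives in a role; each natural language grammar supplies
# only its own word tokens. Names here are illustrative, not the package's.
role Grammarish {
    rule TOP { <load-word> <data-word> $<id>=[\w+] }
}

grammar EnglishCommand does Grammarish {
    token load-word { 'load' }
    token data-word { 'the'? \h* 'data' }
}

grammar BulgarianCommand does Grammarish {
    token load-word { 'зареди' }
    token data-word { 'данните' }
}

say so EnglishCommand.parse('load the data iris');     # True
say so BulgarianCommand.parse('зареди данните iris');  # True
```

With this layout, adding a new natural language means supplying a new set of word tokens, while the workflow structure in the role is written once.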


References

Articles

[AA1] Anton Antonov,
"Introduction to data wrangling with Raku",
(2021),
RakuForPrediction at WordPress.

[Wk1] Wikipedia entry,
UNIX-philosophy rules.

Packages

[AAp1] Anton Antonov,
DSL::Bulgarian, Raku package,
(2022),
GitHub/antononcube.

[AAp2] Anton Antonov,
DSL::Shared, Raku package,
(2018-2022),
GitHub/antononcube.

[AAp3] Anton Antonov,
Grammar::TokenProcessing, Raku project,
(2022),
GitHub/antononcube.

[AAp5] Anton Antonov,
DSL::English::DataQueryWorkflows, Raku package,
(2020-2022),
GitHub/antononcube.
