A version of the Tashkeela Arabic diacritized text dataset, cleaned of non-Arabic content and undiacritized text, then split into training, development, and testing sets.
The cleaning process removes XML tags and stray symbols and fixes diacritization errors. The text is then tokenized, with a focus on extracting the Arabic words. The result is a space-separated token file in which words and numbers each appear as individual tokens.
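As a rough illustration of the pipeline described above, the following Python sketch strips XML tags, extracts Arabic word and number tokens, and filters for tokens carrying diacritics. The Unicode ranges and the filtering criterion are assumptions for illustration, not the dataset's exact cleaning rules.

```python
import re

# Arabic diacritic marks: fathatan..sukun (U+064B-U+0652) -- assumed range
DIACRITICS = re.compile(r"[\u064B-\u0652]")

def clean_and_tokenize(text: str) -> list[str]:
    """Strip XML tags, then extract Arabic-script tokens and numbers."""
    text = re.sub(r"<[^>]+>", " ", text)  # remove XML tags
    # Arabic letters (U+0621-U+064A) with optional diacritics, or digit runs
    return re.findall(r"[\u0621-\u064A\u064B-\u0652]+|\d+", text)

def is_diacritized(token: str) -> bool:
    """True if the token carries at least one diacritic mark."""
    return DIACRITICS.search(token) is not None
```

For example, `clean_and_tokenize("<p>كِتَابٌ 12</p>")` yields the two tokens `["كِتَابٌ", "12"]`, and `is_diacritized` can then be used to drop undiacritized words.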