#120 Workload suggestions


I have a very large Pauker file to which I add English words I did not know before (currently about 3500 cards). The plan is to use and extend this file indefinitely. There is one issue: when opening the file after a longer period of inactivity, there are sometimes too many cards that need rehearsal, because cards in multiple trays have nearly the same expiration date.

To solve this workload overload issue, it would be nice if Pauker suggested how many new cards to learn today in order to get an optimal workload spread in the future.

These are the values that Pauker needs to take into account:
* historical learning statistics of the file (e.g. how many cards the user learns each day, how many cards the user forgets,...)
* when will the cards in all trays expire?
* when will the cards of today expire?
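The inputs above could be modeled as follows. This is a minimal sketch; the names (`LearningStats`, `WorkloadInputs`, etc.) are hypothetical and do not correspond to Pauker's actual data structures:

```python
from dataclasses import dataclass, field

@dataclass
class LearningStats:
    """Historical learning statistics of the file (hypothetical model)."""
    new_cards_per_day: float                 # e.g. 50
    repeat_cost: float = 0.5                 # repeating a card costs half a new card
    forget_rate_per_tray: dict = field(default_factory=dict)  # tray number -> fraction forgotten

@dataclass
class WorkloadInputs:
    """Everything the suggester needs: statistics plus the expiration schedule."""
    stats: LearningStats
    expirations: dict                        # day -> number of cards expiring that day

# Example matching the scenario below: 50 cards/day, 20%/10% forget rates,
# and 50 + 30 = 80 cards expiring on Tuesday.
stats = LearningStats(new_cards_per_day=50, forget_rate_per_tray={1: 0.2, 2: 0.1})
inputs = WorkloadInputs(stats=stats, expirations={"Tuesday": 80})
```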

A small example:
* today is Monday
* user learns 50 cards/day
* user can repeat 2 cards as fast as she can learn 1 new card
* user forgets 20% of the cards in tray 1
* user forgets 10% of the cards in tray 2
* there currently are 50 cards in tray 1 which will all expire on Tuesday (tomorrow)
* there are cards in tray 2, of which 30 expire on Tuesday

==> suggestion on Monday: only learn 20 new cards today! (On Tuesday the user will already have to repeat 80 cards, which corresponds to the effort of learning 40 new cards, since the statistics show the user repeats twice as fast as she learns. 50 is the number of cards the user usually learns each day, so 50 - 40 = 10 card-learning units remain, i.e. 20 repetitions: exactly the room needed for the 20 cards learned today when they come up for repetition on Tuesday. If the user learns more cards than this suggestion, the statistic "user learns 50 cards/day" is updated to a new average.)
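The suggestion arithmetic can be sketched in a few lines. This assumes, as in the example, that cards learned today come up for repetition tomorrow; the function name is illustrative:

```python
def suggest_new_cards(daily_new_capacity, repeats_per_new, repeats_due_tomorrow):
    """Suggest how many new cards to learn today so that tomorrow's
    repetitions (existing due cards plus the cards learned today)
    still fit into the daily capacity.

    Capacity expressed in repetitions: daily_new_capacity * repeats_per_new.
    """
    capacity_in_repeats = daily_new_capacity * repeats_per_new
    free = capacity_in_repeats - repeats_due_tomorrow
    return max(0, free)

# The example above: 50 new cards/day, 2 repeats take as long as 1 new card,
# 50 + 30 = 80 cards due on Tuesday.
print(suggest_new_cards(50, 2, 80))  # 20
```

With 80 repeats already due, only 20 of the 100-repetition capacity remains, so learning more than 20 new cards today would overload Tuesday.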

Note: this basic example does not look beyond tomorrow. The 20 cards may still cause overload further in the future. The system also needs to take into account how many of these cards will advance to the next tray (80% in this example) and check whether the planned expiration date of these 16 cards causes workload issues later on.
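The tray-advancement step from the note could be sketched like this, using the forget rates from the statistics (the function name is hypothetical):

```python
def project_tray_advance(cards_due, forget_rate):
    """Split a batch of due cards into those that advance to the next tray
    (and will expire again later) and those that are forgotten
    (and fall back, becoming due again soon)."""
    advanced = round(cards_due * (1 - forget_rate))
    forgotten = cards_due - advanced
    return advanced, forgotten

# Example from the note: 20 newly learned cards, 20% forgotten in tray 1
# -> 16 advance to tray 2, 4 fall back.
print(project_tray_advance(20, 0.2))  # (16, 4)
```

Chaining this projection over all trays and dates would let the suggester check whether today's suggested batch causes an overload on any future day, not just tomorrow.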

