From: Siddharth K. <sid...@gm...> - 2011-07-09 14:09:02
Hi everyone,

An update on how far I have got:

* I have made a separate branch 'bbn' in tuxmath's git repo, where I am committing all the changes.

* As for progress, I have a Bayesian network library that supports creating networks, setting initial probabilities, and running inference on them. It uses Pearl's message-passing algorithm <http://en.wikipedia.org/wiki/Belief_propagation> for inference. Initially, I wanted to use an existing library with a compatible license to handle the inference part, but I had no luck finding C implementations, so I decided to write one myself.

* My current implementation supports only tree topologies, which is good enough as long as I don't include global nodes in a topic cluster. Refer to this figure <http://gscbbn.files.wordpress.com/2011/05/topic-cluster.png> for a topic cluster's structure. So, while I work on implementing inference for singly-connected networks, I can start testing the BBN in the game.

There are also a couple of things I need help and suggestions with:

* I have added the new source files in src/bayesian. How do I include them in the build? I have not worked with GNU autotools or cmake before.

* Earlier in this thread, we discussed modelling an acceptable level of failure/challenge for a user, and the proposed solution was along the lines of the system recommending lesson options to the user based on difficulty. On the back-end, I take this into account by making a global node for "challenge level" [Figure <http://gscbbn.files.wordpress.com/2011/06/global_nodes.png>]. On the interface side, this would require changes; I have some initial thoughts on that and will write them up in a separate mail.

As the mid-term evaluations are only a week away, my priority will be to add documentation and error checks to the existing code, and to update the blog.
Thanks,
Siddharth

On Wed, Jun 1, 2011 at 3:13 AM, Tim Holy <ho...@wu...> wrote:
> Hi Siddarth,
>
> I'd also add that perhaps it might be best to confine this to "training
> academy." The intent behind the arcade games is that everyone is on equal
> footing, so getting the high score for your class represents an "absolute"
> achievement rather than "you did very well given what we expected of you"
> (i.e., what the Bayesian algorithm decided to set as problems). But for the
> training academy, the latter is indeed probably more appropriate.
>
> Best,
> --Tim
>
> On Tuesday, May 31, 2011 11:55:44 am David Bruce wrote:
> > Hi Siddarth,
> >
> > > Also, I would like it if the question generation happened for every
> > > wave instead of happening once in a game.
> >
> > The question list is generated at the beginning of the game. The
> > waves affect things like the background image and the speed of the
> > comets, but not the questions. The question-generation code is in
> > mathcards.[ch], which doesn't know anything about waves.
> >
> > Depending on a few config file variables (see e.g. ~/.tuxmath/options
> > or any of the files under data/missions/), answered or missed
> > questions are handled differently. (I apologize for some of the
> > variable names, which now seem rather confusing.)
> >
> > PLAY_THROUGH_LIST (boolean) - if set to 1 ("true"), correctly answered
> > questions are removed from the list, so that the game ends in victory
> > when there are no remaining questions. If set to 0 ("false"), the
> > questions are reinserted at a random position in the list and the game
> > goes on indefinitely ("arcade mode").
> >
> > REPEAT_WRONGS (boolean) - controls the equivalent behavior for
> > questions the player misses (i.e. the ones that hit the igloos).
> > However, for this one, 1 means the questions are reinserted, and 0
> > means they are discarded, the opposite of the behavior for
> > PLAY_THROUGH_LIST. This is set to 1 for all of the bundled lessons.
> >
> > COPIES_REPEATED_WRONGS (int) - controls how many copies of the missed
> > question get put back in. For example, if this is set to 2 and a
> > player misses "3 + 3 = ?", he/she will have to answer it correctly
> > twice after that to "win" the game. This provides a minimal form
> > of adaptive learning, but it could be extended to include questions
> > closely related to the one that was missed.
> >
> > > The reason is to account for the learner's
> > > performance in past waves when generating questions for the present
> > > wave. This wouldn't affect a Math command training lesson, but on tux
> > > missions and for arcade games, this can make a difference in the
> > > question selection.
> >
> > All very valid goals! It seems clear that our current scheme isn't
> > sufficiently flexible to do everything needed for your project. If
> > you are going to have "intra-game/inter-wave" feedback, we need to add
> > some functions to mathcards to support generating and inserting
> > additional questions during the game.
> >
> > Best,