I hope you can't toss the axe across the pond (if you can, maybe Ivan can
redirect it to Canada or something).
First, I sympathize with the pain that all of these major changes have
caused you (and others). Let's address the two issues you mention.
The stimulus changes were, in my opinion, an absolute necessity. What we
had before worked more by sheer fluke than what we have now. The problems
with the stimuli now only apply to those areas that haven't been brought
up-to-date with the new stimuli changes. The LCD stimuli changes that you
first reported fell into that category as well as the 12CE51X issues
you're reporting now. I'll gladly help fix this problem.
The trace stuff is broken in places at the moment. When you single step,
the most recent trace info is just flat wrong. However, if you type
the trace command, you should see an accurate representation of the trace.
Without sounding too defensive, let me explain why I rewrote the Trace
processing algorithms. There were two fundamental issues with the old
trace mechanism. First, all trace types were statically defined. This made
it difficult for modules to trace their operations. Everything they traced
had to fall into the generic MODULE1 or MODULE2 trace types. In addition,
a requirement I have for gpsim is that it support multiple processors.
With statically declared trace objects, it's impossible to distinguish
which processor was responsible for a specific action.
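To make the distinction concrete, here is a minimal sketch of the difference between a fixed set of trace types and types registered at run time. The class and method names here are hypothetical illustrations, not gpsim's actual API; the point is only that each trace entry carries a reference back to the processor or module that registered its type.

```python
# Hypothetical sketch (not gpsim's real API): trace types allocated at
# run time, each tied to the processor or module that registered it, so
# a shared trace buffer can tell multiple processors apart.

class TraceType:
    """A trace type created on demand and tied to its owner."""
    def __init__(self, owner, name):
        self.owner = owner      # processor/module that registered this type
        self.name = name

class TraceLog:
    def __init__(self):
        self.types = []         # grows as processors and modules register
        self.buffer = []        # shared trace buffer

    def register(self, owner, name):
        t = TraceType(owner, name)
        self.types.append(t)
        return t

    def record(self, trace_type, data):
        self.buffer.append((trace_type, data))

log = TraceLog()
# Two processors register their own "register write" trace types:
wr_a = log.register("pic16f84", "register_write")
wr_b = log.register("pic18f452", "register_write")
log.record(wr_a, {"reg": 0x20, "from": 0x00})
log.record(wr_b, {"reg": 0x20, "from": 0xFF})

# Each buffer entry now identifies which processor produced it:
for t, data in log.buffer:
    print(t.owner, t.name, data)
```

With a static enum of trace types, both writes above would have been indistinguishable MODULE1-style entries; here the owner travels with the type.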
The other problem with the old trace design is really an inadequacy. I
found while examining trace history, that it would be useful to know a
register's 'from' and 'to' values. In other words, you work back through
the trace buffer and see that 0x42 was written to register 0x20. The
question often arises, "what was there before?" Well, tracing the value
of a register *before* it is written gives you this information. I suppose
you could trace the value *after* too (like the old trace design did), but
that's inefficient. So instead, the 'to' values are reconstructed by
extracting the state of the processor and working backwards through the
trace buffer.
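The reconstruction idea can be sketched in a few lines. This is my own illustrative pseudocode in Python, not gpsim's implementation: each trace entry records only the register's value *before* the write, and walking the buffer from newest to oldest, the value that *was* written is whatever the register held just afterwards, which is either the live processor state or the 'from' value of a later write that we have already undone.

```python
# Illustrative sketch (not gpsim's actual code) of reconstructing 'to'
# values from a trace that stores only 'from' (pre-write) values.

def reconstruct(trace, final_state):
    """trace: list of (reg, from_value) entries, oldest first.
    final_state: dict of reg -> current value in the processor.
    Returns (reg, from_value, to_value) entries, oldest first."""
    state = dict(final_state)        # work on a copy of the live state
    out = []
    for reg, frm in reversed(trace):  # newest entry first
        to = state[reg]              # value the register held after this write
        state[reg] = frm             # undo the write for earlier entries
        out.append((reg, frm, to))
    out.reverse()                    # restore oldest-first order
    return out

trace = [(0x20, 0x00), (0x21, 0x55), (0x20, 0x42)]
state = {0x20: 0x99, 0x21: 0xAA}
print(reconstruct(trace, state))
# [(32, 0, 66), (33, 85, 170), (32, 66, 153)]
```

So the buffer stays half the size it would be if both values were traced, at the cost of one backward pass when the history is displayed.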
While I admit the current implementation is buggy, I still think the
concepts behind it are sound. So while we sip our tea, I'll I.V. the coffee.
BTW, I'm spending anywhere from 30 to 60 hours a week working on gpsim
these days. Most of what I'm doing is fixing faulty infrastructure. This
is as difficult as adding mass transit to an American city. The reason I'm
doing this is because of my day job - I've added a whole other set of
proprietary microprocessors to gpsim (they exist in dynamically loadable
modules). This effort has pointed out numerous weaknesses in gpsim's
design. The good news is that my company has agreed to pay me to basically
get gpsim to a state where it is useful in a professional environment. So
while gpsim matures from a fancy toy into a really professional tool,
there will be transitional growing pains (bugs and more bugs). But
eventually you'll come to appreciate the enhancements.
Here is just a brief list of what gpsim has now:
- context debugging
- macro support
- three state logic for registers
- command line expressions
- socket interface (e.g. for remote debugging)
- improved stimuli
- improved tracing
- Windows support
There are other things like converting the gui from C-style to C++,
redesigning the simulator engine API, making the simulation engine
stand-alone, supporting variable width register objects, etc. etc. that
you just don't see with casual use.
At the moment some things really suck about gpsim. As time goes on,
they'll suck less and less...
Hang in there a little longer!