
#16 Non-mandatory calls

Status: open
Owner: nobody
Priority: 5
Created: 2002-10-16
Updated: 2002-10-16
Creator: R. Lemos
Private: No

For true XP compliance, the tests must be black boxes;
this way one can write the test first, and then make the
test pass by coding the tested object.

But most tests written with mock objects generated by
MockMaker are white boxes, since whoever writes the
tests must know in which order the tested object will
call the mock methods and, worse, whether they will be
called at all. This is not good.

I propose methods to set up the state of the mock object
in such a way that the tester supplies some sort of a map
(maybe not actually a java.util.Map) saying 'if the
arguments are these, then return that', and the calls to
the mock methods are neither mandatory nor required to
happen in any particular order.
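
A minimal hand-rolled sketch of the semantics being requested, assuming a
hypothetical collaborator interface (PriceService) invented purely for
illustration; this is not MockMaker output, only the behaviour a generated
mock could offer:

import java.util.HashMap;
import java.util.Map;

// Hypothetical collaborator interface, invented purely for this sketch.
interface PriceService {
    int priceFor(String item);
}

// A stub in the spirit of the proposal: argument -> return value pairs,
// with no ordering or call-count requirements, so a test never fails
// just because a stubbed call was not made.
class StubPriceService implements PriceService {
    private final Map<String, Integer> prices = new HashMap<String, Integer>();

    void returnFor(String item, int price) {
        prices.put(item, price);          // "if the argument is this, return that"
    }

    public int priceFor(String item) {
        Integer price = prices.get(item);
        return price != null ? price : 0; // harmless default when unstubbed
    }
}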

Discussion

  • Matthew Cooke - 2002-10-20

    I'm not sure I understand. Actually I'm *sure* I don't
    understand :)

    I think you might have to give us a concrete example. In XP,
    unit tests are purely testing how an object interacts with
    its neighbours. In the case of using mock objects this means
    that if certain mocks feed in data, you are expecting other
    mocks to receive certain calls.

    The mocks are used to feed in data (setting up the test
    really) and the expectations are set to ensure that the
    object being tested is working correctly.

    We regularly use MockMaker-generated mocks to perform
    test-first unit testing at work.

    We really need an example of what you are talking about. If
    by white box testing you mean that the tester knows what
    objects the target object is going to interact with, then I
    agree. But how else are you going to write a unit test!

    Matt

     
  • R. Lemos - 2003-02-24

    By black box testing I mean the tester doesn't know in
    advance *how* the testee works; the tester only knows *what*
    it should do.

    In fact, whoever writes the test must know the interfaces of
    the target object and feed it data; obviously, they must
    "know what objects it is going to interact with".

    My point is: with the mocks generated by MockMaker it is
    only possible to write white box tests, because the test
    author must know *how* the target object is going to act.

    ==========
    A concrete example (my English here is really poor, sorry):

    At the university, students register for attendance in some
    classes (this should happen through the university's website).

    The registrar application should block some registration
    requests based on a set of rules; a student cannot register
    for a class:
    1. if it overlaps in time with another class also requested
    by the same student;
    2. if he/she has not attended all of its prerequisite courses;
    3. if he/she has already passed that course;
    4. many other rules.

    Each registration request is checked against each rule, and a
    request that violates a particular rule is marked with a label
    (referring to that rule). A request can have as many labels as
    there are rules.

    The request should only be accepted if it has no labels (= no
    irregularity). Nonetheless, all rules should be checked,
    because we must save this information in a database.

    There is an Inspector object whose only purpose is to inspect
    each request and to accept (or reject) it. It has a method:

    void inspect(RegisterRequest request);

    and the RegisterRequest has some methods, including:

    boolean agreesWithRule(int rule); which returns true if the
    'rule' holds for this request, or false otherwise;
    void reject(); which rejects this request.
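
    For reference, a compilable sketch of those two interfaces (the
    method names come from the description above; everything else is
    filled in only for illustration):

    interface RegisterRequest {
        // true if the 'rule' holds for this request, false otherwise
        boolean agreesWithRule(int rule);

        // rejects this request
        void reject();
    }

    interface Inspector {
        // inspects a request and accepts or rejects it
        void inspect(RegisterRequest request);
    }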

    I am working on an XP project, so we coded the test first
    (this tests how the Inspector acts when faced with a request
    that doesn't agree with rule 1):
    1. set up a mock request;
    2. set up the expectations for the request:
    -> agreesWithRule( 1 ): false
    -> agreesWithRule( 2 ): true
    -> agreesWithRule( 3 ): true
    -> ...other rules...
    -> rejectCalls(): 1
    3. call the target method

    The test obviously fails (there is no implementation of the
    Inspector yet).
    Now it is time to code the Inspector. Since a single
    irregularity results in rejection, I decide not to walk the
    entire list of rules, but to stop at the first one that does
    not hold; when such a rule is found, the request is
    immediately rejected, as sketched below.
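
    A minimal sketch of that short-circuiting implementation (the class
    name and the rule count are made up for illustration):

    class ShortCircuitInspector implements Inspector {
        private static final int RULE_COUNT = 4;   // illustrative only

        public void inspect(RegisterRequest request) {
            for (int rule = 1; rule <= RULE_COUNT; rule++) {
                if (!request.agreesWithRule(rule)) {
                    request.reject();
                    return;   // stop at the first rule that does not hold
                }
            }
        }
    }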

    Now, the test still fails! Why? Because we said we are
    expecting a call to 'agreesWithRule' with the parameter '2',
    but the Inspector realized that it should stop and reject
    sooner, at rule 1.
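
    To make the failure concrete, here is a hand-rolled mock that behaves
    the way described above: every expected call is mandatory and must
    arrive in the declared order. It is written from scratch to illustrate
    the behaviour; it is not MockMaker's generated code:

    import java.util.ArrayList;
    import java.util.List;

    class MandatoryMockRequest implements RegisterRequest {
        private final List<Integer> expectedRules = new ArrayList<Integer>();
        private final List<Boolean> answers = new ArrayList<Boolean>();
        private int nextCall = 0;
        private int expectedRejectCalls = 0;
        private int actualRejectCalls = 0;

        void expectAgreesWithRule(int rule, boolean answer) {
            expectedRules.add(rule);
            answers.add(answer);
        }

        void setExpectedRejectCalls(int calls) {
            expectedRejectCalls = calls;
        }

        public boolean agreesWithRule(int rule) {
            if (nextCall >= expectedRules.size() || expectedRules.get(nextCall) != rule) {
                throw new AssertionError("unexpected call: agreesWithRule(" + rule + ")");
            }
            return answers.get(nextCall++);
        }

        public void reject() {
            actualRejectCalls++;
        }

        void verify() {
            if (nextCall != expectedRules.size()) {
                throw new AssertionError("expected agreesWithRule("
                        + expectedRules.get(nextCall) + ") was never called");
            }
            if (actualRejectCalls != expectedRejectCalls) {
                throw new AssertionError("expected " + expectedRejectCalls
                        + " reject() call(s), got " + actualRejectCalls);
            }
        }
    }

    With the short-circuiting Inspector sketched earlier, the verification
    step blows up exactly as described:

    MandatoryMockRequest request = new MandatoryMockRequest();
    request.expectAgreesWithRule(1, false);
    request.expectAgreesWithRule(2, true);
    request.expectAgreesWithRule(3, true);
    request.setExpectedRejectCalls(1);

    new ShortCircuitInspector().inspect(request);
    request.verify();   // fails: agreesWithRule(2) was expected but never called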

    What's the problem here? All expectation methods are
    mandatory; I mean, the target object *must* call whatever the
    mock expects to be called. There is no way to say:

    "I don't expect it to call 'agreesWithRule( 2 )', but just in
    case, return true"

    Of course, I could change the test and stop the expectation
    list at the first rule. But that is the whole point: to do
    that, I have to know *how* the method works.

    Just think: if the Inspector called agreesWithRule for all the
    rules, or worse, out of order (perhaps we know that rule 4 is
    violated more often than rule 1, so we check it first), it
    would still be behaving correctly, yet each implementation
    would need a completely different test setup. This is not
    good. This is white box testing.
    ========

    I think there should be ways to set up the mocks with maps,
    saying
    "when method A is called with these arguments, return this";
    "when method A is called with those arguments, return that";

    but which are not mandatory, i.e., the test does not fail if
    they are never called. A sketch of what that could look like
    for the example above:
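
    (A hand-rolled sketch of that style, applied to the example; the
    class and method names are invented for illustration, the point is
    the behaviour a generated mock could offer:)

    import java.util.HashMap;
    import java.util.Map;

    class NonMandatoryMockRequest implements RegisterRequest {
        private final Map<Integer, Boolean> ruleResults = new HashMap<Integer, Boolean>();
        private int rejectCalls = 0;

        // "when agreesWithRule is called with this argument, return that"
        void setupRule(int rule, boolean result) {
            ruleResults.put(rule, result);
        }

        public boolean agreesWithRule(int rule) {
            Boolean result = ruleResults.get(rule);
            return result == null || result;   // default: the rule holds
        }

        public void reject() {
            rejectCalls++;
        }

        // the only hard expectation the test actually cares about
        void verifyRejectedOnce() {
            if (rejectCalls != 1) {
                throw new AssertionError("expected 1 reject() call, got " + rejectCalls);
            }
        }
    }

    The same test then passes no matter whether the Inspector checks one
    rule, checks all of them, or checks them out of order:

    NonMandatoryMockRequest request = new NonMandatoryMockRequest();
    request.setupRule(1, false);
    request.setupRule(2, true);
    request.setupRule(3, true);
    // ...other rules...

    new ShortCircuitInspector().inspect(request);
    request.verifyRejectedOnce();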

     
