Opmock 2 tutorial for C


This section presents Opmock 2 in a very progressive way. If you are already familiar with TDD and mocking, you may want to jump directly to the advanced sections.

Micro-testing : Fizzbuzz!

We want to implement the Fizzbuzz game. The rules are simple:

  • If I get a number which is a multiple of '3' then I answer “FIZZ”
  • If I get a number which is a multiple of '5' then I answer “BUZZ”
  • If I get a number which is both a multiple of '3' and '5' I answer “FIZZBUZZ”
  • If I get a number which is a multiple of neither '3' nor '5', I answer “NA”

That's a nice isolated algorithm, simple enough to have no dependencies on additional functions, classes or libraries. We want to write micro tests for it. The best way would be to write these tests first, in a TDD fashion. In this documentation, we will just show both code and tests at the same time, but keep in mind that the tests have been written first, in the typical red-green cycle of TDD.
Note: this example makes use of the unit testing framework that comes with Opmock. You can perfectly well use another framework and rely only on the mocking capabilities.

Getting started : the build system

The first thing we need is a working build system. We base our example on make. Users of other environments will have to adapt what follows.
Here's the content of our makefile:

1  CPPFLAGS=-O0  -ggdb
2  OBJECTS = fizzbuzz.o fizzbuzz_test.o main.o opmock.o 
3
4  all: fizzbuzz_test
5   ./fizzbuzz_test 
6
7  fizzbuzz.o: fizzbuzz.h 
8  fizzbuzz_test.o: fizzbuzz.h 
9  opmock.o: opmock.h 
10
11 fizzbuzz_test: $(OBJECTS) fizzbuzz.h
12  gcc -o fizzbuzz_test $(OBJECTS) 
13
14 clean: 
15  -rm -f $(OBJECTS) 
16  -rm -f fizzbuzz_test

On line 1 we define some flags for the C compiler. They're not mandatory, but setting the optimization level to 0 helps debugging.
On line 2 we set the list of object files in the build. We have fizzbuzz.o for the code itself, fizzbuzz_test.o for the tests, main.o for the test runner (shown later), and opmock.o for the unit testing framework and some support code. This part of the framework is delivered as a single source file, opmock.c (in the “support” folder of the binary distribution). It can either be compiled in each build, or compiled once as a library and shared. Your choice.
The 'all' target will both compile the test executable and run it to get the test results.
Lines 7 to 12 give dependency rules so that the C files are recompiled when the header files they depend on are touched.
On line 14 we define a 'clean' target, removing all object files and the test executable.
Now that we have a working build, let's write some tests.

Writing micro tests

Let's have a look at the fizzbuzz_test.c file:

#include "fizzbuzz_test.h"
#include "fizzbuzz.h "
#include "opmock.h" 
#include <stdlib.h> 

void test_fizzbuzz_with_3() 
{ 
  char *res = fizzbuzz(3); 
  OP_ASSERT_EQUAL_CSTRING("FIZZ", res); 
  free(res); 
} 

void test_fizzbuzz_with_5() 
{ 
  char *res = fizzbuzz(5); 
  OP_ASSERT_EQUAL_CSTRING("BUZZ", res); 
  free(res); 
} 

void test_fizzbuzz_with_15() 
{ 
  char *res = fizzbuzz(15); 
  OP_ASSERT_EQUAL_CSTRING("FIZZBUZZ", res); 
  free(res); 
} 

void test_fizzbuzz_many_3() 
{ 
  int i; 
  for(i = 1; i < 1000; i++) { 
    if((i % 3 == 0) && ((i % 5) != 0)) { 
      char *res = fizzbuzz(i); 
      OP_ASSERT_EQUAL_CSTRING("FIZZ", res); 
      free(res); 
    } 
  } 
} 

void test_fizzbuzz_many_5() 
{ 
  int i; 
  for(i = 1; i < 1000; i++) { 
    if((i % 3 != 0) && ((i % 5) == 0)) { 
      char *res = fizzbuzz(i); 
      OP_ASSERT_EQUAL_CSTRING("BUZZ", res); 
      free(res); 
    } 
  } 
} 

void test_fizzbuzz_many_3_and_5() 
{ 
  int i; 
  for(i = 1; i < 1000; i++) { 
    if((i % 3 == 0) && ((i % 5) == 0)) {    
      char *res = fizzbuzz(i); 
      OP_ASSERT_EQUAL_CSTRING("FIZZBUZZ", res); 
      free(res); 
    } 
  } 
}

All tests in this file follow the same simple pattern:

  • A micro test is a function that takes no parameters and returns nothing
  • The function name is up to you; my personal convention is to name tests test_functionName_testDescription, but you can choose whatever suits you (note that if you want to use the helper script to generate the list of tests automatically, you must follow this convention; have a look at the advanced section).
  • A micro test is made of 3 phases: set the input parameters, call the function or class under test, check the result. In addition, if you use mocks, you can have a verify phase where you check that your dependencies were called properly. The annotated example after this list shows these phases on the first test.
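
Taking the first test again, annotated with these phases (the verify phase only appears once mocks come into play, later in this tutorial):

void test_fizzbuzz_with_3()
{
  /* phase 1: the input parameter is simply the literal 3 */
  /* phase 2: call the function under test */
  char *res = fizzbuzz(3);
  /* phase 3: check the result */
  OP_ASSERT_EQUAL_CSTRING("FIZZ", res);
  free(res);
}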

Opmock provides some macros to ease testing, like

OP_ASSERT_EQUAL_CSTRING(expected value, actual value);

The complete list of OP_ASSERT* macros can be found in the file opmock.h. They all take the expected value as the first parameter, and the actual value as the second parameter.
If a macro fails, it increments a global error counter, then returns from the current scope (a function or a class method). That's why you should use these macros only in functions that return void.
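
To make this behavior concrete, here is a simplified, hypothetical sketch of what such a macro could expand to. It is not the actual opmock.h code, and the global counter name is made up:

/* Hypothetical sketch, not the real opmock implementation (assumes <string.h>) */
#define MY_ASSERT_EQUAL_CSTRING(expected, actual)                   \
  do {                                                              \
    if (strcmp((expected), (actual)) != 0) {                        \
      my_global_error_count++;  /* made-up global error counter */  \
      return;                   /* only legal in void functions */  \
    }                                                               \
  } while (0)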

Writing the implementation

One possible implementation of the fizzbuzz function is:

#include "fizzbuzz.h" 

#include <string.h> 
#include <stdlib.h> 
#include <stdio.h> 

char * fizzbuzz (int i) 
{ 
  char *result = (char *) calloc(1, 20); 

  if (!(i % 3)) strcpy(result, "FIZZ"); 
  if (!(i % 5)) strcat(result, "BUZZ"); 

  if(!strlen(result)) sprintf(result, "%d", i); 
  return result; 
}

A very simple function indeed, but you are more interested in tests than actual code, aren't you?

Calling the tests

Once these tests are written, we can run them just by calling the functions, or better, we can register them with opmock. This is done in the main.c file:

#include "opmock.h" 
#include "fizzbuzz_test.h" 

int main(int argc, char *argv[]) 
{ 
  opmock_test_suite_reset(); 
  opmock_register_test(test_fizzbuzz_with_3, "test_fizzbuzz_with_3"); 
  opmock_register_test(test_fizzbuzz_with_5, "test_fizzbuzz_with_5"); 
  opmock_register_test(test_fizzbuzz_with_15, "test_fizzbuzz_with_15"); 
  opmock_register_test(test_fizzbuzz_many_3, "test_fizzbuzz_many_3"); 
  opmock_register_test(test_fizzbuzz_many_5, "test_fizzbuzz_many_5"); 
  opmock_register_test(test_fizzbuzz_many_3_and_5, "test_fizzbuzz_many_3_and_5"); 
  opmock_test_suite_run(); 
  return 0; 
}

All tests you want to run must be registered with opmock using opmock_register_test (provided you want to use the opmock test report). The main.c file can be generated automatically by the refresh_tests.sh script that comes with opmock.
Now, if you type:

make

You should get this report:

OK test 'test_fizzbuzz_with_3'
OK test 'test_fizzbuzz_with_5'
OK test 'test_fizzbuzz_with_15'
OK test 'test_fizzbuzz_many_3'
OK test 'test_fizzbuzz_many_5'
OK test 'test_fizzbuzz_many_3_and_5'
OPMOCK : 6 tests run, 0 tests failed.

Note also that you will get colored output, with red for errors and green when everything is OK.

Using a mock

Why mocking?

Writing micro tests is easy. At least, it is in the simple example given above. However, in real code, you often have to deal with dependencies. For example, you develop something, but you must integrate a 3rd party library. This 3rd party library is not available yet – but you want to start coding against its interface.
Another situation is when you're developing a module that depends on an existing module. In the scope of your micro tests, you don't want to bring in all the dependencies of that module, because it's too complex and requires access to specific hardware which is only available in the lab. You want to cut the dependency chain as short as possible.

The traditional way of dealing with this is to stub your dependencies, that is, to write a “fake” implementation of the dependency and link against it. This works well, but:

  • It's usually very tedious to write stubs manually
  • With C and C++, you can have only a single stub implementation in the same build. If you want to switch the stub implementation at runtime, you have to fall back to dirty tricks like global variables so that the stub knows how to answer. The only other way would be to have multiple builds, each one with a different stub implementation.
  • Your stub file's “life cycle” is separate from your tests' life cycle
  • Your stub usually does not record the parameters nor the number of times it was called (this is what a mock would do, to allow a verify phase)

In this situation, automatically generating stubs and/or mocks saves a lot of time. Opmock can do it for you, generating, from a header file, mocks that fully replace the original implementation of the dependency.

NOTE: if you inline code in your headers, it will not be possible to mock it! You must clearly separate the declaration (in a header) from the definition (in a C or C++ file).
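
For instance (a minimal illustration, not taken from the sample projects):

/* add.h -- this version CANNOT be mocked: the definition lives in the header */
static inline int add(int a, int b) { return a + b; }

/* add.h -- this version CAN be mocked: only the declaration is in the header, */
/* the definition lives in add.c                                               */
int add(int a, int b);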

Let's fizzbuzz again

We will be using the same sample project as for micro testing. But this time, let's imagine that there's a sound machine. This sound machine is a combination of software and hardware: the hardware will play voice recordings like “FIZZ”, “BUZZ”, “FIZZBUZZ”, and it comes with a software interface so that you can use it.
Unfortunately, this hardware is very costly, and you don't have one at hand. You'll have to stub its public interface to test your code.
You will find all source code for this example in the fizzbuzz_pure_mock folder of the binary distribution.

Mocking the sound machine interface

The sound machine interface is provided. It is very simple as it contains a single function:

#ifndef SOUND_H_ 
#define SOUND_H_ 
int do_sound(char *sound); 
#endif

We want to call this function from our fizzbuzz implementation like this:

#include "fizzbuzz.h" 
#include "sound.h" 

#include <string.h> 
#include <stdlib.h>
#include <stdio.h> 

char * fizzbuzz(int i) 
{ 
  char *result = calloc(1, 20); 

  if (!(i % 3)) strcpy(result, "FIZZ"); 
  if (!(i % 5)) strcat(result, "BUZZ"); 

  if(!strlen(result)) sprintf(result, "%d", i); 

  int res = do_sound(result); 
  if(res != 0) { 
    sprintf(result, "ERROR"); 
  } 
  return result; 
}

To stub or mock this interface, we need to provide its implementation. We can do this by writing it ourselves, or by generating it. Let's use opmock for the generation.

opmock2 -i sound.h -o . -I/usr/include -I/usr/include/x86_64-linux-gnu -I/usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/lib/gcc/x86_64-linux-gnu/4.7/include-fixed -q

This will produce two files in the current folder:

  • sound_stub.h
  • sound_stub.c

These two files usually don't need to be modified. If you need to modify them, that's probably a bug in opmock!
(You may, though, have to modify the include path of the original header xxx.h inside the generated xxx_stub.h file. For this, you can use the -q and -p command line options.)

Modifying the build

Once you have generated the mock files, you just need to include the derived object files in your build; we also add a simple target that invokes the mock generation phase. For full details, have a look at the sample project fizzbuzz_pure_mock.

Using mocks in the tests

Using ExpectAndReturn

If you try to build now, without modifying the existing tests, the tests will run... or not: the behavior of some tests may look random.
This is because you haven't defined any behavior for your mocks yet. The mock is not expecting any call, so when it is called it has to return a default value, which is essentially random. If you're lucky, this value will be the one expected by your assertions, and the test will pass.
When generating C mocks, mocking works at the function level. Let's take the first test as an example:

void test_fizzbuzz_with_3()
{ 
  do_sound_ExpectAndReturn("FIZZ", 0, cmp_cstr);
  char *res = fizzbuzz(3);
  OP_ASSERT_EQUAL_CSTRING("FIZZ", res);
  free(res);
}

Here, we program the behavior of a mock function. When we write

do_sound_ExpectAndReturn("FIZZ", 0, cmp_cstr);

It means: “the next time the operation do_sound is called, expect its parameter to have the value "FIZZ" and return 0. And please check that the parameter has the expected value, using the comparison function pointed to by cmp_cstr”.

The generated function's signature is the same as the original function's, plus some additional parameters:

  • The value you want the mock to return when the function is called
  • A list of matcher pointers used to check the input parameters. Matchers are simple C functions that compare an input value with the expected value when the function is called. Some matchers are provided by opmock for common types; here, cmp_cstr is a matcher that compares two C strings and returns 0 if they're equal. You can also write custom matchers: learn more about them in the dedicated section of this document.

You can make several calls to do_sound_ExpectAndReturn. The expected parameters, return values, and matchers are stored in an internal FIFO call stack for the function. Each time you actually call the function, the oldest recorded entry is popped from that stack.
If you call the operation too many times (the call stack is empty), the mock will complain about an unexpected call.
If you call the operation with parameters that don't match the recorded parameters, the mock will complain as well.
If you call functions in the wrong order, opmock will catch that too.
Please note that, because opmock does not do any memory allocation, the call stack has a limited size, currently set to 100 calls. You can still make more than 100 calls (for example in a loop) if you call ExpectAndReturn every time you're about to call the mocked function, rather than building the full call stack upfront.
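
For example, the test_fizzbuzz_many_3 test from the first part could be adapted along these lines (a sketch), recording one expectation just before each call:

void test_fizzbuzz_many_3()
{
  int i;
  for(i = 1; i < 1000; i++) {
    if((i % 3 == 0) && ((i % 5) != 0)) {
      /* one expectation recorded per call, so the 100-entry limit is never hit */
      do_sound_ExpectAndReturn("FIZZ", 0, cmp_cstr);
      char *res = fizzbuzz(i);
      OP_ASSERT_EQUAL_CSTRING("FIZZ", res);
      free(res);
    }
  }
  OP_VERIFY();  /* see the next paragraph about the verify step */
}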

Opmock checks all the conditions above. However, it will not fail your test unless you call either the OP_VERIFY or the OP_VERIFY_NO_ORDER macro. These macros check the errors recorded in the scope of the current test, and fail the test if necessary.

Note: if you use another unit testing framework, you should have a look at the implementation of the OP_VERIFY* macros to see how to implement the verify step.

When you don't want to check parameters

Sometimes, when you call a mocked function, you just want the mock to return a value, and you don't care about the parameter values. But you still have to provide matcher pointers when setting the mock behavior...
Say you've mocked the following function:

int MyFunction(int i, float j, char *val, MyStruct one_struct);

When you call the mocked version of this function and you don't want to check the parameters, just provide NULL matchers:

MyFunction_ExpectAndReturn(1, 0.5, "Hello World", one_struct, -1, 
            NULL, NULL, NULL, NULL);

By providing a NULL pointer for a matcher, you tell Opmock to skip checking the corresponding parameter when the mock is called.
You can also choose to check some parameters and ignore others:

MyFunction_ExpectAndReturn(1, 0.5, "Hello World", one_struct, -1, 
            cmp_int, NULL, cmp_cstr, NULL);

Using Callbacks

Mocking with ExpectAndReturn works well when your function has simple parameters and return values. In real code, you'll often face situations where the mock behavior needs to be more complex.
For example:

char *do_something(struct complex_struct * struct_pointer, int *error_code);

This operation is supposed to fill in the complex structure pointed to by struct_pointer, and to set error_code if something bad happens. Obviously, you can't do this with ExpectAndReturn, because it can only return simple values.
In this case, you can use a callback function. A callback function is actually a stub: a piece of code you write to mimic the behavior of the actual implementation. Opmock 2 has an advantage over traditional stubbing, though: your stubs are defined in the scope of your tests, and you can easily swap stub implementations during the test.
Let's try this in the scope of the test test_fizzbuzz_with_15.
Your callback function must follow a specific prototype; there's one for each mocked function, and it can be found in the generated sound_stub.h file:

typedef int (* OPMOCK_do_sound_CALLBACK)(char *  sound, int calls);

It has exactly the same signature as the function you want to stub, plus an additional parameter: int calls. Opmock fills this parameter with the number of times the callback has already been called, which can be useful if you want to adapt the behavior of your stub to the call sequence.
Let's implement our callback, inside the test file:

static int test_fizzbuzz_with_15_callback (char *  sound, int calls)
{
  if((strcmp(sound, "FIZZ") == 0) 
     || (strcmp(sound, "BUZZ") == 0)
     || (strcmp(sound, "FIZZBUZZ") == 0))
    return 0;
  return 1;
}

And let's use it in our test:

void test_fizzbuzz_with_15()
{
  do_sound_MockWithCallback (test_fizzbuzz_with_15_callback);
  char *res = fizzbuzz(15);
  OP_ASSERT_EQUAL_CSTRING("FIZZBUZZ", res);
  free(res);
}

Every time the fizzbuzz function calls do_sound, our callback will be called instead. You can use a mix of callbacks and ExpectAndReturn in your tests.

NOTE: callbacks are not stored on the mock call stack. When you switch to callback mode for a function, you reset the mock stack. You can however mix sections of a test where you use only callbacks with sections where you use only recorded expectations.
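
For instance, here is a hedged sketch of such a mixed test, assuming (as the note above suggests) that recording a new expectation after the callback section puts the mock back into normal recorded mode:

void test_fizzbuzz_mixed()
{
  /* section 1: stub do_sound with a callback */
  do_sound_MockWithCallback(test_fizzbuzz_with_15_callback);
  char *res = fizzbuzz(15);
  OP_ASSERT_EQUAL_CSTRING("FIZZBUZZ", res);
  free(res);

  /* section 2: use a recorded expectation again */
  do_sound_ExpectAndReturn("FIZZ", 0, cmp_cstr);
  res = fizzbuzz(3);
  OP_ASSERT_EQUAL_CSTRING("FIZZ", res);
  free(res);
}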

Verifying your mock

A function mock records all expected calls to a function. This includes:

  • parameters
  • return value, if there's one
  • call order of operations/functions

After running a test and checking your assertions, Opmock can additionally check whether your mocks were called properly. This is called the “verify” phase.
There's a macro dedicated to this in the micro-testing part of Opmock.
Writing:

OP_VERIFY();

at the end of your test is enough to check whether all the mocks for all the functions were called properly. This macro will exit the test with an error if the verify phase has failed.
If you don't call this macro at the end of your test, the mock errors are still recorded, but the test will not fail.
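
For example, the first mocked test can be completed with a verify phase like this (a sketch):

void test_fizzbuzz_with_3()
{
  do_sound_ExpectAndReturn("FIZZ", 0, cmp_cstr);
  char *res = fizzbuzz(3);
  OP_ASSERT_EQUAL_CSTRING("FIZZ", res);
  free(res);
  /* fail the test if do_sound was not called exactly as recorded above */
  OP_VERIFY();
}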
An additional macro is available:

OP_VERIFY_NO_ORDER();

This macro has the same effect as OP_VERIFY, but it will not fail the test if the call sequence of the mocks is incorrect. For example, if your code was supposed to call a function “lock” and then a function “unlock”, but did the opposite, OP_VERIFY will fail the test, while OP_VERIFY_NO_ORDER will not.

