I have been looking at ulxmlrpc and am impressed with the functionality.
It worked almost out of the box in my environment (MSVC8, STLPort, wxWidgets)
I am considering using it in a project.
However, I have a couple of questions:
1) There seems to be some wxWidgets (alias wxWindows) support, i.e. wxString can be used for strings.
Using it, I stumbled across some oddities:
- In ulxmlrpc.h, line 123, there are some unconditional #defines (#define __WXMSW__, #define __WXDEBUG__, #define WXUSINGDLL); I don't think that's the way it should be.
- In ulxmlrpc.h, line 434, there is typedef std::string CppString; even though Cpp8BitString has been set to wxString, it is not used here; std::string is hard-wired instead.
- It seems an older wxWidgets version was used: the current one is 2.8.7, but line 152 refers to 2.3.1.
- I don't know the namespace wxwindows that is used in lines 144 and 146; I replaced it with wxString.
So, what is the state of the wxWidgets support? It seems to be experimental at this time.
2) Is there a possibility to get notified (server and/or client) if a persistent connection is interrupted, for whatever reason?
'experimental' is probably the correct description :-) I got these lines several years ago and just added them as an option. If you have a better solution, feel free to send it to me :-)
Currently there is no notification when the connection is interrupted (apart from an exception in a subsequent read/write). Currently I have no idea how to implement such a feature in an asynchronous way. Or did I get you wrong? What do you have in mind?
> If you have a better solution, feel free to send it to me :-)
I'm working on it and will let you know if I've got something.
I'll maybe also look into a closer integration with wxWidgets. It already has things like cross-platform abstractions for sockets etc.
> Currently I have no idea how to implement such a feature in an asynchronous way. Or did I get you wrong? What do you have in mind?
That's what I meant. There is a simple IPC framework in wxWidgets (wxClient/wxServer) which has this feature, but it uses Windows messages, which implies that you need a message loop set up. I think I'll have a closer look at how the socket interruption itself is detected there.
I've tested a bit more and found a bug concerning the non-Unicode build: if strings contain any special characters, the XML is not valid UTF-8, because they are not converted (this happens only in the non-Unicode build). The solution is to use the already existing function asciiToUtf8 everywhere unicodeToUtf8 is used, at least in the classes MethodResponse and MethodCall.
Then I have created a class SharedMemConnection, that does what it says: it uses shared memory for communication, so the library can be used for local high-speed IPC. The class is based on a slightly modified version of a nice CodeProject article (http://www.codeproject.com/KB/threads/fast_ipc.aspx).
Drawback: Currently the class only supports Windows :-( and needs to be tuned a bit, but it shouldn't be too hard to port/extend.
Last, a question:
The line #define ULXR_RECV_BUFFER_SIZE 50
has the comment
// keep rather small, otherwise two messages might be read as a single block
Well, 50 is extremely small. Why is a larger buffer a problem? Isn't this only problematic for persistent connections? Shouldn't the separation of messages be handled on a different level?
As for the Unicode bug, I have to investigate more myself. I prefer the Unicode version as it avoids any ambiguities, and for that reason I strongly recommend it to anyone.
Maybe I missed it, but shouldn't there be a link to your SharedMem code (or is it closed)? It would be interesting to do some benchmarking here. But I am pretty sure this is no big issue on Linux, as local networking seems fast already :-)
I don't remember the exact reason for the small buffer size; actually I had forgotten about this issue :-). It probably had to do with a (proprietary) feature to send requests without getting an answer (a bit like UDP packets). The rather small buffer should be no real issue as the operating system buffers anyway. If you like, you can extend the size and try it yourself with the test apps below ulxmlrpcpp/tests.
Just in case you missed it: there is an asynchronous feature which creates a new thread after sending the request. The function then returns to the caller, whereas the thread handles the response or the failure and exits afterwards. This feature is mainly intended for long-running requests, but maybe it helps in your case. It might also be an improvement to maintain a thread pool instead of always starting threads from scratch.
>Maybe I missed it, but shouldn't there be a link to your SharedMem code (or is it closed)? It would
>be interesting to do some benchmarking here. But I am pretty sure this is no big issue on Linux, as
>local networking seems fast already :-)
It's not closed, it's just not done yet. Error handling, timeout handling and documentation are not complete yet. I will certainly publish it when it's complete.
I also did some benchmarking on Vista on a 2 GHz dual-core notebook. I used val1_server/val1_client, since it covers many features. In the first test I also modified the size of the recv buffer to see how it impacts the results:
recv buffer   persistent   connection   time
50            no           TCP/IP       1:37
50            yes          TCP/IP       0:07
1024          no           TCP/IP       1:35
1024          yes          TCP/IP       0:06
50            no           SharedMem    0:07
50            yes          SharedMem    0:07
1024          no           SharedMem    0:07
1024          yes          SharedMem    0:06
Result 1: Increasing the recv buffer slightly improves the results and didn't show any problems in my (certainly limited) tests.
Result 2: Setting up TCP/IP connections even locally seems to be very expensive, whereas SharedMem has almost no connection cost.
Result 3: Local TCP/IP is really fast. Using persistent connections, TCP/IP performs comparably with SharedMem.
In the next test I wanted to see what effect the size of the transferred data has. I used only the test check_moderateSizeArrayCheck in val1_client, always with persistent connections, and gradually increased the array size from 250 to 2500, 25000, 250000 and finally 2.5 million. It turned out that the time for processing and transcoding the array, on both the client and the server side, by far exceeds the transfer time. So I tried using wbxml to reduce these costs in relation to the pure data transfer. My measurements show that even then SharedMem is NOT significantly faster than local TCP/IP (around 5%).
The question whether SharedMem should be used finally boils down to whether it's acceptable to use persistent TCP/IP connections for a specific application. Another issue could be the use of TCP/IP ports, which are a limited resource in the sense that one has to take care to find a free port. The SharedMem implementation uses unique identifiers for setting up the connection.
- Local TCP/IP is very fast (at least on Vista), and it's not necessarily worth the trouble of using SharedMem
bad for me :-(
- I think it could be worthwhile to explore whether other parts of ulxmlrpc can be improved. I mainly have in mind the data management around the Value class, which seems to involve quite some (unnecessary) copying, which is an issue for large data structures. E.g. using 'refcounted implementations' for structs and arrays could help.
I am not too surprised that local TCP is almost as fast as shared mem. A good OS knows the shortcut in this case and should just copy some buffers.
The problem with persistent connections: the standard says to always close connections.
If you want to speed up transactions without the need to stay compatible, you might think about binary XML. My simple benchmarks showed improvements of around 400%. This is due to the fact that processing XML is rather expensive for short transfers. With binary XML each 'tag' is just a single byte.