From: dan n. <dne...@ya...> - 2019-08-27 00:12:46
Hi Marc,
My comments are inserted into your text in red.
On Monday, August 26, 2019, 2:04:12 PM PDT, Marc Gronle <mar...@it...> wrote:
Hi Dan,
every plugin has its own version string (set in pluginVersion.h). This version is displayed in the plugins toolbox of itom and can also be obtained via python. The documentation contains a changelog that records which version of the plugin is compiled into which official Windows setup of itom (we only provide Windows setups due to the limited number of developers).
If we break the compatibility of the plugin, we have to change the major version number. Of course the best solution would be to select a new name for the new plugin with incompatible behaviour; this would be an option. However, since the current implementation only has limited functionality and is only partially working, I would say that it is also possible to make a hard break and introduce a good, working plugin under the same name.
OK.
I don't think that we should introduce the same plugin with the version in its name, because the development capacity of the core developers is limited and we won't be able to support more than one version over a longer period.
I am not strongly committed to attaching a version number to the plugin name; it was just a suggestion. However, I am more concerned about the idea of supporting only one plugin version at a time. This is going to be a problem, not only for niDAQmx but for all plugins. Eventually, there will be customers who have developed applications against a particular version of a plugin. They will not be happy if you end-of-life (EOL) the plugin when you release a new major version. I have some experience with this problem and I can assure you that you will lose customers.
Perhaps this is the time to ask: what is the target "market" for itom? That is, who do you envision to be its customer base? I can think of a number of possibilities: 1) itom is a teaching tool to be used in course laboratory experiments, but is not intended for serious operational deployment; 2) itom is intended for university use, say by university labs, but not industrial use; 3) itom is targeted at markets 1 and 2, but also at industrial deployment. Its current deployment should give you some idea of which markets it currently satisfies. I would be curious to know what these markets are. If they represent all those you would like to serve, then knowing them would help me understand your development strategy.
Let me give you some background on my career in order to establish I have some experience with deploying software systems. After completing my doctoral dissertation, I spent 3 1/2 years at the Australian National University as a Research Fellow. This gave me experience in a university environment. I then moved to Lawrence Livermore National Laboratory for over 17 years, where I conducted research in computer systems. However, LLNL was not like a university in that the research I conducted was based on experience with operational systems. After the end of the Cold War, funding was dramatically reduced at LLNL and I decided to move to industry. I spent 2 years at Sun Microsystems (purchased later by Oracle) working on product development, specifically the Solaris operating system and Sun's CORBA compliant distributed application programming environment. I then moved to 3Com (purchased later by HP) working for 5 years in its Skunk Works on invention and applied research. I then left 3Com, did some individual consulting for 2 start-ups and then retired.
I apologize if providing this background seems narcissistic. My intention is not to build myself up, but to provide evidence that I have experience with deploying systems and can help you understand the problems you face in making itom a success.
So from my side: either hard-replace the current version with a proper new version (same name, major version number change and a hint in the documentation), or keep the current version as a legacy version (which will be disabled by default) and add the new plugin under a new name (however, I don't have a good idea for a new name ;) )
With respect to different NIDAQ versions, I don't think that it is necessary to simultaneously provide different versions of the plugin. I guess that the interface of NI-DAQmx is quite stable, therefore it should be possible to compile the plugin against, say, 18.x and run it with 19.x.
Retaining a backward compatible interface is necessary but not sufficient for achieving forward compatibility. New versions of software frequently introduce bugs that break applications written against the old implementation. It is generally not a good idea to assume a new library will link and run flawlessly with applications built against an old library, even if the interfaces are compatible (unless the library developers conduct serious regression testing, something that I suspect the NI developers may not do). The level of tolerance for this kind of problem is generally related to your target markets. If itom is just a learning tool (e.g., targeting only market 1 described above), then you can probably get away with broken applications, since a course lab can continue to use old software. However, if you are targeting market 2, and especially market 3, you will lose customers if a plugin crashes when using a new version of a library (like the NI-DAQmx library). Unless you thoroughly regression test plugins against both the old and new libraries, you are bound to run into problems.
Of course CMake should be set up such that anybody can select the path to any newer version of the NIDAQmx library and the plugin is then compiled against it. This should be sufficient. For a possible Windows setup of itom, one fairly new version of NIDAQmx will be installed on the build computer and this version is then used to compile the plugin. Currently we are only using very basic methods of the SDK, therefore it was no problem to compile the plugin against, say, 18.x and run it with the runtime version of 19.x (this is at least my experience under Windows).
The thing which I don't like very much in the current implementation is the fact that one has to set the same parameter name again and again in order to add channels. This is not a desired behaviour for itom plugins. I think that it is much easier to have only one channel parameter with a semicolon-separated list of all channel strings (exactly what getParam of the channels returns, so setting and getting this parameter would have the same meaning). This is what I do in my current implementation of the plugin and it works rather well. So far I have taken your plugin and changed it to the single 'channel' parameter, added support for multiple devices (which works well) and added support for finite and continuous tasks. Additionally, I plan to re-add the configuration dialog.
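The symmetric set/get behaviour of such a single channel parameter can be sketched in a few lines of plain Python. The helper names and the channel string layout below are purely illustrative, not the actual plugin parameter format:

```python
# Sketch of a single semicolon-separated 'channel' parameter, so that
# setting and getting the parameter are exact inverses of each other.
# The per-channel string contents here are invented for illustration.

def parse_channels(value):
    """Split a semicolon-separated channel parameter into per-channel strings."""
    return [chunk.strip() for chunk in value.split(";") if chunk.strip()]

def serialize_channels(channels):
    """Inverse of parse_channels: rebuild the single parameter string."""
    return ";".join(channels)

value = "Dev1/ai0,0,-10,10;Dev1/ai1,0,-5,5"
channels = parse_channels(value)
assert len(channels) == 2
assert serialize_channels(channels) == value  # set/get round-trip
```

The round-trip property (serialize after parse returns the original string) is exactly what makes setParam and getParam have the same meaning.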
Sure. I have no problem with this. Do you plan on changing the unit tests to test this new implementation?
To simplify the plugin, one has to select the task type (analog/digital in/out) at initialization. This cannot be changed any more for one instance (I guess that it is not necessary to change this basic task type at any later time; an analog input task will always remain one). This simplifies the programming and reduces the number of parameters to only 6-8 in total. Maybe we can also use your current implementation (as a pull request) to replace the current plugin and then use my modified version as a second plugin (or future replacement) which will then support continuous tasks...
Well, this is a major architectural change. I am not opposed to it, but I think some discussion would be useful.
As I understand your vision for the new version of the plugin, it will be possible to attach channels to it that are on different devices. With this new change, a plugin instance will only service one type of ADC activity (e.g., analog input, digital input, analog output). So, effectively, you are factoring the underlying NI C library functionality according to ADC activity type. The current plugin factors the underlying NI C library functionality according to specific devices, supporting different ADC activity types in the same instance.
Let me reveal now (up to this point I have kept this to myself) that I don't think supporting multiple devices per plugin instance is a good idea, for the following reasons:
1. Multiple devices are not homogeneous in the services they supply. For example, I have both a PCI-6220 and a PCI-4461 card in my development machine. The 6220 supports 250 kS/s aggregate analog input, while the 4461 supports a maximum of 204.8 kS/s analog input per channel. Since the sample rate is a task parameter, associating these two devices with a single plugin instance means tying one device to the limitations of the other. For example, if the instance controls 4 analog input channels of the 6220 and 2 analog input channels of the 4461, then the maximum rate that could be specified in the task parameters would be 250/4 = 62.5 kS/s, which means the 4461 channels would be severely undersampled.
2. Associating a plugin instance with a device is conceptually clean. It is easy to visualize the behavior of a plugin when it is associated with a single device. If it is associated with multiple devices, the nature of the ADC transfer differs from device to device. For example, the 4461 supports pseudo-differential inputs, while the 6220 does not. The 4461 supports different voltage ranges than the 6220 (4461: +/- .316V, +/- 1V, +/- 3.16V, +/- 10V, +/- 31.6V, and +/- 42.4V; 6220: +/- .2V, +/- 1V, +/- 5V and +/- 10V.) While the Channel Parameters would allow the specification of different voltage ranges, keeping them all straight is a chore. It is far easier to do this if there is one device per plugin instance.
These reasons are just off the top of my head. I am sure I could find others that would support the one device per plugin architecture.
But, I would be open to arguments why one ADC activity per plugin instance is advantageous.
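As a sanity check on the rate argument in point 1, the shared task rate for a mixed-device task can be computed directly. The device limits are the ones quoted above; the helper function itself is only illustrative:

```python
# Worked example for point 1: when channels of a multiplexed card (aggregate
# rate limit divided over its channels) and a simultaneous-sampling card
# (per-channel rate limit) share one task, the task rate is capped by the
# tighter of the two constraints.

def max_task_rate(agg_limit_ks, agg_channels, per_chan_limit_ks):
    """Maximum shared sample rate in kS/s for the combined task."""
    return min(agg_limit_ks / agg_channels, per_chan_limit_ks)

# 4 channels on the PCI-6220 (250 kS/s aggregate) combined with channels on
# the PCI-4461 (204.8 kS/s per channel):
rate = max_task_rate(250.0, 4, 204.8)
assert rate == 62.5  # the 4461 channels run far below their 204.8 kS/s capability
```

With one device per plugin instance, each task could instead run at its own device's limit.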
Concerning the callbacks:
For the purpose that you want to achieve, I think that the callback is not required, since applying type casts, slicing or element-wise mathematical calculations to arrays or dataObjects in Python is very fast. The reason is that both dataObjects and numpy arrays are implemented in C/C++, and type casts and math operations are fully based on common numeric libraries like BLAS, ATLAS or LAPACKE. Therefore array operations in Python are very fast. So it should work to call getVal a couple of times per second to obtain the latest chunk of double data, type cast this data to uint16, apply a moving average filter and save the output to a file. Additionally, passing the dataObject from the plugin to Python is only a shallow copy and not necessarily a memcpy operation.
Therefore I would suggest keeping the plugin "simple" and generic. If somebody needs to add special calculations inside the plugin, it is either possible to create an adapted copy of this plugin or to directly use the powerful python package python-nidaqmx from National Instruments, which gives access to all the special features of the SDK.
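The polling workflow described above (fetch a chunk, cast, filter) can be sketched in plain Python. The chunk is faked here; in itom the data would come from the plugin via getVal into a dataObject, and the cast/filter would typically run on numpy arrays:

```python
# Sketch of the suggested loop body: obtain the latest chunk of double data,
# apply a moving average, and map the result to uint16 codes. All numbers
# and the voltage range are invented for illustration.

def moving_average(samples, window):
    """Simple boxcar filter over a list of float samples."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def to_uint16(samples, v_min=-10.0, v_max=10.0):
    """Map float voltages in [v_min, v_max] to the 0..65535 integer range."""
    scale = 65535.0 / (v_max - v_min)
    return [int(round((s - v_min) * scale)) for s in samples]

chunk = [0.0, 1.0, 2.0, 3.0, 4.0]            # pretend this came from getVal
filtered = moving_average(chunk, window=3)
codes = to_uint16(filtered)
assert filtered == [1.0, 2.0, 3.0]
assert codes == [36044, 39321, 42598]
```

In practice these operations would be vectorized numpy expressions, which is exactly why the per-call Python overhead stays negligible.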
Actually, what I had in mind is implementing an algorithm plugin that does the processing and controls a niDAQmx plugin instance using the DataIOThreadCtrl class. I had no intention of putting the numerical processing code in the niDAQmx plugin implementation.
Regards,
Dan
Cheers,
Marc
Am Mo., 26. Aug. 2019 um 01:22 Uhr schrieb dan nessett <dne...@ya...>:
Hello Marc,
I agree. I think option 2 is the best strategy going forward. It will change the existing niDAQmx plugin so that it is incompatible with the one that exists now. This gives me the opportunity to fix the hack I used to delay setting task parameters until the channel parameters are set. The order of the commands in the plugin lifetime would become:
create plugin instance
set ChannelParameters (one setParam for each channel in the task)
set TaskParameters
startDevice
<execute the necessary commands to get the data. This differs depending on whether finite or continuous input is selected. But see my comments below on using a callback function>
stopDevice
delete plugin instance.
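The ordering constraint above (channel parameters strictly before task parameters, then startDevice, then stopDevice) can be captured as a toy state machine. This is not the itom API, just an illustration of the lifetime rule that replaces the delay hack:

```python
# Toy state machine for the intended plugin lifetime:
# created -> channels_set -> task_set -> started -> stopped.
# Method and state names are invented for illustration.

class TaskLifecycle:
    def __init__(self):
        self.state = "created"

    def set_channel_params(self):
        # one call per channel is allowed, so repeating this state is legal
        if self.state not in ("created", "channels_set"):
            raise RuntimeError("channels must be configured before anything else")
        self.state = "channels_set"

    def set_task_params(self):
        # task parameters are only applied once the channels exist
        if self.state != "channels_set":
            raise RuntimeError("set channel parameters before task parameters")
        self.state = "task_set"

    def start_device(self):
        if self.state != "task_set":
            raise RuntimeError("task parameters must be set before startDevice")
        self.state = "started"

    def stop_device(self):
        if self.state != "started":
            raise RuntimeError("startDevice must run before stopDevice")
        self.state = "stopped"

t = TaskLifecycle()
t.set_channel_params()
t.set_channel_params()   # second channel
t.set_task_params()
t.start_device()
t.stop_device()
assert t.state == "stopped"
```

Encoding the order explicitly like this is the opposite of the delay hack: an out-of-order call fails immediately instead of being silently deferred.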
Have you given any thought to how itom should handle major versions of plugins? If we adopt option two, there will be 3 versions of niDAQmx: 1) the current version, 2) a new version in which each instance is tied to only one device and implements option 2 semantics, and 3) the version you are working on that supports multiple devices per plugin instance. We could distinguish plugin versions with a naming convention, i.e., niDAQmx_1, niDAQmx_2, and niDAQmx_3. This would also support minor and micro versioning, i.e., niDAQmx_2.2.3. It would also allow us to build versions of the plugin that support different versions of the niDAQmx C library (right now I am using 18.1, but 19.1 was published in July). Of course that would complicate the CMake files, but that work would have substantial benefits.
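Names following this convention could be parsed mechanically, which would let tooling compare installed plugin versions. The parser below is purely illustrative of the proposed naming scheme:

```python
import re

# Parse names like 'niDAQmx_1' or 'niDAQmx_2.2.3' into
# (base, major, minor, micro); missing minor/micro default to 0.
def parse_plugin_name(name):
    m = re.fullmatch(r"(?P<base>\w+?)_(?P<ver>\d+(?:\.\d+){0,2})", name)
    if m is None:
        raise ValueError(f"not a versioned plugin name: {name!r}")
    parts = [int(p) for p in m.group("ver").split(".")]
    while len(parts) < 3:
        parts.append(0)
    return (m.group("base"), parts[0], parts[1], parts[2])

assert parse_plugin_name("niDAQmx_2.2.3") == ("niDAQmx", 2, 2, 3)
assert parse_plugin_name("niDAQmx_1") == ("niDAQmx", 1, 0, 0)
```

Since the tuples sort lexicographically, "which installed version is newest" becomes a one-line comparison.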
I have been investigating the use of the callback function for reading channels. Here are some thoughts:
1. It turns out that use of the callback function would have benefits not only for continuous, but also for finite analog input. For example, I have regularly run phase noise experiments with my PicoScope that required 52 seconds of data per segment. The segments are averaged together to produce a smooth FFT plot. For the experiments I ran, the parameters were: 10,000 samples per second, 52 seconds per segment, 30 segments per FFT plot. The niDAQmx library only supports 64-bit floating point for sampling real numbers, so each sample is 8 bytes long, meaning that buffering an equivalent run in one piece would require 10,000 * 52 * 30 * 8 bytes, roughly 125 MB of data. That is a substantial fraction of the free memory on my computer (which has only about 1 GB free), so holding every segment in memory at once is unattractive.
The way the PicoScope handles this is to sample at 16 bits and move each segment to the main machine for averaging (the PicoScope is a USB oscilloscope/spectrum analyzer). This reduces the amount of data that the PicoScope must hold in hardware to 4.1 MB per segment. If the niDAQmx plugin were suitably modified, I could also sample at 16 bits and then do the averaging and conversion to floating point in itom. But this would significantly increase the processing load in itom. Since Python is an interpreted language, it is not optimal for intensive numerical calculations. So, if I wanted to run this experiment with itom and niDAQmx, I would probably want to write the segment averaging and floating-point conversion in C or C++.
2. The callback function in niDAQmx executes in the context of a niDAQmx thread. There is an option to use a user-land thread as the context for the callback function, but that only works on Windows; it isn't supported on Linux. Consequently, to keep itom platform independent, the callback must execute in the niDAQmx thread. Now, I am not completely certain, but since niDAQmx is driver software, I assume it is using kernel threads. You don't want to tie up a kernel thread with a lot of computation, since this delays critical functions in the kernel. So the activity of the callback function should be as lightweight as possible, which means simply getting the data out of the driver and into user-land memory. The computationally intense activity would then be implemented by a user-land (i.e., itom) thread.
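This "keep the driver callback thin" rule is the classic producer/consumer hand-off: the callback only enqueues the chunk and returns; a user-land worker does all the heavy work. A sketch in plain Python (threading stands in for the driver thread, and the "processing" is a stand-in mean):

```python
import queue
import threading

chunks = queue.Queue()

def driver_callback(data):
    """All the NI callback should do: hand the chunk over and return fast."""
    chunks.put(data)

results = []

def worker():
    """User-land thread: the computationally intense part lives here."""
    while True:
        data = chunks.get()
        if data is None:               # sentinel: acquisition finished
            break
        results.append(sum(data) / len(data))  # stand-in for real processing

t = threading.Thread(target=worker)
t.start()
for chunk in ([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]):
    driver_callback(chunk)             # simulated driver invocations
driver_callback(None)
t.join()
assert results == [2.0, 5.0]
```

The queue bounds the callback's work to a single enqueue, so the driver thread is never blocked on processing.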
3. Given the above, what does the itom thread need to do? For the problem outlined above, it needs to average the segments. This requires converting the voltage readings to power units, which means first converting to floating point. It then runs the FFT algorithm (along with preliminary filtering and windowing) to get the bins over which averaging occurs. Then, on a per-segment basis, it adds the bins together and divides by the number of segments (this can be done iteratively, so you don't have to process all of the segments at once).
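The iterative per-segment averaging can be done as a running mean over the bins, so only one segment's bins are ever held at a time. The bins below are faked; in the real experiment they would come from the FFT of each segment:

```python
# Iterative average of spectral bins: after each segment, fold its bins into
# the running mean, so memory use stays at one segment regardless of how
# many segments the experiment records.

def update_mean(mean_bins, new_bins, n):
    """Running mean after n segments (n >= 1; new_bins is the n-th segment)."""
    if mean_bins is None:
        return list(new_bins)
    return [m + (b - m) / n for m, b in zip(mean_bins, new_bins)]

segments = [[2.0, 4.0], [4.0, 8.0], [6.0, 12.0]]   # fake per-segment FFT bins
mean = None
for n, bins in enumerate(segments, start=1):
    mean = update_mean(mean, bins, n)
assert mean == [4.0, 8.0]   # identical to averaging all segments at once
```

This incremental form is what makes the 30-segment experiment feasible without buffering every segment.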
This activity uses finite analog input. So, I think I have made at least one case for why the callback function in the niDAQmx C function suite would be a great benefit even when finite analog input is selected.
We should discuss this, but if we decide to allow callbacks for both continuous and finite analog input, we can make the interface a bit more symmetric. Both continuous and finite analog input would always use the callback technique. startDevice and stopDevice would always be used, and acquire() would become a no-op. getVal or copyVal would then simply move the data sitting in the user-land buffers into a dataObject (either as a shallow or a deep copy).
Comments?
Regards,
Dan
On Aug 25, 2019, at 9:47 AM, Marc Gronle <mar...@it...> wrote:
Hi Dan,
you asked exactly the same questions that I have been asking myself for a while.
I guess all of the following options have one thing in common: if a continuous input is started, it is the responsibility of the user to regularly request a chunk of the already recorded data from the plugin (and optionally store this chunk to a file, visualize it...). Therefore getVal should be called regularly, and the returned dataObject always has a changing number of columns, depending on the number of samples that have been collected since the last call of getVal. If I understand the NI documentation properly, the task will stop and indicate an error if the internal buffer (given by the number of samples in the task configuration) is full and nobody has cleared it by calling DAQmxReadAnalogF64 (or something similar).
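This constraint fixes an upper bound on the polling period: the buffer fills in buffer_samples / rate seconds, so getVal must be called more often than that. A small illustrative helper (the safety factor is an assumption, not an NI recommendation):

```python
# If the NI-internal buffer holds buffer_samples per channel and the task
# runs at rate_hz, the buffer fills in buffer_samples / rate_hz seconds.
# getVal must be called within that window, or the task aborts with a
# buffer-overflow error. A safety factor leaves headroom for jitter.

def max_poll_period(buffer_samples, rate_hz, safety=0.5):
    """Longest safe interval in seconds between getVal calls."""
    return safety * buffer_samples / rate_hz

# e.g. a 10000-sample buffer at 10 kS/s fills in 1 s, so with a 0.5 safety
# factor getVal should run at least every 0.5 s:
assert max_poll_period(10000, 10000.0) == 0.5
```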
Currently, we don't have a similar device with continuous acquisition. There is only one other supported AD/DA device, from Measurement Computing, but continuous acquisition is not implemented there either. Therefore, there is no established infrastructure available in itom for this (not yet).
I see the following options:
1. The quick and dirty solution (I don't like it too much):
- startDevice / stopDevice are NOOPs
- acquire will start the continuous or finite task
- a repeated call to getVal will always obtain the currently available chunk of data in the internal buffer. The task will automatically stop if you don't call getVal again or if you restart the task with another parameterization.
- for finite tasks, getVal blocks until all samples are recorded (or a timeout occurs)
2. The solution which fits best to other IO devices like cameras:
- starting a task will be moved to startDevice (for cameras, startDevice will always bring the camera into a state where images can be taken; each single acquisition is then triggered by 'acquire'; once a device is started, not all parameters can be changed - this highly depends on the camera)
- stopping the task will be moved to stopDevice
- if the task is continuous, it will really be started by calling startDevice (acquire is then a NOOP); if the task is finite, acquire has to be called to fire the start trigger
- for a finite task, getVal blocks until all samples are recorded (or a timeout occurs); for a continuous task, one has to call getVal regularly so that the internal NI buffer never gets full (which would interrupt the task)
3. The AD/DA specific solution:
- startDevice / stopDevice are NOOPs
- acquire will start a continuous or finite task
- there will be a new method in the base class ito::AddInDataIO (e.g. stopAcquisition or simply stop), wrapped by a method with the same name in the python class itom.dataIO, which is only implemented in the case of AD/DA converters and can be called to stop a continuous task
- the behaviour of getVal is the same as in option 1 or 2
Personally, I would prefer option 2. startDevice and stopDevice are currently unused in the NIDAQmx plugin, but for cameras which only provide a continuous data stream and cannot be triggered (like webcams), startDevice will start the data stream and acquire will only "extract" the latest image in the stream and provide it to the following getVal command. A newly introduced command would be available in the python dataIO interface, hence every camera would have this command, which would then be a NOOP. All in all, my ranking is option 2, then option 3; option 1 would only be a workaround.
The NIDAQmx interface also has the option to register a callback function that fires when a certain number of samples have been recorded. In one of my latest emails to you, I talked about the official python package nidaqmx-python and the example where one registers a python method as callback. Inside this callback, the currently available chunk of data is obtained (as a numpy array) and stored to the hard drive. In theory it would also be possible to register a C function in the C++ plugin as this callback and, when it is fired, emit a Qt signal defined in the plugin. For a couple of months now, it has been possible to connect python methods in itom to such a signal. However, you still have to call getVal to really read the data and store it. Therefore I would say that it is often possible to estimate the right time interval at which data has been obtained, to avoid a full NI-internal buffer. Then it is also possible to repeatedly call getVal within a loop with a sleep command, or to use the python class itom.timer (or similar) to register a callback function in python which is called e.g. every 250 ms.
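The loop-plus-sleep variant might look like the sketch below. The data source is mocked; in itom, plugin.getVal with an itom.dataObject would replace source.read(), and the sleep would be tuned (e.g. 250 ms) to stay below the buffer-overflow bound:

```python
import time

class FakeSource:
    """Stands in for the plugin: returns whatever accumulated since last read."""
    def __init__(self, total):
        self.remaining = total

    def read(self):
        chunk, self.remaining = self.remaining[:3], self.remaining[3:]
        return chunk

source = FakeSource(list(range(8)))
collected = []
while True:
    chunk = source.read()         # in itom: plugin.getVal(dataObject)
    collected.extend(chunk)
    if not source.remaining:      # in itom: until the task is stopped
        break
    time.sleep(0.01)              # in practice e.g. 250 ms
assert collected == list(range(8))
```

An itom.timer-based variant would replace the explicit loop with a periodic python callback doing the same getVal-and-append body.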
Best regards
Marc
Am Sa., 24. Aug. 2019 um 17:20 Uhr schrieb dan nessett <dne...@ya...>:
Hi Marc,
I am starting to think about how to implement continuous analog input. One point you have raised is how to signal the beginning and the end of the input. It would be possible to use acquire() to start the input and stopDevice() to end it, but that has the feeling of a hack (overloading existing functions to do something else). Another possibility would be to add startChannel() and stopChannel() methods to the plugin. Is there a way to add plugin-specific method calls so that they have python wrappers? Are there any other plugins doing this (if so, would you point me to them so I can take a look at the code)?
Cheers,
Dan