From: Marc G. <mar...@it...> - 2019-08-29 07:22:57
Hi Dan, this time I will use a blue color and give you some answers right after your answers in red color. If this becomes too complex, we should separate this into different threads. On Tue, Aug 27, 2019 at 02:09, dan nessett <dne...@ya...> wrote: > Hi Marc, > > My comments are inserted into your text in red. > > On Monday, August 26, 2019, 2:04:12 PM PDT, Marc Gronle < > mar...@it...> wrote: > > > Hi Dan, > > every plugin has its own version string (set in pluginVersion.h). This > version is displayed in the plugins toolbox of itom and can also be > obtained via Python. The documentation contains a changelog that states which version of > the plugin ships with which official Windows setup of itom (we only > provide Windows setups due to the limited number of developers). > > If we break the compatibility of the plugin, we have to change the major > version number. Of course the best solution would be to select a new name > for the new plugin with the incompatible behaviour. This would be an option. > However, since the current implementation only has limited functionality > and is only partially working, I would say that it is also possible to make a > hard break and introduce a good, working plugin under the same name. > > OK. > > I don't think that we should introduce the same plugin with the version in > its name, because the development capacity of the core developers is > limited and we won't be able to support more than one version for a longer > time. > > I am not strongly committed to attaching a version number to the plugin > name; it was just a suggestion. However, I am more concerned with the idea > of supporting only one plugin version at a time. This is going to be a > problem, not only for niDAQmx but for all plugins. Eventually, there will be > customers who have developed applications using a particular version of a > plugin. They will not be happy if you end-of-life (EOL) the plugin when you > release a new major version. 
I have some experience with this problem and I > can assure you that you will lose customers. > > Perhaps this is the time to ask: what is the target "market" for itom. > That is, who do you envision to be its customer base? I can think of a > number of possibilities: 1) itom is a teaching tool to be used in course > laboratory experiments, but is not intended for serious operational > deployment; 2) itom is intended for university use, say by university labs, > but not industrial use; 3) itom is targeted at markets 1 and 2, but also at > industrial deployment. Its current deployment should give you some idea of > what markets it currently satisfies. I would be curious to know what these > markets are. If these markets represent all those you would like to > service, then knowing what they are would help me understand your > development strategy. > > Let me give you some background on my career in order to establish I have > some experience with deploying software systems. After completing my > doctoral dissertation, I spent 3 1/2 years at the Australian National > University as a Research Fellow. This gave me experience in a > university environment. I then moved to Lawrence Livermore National > Laboratory for over 17 years, where I conducted research in computer > systems. However, LLNL was not like a university in that the research I > conducted was based on experience with operational systems. After the end > of the Cold War, funding was dramatically reduced at LLNL and I decided to > move to industry. I spent 2 years at Sun Microsystems (purchased later by > Oracle) working on product development, specifically the Solaris operating > system and Sun's CORBA compliant distributed application programming > environment. I then moved to 3Com (purchased later by HP) working for 5 > years in its Skunk Works on invention and applied research. I then left > 3Com, did some individual consulting for 2 start-ups and then retired. 
> > I apologize if providing this background seems narcissistic. My intention > is not to build myself up, but to provide evidence that I have > experience with deploying systems and can help you understand the problems > you face in making itom a success. > About different plugin versions at the same time: From my experience, it is very difficult to allow loading different versions of the same plugin at the same time when they link against different versions of the same 3rd-party SDK. This is especially the case under Windows, where it is not very common for different versions of the same library to be distinguishable by filename. However, in order to simultaneously load the same plugin linked against different versions of a 3rd-party library, all these different libraries have to be in the search path and must therefore have different names. Many SDKs also depend on an additional driver, of which you can sometimes install exactly one version. I know camera SDKs where it is not even possible to obtain the version number at compile and/or run time. We have also seen camera SDKs that renamed or removed parts of their API or libraries from one version to the next without changing the major version of the SDK. This makes it very difficult to always support different versions of a driver. A big problem is that it is very hard to test hardware with different versions of its drivers (very time consuming, drivers are sometimes tied to firmware, firmware cannot be downgraded...). All in all, our objective is to keep every plugin compilable against different versions of its SDK for as long as possible. This means that interested people should be able to take their desired SDK for a camera or other hardware plugin, indicate the path to their SDK in CMake (of the plugins), compile the plugin against their SDK and run it. Due to our very limited time to develop itom, we only provide itom as a Windows setup and continuously update plugins to up-to-date versions of the different manufacturer SDKs. 
We try to test new SDKs with real hardware, however this is not always simple, since we don't have access to every single device. Linux users and Windows users who want to compile a plugin against an older version of a 3rd-party SDK have to compile itom and the plugins by themselves. Of course we wish we could provide more, but more is not possible, since most of the development and deployment of setups has to be done in the spare time of a couple of developers (me included). About the "target" market: We started developing itom in 2011 to operate many experimental setups in the labs of the Institute of Applied Optics at the University of Stuttgart. Additionally, we used it to evaluate the acquired data. Then we published itom as an open-source application, such that other interested people can use it for their purposes. However, we never had a specific roadmap in mind concerning the target market. We invite everybody to use it for their purposes, but we always knew that, as an institute, we would never be able to provide real service or long-term support, such that selling this software was never an option. Hence the open-source solution, with the intention that we invite anybody to use and improve the software, and we give hints or help as long as we can manage this with our limited capacity and time. Of course we would like to provide better documentation, better versioning, a clearer roadmap... but none of the main developers can spend the necessary time to achieve all these goals. I have been working at a company for a couple of years now, and my colleagues and I use itom very often in our daily work to build test systems and to support internal development projects by means of data evaluation, visualization... However, we don't use itom in real production systems, but only in basic research and test systems, and everybody knows about the limitations (e.g. missing long-term support). 
The big advantage is that we are able to develop further (private) plugins or improve other parts of itom (e.g. the very powerful plotting tools to visualize big arrays and data stacks) in order to quickly solve our problems. > So from my side: either hard-replacing the current version with a proper > new version (same name, major version number change and a hint in the > documentation), or keeping the current version as a legacy version (which would > be disabled per default) > and adding the new plugin with a new name (however, I don't have a good > idea for a new name ;) ) > With respect to different NIDAQ versions, I don't think that it is > necessary to simultaneously provide different versions of the plugin. I > guess that the interface of NI-DAQmx is quite stable, therefore it should be > possible to compile the plugin > against, let's say, 18.x and run it with 19.x. > > Retaining a backward compatible interface is necessary but not sufficient > for achieving forward compatibility. New versions of software frequently > introduce bugs that break applications which worked against the old > implementation. It is generally not a good idea to assume a new library > will link and run flawlessly with applications built against an old > library, even if the interfaces are compatible (unless the library > developers conduct serious regression testing - something that I suspect > the NI developers may not do). The level of toleration for this kind of > problem is generally related to your target markets. If itom is just a > learning tool (e.g., targeting only market 1 described above) then you > can probably get away with broken applications, since a course lab can > continue to use old software. However, if you are targeting market 2 and > especially market 3, you will lose customers if a plugin crashes when using > a new version of a library (like the NI-DAQmx library). 
Unless > you thoroughly regression test plugins with both the old and new libraries, > you are bound to run into problems. > Again, you are totally right with all your comments and concerns. However, the number of core developers and their available spare time is very limited, therefore we are currently not able to test all plugins for compatibility with older versions. Unfortunately this is impossible. Before the last official setup 3.2.1, some colleagues at the institute spent a couple of weeks testing many plugins with the newest drivers and whatever hardware they could find somewhere at the institute. That is all we can currently afford. Nevertheless, we try to keep backward compatibility with older drivers in the source code of the plugins, such that people who are willing to compile itom by themselves are able to compile against other versions of a 3rd-party SDK. This is what should be achieved for the NI plugin as well: the CMakeLists.txt should not contain fixed paths and versions, but we should try to support a bigger range of versions (for the NI-DAQ SDK this is not very difficult; under Windows it is no problem to compile against NI-DAQmx 14.x up to the current version). > > Of course CMake should be set up such that anybody can select the path to > any newer version of the NI-DAQmx library and the plugin is then compiled > against this. This should be sufficient. > For a possible Windows setup of itom, one fairly recent version of NI-DAQmx > will be installed on the build computer and this version is then used to > compile the plugin. Currently we are only using very basic methods of the > SDK, therefore > it was no problem to compile the plugin against, let's say, 18.x and run it > with the runtime version of 19.x (this is at least my experience under > Windows). 
> > The thing which I don't like very much in the current implementation is > the fact that one has to set the same parameter name again and again in > order to add channels. This is not a desired behaviour for itom plugins. I > think that > it is much easier to have only one channel parameter with a > semicolon-separated list of all channel strings (exactly what getParam > of the channels returns; therefore I would like setting and getting > this parameter to have the same meaning). > This is what I do in my current implementation of the plugin and it works > rather well. Currently I took your plugin and changed it to the single > 'channel' parameter, added support for multiple devices (which works well) and > for finite and continuous tasks. > Additionally I plan to re-add the configuration dialog. > > Sure. I have no problem with this. Do you plan on changing the unit tests > to test this new implementation? > Yes, I plan to adapt the unit tests. However, I guess that they will be much simpler, since it is much easier if we do not have to support analog, digital and counter tasks at the same time in the same instance of the plugin. > > To simplify the plugin, one has to select the task type (analog/digital > in/out) at initialization. This cannot be changed any more for one instance > (I guess that it is not necessary to change this basic task type at any > later time; an analog input task will always remain one). This simplifies > the programming and reduces the number of parameters to only 6-8 in total. > Maybe we can also use your current implementation (as a pull request) to > replace the current plugin > and then use my modified version as a 2nd plugin (or future replacement) > which will then support continuous tasks... > > Well, this is a major architectural change. I am not opposed to it, but I > think some discussion would be useful. 
> > As I understand your vision for the new version of the plugin, it will be > possible to attach channels to it that are on different devices. With this > new change, a plugin instance will only service one type of ADC activity > (e.g., analog input, digital input, analog output). So, effectively, you > are factoring the underlying NI C library functionality according to ADC > activity type. The current plugin factors the underlying NI C library > functionality according to specific devices, supporting different ADC > activity types in the same instance. > My plan is rather to connect one instance of the plugin to one task. My idea came up when I played around with the NI MAX application (with simulated devices). There I noticed that it is never possible to create one task that consists of a mixture of input/output channels, or of analog/digital/counter channels. Therefore I find it very natural to say that every task is covered by one instance of the plugin in itom. Officially, NI supports creating one task that contains channels from different devices. Therefore I would support this in the plugin, too. However, I have already seen that some devices do not support this. For instance, if I add such a device as a simulated one in my NI MAX application and try to add channels from this device to a multi-device task in my new itom NI plugin, there is an error message. I now also request the detailed error description from the NI-DAQmx driver, and this is very powerful. It gives detailed information about why the task cannot be created. In this specific situation, the error tells people that the NI device XYZ cannot be combined with other devices in one task. I guess that this hint gives enough information to understand the problem. Therefore I don't see a problem with generally supporting the combination of channels from multiple devices in one task. The implementation is not much more complex than a single-task, single-device implementation. 
> > Let me reveal now (up to this point I have kept this to myself) that I > don't think supporting multiple devices per plugin instance is a good > idea, for the following reasons: > I don't want to judge if this is a good idea or not. As I described above, I have seen in the NI MAX application that NI allows creating tasks with channels from different devices. Therefore I would say that there might be situations where this is helpful (e.g. if you have several NI grabber cards of the same type in the same computer to increase the number of connectable channels). Since the implementation overhead compared to single-device tasks is very low, and since there are meaningful error messages in case at least one device does not support that behaviour, I would keep allowing it (to stay close to the features that are officially supported by NI-DAQmx). Of course, any user then has to decide whether using this feature makes sense in the context of his hardware and requirements. > > 1. Multiple devices are not homogeneous in the services they supply. For > example, I have both a PCI-6220 and a PCI-4461 card on my development > machine. The 6220 supports 250 KS/s analog input aggregate, while the 4461 > supports a maximum of 204.8 KS/s analog input per channel. Since the sample > rate is a task parameter, associating these two devices with a single > plugin instance means tying one device to the limitations of the other. For > example, if the instance controls 4 analog input channels of the 6220 and 2 > analog input channels of the 4461, then the maximum rate that could > be specified in the task parameters would be 250/4 = 62.5 KS/s, which means > the 4461 channels would be severely undersampled. > > 2. Associating a plugin instance with a device is conceptually clean. It > is easy to visualize the behavior of a plugin when it is associated with a > single device. If it is associated with multiple devices, the nature of > the ADC transfer differs from device to device. 
For example, the 4461 > supports pseudo-differential inputs, while the 6220 does not. The 4461 > supports different voltage ranges than the 6220 (4461: +/- .316V, +/- 1V, +/- > 3.16V, +/- 10V, +/- 31.6V, and +/- 42.4V; 6220: +/- .2V, +/- 1V, +/- 5V > and +/- 10V.) While the Channel Parameters would allow the specification > of different voltage ranges, keeping them all straight is a chore. It is > far easier to do this if there is one device per plugin instance. > > These reasons are just off the top of my head. I am sure I could find > others that would support the one-device-per-plugin architecture. > > But, I would be open to arguments why one ADC activity per plugin instance > is advantageous. > > Concerning the callbacks: > For the purpose that you want to achieve, I think that the callback is not > required, since applying type casts, slicing or element-wise mathematical > calculations to arrays or dataObjects in Python is very fast. The reason > is that both > dataObjects and numpy arrays are implemented in C/C++, and type casts and > math operations are fully based on common numeric libraries like BLAS, > ATLAS or LAPACKE. Therefore array operations in Python are very fast. So it > should > work to call getVal a couple of times per second to obtain the latest > chunk of double data, type cast this data to uint16, apply a moving average > filter and save the output to a file. Additionally, passing the dataObject > from the plugin > to Python is only a shallow copy and not necessarily a memcpy operation. > Therefore I would suggest keeping the plugin "simple" and generic. If > somebody needs to add special calculations inside the plugin, it is > either possible to create an adapted copy of this plugin or to directly use > the powerful Python package nidaqmx from National Instruments, which > gives access to all the special features of the SDK. 
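[Editor's note] The per-chunk processing Marc suggests (cast the float64 chunk, apply a moving average, then store it) can be sketched in plain numpy; the scaling factor and window length below are arbitrary placeholders, not values from the plugin:

```python
import numpy as np

def process_chunk(chunk, window=5, counts_per_volt=1000.0):
    """Cast a float64 voltage chunk to uint16 counts and smooth it."""
    # scale volts to integer counts before the cast (placeholder scaling)
    counts = np.clip(chunk * counts_per_volt, 0, 65535).astype(np.uint16)
    # simple moving average via convolution with a flat window
    kernel = np.ones(window) / window
    smoothed = np.convolve(counts.astype(np.float64), kernel, mode="valid")
    return counts, smoothed

chunk = np.linspace(0.0, 1.0, 100)      # stands in for a float64 chunk from getVal
counts, smoothed = process_chunk(chunk)
```

Since the cast and the convolution both run in compiled numpy code, this kind of loop can comfortably run a few times per second, which is Marc's point about not needing a callback for this use case.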
> > Actually, what I had in mind is implementing an algorithm plugin for > processing, and controlling a niDAQmx plugin instance using the > DataIOThreadCtrl class. I had no intention of putting the numerical > processing code in the niDAQmx plugin implementation. > This sounds good. Algorithm plugins can be used to directly control other instances of hardware plugins without having a Python script as the connecting element between both. You can use the helper class DataIOThreadCtrl for this, or you can also directly communicate with the hardware instance from an algorithm. For instance, the serialIO plugin is used very often as a communication device for other plugins; e.g. there are motor stages where you have to pass a pre-initialized instance of the serialIO plugin to the init method of the motor stage plugin. Then the motor stage plugin gets the pointer to the instance of serialIO, which can be cast to ito::AddInBase or ito::AddInDataIO, such that all methods from these interfaces can be called from the motor stage plugin. > > Regards, > > Dan > Regards, Marc > > Cheers > Marc > > On Mon, Aug 26, 2019 at 01:22, dan nessett <dne...@ya...> wrote: > > Hello Marc, > > I agree. I think option 2 is the best strategy going forward. This will > change the existing niDAQmx plugin so it is incompatible with the one that > exists now. This will give me the opportunity to fix the hack that I used > to delay setting task parameters until channel parameters are set. This > would change the order of the commands in the plugin lifetime to:
> 
> create plugin instance
> set ChannelParameters (one setParam for each channel in the task)
> set TaskParameters
> startDevice
> <execute the necessary commands to get the data. This differs depending on whether finite or continuous input is selected. But, see my comments below on using a callback function>
> stopDevice
> delete plugin instance. 
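[Editor's note] From the itom script console, Dan's command sequence combined with Marc's single semicolon-separated 'channel' parameter might look like the sketch below. This is not runnable standalone Python: it assumes a running itom session, and the plugin name "NI-DAQmx", the init argument, and the parameter names ('channel', 'samplingRate') are illustrative assumptions, not the plugin's confirmed interface:

```python
# sketch only - runs inside an itom session; names below are assumptions
from itom import dataIO, dataObject

# create plugin instance; task type is fixed at initialization
ni = dataIO("NI-DAQmx", "analogInput")        # hypothetical init argument

# one semicolon-separated channel parameter instead of repeated setParam calls
ni.setParam("channel", "Dev1/ai0;Dev1/ai1")   # hypothetical channel syntax
ni.setParam("samplingRate", 10000.0)          # task parameter, hypothetical name

ni.startDevice()    # creates/starts the NI task
ni.acquire()        # fires the start trigger (finite task)
d = dataObject()
ni.getVal(d)        # blocks until all samples are recorded, shallow copy
ni.stopDevice()
del ni              # delete plugin instance
```

One nice property of this ordering is that getting and setting the 'channel' parameter are symmetric, exactly as Marc requests above.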
> > Have you given any thought to how itom should handle major versions of > plugins? If we adopt option two, there will be 3 versions of niDAQmx: 1) > the current version, 2) a new version for which each instance is tied to > only one device and implements option 2 semantics, and 3) the version you > are working on that supports multiple devices per plugin instance. We could > distinguish plugin versions with a naming convention, i.e., niDAQmx_1, > niDAQmx_2, and niDAQmx_3. This would also support minor and micro > versioning, i.e., niDAQmx_2.2.3. It would also allow us to build versions > of the plugin that support different versions of the niDAQmx C library > (right now I am using 18.1, but 19.1 was published in July). Of course that > would complicate the CMake files, but that is work that would have > substantial benefits. > > I have been investigating the use of the callback function in reading > channels. Here are some thoughts: > > 1. It turns out that use of the callback function would have benefits not > only for continuous, but also for finite analog input. For example, I have > regularly run phase noise experiments using my PicoScope that required 52 > seconds of data per segment. Each of these segments is averaged together to > produce a smooth FFT plot. For the experiments I ran, the parameters were: > 10,000 samples per second, 52 seconds per segment, 30 segments per FFT > plot. The niDAQmx library only supports 64-bit floating point for sampling > using real numbers. So each sample is 8 bytes long, meaning an equivalent > FFT computation would require 10,000*52*30*8 bytes of data, roughly 125 MB. Holding all of that in memory at once is a poor fit for my computer (which > only has about 1 GB of free memory), and longer captures or higher sample rates quickly make it impossible. > > The way the PicoScope handles this is to sample at 16 bits and move each > segment to the main machine for averaging (the PicoScope is a USB > Oscilloscope/Spectrum Analyzer). 
This reduces the amount of data that the > PicoScope must hold in hardware to a single segment at a time (about 1 MB at 16 bits per sample). If the > niDAQmx plugin were suitably modified, I could also sample at 16 bits and > then do the averaging and conversion to floating point in itom. But this > would significantly increase the processing in itom. Since Python is an > interpreted language, it is not optimal for high-intensity numerical > calculations. So, if I wanted to do this experiment using itom and niDAQmx, > I would probably want to write the segment averaging and floating point > conversion in C or C++. > > 2. The callback function in niDAQmx executes in the context of the niDAQmx > thread. There is an option for using a user-land thread as the context for > the callback function, but that only works for Windows; it isn't supported > on Linux. Consequently, to keep itom platform independent, the callback > must execute in the niDAQmx thread. Now I am not completely certain, but > since niDAQmx is driver software, I assume it is using kernel threads. You > don't want to tie up a kernel thread with a lot of computation, since this > delays critical functions in the kernel. So, the activity of the callback > function should be as lightweight as possible, which means simply getting > the data out of the driver and into user-land memory. The > computationally intense activity would then be implemented by a user-land > (i.e., itom) thread. > > 3. Given the above, what does the itom thread need to do? For the problem > outlined above, it needs to average the segments. This requires converting > the voltage readings to power units, which means first converting to > floating point. Then run the FFT algorithm (along with preliminary > filtering and windowing) to get the bins over which averaging occurs. 
Then, > on a per-segment basis, add the bins together and divide by the number > of segments (this can be done on an iterative basis, so you don't have to > process all of the segments at once). > > This is activity using finite analog input. So, I think I have made at > least one case why using the callback function in the niDAQmx C function > suite would be of great benefit when finite analog input is selected. > > We should discuss this, but if we decide to allow callbacks for both > continuous and finite analog input, we can make the interface a bit more > symmetric. Both continuous and finite analog input would always use the > callback technique. startDevice and stopDevice would always be used, and > acquire() would become a no-op. getVal or copyVal would then simply move the data > sitting in the user-land buffers into a dataObject (using either a shallow > or deep copy). > > Comments? > > Regards, > > Dan > > On Aug 25, 2019, at 9:47 AM, Marc Gronle <mar...@it...> > wrote: > > Hi Dan, > > you asked exactly the same questions that I have been asking myself for a while. > > I guess all of the following options have one thing in common: if a continuous > input is started, it is the responsibility of the user to regularly > request a chunk of already recorded data from the plugin > (and optionally store this chunk to a file, visualize it...). Therefore > getVal should be called regularly, and the returned dataObject always has a > changing number of columns, depending on the > number of samples that have been collected since the last call of getVal. > If I understand the NI documentation properly, the task will stop and > indicate an error if the internal buffer (given by > the number of samples in the task configuration) is full and nobody has > cleared it by calling DAQmxReadAnalogF64 (or something similar). > > Currently, we don't have a similar device with continuous acquisition. 
> There is only one other supported AD/DA device, from Measurement Computing, > but continuous acquisition > is not implemented there either. Therefore, there is no established > infrastructure available in itom for this (not yet). > > I see the following options: >
> 1. The quick and dirty solution (I don't like it too much):
> - startDevice / stopDevice are NOOPs
> - acquire will start the continuous or finite task
> - repeated calls to getVal will always obtain the currently available chunk of data in the internal buffer. The task will automatically stop if you don't call getVal again or if you restart the task with another parameterization. For finite tasks, getVal blocks until all samples are recorded (or timeout).
>
> 2. The solution which fits best to other IO devices like cameras:
> - starting a task will be moved to startDevice (for cameras, startDevice will always bring a camera into a state where images can then be taken; each single acquisition is then triggered by 'acquire'; once a device is started, not all parameters can be changed - this highly depends on the camera)
> - stopping the task will be moved to stopDevice
> - if the task is continuous, it will really be started by calling startDevice (acquire is a NOOP then); if the task is finite, acquire has to be called to fire the start trigger
> - for a finite task, getVal blocks until all samples are recorded (or timeout); for a continuous task one has to regularly call getVal such that the internal NI buffer never gets full (and the task would be interrupted then)
>
> 3. AD/DA-specific solution:
> - startDevice / stopDevice are NOOPs
> - acquire will start a continuous or finite task
> - there will be a new method in the base class ito::AddInDataIO (e.g. 
> stopAcquisition or simply stop), which will be wrapped by a method with the > same name in the Python class itom.dataIO; it is only implemented in the > case of AD/DA converters and can be called to stop a continuous task > - the behaviour of getVal is again the same as in option 1 or 2 > > > Personally, I would prefer option 2, since startDevice and stopDevice > are currently unused by the NIDAQmx plugin, while for cameras which only > provide a continuous data stream and cannot be triggered, like webcams, > startDevice starts the data stream and acquire only "extracts" the > latest image in the stream and provides it to the following getVal command. > A newly introduced command would become available to the whole dataIO interface in Python, > hence every camera would also have this command, which would then be a NOOP. All in > all, my ranking would be option 2, then option 3; option 1 would only be > a workaround. > > The NIDAQmx interface also offers the option to register a callback function > once a certain number of samples have been recorded. In one of my latest > emails to you, I talked about the official Python package nidaqmx-python and > the example where one registers a Python method as a callback. Inside > this callback, the currently available chunk of data is obtained (as a > numpy array) and stored to the hard drive. In theory it would also be > possible to > register a C function in the C++ plugin for this callback and, when it > fires, emit a Qt signal defined in the plugin. For a couple of > months now it has been possible to connect Python methods in itom to such a > signal. However, > you still have to call getVal to really read the data and store it. > Therefore I would say that it is often possible to estimate the right time > interval at which data will have been obtained, in order to avoid a full NI-internal buffer. 
> Then it is also > possible to repeatedly call getVal within a loop with a sleep command, or to > use the Python class itom.timer (or similar) to register a callback > function in Python which is called e.g. every 250 ms. > > Best regards > > Marc > > On Sat, Aug 24, 2019 at 17:20, dan nessett <dne...@ya...> wrote: > > Hi Marc, > > I am starting to think about how to implement continuous analog input. I > think one point you have raised is how to signal the beginning of the input > and its end. It would be possible to use acquire() to start the input > and stopDevice() to end it, but that has the feeling of a hack (overloading > existing functions to do something else). Another possibility would be to > add startChannel() and stopChannel() methods to the plugin. Is there a way > to add plugin-specific method calls so they have Python wrappers? Are there > any other plugins doing this (if so, would you point me to them so I can > take a look at the code)? > > Cheers, > > Dan
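[Editor's note] As a footnote to the segment-averaging discussion earlier in the thread: the iterative scheme Dan describes (add each segment's FFT bins into a running accumulator so that only one segment needs to be in memory at a time) can be sketched in plain numpy. The segment sizes here are toy values, not the 10,000 S/s x 52 s parameters from his experiment:

```python
import numpy as np

def averaged_power_spectrum(segments):
    """Average per-segment FFT power iteratively, one segment in memory at a time."""
    acc = None
    count = 0
    for seg in segments:                     # segments may be a generator of chunks
        x = seg.astype(np.float64)           # e.g. convert 16-bit samples to float
        spectrum = np.abs(np.fft.rfft(x)) ** 2   # power in each FFT bin
        acc = spectrum if acc is None else acc + spectrum
        count += 1
    return acc / count                       # mean power over all segments

rng = np.random.default_rng(0)
segs = (rng.standard_normal(1024) for _ in range(8))   # 8 toy segments
avg = averaged_power_spectrum(segs)
```

Because the accumulator holds only one spectrum, the memory footprint stays at one segment plus one bin array regardless of how many segments are averaged, which is exactly the property that makes per-segment transfer from the device attractive.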