From: Uwe H. <uw...@he...> - 2016-10-12 02:41:41
Hi Rudolf, On Fri, Sep 16, 2016 at 01:39:14PM +0200, Rudolf Reuter wrote: > I have developed a new sigrok decoder - gpib. > The project is hosted on Github, see: > https://github.com/rudi48/sigrok-gpib > > It would be helpful, if somebody could please test it, and give me feedback. Looks good, thanks a lot for working on this. I've done some cleanups and fixes on the decoder itself and then merged it: - Fixed channel names ('id' fields all-lowercase, use GPIB standard naming such as "DIO1" etc.) - All channels are required (non-optional) for now, not entirely sure how many of them are *actually* optional in real-life (for the decoder). Maybe REN? - I dropped various debug prints and such, those don't belong upstream in the PD (but can be useful during development, it's one method to simplify PD development or to debug issues). - Dropped/simplified some random other stuff, mostly unused chunks of the 'parallel' PD this one was based on. Specifically, the CLK pin is now also gone, there's no CLK in GPIB. - There's a few more things that could be simplified or clarified in the PD, I might have another look at some later point at those maybe. - I kept the 'sample_total' for now as a temporary workaround, but we'll eventually have a more generic method to handle this kind of stuff in libsigrokdecode (end of stream). - Both .py files are now (C) Copyright Rudolf Reuter <reu...@ar...>, there's pretty much nothing left of the 'parallel' PD which would warrant having my Copyright lines in the GPIB decoder. I've also added your *.sr example file to our sigrok-dumps repo with a small additional README (and some renaming of the channels for better readability). Finally, I've added a test-case in our sigrok-test repo so we can verify that the PD works as intended, and keeps doing so when we change libsigrokdecode later on. 
Details here: http://sigrok.org/gitweb/?p=libsigrokdecode.git;a=commit;h=ffd58b683fc200c3bfb96a274dd2bc5c4cea7dcc http://sigrok.org/gitweb/?p=sigrok-dumps.git;a=commit;h=55aca15dd789daa407244173a0fdb52ed099b164 http://sigrok.org/gitweb/?p=sigrok-test.git;a=commit;h=0c804ea53c3057e728f98fa9d4264b449333ee6e If you could provide some more files for sigrok-dumps / sigrok-test with more GPIB traffic in them (different devices, different GPIB commands and such) that would be great! It'll surely prove very useful to test/improve the decoder some more. Cheers, Uwe. -- http://hermann-uwe.de | http://randomprojects.org | http://sigrok.org |
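The handshake logic a GPIB decoder implements can be sketched in plain Python (a simplified illustration only, not the actual PD code, which is written against the libsigrokdecode API): GPIB uses negative logic on the eight DIO lines, a data byte is valid on the falling edge of DAV, and ATN asserted (low) marks command bytes.

```python
# Simplified sketch of GPIB byte capture (illustration only; the real
# decoder uses the libsigrokdecode API and annotation output).
# Each sample is (dav, atn, dio) where dio is the raw 8-bit DIO value.
# GPIB uses negative logic: a line pulled low means "asserted"/logic 1.

def decode_gpib(samples):
    """Return (kind, byte) tuples; kind is 'cmd' when ATN was asserted."""
    out = []
    prev_dav = 1
    for dav, atn, dio in samples:
        # Data is latched on the falling edge of DAV (the talker asserts
        # DAV once the listeners have released NRFD).
        if prev_dav == 1 and dav == 0:
            byte = (~dio) & 0xFF          # invert: negative logic
            kind = 'cmd' if atn == 0 else 'data'
            out.append((kind, byte))
        prev_dav = dav
    return out

# Example: one command byte 0x41 (a talk address), then data byte 0x2A.
samples = [
    (1, 1, 0xFF),            # idle, all lines released (high)
    (0, 0, (~0x41) & 0xFF),  # DAV falls with ATN low -> command 0x41
    (1, 1, 0xFF),            # DAV released
    (0, 1, (~0x2A) & 0xFF),  # DAV falls with ATN high -> data 0x2A
]
print(decode_gpib(samples))  # [('cmd', 65), ('data', 42)]
```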
From: Boris S. <bs...@pa...> - 2016-10-11 22:43:46
Hi All, I'm new to the list. I've just adopted some FreeBSD ports (libserialport, sigrok and friends) that Uffe Jakobsen has been maintaining. I have used FreeBSD since 2.2.x and I'm a FreeBSD ports committer. I own a Rigol DS1000 oscilloscope which I'd like to use with FreeBSD. I have only done some small trials of the software discussed here, but I'm impressed. Thank you to all involved! And I hope to contribute to your code. -- WBR, Boris Samorodov (bsam) FreeBSD Committer, http://www.FreeBSD.org The Power To Serve
From: Chris D. <chr...@ho...> - 2016-10-11 18:43:26
I like Gerhard's suggestion as well. This design allows for PDs and the output module to split the work. Each output module will need to document 1) what metadata it needs and 2) what the PD's binary stream will look like. In most cases, I would expect the PD's binary stream to be very close to the final format (ex: WAV and MIDI). Theoretically, as Gerhard suggested earlier, an output module could go a step further and translate from one format to another similar format (ex: all audio-based PDs output WAV data but the output module can write it as WAV or AU format). Sent from my Verizon, Samsung Galaxy smartphone -------- Original message -------- From: Soeren Apel <so...@ap...> Date: 10/11/16 10:50 AM (GMT-07:00) To: Gerhard Sittig <Ger...@gm...>, sig...@li... Subject: Re: [sigrok-devel] Outputting files from Protocol Decoders I like this approach. What do you guys think about a meta packet that describes the meaning/content of the binary output? With that, the I2S PD could announce sample rate/channels/resolution (before or after starting output, wouldn't really matter) and the sigrok client could pass this meta data to an output module along with the binary data that was captured from the PD. That way, the PD can stream whatever it has, the output module can create an output file in any way it wants and the libsigrok/libsigrokdecode infrastructure can remain virtually unchanged. The icing on the cake would be that a "MIDI" output module (or any other) could easily be made to deal with binary data from different PDs - in contrast to having the need to implement such I/O in every single PD directly. Any thoughts? All the best, -Soeren On Tue, 2016-10-11 at 12:54 +0200, Gerhard Sittig wrote: > [ summary: use an output module, see srzip for prior art ] > > On Tue, Oct 11, 2016 at 06:03 +0000, Chris Dreher wrote: > > > > > > It seems the streaming design of sigrok eliminates a lot of > > file formats. 
Namely, most file formats with a size field that > > precedes a large variable amount of data can not be supported > > in sigrok. This includes file formats that are 99% > > stream-oriented. For example, MIDI files are 99% compatible > > with sigrok's stream design other than a single 32-bit size > > field in the track chunk. For the MIDI protocol, there are not > > good alternatives to saving data other than in a MIDI file. It > > is unfortunate that such files currently can not be supported > > by sigrok. > > > > That said, I get that sigrok's primary goal is support for > > logic analysis and oscilloscope. Being able to extract and > > save data to a file format is secondary. > > This might be too strict an interpretation. I still feel that > what you want to achieve _is_ possible, just not from within > _decoders_. It's simply that seek operations and manipulation of > previously written data needs to occur in the most appropriate > location, which is not from within a pipe that operates on > strictly linear streams. > > In theory or rather strictly speaking, output modules might > assume mere streaming, too. It's good practice to avoid > unnecessary dependencies when possible. But there is prior art > where some output module has direct access to a file object, and > does manipulate previously written data as more chunks of > information become available. > > > It's true that in the current set of output modules most fill in > an "out" string parameter in their receive() method in straight > forward ways (mere formatting of what they received). Those > strings then are put into an output stream in strict linear ways. > > The CSV output mostly passes on the samples, and later writes > additional gnuplot(1) information that was accumulated in the > process (if the respective feature was enabled). Multiple files > appear to get written by that one output module (optional). 
> > The "srzip" output module for sigrok's native .sr file format > stands out in that it operates on a ZIP archive that gets created > initially, and gets updated in several steps at several points in > time while the input stream is being received and processed. See > the zip_create() and zip_append() routines. > > So I wouldn't consider it too far a stretch when the WAV (and > later MIDI) output modules insist in accessing a file object > directly, open the file with an initial header which reflects the > available information at the time of file creation, and update > that header as more chunks or samples get appended to the file. > This seems to perfectly be in line with what existing output > modules do already. It's reflecting on a property of the very > format that the user requested for the output. > > For the record: The notion of "internal I/O handling" in output > modules and the associated "output flag" was introduced in commit > 3cd4b381 as of 2015-08-15. > > > Note that I might be slightly off in my reading of the CSV output > module's logic, only had a cursory look. But the srzip output > module certainly does have a test for "filename available?" in > init(), and calls zip_add(), zip_rename(), and zip_replace() > routines which do more than merely append data at the end of a > previously written file. Even if one interprets the ZIP archive > as "just a directory" -- still previously written members get > updated as required. And none of this feels wrong to me, it all > looks appropriate. Just not for a decoder. :) > > > > > > Do people think addressing this issue is worth fixing or is > > time better spent on other issues and features? > > I think that the decoder API doesn't need fixing. When > individual decoder implementations assume conditions that are not > met, then the implementations need fixing but not the design. > > "The issue" goes away if you move the respective manipulation > logic from the decoder to an output module. 
> > > virtually yours > Gerhard Sittig ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, SlashDot.org! http://sdm.link/slashdot _______________________________________________ sigrok-devel mailing list sig...@li... https://lists.sourceforge.net/lists/listinfo/sigrok-devel |
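The meta-packet idea discussed in this thread can be sketched roughly as follows (a hypothetical illustration; `BinaryMeta` and `wav_header` are names invented here, not part of the sigrok API): the PD announces its binary stream's properties once, and a generic output module combines them with the raw data, here by building a standard 44-byte PCM WAV header.

```python
# Hypothetical sketch of the proposed meta packet (not actual sigrok API):
# a PD announces its binary stream's properties once, and a generic output
# module uses them to produce a container header, here a minimal WAV header.
import struct
from dataclasses import dataclass

@dataclass
class BinaryMeta:
    samplerate: int   # e.g. announced by the I2S PD
    channels: int
    bits: int

def wav_header(meta: BinaryMeta, data_len: int) -> bytes:
    """Build a canonical 44-byte PCM WAV header from the meta packet."""
    byte_rate = meta.samplerate * meta.channels * meta.bits // 8
    block_align = meta.channels * meta.bits // 8
    return struct.pack(
        '<4sI4s4sIHHIIHH4sI',
        b'RIFF', 36 + data_len, b'WAVE',
        b'fmt ', 16, 1, meta.channels,
        meta.samplerate, byte_rate, block_align, meta.bits,
        b'data', data_len)

meta = BinaryMeta(samplerate=44100, channels=2, bits=16)
hdr = wav_header(meta, data_len=1000)
print(len(hdr))  # 44
```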
From: Brüns, S. <Ste...@rw...> - 2016-10-11 18:28:52
On Tuesday, 11 October 2016 19:54:30 CEST Soeren Apel wrote: > Hi Carl-Fredrik, > > > The persistence is more than just esthetics, it allows you to detect > > spurious anomalies since the little short deviation stays on screen > > long enough to be detected, if an oscilloscope doesn’t have > > persistence and at the same time don’t render all frames you could > > miss important information. > > Yeah sure, though using an OpenGL output is not a requirement > for persistence. > > > I understand the hesitation to use OPENGL or OPENGL-ES but > > realistically what device could you think of that doesn’t have h/w > > support for GL-Shaders that would run an oscilloscope application? > > I'm thinking of Android tablets, though I don't really know how > many are out there who lack 3D acceleration. I do however know > that using it drains the battery more than not using it. I am not aware of any halfway recent Android devices which are not capable of OpenGL ES 2.0, most should be able to do ES 3.0. Running a 3D benchmark at full tilt will of course drain the battery. Using the GPU for stuff it is meant to do will actually save energy. Current GPUs (even embedded) are capable of pushing several GByte/s around and doing calculations on this data. > My personal dislike towards OpenGL is that I'm not aquainted with > t (I want to learn, don't get me wrong, I just don't have the > time to do so) and that I have the gut feeling that it'll be > difficult to obtain pixel-perfect line output. If you do any drawing with OpenGL 1.5 primitives you will have a hard time. If you use shaders, it is actually much nicer. You can just push a buffer with the raw sample values (i.e. an array), and a suitable shader will: 1. draw the line 2. using antialiasing 3. optionally map the trace speed to intensity 4. optionally do persistence Now instead of pushing a full framebuffer (typically 32bit, e.g. 1920x1080 pixels) per frame, you can push only the sample data. 
The framebuffer above is 8 MByte, at 25fps this is 200MByte/s. The same amount of data equals 200 MSample/s (@8bit). I doubt the current code is capable of drawing this amount of samples in realtime, but I am quite sure even a cheap tablet SoC could chew this data. Kind regards, Stefan |
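The bandwidth comparison above can be checked with quick arithmetic (1920x1080 at 32 bits per pixel, 25 fps):

```python
# Quick check of the framebuffer-vs-samples bandwidth figures above.
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 25

frame_bytes = width * height * bytes_per_pixel
print(frame_bytes / 1e6)   # ~8.3 MB per frame (the "8 MByte" above)

stream_rate = frame_bytes * fps
print(stream_rate / 1e6)   # ~207 MB/s (the "200 MByte/s" above)

# The same bandwidth, spent on raw 8-bit samples instead, carries
# ~207 MSample/s -- the "200 MSample/s (@8bit)" comparison above.
```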
From: Soeren A. <so...@ap...> - 2016-10-11 17:55:22
Hi Carl-Fredrik, > The persistence is more than just esthetics, it allows you to detect > spurious anomalies since the little short deviation stays on screen > long enough to be detected, if an oscilloscope doesn’t have > persistence and at the same time don’t render all frames you could > miss important information. Yeah sure, though using an OpenGL output is not a requirement for persistence. > I understand the hesitation to use OPENGL or OPENGL-ES but > realistically what device could you think of that doesn’t have h/w > support for GL-Shaders that would run an oscilloscope application? I'm thinking of Android tablets, though I don't really know how many are out there who lack 3D acceleration. I do however know that using it drains the battery more than not using it. My personal dislike towards OpenGL is that I'm not acquainted with it (I want to learn, don't get me wrong, I just don't have the time to do so) and that I have the gut feeling that it'll be difficult to obtain pixel-perfect line output. All the best, -Soeren > Regards // CF > > Hi Carl-Fredrik, > > > I think that if you want to start doing oscilloscope functionality > it > > would have to be more realtime? > > Of course, though any (decent) scope attached via USB 2 or better > should be able to produce data faster than we humans can process it, > so drawing every single received frame isn't a hard requirement imo. > > > > Wouldn’t it be an idea to use opengl for rendering in those cases > and > > implement persistence > > in shader language. > > > > I saw this demo persistence demo and wondered if something like > that > > could be implemented > > > > http://m1el.github.io/woscope-how/ > > Well sure, it looks great. To me personally, there are three issues > here, though: > 1) electron beam scopes are beautiful but outdated. To me, > precision is more important than beauty. And we can do beauty in > PV, too, using antialiasing and when desired, interpolation. 
> > 2) I don't want PV to be limited to devices with GL/shader support. > > 3) drawing static lines/text/glyphs *may* be more complicated when > using a GL view, I don't know. But if it is, the additional effort > isn't something I'm willing to spend time on in the foreseeable > future. > > So yes, looks nice, but imo it won't happen for PV anytime soon, > sorry. > > All the best, > -Soeren > |
From: Soeren A. <so...@ap...> - 2016-10-11 17:48:58
I like this approach. What do you guys think about a meta packet that describes the meaning/content of the binary output? With that, the I2S PD could announce sample rate/channels/resolution (before or after starting output, wouldn't really matter) and the sigrok client could pass this meta data to an output module along with the binary data that was captured from the PD. That way, the PD can stream whatever it has, the output module can create an output file in any way it wants and the libsigrok/libsigrokdecode infrastructure can remain virtually unchanged. The icing on the cake would be that a "MIDI" output module (or any other) could easily be made to deal with binary data from different PDs - in contrast to having the need to implement such I/O in every single PD directly. Any thoughts? All the best, -Soeren On Tue, 2016-10-11 at 12:54 +0200, Gerhard Sittig wrote: > [ summary: use an output module, see srzip for prior art ] > > On Tue, Oct 11, 2016 at 06:03 +0000, Chris Dreher wrote: > > > > > > It seems the streaming design of sigrok eliminates a lot of > > file formats. Namely, most file formats with a size field that > > precedes a large variable amount of data can not be supported > > in sigrok. This includes file formats that are 99% > > stream-oriented. For example, MIDI files are 99% compatible > > with sigrok's stream design other than a single 32-bit size > > field in the track chunk. For the MIDI protocol, there are not > > good alternatives to saving data other than in a MIDI file. It > > is unfortunate that such files currently can not be supported > > by sigrok. > > > > That said, I get that sigrok's primary goal is support for > > logic analysis and oscilloscope. Being able to extract and > > save data to a file format is secondary. > > This might be too strict an interpretation. I still feel that > what you want to achieve _is_ possible, just not from within > _decoders_. 
It's simply that seek operations and manipulation of > previously written data needs to occur in the most appropriate > location, which is not from within a pipe that operates on > strictly linear streams. > > In theory or rather strictly speaking, output modules might > assume mere streaming, too. It's good practice to avoid > unnecessary dependencies when possible. But there is prior art > where some output module has direct access to a file object, and > does manipulate previously written data as more chunks of > information become available. > > > It's true that in the current set of output modules most fill in > an "out" string parameter in their receive() method in straight > forward ways (mere formatting of what they received). Those > strings then are put into an output stream in strict linear ways. > > The CSV output mostly passes on the samples, and later writes > additional gnuplot(1) information that was accumulated in the > process (if the respective feature was enabled). Multiple files > appear to get written by that one output module (optional). > > The "srzip" output module for sigrok's native .sr file format > stands out in that it operates on a ZIP archive that gets created > initially, and gets updated in several steps at several points in > time while the input stream is being received and processed. See > the zip_create() and zip_append() routines. > > So I wouldn't consider it too far a stretch when the WAV (and > later MIDI) output modules insist in accessing a file object > directly, open the file with an initial header which reflects the > available information at the time of file creation, and update > that header as more chunks or samples get appended to the file. > This seems to perfectly be in line with what existing output > modules do already. It's reflecting on a property of the very > format that the user requested for the output. 
> > For the record: The notion of "internal I/O handling" in output > modules and the associated "output flag" was introduced in commit > 3cd4b381 as of 2015-08-15. > > > Note that I might be slightly off in my reading of the CSV output > module's logic, only had a cursory look. But the srzip output > module certainly does have a test for "filename available?" in > init(), and calls zip_add(), zip_rename(), and zip_replace() > routines which do more than merely append data at the end of a > previously written file. Even if one interprets the ZIP archive > as "just a directory" -- still previously written members get > updated as required. And none of this feels wrong to me, it all > looks appropriate. Just not for a decoder. :) > > > > > > Do people think addressing this issue is worth fixing or is > > time better spent on other issues and features? > > I think that the decoder API doesn't need fixing. When > individual decoder implementations assume conditions that are not > met, then the implementations need fixing but not the design. > > "The issue" goes away if you move the respective manipulation > logic from the decoder to an output module. > > > virtually yours > Gerhard Sittig |
From: Chris D. <chr...@ho...> - 2016-10-11 16:49:25
Unfortunately, the MIDI spec does not define any way to use multiple chunks while allowing for proper playback of the original data. There are 3 formats (aka "types") for MIDI files defined in the MIDI spec. Unfortunately, none of them allow for multiple chunks to be played back sequentially. Here, the term "chunk" refers to a specific data structure within the MIDI format (and not just a generic or random group of bytes). Format 0 only has 1 chunk. Format 1 has multiple chunks that are played back simultaneously so it doesn't work either. Format 2 has multiple chunks but it is rarely used and each chunk is for alternative arrangements. Thus, none of these formats are supported. -Chris ________________________________ From: Brüns, Stefan <Ste...@rw...> Sent: Tuesday, October 11, 2016 7:13 AM To: sig...@li... Subject: Re: [sigrok-devel] Outputting files from Protocol Decoders On Tuesday, 11 October 2016 06:03:55 CEST Chris Dreher wrote: > Thanks, I really, really do appreciate the answers. > > > It seems the streaming design of sigrok eliminates a lot of file formats. > Namely, most file formats with a size field that precedes a large variable > amount of data can not be supported in sigrok. This includes file formats > that are 99% stream-oriented. For example, MIDI files are 99% compatible > with sigrok's stream design other than a single 32-bit size field in the > track chunk. For the MIDI protocol, there are not good alternatives to > saving data other than in a MIDI file. It is unfortunate that such files > currently can not be supported by sigrok. MIDI can be supported. There is nothing that stops you from using multiple chunks. If you have a complete event, output it as a chunk. Although this makes the file larger, MIDI files are comparatively small. Kind regards, Stefan
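The problematic size field can be seen by packing a minimal file: a Format 0 file is an 'MThd' header chunk followed by a single 'MTrk' chunk whose 32-bit length precedes all of the event data. This is a quick illustration of the layout (not sigrok code):

```python
# Minimal Format 0 MIDI file layout, showing the 32-bit track-length
# field that must be known (or patched in) before the event data --
# the field that conflicts with strictly linear streaming.
import struct

events = bytes([
    0x00, 0x90, 0x3C, 0x40,   # delta 0: note-on, middle C, velocity 64
    0x60, 0x80, 0x3C, 0x40,   # delta 96: note-off
    0x00, 0xFF, 0x2F, 0x00,   # end-of-track meta event
])

# MThd: length 6, format 0, one track, 96 ticks per quarter note.
header = struct.pack('>4sIHHH', b'MThd', 6, 0, 1, 96)
track = struct.pack('>4sI', b'MTrk', len(events)) + events  # length first!

midi_file = header + track
print(len(midi_file))  # 14 (header) + 8 (track header) + 12 (events) = 34
```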
From: Brüns, S. <Ste...@rw...> - 2016-10-11 14:49:30
On Tuesday, 11 October 2016 06:03:55 CEST Chris Dreher wrote: > Thanks, I really, really do appreciate the answers. > > > It seems the streaming design of sigrok eliminates a lot of file formats. > Namely, most file formats with a size field that precedes a large variable > amount of data can not be supported in sigrok. This includes file formats > that are 99% stream-oriented. For example, MIDI files are 99% compatible > with sigrok's stream design other than a single 32-bit size field in the > track chunk. For the MIDI protocol, there are not good alternatives to > saving data other than in a MIDI file. It is unfortunate that such files > currently can not be supported by sigrok. MIDI can be supported. There is nothing that stops you from using multiple chunks. If you have a complete event, output it as a chunk. Although this makes the file larger, MIDI files are comparatively small. Kind regards, Stefan
From: Gerhard S. <Ger...@gm...> - 2016-10-11 10:55:28
[ summary: use an output module, see srzip for prior art ] On Tue, Oct 11, 2016 at 06:03 +0000, Chris Dreher wrote: > > It seems the streaming design of sigrok eliminates a lot of > file formats. Namely, most file formats with a size field that > precedes a large variable amount of data can not be supported > in sigrok. This includes file formats that are 99% > stream-oriented. For example, MIDI files are 99% compatible > with sigrok's stream design other than a single 32-bit size > field in the track chunk. For the MIDI protocol, there are not > good alternatives to saving data other than in a MIDI file. It > is unfortunate that such files currently can not be supported > by sigrok. > > That said, I get that sigrok's primary goal is support for > logic analysis and oscilloscope. Being able to extract and > save data to a file format is secondary. This might be too strict an interpretation. I still feel that what you want to achieve _is_ possible, just not from within _decoders_. It's simply that seek operations and manipulation of previously written data need to occur in the most appropriate location, which is not from within a pipe that operates on strictly linear streams. In theory or rather strictly speaking, output modules might assume mere streaming, too. It's good practice to avoid unnecessary dependencies when possible. But there is prior art where some output module has direct access to a file object, and does manipulate previously written data as more chunks of information become available. It's true that in the current set of output modules most fill in an "out" string parameter in their receive() method in straightforward ways (mere formatting of what they received). Those strings then are put into an output stream in strict linear ways. The CSV output mostly passes on the samples, and later writes additional gnuplot(1) information that was accumulated in the process (if the respective feature was enabled). 
Multiple files appear to get written by that one output module (optional). The "srzip" output module for sigrok's native .sr file format stands out in that it operates on a ZIP archive that gets created initially, and gets updated in several steps at several points in time while the input stream is being received and processed. See the zip_create() and zip_append() routines. So I wouldn't consider it too far a stretch when the WAV (and later MIDI) output modules insist on accessing a file object directly, open the file with an initial header which reflects the available information at the time of file creation, and update that header as more chunks or samples get appended to the file. This seems to be perfectly in line with what existing output modules do already. It's reflecting on a property of the very format that the user requested for the output. For the record: The notion of "internal I/O handling" in output modules and the associated "output flag" was introduced in commit 3cd4b381 as of 2015-08-15. Note that I might be slightly off in my reading of the CSV output module's logic, only had a cursory look. But the srzip output module certainly does have a test for "filename available?" in init(), and calls zip_add(), zip_rename(), and zip_replace() routines which do more than merely append data at the end of a previously written file. Even if one interprets the ZIP archive as "just a directory" -- still previously written members get updated as required. And none of this feels wrong to me, it all looks appropriate. Just not for a decoder. :) > Do people think addressing this issue is worth fixing or is > time better spent on other issues and features? I think that the decoder API doesn't need fixing. When individual decoder implementations assume conditions that are not met, then the implementations need fixing but not the design. "The issue" goes away if you move the respective manipulation logic from the decoder to an output module. 
virtually yours Gerhard Sittig -- If you don't understand or are scared by any of the above ask your parents or an adult to help you. |
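The pattern Gerhard describes, writing a provisional header and then patching it as data accumulates, looks roughly like this in plain Python (an illustration of the idea with a toy format; real sigrok output modules are written in C):

```python
# Sketch of the "patch the header later" pattern described above
# (illustration only, using an invented toy container format).
import io
import struct

def write_stream(f, chunks):
    """Write a toy format: 4-byte magic, 32-bit payload size, payload."""
    f.write(b'DEMO')
    f.write(struct.pack('<I', 0))      # placeholder size, unknown so far
    total = 0
    for chunk in chunks:               # strictly linear append phase
        f.write(chunk)
        total += len(chunk)
    f.seek(4)                          # end of stream: go back...
    f.write(struct.pack('<I', total))  # ...and patch the size field

f = io.BytesIO()
write_stream(f, [b'abc', b'defgh'])
data = f.getvalue()
print(struct.unpack('<I', data[4:8])[0])  # 8
```

This requires a seekable file object, which is exactly why the manipulation belongs in an output module rather than in a decoder feeding a linear stream.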
From: Chris D. <chr...@ho...> - 2016-10-11 06:04:09
Thanks, I really, really do appreciate the answers. It seems the streaming design of sigrok eliminates a lot of file formats. Namely, most file formats with a size field that precedes a large variable amount of data can not be supported in sigrok. This includes file formats that are 99% stream-oriented. For example, MIDI files are 99% compatible with sigrok's stream design other than a single 32-bit size field in the track chunk. For the MIDI protocol, there are not good alternatives to saving data other than in a MIDI file. It is unfortunate that such files currently can not be supported by sigrok. That said, I get that sigrok's primary goal is support for logic analysis and oscilloscopes. Being able to extract and save data to a file format is secondary. Do people think this issue is worth fixing or is time better spent on other issues and features? -Chris ________________________________ From: Bert Vermeulen <be...@bi...> Sent: Monday, October 10, 2016 2:50 AM To: sig...@li... Subject: Re: [sigrok-devel] Outputting files from Protocol Decoders On 10/10/2016 01:22 AM, Chris Dreher wrote: > Here is a brief summary of the original questions and their status here > (TL;DR). > > > 1. Once put() is called for OUTPUT_BINARY, is there any way to go back > and change those bytes? No. It is a continuous stream, and the producer of that stream has no knowledge of what receives that stream, by design. > 2. Is it acceptable to buffer most of the file data and then just output > the entire file at the end? No. That violates the principle that everything in sigrok (libsigrok and libsigrokdecode) is a continuous stream. I realize that is not so easy to do with e.g. file formats that simply don't permit this, but in such cases it's perhaps worth thinking whether that file format is a good match for sigrok processing to begin with. It has not been a big problem so far. > 3. Is there a way to know that the end of the sample input stream has > been reached? No. 
The interface from a frontend to a PD is srd_session_send(), which takes a raw buffer of logic data. If this were to be changed to be more like libsigrok, which sends packets with corresponding packet type, this would be more flexible. For example a packet type similar to libsigrok's SR_DF_END could be used to help here. Another way to do this, though not as elegant as the above solution, would be to add a meta packet type to signal the end of the stream. > 4. (new question) Can put()'s parameters of startsample and endsample be > used to insert data earlier into the file output or are these parameters > ignored because the order of calls to put() determines the order bytes are > output? No. You can not expect any listener to that stream to be able to handle this, nor should you. Don't mean to sound negative, but hey, straight answers :-) -- Bert Vermeulen be...@bi...
From: Bert V. <be...@bi...> - 2016-10-10 10:11:50
|
On 10/10/2016 01:22 AM, Chris Dreher wrote:
> Here is a brief summary of the original questions and their status here
> (TL;DR).
>
> 1. Once put() is called for OUTPUT_BINARY, is there any way to go back
> and change those bytes?

No. It is a continuous stream, and the producer of that stream has no knowledge of what receives that stream, by design.

> 2. Is it acceptable to buffer most of the file data and then just output
> the entire file at the end?

No. That violates the principle that everything in sigrok (libsigrok and libsigrokdecode) is a continuous stream. I realize that is not so easy to do with e.g. file formats that simply don't permit this, but in such cases it's perhaps worth thinking about whether that file format is a good match for sigrok processing to begin with. It has not been a big problem so far.

> 3. Is there a way to know that the end of the sample input stream has
> been reached?

No.

The interface from a frontend to a PD is srd_session_send(), which takes a raw buffer of logic data. If this were changed to be more like libsigrok, which sends packets with a corresponding packet type, it would be more flexible. For example, a packet type similar to libsigrok's SR_DF_END could be used to help here. Another way to do this, though not as elegant as the above solution, would be to add a meta packet type to signal the end of the stream.

> 4. (new question) Can put()'s parameters of startsample and endsample be
> used to insert data earlier into the file output, or are these parameters
> ignored because the order of calls to put() determines the order bytes are
> output?

No. You cannot expect any listener to that stream to be able to handle this, nor should you.

Don't mean to sound negative, but hey, straight answers :-)

--
Bert Vermeulen
be...@bi...
|
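Bert's suggested packet-type interface could look roughly like this. Everything here is hypothetical — the SRD_DF_* names and the handle_packet() method are invented for illustration, and the real srd_session_send() only takes a raw logic buffer — but it shows how an SR_DF_END-style packet would let a decoder emit size-prefixed output without any backpatching:

```python
# Hypothetical packet types, modeled on libsigrok's SR_DF_LOGIC/SR_DF_END.
SRD_DF_LOGIC, SRD_DF_END = 0, 1

class BufferingDecoder:
    """Toy decoder that must see end-of-stream before it can emit its
    (size-prefixed) output -- the case discussed in this thread."""
    def __init__(self):
        self.data = bytearray()
        self.emitted = None

    def handle_packet(self, ptype, payload=b""):
        if ptype == SRD_DF_LOGIC:
            self.data += payload          # just accumulate stream data
        elif ptype == SRD_DF_END:
            # Only now is the total size known, so the length field can
            # be written *before* the data it describes.
            size = len(self.data).to_bytes(4, "big")
            self.emitted = size + bytes(self.data)

d = BufferingDecoder()
d.handle_packet(SRD_DF_LOGIC, b"\x01\x02")
d.handle_packet(SRD_DF_LOGIC, b"\x03")
d.handle_packet(SRD_DF_END)
```

Note this still only enables flushing buffered output at end-of-stream; it does not allow rewriting bytes that were already emitted.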
From: Oleksij R. <li...@re...> - 2016-10-10 05:52:21
|
Hi all,

the problem openhantek has run into fairly recently is a new class of cheap softscopes. For example, the Hantek 6022Bx doesn't even have a HW trigger. Getting the data is not the problem; presenting the data to a human is. This means that to make sigrok usable for this case, soft triggering needs to be implemented too.

On 10.10.2016 at 07:32, Soeren Apel wrote:
> Hi Carl-Fredrik,
>
>> I think that if you want to start doing oscilloscope functionality it
>> would have to be more realtime?
>
> Of course, though any (decent) scope attached via USB 2 or better
> should be able to produce data faster than we humans can process it,
> so drawing every single received frame isn't a hard requirement imo.
>
>> Wouldn’t it be an idea to use opengl for rendering in those cases and
>> implement persistence in shader language?
>>
>> I saw this persistence demo and wondered if something like that
>> could be implemented:
>>
>> http://m1el.github.io/woscope-how/
>
> Well sure, it looks great. To me personally, there are three issues
> here, though:
> 1) electron beam scopes are beautiful but outdated. To me,
> precision is more important than beauty. And we can do beauty in
> PV, too, using antialiasing and, when desired, interpolation.
>
> 2) I don't want PV to be limited to devices with GL/shader support.
>
> 3) drawing static lines/text/glyphs *may* be more complicated when
> using a GL view, I don't know. But if it is, the additional effort
> isn't something I'm willing to spend time on in the foreseeable
> future.
>
> So yes, looks nice, but imo it won't happen for PV anytime soon,
> sorry.
>
> All the best,
> -Soeren
>
>>> On Oct 9, 2016, at 1:35 PM, Soeren Apel <so...@ap...> wrote:
>>>
>>> Indeed that's the plan. When the new libsigrok pipeline architecture
>>> materializes, PV won't care as much about whether the data is from
>>> a scope or logic analyzer.
>>> Until that happens, I'm considering adding
>>> a mechanism that allows analog channels to be used as decoder inputs
>>> via some simple filtering. Need to finish the architecture/GUI rework
>>> I've started first, though.

--
Regards,
Oleksij
|
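A software trigger of the kind Oleksij describes can start out very simple. This is only an illustrative sketch (not sigrok code): scan the analog samples for a rising crossing of the trigger level and use that index as the display origin, which is enough to stabilize a repetitive waveform on screen even without a HW trigger.

```python
def rising_edge_trigger(samples, level):
    """Return the index of the first upward crossing of `level`,
    or None if the capture never crosses it. A minimal software
    trigger for scopes that lack a hardware one."""
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            return i
    return None

# The trigger fires at the first sample at/above 2.0 that follows
# a sample below it.
print(rising_edge_trigger([0.0, 1.0, 2.5, 1.0, 3.0], 2.0))  # → 2
```

A real implementation would add hysteresis and falling-edge support, but the principle — align each frame's rendering to the trigger index — is the same.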
From: Carl-Fredrik S. <aud...@gm...> - 2016-10-10 05:52:05
|
The persistence is more than just esthetics: it lets you detect spurious anomalies, since a short deviation stays on screen long enough to be noticed. If an oscilloscope doesn't have persistence and at the same time doesn't render all frames, you can miss important information.

I understand the hesitation to use OPENGL or OPENGL-ES, but realistically, what device can you think of that doesn't have h/w support for GL shaders yet would run an oscilloscope application?

Regards // CF

Hi Carl-Fredrik,

> I think that if you want to start doing oscilloscope functionality it
> would have to be more realtime?

Of course, though any (decent) scope attached via USB 2 or better should be able to produce data faster than we humans can process it, so drawing every single received frame isn't a hard requirement imo.

> Wouldn’t it be an idea to use opengl for rendering in those cases and
> implement persistence in shader language?
>
> I saw this persistence demo and wondered if something like that
> could be implemented:
>
> http://m1el.github.io/woscope-how/

Well sure, it looks great. To me personally, there are three issues here, though:
1) electron beam scopes are beautiful but outdated. To me, precision is more important than beauty. And we can do beauty in PV, too, using antialiasing and, when desired, interpolation.
2) I don't want PV to be limited to devices with GL/shader support.
3) drawing static lines/text/glyphs *may* be more complicated when using a GL view, I don't know. But if it is, the additional effort isn't something I'm willing to spend time on in the foreseeable future.

So yes, looks nice, but imo it won't happen for PV anytime soon, sorry.

All the best,
-Soeren
|
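For what it's worth, the persistence effect itself doesn't strictly require shaders. A minimal CPU-side sketch (illustrative only, not PV code): keep a per-pixel intensity buffer, decay it a little on every frame, and add the new frame on top, so a short-lived glitch lingers on screen instead of vanishing after one frame.

```python
def blend_frame(intensity, frame, decay=0.85):
    """Exponentially decay the old intensities, then add the new frame.
    Glitches from earlier frames fade out gradually instead of
    disappearing, approximating phosphor persistence."""
    return [decay * old + new for old, new in zip(intensity, frame)]

buf = [0.0] * 4
buf = blend_frame(buf, [1.0, 0.0, 0.0, 0.0])   # a glitch lights up bin 0
buf = blend_frame(buf, [0.0, 1.0, 0.0, 0.0])   # next frame hits bin 1
# bin 0 has faded to 0.85 but is still visible; bin 1 is at full 1.0
```

A shader does the same blend per pixel on the GPU; the CPU version just bounds how large the view can be at a given frame rate.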
From: Soeren A. <so...@ap...> - 2016-10-10 05:33:12
|
Hi Carl-Fredrik,

> I think that if you want to start doing oscilloscope functionality it
> would have to be more realtime?

Of course, though any (decent) scope attached via USB 2 or better should be able to produce data faster than we humans can process it, so drawing every single received frame isn't a hard requirement imo.

> Wouldn’t it be an idea to use opengl for rendering in those cases and
> implement persistence in shader language?
>
> I saw this persistence demo and wondered if something like that
> could be implemented:
>
> http://m1el.github.io/woscope-how/

Well sure, it looks great. To me personally, there are three issues here, though:
1) electron beam scopes are beautiful but outdated. To me, precision is more important than beauty. And we can do beauty in PV, too, using antialiasing and, when desired, interpolation.
2) I don't want PV to be limited to devices with GL/shader support.
3) drawing static lines/text/glyphs *may* be more complicated when using a GL view, I don't know. But if it is, the additional effort isn't something I'm willing to spend time on in the foreseeable future.

So yes, looks nice, but imo it won't happen for PV anytime soon, sorry.

All the best,
-Soeren

> Regards /// Carl
>
>> On Oct 9, 2016, at 1:35 PM, Soeren Apel <so...@ap...> wrote:
>>
>> Indeed that's the plan. When the new libsigrok pipeline architecture
>> materializes, PV won't care as much about whether the data is from
>> a scope or logic analyzer. Until that happens, I'm considering adding
>> a mechanism that allows analog channels to be used as decoder inputs
>> via some simple filtering. Need to finish the architecture/GUI rework
>> I've started first, though.
|
From: Carl-Fredrik S. <aud...@gm...> - 2016-10-10 05:11:24
|
I think that if you want to start doing oscilloscope functionality it would have to be more realtime?

Wouldn’t it be an idea to use opengl for rendering in those cases and implement persistence in shader language?

I saw this persistence demo and wondered if something like that could be implemented:

http://m1el.github.io/woscope-how/

Regards /// Carl

> On Oct 9, 2016, at 1:35 PM, Soeren Apel <so...@ap...> wrote:
>
> Indeed that's the plan. When the new libsigrok pipeline architecture
> materializes, PV won't care as much about whether the data is from
> a scope or logic analyzer. Until that happens, I'm considering adding
> a mechanism that allows analog channels to be used as decoder inputs
> via some simple filtering. Need to finish the architecture/GUI rework
> I've started first, though.
|
From: Chris D. <chr...@ho...> - 2016-10-09 23:23:08
|
Here is a brief summary of the original questions and their status here (TL;DR).

1. Once put() is called for OUTPUT_BINARY, is there any way to go back and change those bytes?

This question remains unanswered. There is a lot of good discussion about how sigrok's file generation code could/should be redesigned, though that would be better handled on a separate email thread. What I'm looking for is whether bytes can be changed after put() has been called in the current implementation of sigrok's PD API. At this point, I expect that bytes cannot be changed after put() has been called.

2. Is it acceptable to buffer most of the file data and then just output the entire file at the end?

The answer here is yes and no. Yes, it is OK to buffer data for a considerable amount of time. However, per bugs 292 and 749, the end of the sample stream is currently not known to PDs (this is especially problematic for live data streams). Regarding memory limits, it sounds like there are no memory limits beyond those imposed by the OS (physical or via tools such as ulimit). There are no unusual memory limits, such as those imposed by the Java VM.

3. Is there a way to know that the end of the sample input stream has been reached?

No, per bugs 292 and 749 (thanks Gerhard).

4. (new question) Can put()'s parameters of startsample and endsample be used to insert data earlier into the file output, or are these parameters ignored because the order of calls to put() determines the order bytes are output?

Theoretically, calling put(0, 0, ...) could be used to insert a header at the beginning of the file after the rest of the file data has already been output. However, I strongly suspect that the startsample and endsample parameters are ignored for file output.

To answer a question by Gerhard below: I am looking at the MIDI file format, specifically the type 0 format for MIDI files.
-Chris

________________________________
From: Gerhard Sittig <Ger...@gm...>
Sent: Sunday, October 9, 2016 10:02 AM
To: sig...@li...
Subject: Re: [sigrok-devel] Outputting files from Protocol Decoders

This message "got a little longer". Here is a summary for the impatient; more details are inline below.

When you need something that supports seek(), make sure you operate on files. When you are a component within a pipe, just consume the input stream and generate an output stream. Don't assume random access to previously generated data when you cannot seek on your output (e.g. because it's a stream).

A _decoder_ method to communicate "end of input" may be desirable, and could provide some "flush" semantics, but still does not allow modification of previously generated stream content. (And does not resolve the "last sample" issue either.)

When there is no, or no simple, solution, we might be looking at the wrong problem perhaps? Use appropriate data formats if the currently used format breaks/prevents the specific use case. Keep decoders simple in the sense that they consume an input stream and generate an output stream. Don't bother with "file formats" in a decoder; such a requirement might be a strong hint that this is the job for an output module.

On Thu, Oct 06, 2016 at 20:30 -0700, Chris Dreher wrote:
>
> > Date: Thu, 6 Oct 2016 09:23:16 +0200
> > From: Ger...@gm...
> > To: sig...@li...
> > Subject: Re: [sigrok-devel] Outputting files from Protocol Decoders
> >
> > On Wed, Oct 05, 2016 at 11:36 -0700, Chris Dreher wrote:
> > >
> > > In looking at how to output files from protocol decoders, I
> > > have the following questions:
> > >
> > > 1. Once put() is called for OUTPUT_BINARY, is there any way to
> > > go back and change those bytes? This is especially useful for
> > > adjusting file headers based on sample inputs that come later in
> > > the stream. For example, the i2s PD can output a WAV file.
> [ ... ]
> >
> > This does not solve the general issue.
> > Actually, it would solve the general issue. Going back and > changing previously submitted bytes would provide similar > functionality as seek() followed by a write() that most > operating systems provide. We don't disagree at all. The part that you cite referred to the part that you dropped. You might have put this part of my reply into an unintended context, or might have stopped reading too soon. Try the positive interpretation first, before assuming that somebody wants to dismiss you. :) And I wrongly assumed that the specific example that you mentioned in detail would have been your actual motivation for asking in the first place. I'm sorry about that. What I said was that generating WAV files from several chunks does not solve the general issue of manipulating arbitrary data _after_ it was handed to some other component for further processing. The approach of generating the output from several chunks might just help working around the specific issue that you mentioned in particular. (While there still might be issues left, but as you said WAV files were just an example, not the actual and most pressing issue for you.) The most important part that you dropped is my questioning the choice of the WAV file format (that requires seeking in the specific case) in a spot that is not supposed to deal with "file formats" at all. A protocol decoder should not bother, and just assume "stream in, stream out". Regardless of whether the decoder's output happens to get written to a file at some other component in the software topology. An output module _might_ assume having access to an output file, which then allows to seek. The most appropriate approach would be to generate an annotation of "audio samples" from the decoder. And to have output modules take those audio samples, and write them to files in whatever format they please. 
Add the AC97 decoder (which is on the project's wishlist) and PWM (which exists, and could provide audio signal amplitudes, too), and you see why decoders should not re-invent file format handling. Add an output module for formats other than WAV, and all audio sample sources will benefit in transparent ways. Increase the number of components on either side of the interface (interpret delta-sigma as "some kind of PWM" maybe, decode I2C based audio codec communication like WM chips on FPGA boards or RPi hats, analog input readers, spectrum analyzer output channels, etc) and you see how the "m + n" complexity is preferable over "m * n" or lack of orthogonal support.

Apply this "audio samples" or "transparent data within an annotation of the output stream" approach to whatever kind of data you actually had in mind. Stick with the general idea of "decoders consume and generate streams", and "if file access is assumed, make sure you have a file" (which translates to "should be an output module"). This shall solve the issue you have.

Also note that seek(3) is not only a feature of "an OS", but more so of the object that you manipulate. While you can seek on files, you cannot seek on pipes and sockets.

See potentially related Bugzilla items: There is 292 for decoders, reporting a specific symptom. There is 749 which discusses the general issue of how decoders (need to) work. There is 236 for PulseView which states that even input modules may not know the sample count upfront. Capture devices may suffer from the same issue of not knowing how many samples they will provide, until the terminating condition is met after an arbitrary amount of samples was delivered.

So yes, there is a limitation in the current implementation that does not allow what you appear to try to achieve. Yet we can see that there are reasons for that limitation, and that resolving the issue may or may not be possible, but certainly is not trivial.
It's worth checking whether you are looking at the "right problem" when there is no obvious or no straightforward solution, or when the solution comes with new disadvantages or limitations.

Did I miss this? Are output modules "just" other participants in the pipe architecture, and don't (necessarily) have access to an output file? Though it might be acceptable for an output module to "throw up" when its output is not a seekable object (if that can get detected). And of course only in those cases where the output needs to get seeked and re-written. Existing code might already come with such a constraint.

> I deleted the remainder of the WAV-specific response since the
> question is about how to solve the issue generally. The WAV
> output by the i2s PD is just an example of existing code where
> an incorrect header is output because the total length of the
> data is not known.

I mistook your mentioning the I2S decoder and WAV file example as a specific motivation in that moment, while there is some general issue behind it but at some further distance. Sorry for getting this wrong. I got it after you told me.

> > > 2. Is it acceptable to buffer most of the file data and then
> > > just output the entire file at the end? Again, this relates to
> > > file headers. Theoretically, a PD could buffer the file data
> > > until it reaches the end of the sample inputs, then calculate
> > > the size fields, and finally call put() to output the entire
> > > file at once. Are there memory limits in python, similar to
> > > Java VM memory limits, or is memory only limited by the OS's
> > > memory limits?
> >
> > Have seen decoders buffer data all the time, though they only
> > "defer" data until the completion of a frame or transaction.
> > Haven't seen deferral for all of the input data yet.
>
> Are you confirming that python does not have the memory limits
> that other languages (ex: Java) have?
> I've written code in C++ that defers output, but the question is
> whether this is OK in python and whether it is acceptable for
> sigrok's PDs.

Actually it's only Java where I ever saw such a "memory setup" feature (which always reminded me of XMS/EMS setup in the old days of DOS). No other interpreted language that I'm aware of has such a thing, so it's actually Java which I'd perceive as being the exception.

What applies, however, are the "regular" limits that are inherent to process management; see ulimit(1) and setrlimit(2) or whatever the counterpart on your platform of choice is. The sigrok(1) process happens to embed the python(3) library which is used to execute the PD(3py) scripts. So you are limited to whatever resources the machine (and its OS configuration) has to offer, and optionally what users/admins have configured to suit their taste. But that's obvious, and does not differ from any other process that you are running.

See the sigrok project's coding guidelines, which suggest that allocations of up to 1MB are assumed to always succeed (or fail in other acceptable/appropriate ways), and larger allocations should get checked.

Depending on your expected "workload" (typical data volume for a "transaction" in your decoder), you may just accumulate data until you encounter the transaction's ending condition (that's what decoders normally do, as far as I understand it). Or your decoder has the option of "chunking" its output (which could prevent excessive buffering, _if_ the output data format lends itself to chunking). Or you deal with data formats that stream in natural ways, and don't require buffering.

There already are reports where decoders fail in the presence of huge amounts of sample data (or specific use patterns in the input data; ISTR one report was on a long duration combined with very high resolution).
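The Java comparison above can be checked directly. On POSIX systems, Python simply inherits the process limits set via setrlimit(2); there is no interpreter-level memory ceiling to configure (sketch; POSIX-only, the `resource` module is unavailable on Windows):

```python
import resource

# RLIMIT_AS is the address-space limit that actually bounds how much a
# decoder can buffer -- an OS-level limit, not a Python-level one.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("RLIMIT_AS soft limit:",
      "unlimited" if soft == resource.RLIM_INFINITY else soft)
```

Tools like ulimit(1) adjust exactly these values for child processes such as sigrok-cli.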
You write the code for normal use, and cannot do much about excessive or insane amounts of input that exhaust available resources while you are busy operating on the logical content. Even if you may detect situations of resource exhaustion, you may not be able to handle them appropriately anyway. So what's the point of catching them then? In theory you can overrun _any_ decoder which supports a protocol that has the notion of "frames" or "transactions" of arbitrary size. Consider the SPI protocol with its chip select signal. Typical use (displays, sensors, eeproms) may never transfer more than a few hundred bytes most of the time. Still you can drain complete flash chips within a single transfer. Or even refuse to ever release CS at all. Thus you could accumulate megabytes and gigabytes and more before the transfer ends, if it ends at all. As a decoder, you never know the logical content of the input stream upfront. Some are in the comfortable position to just generate an opaque output stream, with no buffering at all or just minimal buffering. Some might work on simple fixed size data items, and some might have the option of "chunking" their output. Always consider the specific environment that you work in. There is no one-size-fits-all and quick answer to that. Do you have a specific other protocol in mind when it's not I2S? What's the itch you try to scratch? Can you use a "streaming data format"? If not then why not (within the decoder)? I'm not rebutting your position, just trying to better understand the problem. > > > 3. Is there a way to know that the end of the sample input > > > stream has been reached? This way, a PD would know that there > > > is no more data and that decoding is done. This would prevent > > > a PD from waiting any further for sample inputs that will never > > > come. > [ ... ] See Bugs 292/749/236. 
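Gerhard's SPI example is easy to sketch. A decoder naturally buffers one chip-select-framed transaction at a time, and its memory use is unbounded exactly when CS never deasserts (illustrative only, not the real SPI decoder):

```python
def frame_spi_bytes(samples):
    """Group (cs, byte) pairs into transactions framed by an active-low
    chip select. The `current` buffer grows with the transaction --
    without bound if CS is never released."""
    transfers, current = [], []
    for cs, byte in samples:
        if cs == 0:                    # CS asserted: accumulate
            current.append(byte)
        elif current:                  # CS released: transaction complete
            transfers.append(bytes(current))
            current = []
    return transfers

# Two transactions: a JEDEC-ID read command, then a status read.
print(frame_spi_bytes([(0, 0x9F), (0, 0x00), (1, 0), (0, 0x05), (1, 0)]))
```

Draining an entire flash chip in one transfer makes `current` grow to megabytes; refusing to release CS makes it grow forever — which is exactly why no one-size-fits-all buffering limit exists.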
The symptoms are known, but it's yet to be determined what a solution is (and maybe what exactly the problem or the scope of the problem is).

I feel that regardless of the potentially added decoder method's name (I lean towards "end_of_input" maybe, or "decode_end"), the semantics can only be that of fflush(3). In that way decoders might push data to the output stream that was internally kept for potential performance reasons, or for generation of chunks where the size of chunks needs to be known upfront. But the flush decoder method will neither solve the "manipulate previously written data" issue, nor the "last sample" issue. Still, it's worth considering the flush feature as a future improvement to the decoder API.

virtually yours
Gerhard Sittig
--
If you don't understand or are scared by any of the above ask your parents or an adult to help you.
|
From: Mike M. <mw...@mi...> - 2016-10-09 21:14:03
|
On Sun, Oct 9, 2016 at 3:36 PM Soeren Apel <so...@ap...> wrote: > > The first step would be adding more capture limit types to pulseview. > > They exist in the library and are supported by sigrok-cli, but > > there's no way to use them in pulseview. > What do you mean by capture limit types exactly? > The things you can use in sigrok-cli to limit the amount of data captured: --time, --samples and --frames. --continuous doesn't make a lot of sense the way things are now, but when combined with the ability to start fresh on each frame, give you an oscilloscope-like behavior. I don't seem to be able to use all of these in pulseview. > > Not clear this is needed, but the "Frames" option would probably add > > a lot of useful code. Because the behavior you want is to render each > > frame as it comes in. I've played with GPL and the new csv output > > code and some external scripts to make GPL do this, and it seems to > > work reasonably well. Not clear how hard it'd be to make it work > > properly in pulseview. > GPL = gnu plot? > Yes. I use the "gpl" extension for my gnuplot scripts. Sorry 'bout the confusion. |
From: Soeren A. <so...@ap...> - 2016-10-09 20:36:30
|
Hi Mike, > I've been thinking about using libsigrok for an oscilloscope as well, > and think the better approach is to extend pulseview to be more like > an oscilloscope. Making it a full-blown, multi-channel scope would be > a pita. But I think you can get something really useful out of it. Indeed that's the plan. When the new libsigrok pipeline architecture materializes, PV won't care as much about whether the data is from a scope or logic analyzer. Until that happens, I'm considering adding a mechanism that allows analog channels to be used as decoder inputs via some simple filtering. Need to finish the architecture/GUI rework I've started first, though. > The first step would be adding more capture limit types to pulseview. > They exist in the library and are supported by sigrok-cli, but > there's no way to use them in pulseview. What do you mean by capture limit types exactly? For me, the difference between "streamed data" and "framed data" is simply the way it's displayed (plus some GUI elements to allow manipulation of this display). > Not clear this is needed, but the "Frames" option would probably add > a lot of useful code. Because the behavior you want is to render each > frame as it comes in. I've played with GPL and the new csv output > code and some external scripts to make GPL do this, and it seems to > work reasonably well. Not clear how hard it'd be to make it work > properly in pulseview. GPL = gnu plot? This stuff is something that will be tacked on to the standard trace view in PV, so not a major code change. Just takes time to do it :) > But this requires both C++ and GUI work, neither of which I'm > comfortable with. That's perfectly fine, no worries. If you do what you're good at and if I do what I'm good at then we all win in the end. All the best, -Soeren |
From: Soeren A. <so...@ap...> - 2016-10-09 20:24:10
|
Hi Roman,

> Does handling a scope need so many workarounds? I thought that this
> would be quite easy assuming one uses a library like sigrok?

My point was that the workarounds may be integrated into e.g. openhantek on an application level that's outside the driver. Such workarounds would only work because the device used is known and the inner workings of the driver are known, too. When turning the program into a sigrok client, such workarounds would no longer be supported (as they must be contained in the libsigrok driver), making the adaptation a little more difficult.

> Sure, and I fear I also won't have the time to do that in the near
> future. Still it seemed like a reasonable idea to me, as there are
> some quite usable oscilloscope interfaces, but they mainly lack
> hardware support, and sigrok supports quite a broad range of
> hardware.

Reasonable for sure, just... developers, developers, developers ;)

All the best,
-Soeren
|
From: Mike M. <mw...@mi...> - 2016-10-09 20:00:47
|
On Sun, Oct 9, 2016 at 2:25 PM Roman Seidl <ro...@gr...> wrote: > Dear Soeren! > Does handling a scope need for so many workarounds? I thought that this > > would be quite easy assuming one uses a library like sigrok? > I've been thinking about using libsigrok for an oscilloscope as well, and think the better approach is to extend pulseview to be more like an oscilloscope. Making it a full-blown, multi-channel scope would be a pita. But I think you can get something really useful out of it. The first step would be adding more capture limit types to pulseview. They exist in the library and are supported by sigrok-cli, but there's no way to use them in pulseview. Not clear this is needed, but the "Frames" option would probably add a lot of useful code. Because the behavior you want is to render each frame as it comes in. I've played with GPL and the new csv output code and some external scripts to make GPL do this, and it seems to work reasonably well. Not clear how hard it'd be to make it work properly in pulseview. But this requires both C++ and GUI work, neither of which I'm comfortable with. |
From: Roman S. <ro...@gr...> - 2016-10-09 19:24:45
|
Dear Soeren!

Thanks for the extensive answer.

On 2016-10-09 at 20:01, Soeren Apel wrote:
> For those reasons, the most sensible way to go wouldn't be to
> merge but instead to fork those projects and replace their
> internal drivers by a generic sigrok interface, turning the
> software into another sigrok client. This would certainly
> be doable, although it may prove difficult to handle all the
> workarounds that can be used when interacting with a known
> scope directly.

Does handling a scope need so many workarounds? I thought that this would be quite easy assuming one uses a library like sigrok?

> So in the end it's a matter of "is it worth it?" and "who
> volunteers to do it?" - so far no one has stepped up, so
> it hasn't been done :)

Sure, and I fear I also won't have the time to do that in the near future. Still it seemed like a reasonable idea to me, as there are some quite usable oscilloscope interfaces, but they mainly lack hardware support, and sigrok supports quite a broad range of hardware.

cheers,
roman
|
From: Gaitan D'A. <gbd...@gm...> - 2016-10-09 19:11:00
|
On Sun, Oct 9, 2016 at 1:06 PM, Gerhard Sittig <Ger...@gm...> wrote:

> On Sat, Oct 08, 2016 at 16:14 -0400, Gaitan D'Antoni wrote:
>>
>> I just built libserialport on Windows 10 and while stepping through the
>> code I noticed a bug so I created the following patch:
>>
>> --- serialport.c 2016-10-08 13:41:23.269484000 -0400
>> +++ serialport1.c 2016-10-08 13:43:10.958994300 -0400
>> @@ -459,8 +459,8 @@
>>          char *escaped_port_name;
>>          COMSTAT status;
>>
>> -        /* Prefix port name with '\\.\' to work with ports above COM9. */
>> -        if (!(escaped_port_name = malloc(strlen(port->name) + 5)))
>> +        /* Prefix port name with '\\\\.\\' to work with ports above COM9. */
>> +        if (!(escaped_port_name = malloc(strlen(port->name) + 8)))
>>              RETURN_ERROR(SP_ERR_MEM, "Escaped port name malloc failed");
>>          sprintf(escaped_port_name, "\\\\.\\%s", port->name);
>
> Can you expand on what's the issue? I cannot see an obvious
> problem in the original source from looking at the patch. And
> there is no comment / commit message that comes with your patch,
> explaining what the problem was and what the consequences might
> be, or how the changed version has improved.

There wasn't a specific issue or problem. I was stepping through the code, chasing another problem, and saw what I thought was a problem with too small a buffer being allocated to hold the string. In my haste I had forgotten about the escaping.

> Consider that strlen("\\") == 1, and thus strlen("\\\\.\\") == 4.
> The first backslash escapes the subsequent one, the resulting
> string after resolving the escaped sequence is a single backslash
> (or three backslashes plus a dot in the second term).

Thanks for the explanation. At the time I wrote the patch I mistakenly didn't consider the escaping.

> Also note the different quote characters. The _source code_ uses
> double quotes, because that's how strings are phrased in the C
> programming language.
The _comment_ uses single quotes, which by > convention denote "citations" or strict terms in the verbatim > sense. > > So it's really \\.\ which needs to get prepended (that's a "UNC" > style Windows path name). It just happens to get phrased as > "\\\\.\\" in the specific programming language. > > > Did I miss something? > Nope, you didn't miss anything; I did. Thanks again for the explanation. Gaitan > > |
From: Soeren A. <so...@ap...> - 2016-10-09 18:19:23
Hi Roman,

while merging sigrok with a project like openhantek (or similar) may
look beneficial at first glance, there are some issues.

For example, sigrok consists of several libraries (libsigrok,
libsigrokdecode, etc.) that clients such as sigrok-cli, PulseView or
sigrok-meter use. This separation not only gives the user a high degree
of flexibility but also allows sigrok to support a wide range of
different device classes.

Projects like openhantek have an integrated driver backend tailored to
the specific device(s) they target and only have one specific device
class in mind: oscilloscopes (or in the case of e.g. QtDMM,
multimeters).

For those reasons, the most sensible way to go wouldn't be to merge but
instead to fork those projects and replace their internal drivers with a
generic sigrok interface, turning the software into another sigrok
client. This would certainly be doable, although it may prove difficult
to handle all the workarounds that can be used when interacting with a
known scope directly.

So in the end it's a matter of "is it worth it?" and "who volunteers to
do it?" - so far no one has stepped up, so it hasn't been done :)

All the best,
-Soeren

On Sun, 2016-10-09 at 19:34 +0200, Roman Seidl wrote:
> Hi!
>
> Sorry if this has been discussed many times.
>
> I am just curious if someone has thought of merging the sigrok library
> with a traditional oscilloscope viewer with a compatible license (e.g.
> openhantek).
>
> Would it be technically feasible to use libsigrok for such a purpose?
>
> cheers,
> roman
>
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, SlashDot.org! http://sdm.link/slashdot
> _______________________________________________
> sigrok-devel mailing list
> sig...@li...
> https://lists.sourceforge.net/lists/listinfo/sigrok-devel
From: Roman S. <ro...@gr...> - 2016-10-09 17:34:28
Hi!

Sorry if this has been discussed many times.

I am just curious if someone has thought of merging the sigrok library
with a traditional oscilloscope viewer with a compatible license (e.g.
openhantek).

Would it be technically feasible to use libsigrok for such a purpose?

cheers,
roman
From: Gerhard S. <Ger...@gm...> - 2016-10-09 17:07:43
On Sat, Oct 08, 2016 at 16:14 -0400, Gaitan D'Antoni wrote:
>
> I just built libserialport on Windows 10 and while stepping through the
> code I noticed a bug, so I created the following patch:
>
> --- serialport.c	2016-10-08 13:41:23.269484000 -0400
> +++ serialport1.c	2016-10-08 13:43:10.958994300 -0400
> @@ -459,8 +459,8 @@
>  	char *escaped_port_name;
>  	COMSTAT status;
>
> -	/* Prefix port name with '\\.\' to work with ports above COM9. */
> -	if (!(escaped_port_name = malloc(strlen(port->name) + 5)))
> +	/* Prefix port name with '\\\\.\\' to work with ports above COM9. */
> +	if (!(escaped_port_name = malloc(strlen(port->name) + 8)))
>  		RETURN_ERROR(SP_ERR_MEM, "Escaped port name malloc failed");
>  	sprintf(escaped_port_name, "\\\\.\\%s", port->name);

Can you expand on what the issue is? I cannot see an obvious
problem in the original source from looking at the patch. And
there is no comment / commit message that comes with your patch,
explaining what the problem was and what the consequences might
be, or how the changed version is an improvement.

Consider that strlen("\\") == 1, and thus strlen("\\\\.\\") == 4.
The first backslash escapes the subsequent one; the resulting
string after resolving the escape sequence is a single backslash
(or three backslashes plus a dot in the second term).

Also note the different quote characters. The _source code_ uses
double quotes, because that's how strings are phrased in the C
programming language. The _comment_ uses single quotes, which by
convention denote "citations" or strict terms in the verbatim
sense.

So it's really \\.\ which needs to get prepended (that's a "UNC"
style Windows path name). It just happens to be phrased as
"\\\\.\\" in the specific programming language.

Did I miss something?

virtually yours
Gerhard Sittig

--
If you don't understand or are scared by any of the above
ask your parents or an adult to help you.