mindio-devel Mailing List for MindIO
Status: Planning
Brought to you by: jeremyjw
Archive: 2003 Dec (100), 2004 May (1), 2006 Jun (1)
From: James S. <jam...@an...> - 2006-06-22 06:05:49
Hi - has devel stalled on this? Is the aim to be a Java version of BrainBay?
From: <ben...@id...> - 2004-05-25 08:52:02
Dear Open Source developer, I am doing a research project on "Fun and Software Development" in which I kindly invite you to participate. You will find the online survey at http://fasd.ethz.ch/qsf/. The questionnaire consists of 53 questions and you will need about 15 minutes to complete it. With the FASD project (Fun and Software Development) we want to define the motivational significance of fun when software developers decide to engage in Open Source projects. What is special about our research project is that a similar survey is planned with software developers in commercial firms. This procedure allows the immediate comparison between the involved individuals and the conditions of production of these two development models. Thus we hope to obtain substantial new insights into the phenomenon of Open Source Development. With many thanks for your participation, Benno Luthiger PS: The results of the survey will be published at http://www.isu.unizh.ch/fuehrung/blprojects/FASD/. We have set up the mailing list fa...@we... for this study. Please see http://fasd.ethz.ch/qsf/mailinglist_en.html for registration to this mailing list. _______________________________________________________________________ Benno Luthiger Swiss Federal Institute of Technology Zurich 8092 Zurich Mail: benno.luthiger(at)id.ethz.ch _______________________________________________________________________
From: Ian V. <vi...@ig...> - 2003-12-31 09:41:32
At 01:27 AM 12/31/2003 -0700, you wrote:

> Ian Vincent wrote:
>> At 12:13 AM 12/31/2003 -0700, you wrote:
>>> Isn't what we all want just a good open-source application for neurofeedback?
>>
>> Depends on your version of good. My needs are more than the basic I suspect. For me, as you know, there is a wider agenda. I need the supervisory aspects, and session/segment functionality, and the inter-party communication, so that I can transmit and control session data remotely.
>
> It seems to me to be a natural extension. In fact, Chris has a copy of your Preliminary Object Set on his web site. But even if he didn't want to incorporate those things into BrainBay, it's possible to use BrainBay as a starting point and create an independent application that has the features you want. The only catch is that it would need to be GPL, since it would be built upon BrainBay, which is GPL.

GPL is fine, why not. I am totally into the open concept, I am an open kinda guy ;-) too open for my own good sometimes. Can you do me that BrainBay script in your own words, and I will send it to Chris and he can read it in his Austrian twang and it can go as the vo for flash. Credit where credit is due I say. Else you or I are going to have to read it. I am already doing my own intro and a segment for SoundCardEEG and part of ModularEEG, and Pin electrodes, so there is already enough of my voice. Some others would be great to give it a true international feel.

Ian
From: Ian V. <vi...@ig...> - 2003-12-31 09:34:49
At 01:22 AM 12/31/2003 -0700, you wrote:

> Ian Vincent wrote:
>> At 12:13 AM 12/31/2003 -0700, you wrote:
>>> I downloaded Chris's BrainBay software and I've been playing with it a little. It looks pretty good. I want to look into it more before I decide anything, but I think I would rather spend my time contributing to BrainBay than starting a new project from scratch. There's no point in re-inventing the wheel. Isn't what we all want just a good open-source application for neurofeedback?
>>
>> I was kind of thinking the same. It's up and running and there will be a lot of interest in it. I would be interested to learn a bit about how it works internally.
>
> From what I've seen, it works similarly to our plans for MindIO. It uses the win32 API, though, which would be a disappointment to Linux users. It would take a lot of work to convert it, I think. It doesn't have the application independence (and language independence) we wanted for MindIO, but oh well... anyway, since it's GPL, anybody could borrow code for other applications. It would just be a little more work.
>
>> How difficult was it to install?
>
> It seemed to run fine after I unzipped it, but the scope and spectrum objects don't work. Maybe it just doesn't work with my video card. I didn't spend any time trying to solve it, though. Have you installed it?

Nope, it's 10.30pm New Year's Eve here. Maybe tomorrow.

>> So that means MindIO is on hold then? That's fine by me, and it makes total sense to do that. If nothing else it has given us a chance to look at some of the issues and now we can see how Chris has handled them. And who knows, some of the ideas we talked about might be useful someday.
>
> I think we should put MindIO on hold. Some of the issues we discussed might even be relevant to BrainBay. It still needs a lot of work before it's complete.
>
>> If it's all right with you, I would still like to use the MindIO site for my graphics, since I don't have access to the OpenEEG site atm.
>
> Sure.

I have heaps going on with the presentation, so I'm up to my ears in graphics & flash, so it's not huge on my agenda, but I will try it obviously. Just when is another issue. I think my video card is not the greatest so might have to upgrade that.

Ian
From: Jeremy W. <jjw...@ub...> - 2003-12-31 08:27:58
Ian Vincent wrote:

> At 12:13 AM 12/31/2003 -0700, you wrote:
>> Isn't what we all want just a good open-source application for neurofeedback?
>
> Depends on your version of good. My needs are more than the basic I suspect. For me, as you know, there is a wider agenda. I need the supervisory aspects, and session/segment functionality, and the inter-party communication, so that I can transmit and control session data remotely.

It seems to me to be a natural extension. In fact, Chris has a copy of your Preliminary Object Set on his web site. But even if he didn't want to incorporate those things into BrainBay, it's possible to use BrainBay as a starting point and create an independent application that has the features you want. The only catch is that it would need to be GPL, since it would be built upon BrainBay, which is GPL.
From: Jeremy W. <jjw...@ub...> - 2003-12-31 08:22:28
Ian Vincent wrote:

> At 12:13 AM 12/31/2003 -0700, you wrote:
>> I downloaded Chris's BrainBay software and I've been playing with it a little. It looks pretty good. I want to look into it more before I decide anything, but I think I would rather spend my time contributing to BrainBay than starting a new project from scratch. There's no point in re-inventing the wheel. Isn't what we all want just a good open-source application for neurofeedback?
>
> I was kind of thinking the same. It's up and running and there will be a lot of interest in it. I would be interested to learn a bit about how it works internally.

From what I've seen, it works similarly to our plans for MindIO. It uses the win32 API, though, which would be a disappointment to Linux users. It would take a lot of work to convert it, I think. It doesn't have the application independence (and language independence) we wanted for MindIO, but oh well... anyway, since it's GPL, anybody could borrow code for other applications. It would just be a little more work.

> How difficult was it to install?

It seemed to run fine after I unzipped it, but the scope and spectrum objects don't work. Maybe it just doesn't work with my video card. I didn't spend any time trying to solve it, though. Have you installed it?

> So that means MindIO is on hold then? That's fine by me, and it makes total sense to do that. If nothing else it has given us a chance to look at some of the issues and now we can see how Chris has handled them. And who knows, some of the ideas we talked about might be useful someday.

I think we should put MindIO on hold. Some of the issues we discussed might even be relevant to BrainBay. It still needs a lot of work before it's complete.

> If it's all right with you, I would still like to use the MindIO site for my graphics, since I don't have access to the OpenEEG site atm.

Sure.
From: Ian V. <vi...@ig...> - 2003-12-31 08:18:58
At 12:13 AM 12/31/2003 -0700, you wrote:

> Isn't what we all want just a good open-source application for neurofeedback?

Depends on your version of good. My needs are more than the basic I suspect. For me, as you know, there is a wider agenda. I need the supervisory aspects, and session/segment functionality, and the inter-party communication, so that I can transmit and control session data remotely. This is not required for self use but only comes into play when you have a clinical/remote supervisor situation. This of course is beyond the basic design. But who knows!! A supervisory system may just bolt on once a working basic system is in place. If it's TCP based, then all sorts of possibilities arise. I have written Clarion apps before that behave as a browser and I think they are now XML capable, so most things are possible. At least now I know more of what I need, and how it might be achieved. Before it was just a blur, now it has a bit of substance. I think it is all good, and that we have actually moved forward a huge step, and certainly there is now a resurgence of interest in the software aspects of OpenEEG. We have OpenServer in conceptual form and BrainBay in working form and they will naturally merge. It will be very interesting to see where it all leads. Harvest time I hope.

Ian
From: Ian V. <vi...@ig...> - 2003-12-31 08:00:33
At 12:13 AM 12/31/2003 -0700, you wrote:

> I downloaded Chris's BrainBay software and I've been playing with it a little. It looks pretty good. I want to look into it more before I decide anything, but I think I would rather spend my time contributing to BrainBay than starting a new project from scratch. There's no point in re-inventing the wheel. Isn't what we all want just a good open-source application for neurofeedback?

I was kind of thinking the same. It's up and running and there will be a lot of interest in it. I would be interested to learn a bit about how it works internally. How difficult was it to install? So that means MindIO is on hold then? That's fine by me, and it makes total sense to do that. If nothing else it has given us a chance to look at some of the issues and now we can see how Chris has handled them. And who knows, some of the ideas we talked about might be useful someday. If it's all right with you, I would still like to use the MindIO site for my graphics, since I don't have access to the OpenEEG site atm.

Ian
From: Jeremy W. <jjw...@ub...> - 2003-12-31 07:13:13
I downloaded Chris's BrainBay software and I've been playing with it a little. It looks pretty good. I want to look into it more before I decide anything, but I think I would rather spend my time contributing to BrainBay than starting a new project from scratch. There's no point in re-inventing the wheel. Isn't what we all want just a good open-source application for neurofeedback?
From: Ian V. <vi...@ig...> - 2003-12-30 20:05:26
At 06:10 PM 12/27/2003 -0700, you wrote:

> Ian Vincent wrote:
>> VB sucks compared to Clarion. You only have to hang out on the Clarion newsgroups for a wee while to realise that. Testaments from programmers that migrate. I believe them.
>
> That isn't hard for me to believe -- I've used VB enough to know that it sucks. The reason I suggested VB is because it's part of .NET.

I don't know what .NET is. Is it a MS thing?

> If we go with C#, then your application would need to be written in a .NET language. Or if we go with Java, your application would probably need to be in Java. I've never heard of another language that could use Java libraries.

There is no way that I will be learning another language; it is simply not something I wish to do. I know from past experience that it is too stressful and takes over my life. But I am quite happy to learn enough so that I can understand how the code hangs together. Writing code is an entirely different thing. If I write anything, then it would be in Clarion. If the external API engine thing happens then this is possible I presume, since Clarion can use C, C++ and VBA libraries.

> I've been thinking a lot about which language we should use, and I think the answer should have been obvious. I just didn't want to see it. I think we should go with C++. It is a poorly-designed language, IMO, far inferior to Java or C#. But it's the most commonly-used and commonly-known language there is, so there would be more people that could contribute. Also, objects or libraries written in C++ can be used by most other languages. If the internal workings are done with C++, they can be compiled for any platform and used with almost any programming language. Another language (maybe Java) can be used to write the more platform-dependent parts, such as the user interface. I used to work as a C++ programmer. It has been about 8 years, but I could refresh my memory. I really don't like the idea of working with C++, but I think it gives this project the best chance of success.

C++ is a total mystery zone to me, as it is to many people. But if you choose to go that way, if it makes best sense to do that, then it isn't a problem because the objects, as objects, are still usable to me. What goes on inside the box doesn't concern me. You are going to have to make the call on what language to use, but perhaps we need to do the openeeg list poll thing before making that decision. However, from what I have seen, if you ask seven programmers what language to use then they will give seven different answers. ;-)
From: Ian V. <vi...@ig...> - 2003-12-30 20:05:23
At 05:49 PM 12/27/2003 -0700, you wrote:

> Ian Vincent wrote:
>> I just presume that the connection would be handled by a specialist terminal object and is conceptually similar to the ones that would handle data streaming to file. Effectively the 'connection' is to an external device, just as with audio, file, etc. Basically these Terminal objects are identifiable by the fact that they have Rx's and no Tx's. 'End of the line' Terminals, rather than 'pass through' Terminals. Something like that.
>
> You're right, my mistake. They would be 'end of the line', not 'pass through'. There is no need for 'pass through' Terminals, because a Tx can output to more than one Rx.

Ok

>> Yes, this is also my thinking, that MindIO will be minimalist in terms of clinical NFB function but will contain objects that can be used elsewhere. It could be a single 'segment' application, much like BE currently is.
>
> Perhaps we can start out with minimal function and plan to expand it over time.

OK, but should we have a go at defining such a minimal function? A separate thread perhaps?

>>>>> If the supervisor needed the ability to modify the layout at run-time, that could be built into the application and would also not require any change to our general object relationship design.
>>>>
>>>> Since we need the Trainee to be able to train when the supervisor is off line, 24/7, there is a case for there being a supervisor server of some kind where the Trainee can log in to download session designs etc perhaps. Maybe this server could act as a go-between for all Trainee/Supervisor transfers. One factor that I haven't even mentioned is that the Trainee needs to be able to assess the effect of each segment and transfer that info to the supervisor. This would ideally be in the form of a brief questionnaire, and an HTML server could work well in this role. It's getting complicated though. Part of Trainee training is the doing of this self assessment, and is an integral part of the self training concept. The results of the assessment are used in customising the following sessions.
>>>
>>> I think you're talking about a server-side web application. You could say that's my specialty, as that's the majority of programming that I've done recently. That's how I make a living (one of my three jobs, anyway).
>>
>> Take for instance the situation where the supervisor had two or more trainees training concurrently, or where the supervisor was off line or doing something else, like sleeping. A server-side web app could manage the training process, including the collection and storage of session EEG data for retrieval by the supervisor as and when required. It could also manage the transmission of session and layout designs as required. I actually really like this conception.
>>
>> Jeremy, read through this scenario and take a stab at the question at the end. Assume for this example that we are using a server-side app rather than a socket connection. The Terminal object transmits EEG data to the server-side app via http, the server processes this data, stores it in its database, and if required echoes it to the supervisor. What would the time delay be reasonably expected to be? I realise that there are variables in here such as the number of concurrent trainees, the server cpu speed, the server loading, the nature of the app, or whatever. Can you give a ball park? Could it typically be within say 3 seconds? Why I ask is that there is no absolute need for strict real-time monitoring in this situation, and a few seconds delay is acceptable.
>
> If it's implemented as a web-app, then all the server can do is respond to requests by the client. I'm not sure what you're asking with the delay. I think 3 seconds is doable, but if the supervisor might want to change settings, there might need to be a way for the supervisor to initiate a connection at any time, which isn't feasible with a web-app, IMO.

I hadn't even considered who initiated a connection or how. There is so much going on in the openEEG list right now regarding this topic, I think we need to go with that. It is TCP based anyway, so anything is possible. The web-app seems to me to have the greatest potential as an off-line store-and-forward methodology for the passing of session data and designs when the session is not real-time supervised. Originally I thought of using email for this, but a web-app might offer advantages.

>> I just thought of a new connectivity issue and that is that the supervisor should somehow know exactly what the Trainee is 'actually' seeing and hearing, so some mechanism needs to be in place to correlate the two systems. Trainees can change Terminal properties at run time and the reward response of the system will reflect that, so somehow these changes need to get through to the supervisor. Reward ratios are an important factor and Trainees might get them wrong initially, so the supervisor needs to be able to monitor that. Seems we almost need some kind of ongoing change log or something. This is very important and I have not previously thought of it. More thinking needed here.
>
> Okay, then we need a way to synchronize the layout between the supervisor's and trainee's computers, so they'll see the same results if they both process the same data.

Yes, it is too important, or else the supervisor will make incorrect decisions because he is not seeing the same picture.

> I think Rudi's suggestion of an OpenEEG Server is really good, but maybe it could somehow allow two-way communication between clients. It would need to be general and not specific to a particular application, though, so maybe it isn't feasible.

Hmmm... I suppose you saw my post on this. I have seen nothing since, so I expect they are considering this. Mind you, my email server is down atm, so there may be some replies. Duplex is the word we need, and the function we need. Synchronization is always a difficult issue, since potentially both ends could be making changes at the same time. I have seen this before with databases and it is the same conceptual problem I think. It's both doable and essential imo.
From: Ian V. <vi...@ig...> - 2003-12-30 20:05:21
At 05:48 PM 12/27/2003 -0700, you wrote:

> Ian Vincent wrote:
>>>> From all the designs I have seen, six would be enough, or as you suggest use an AND as a splitter. Either way the graphical representation would not want all six to be displayed if not used, because it makes the graphic too big and uses up valuable screen real estate. BE's implementation is superb in this respect, though I expect the code is tricky.
>>>
>>> If we do it this way, then we need a method of Rx and/or Terminal that can be used to find out if a specific Rx has been connected to. That's to keep the graphic clean by hiding unused Rx's, as you mentioned, and also so another Terminal would know which Rx's are free, if there are multiple with the same function. Or...
>>
>> Reading this, something sprang to mind, and that is that we could have a 'transmission' or 'Link' object that conceptually represents the linking between any two nodes. This was how I originally envisaged implementing a BE type design, before seeing what you had in mind. In the current design, Terminal declares an Rx and passes samples via the Rx interface. I am not sure if my suggestion would have any advantages, but thought I would put it on the discussion table so it can be considered and rejected or adopted as we decide. Conceptually it does make some sense.
>>
>> This 'Link' object could have dynamic Tx's, and could graphically represent itself as the single output node of a terminal object. When you add a link in a BE design window, you effectively create a 'link', and I wouldn't be surprised if BE actually has such an object. Eventually we will need a graphical linking shape of some sort, and this link object could manage that. Otherwise the creation of such a linking line (or whatever) would need to be assigned to either Tx or Rx. This has to be a plus for considering a separate 'link' object.
>
> I think there is no difference in implementation between what you are describing and what we have now. The correlation between real-life objects and software "objects" is limited. Objects are "linked" by method calls. And any type of visual representation is only indirectly related to actual object relationships.

I am going to take the defensive here, just for the sake of clarity. My conception comes from a design that I did previously, unrelated to matters EEG. I was looking for an entirely flexible data design, and what I came up with was an object set based around the linked list concept: there is a central set of data, the 'list', and another set of 'links' that defines the relationships between the various data items in the list, i.e. the structure is defined within the 'links', not within the list data itself. The advantage being that the list can be navigated bi-directionally via the links, whereas an arrangement where the structure is contained within the data usually assumes that there is a parent/child, i.e. a unilateral arrangement. Having looked at this problem, the solution I came up with was to have data objects and link objects, so that any data object could navigate the link object set to create a visual relation tree of all other objects to which it was related. Thus the user could nominate any specific object in the dataset and see the data from its own perspective. Having said that, I don't think we need it.

Objects are "linked" by method calls? I see it as 'objects communicate by method calls'. They are linked within the API by interface declarations, but there is another level of linking that is defined by their placement on a layout, and the placement of links onto that layout. My questions are around how those links are defined. Example, from your conception: if a Filterobject has its signalout connected to two Thresholdobjects' inputs, where is the linking actually stored, and in what form?
From: Ian V. <vi...@ig...> - 2003-12-30 20:05:21
At 05:48 PM 12/27/2003 -0700, you wrote:

>>> I would really prefer to refine the API design a little further, then develop the basic objects, and only then work on an actual application that uses those objects. Kind of a ground-up approach. If we set our initial goals too high, we might do a lot of work and end up with nothing to show for it. I think this is the problem that has plagued other attempts to develop software for ModEEG.
>>
>> I agree, but by also defining a potential overall framework, i.e. the 'applicationobject', any resulting API will reflect its needs. This brings up two points:
>> 1. There appears to be potentially two APIs. One is internal and defines the communication between the various terminal objects (cells perhaps), and the other is a potential external API that is what I refer to as the automation engine approach, which defines communication between the mindio app and another host app, such as the supervisor app that I proposed. It is potentially possible that we may end up with both. I personally have no problem with the added conceptual complexity of this idea, since I know from experience that engineering problems are often solved by the introduction of additional layers of complexity, though the corollary of this is that additional complexity should only be added if no other option is viable. Another design philosophy holds that sometimes the additional layers are added to solve problems, but in some later evolution the layers dissolve into a new approach that solves all of the problems. Such is philosophy. ;-) Do you get my drift here?
>>
>> However I do agree that we should concentrate on defining the 'internal' API at this stage.
>
> I am sure that when we start working on the application, or external API, we will see that changes need to be made to the internal API. In fact, I was just thinking about writing the ModEEG object and I realized it would make more sense for the Tx and Rx to be classes instead of interfaces, as most of their functionality will be the same for every Terminal. However, I believe that later changes to the internal API will be small and that, for various reasons, we should concentrate on the internal API first.

This is OK with me now that I see the existence of both an internal and external API. Sure, let's work on the internal API first. However this asks the question "what would we use as a framework for testing such an internal API and object set?". Isn't this where the 'application' comes in? Maybe the application wouldn't need the external API, but would it not make sense for the application object to exist from the very beginning, as part of the definition, since it provides such a framework? If Tx and Rx are classes instead of interfaces, does this imply that there is a 'superclass' that all terminal objects inherit from? What about the JComponent idea, since it could no longer be the superclass!! Can you explain the reasoning behind this change? I have no problem with it, I am just wondering why.
From: Jeremy W. <jjw...@ub...> - 2003-12-28 05:16:21
Ian Vincent wrote:

> VB sucks compared to Clarion. You only have to hang out on the Clarion newsgroups for a wee while to realise that. Testaments from programmers that migrate. I believe them.

That isn't hard for me to believe -- I've used VB enough to know that it sucks. The reason I suggested VB is because it's part of .NET. If we go with C#, then your application would need to be written in a .NET language. Or if we go with Java, your application would probably need to be in Java. I've never heard of another language that could use Java libraries.

I've been thinking a lot about which language we should use, and I think the answer should have been obvious. I just didn't want to see it. I think we should go with C++. It is a poorly-designed language, IMO, far inferior to Java or C#. But it's the most commonly-used and commonly-known language there is, so there would be more people that could contribute. Also, objects or libraries written in C++ can be used by most other languages. If the internal workings are done with C++, they can be compiled for any platform and used with almost any programming language. Another language (maybe Java) can be used to write the more platform-dependent parts, such as the user interface. I used to work as a C++ programmer. It has been about 8 years, but I could refresh my memory. I really don't like the idea of working with C++, but I think it gives this project the best chance of success.
From: Jeremy W. <jjw...@ub...> - 2003-12-28 04:59:50
Ian Vincent wrote: > >> >>> From all the designs I have seen, six would be enough, or as you >>> suggest use an AND as a splitter. Either way the graphical >>> representation would not want all six to be displayed if not used >>> because it makes the graphic too big and uses up valuable screen >>> real estate. BE's implementation is superb in this respect, though I >>> expect the code is tricky. >> >> >> If we do it this way, then we need a method of Rx and/or Terminal >> that can be used to find out of a specific Rx has been connected to. >> That's to keep the graphic clean by hiding unused Rx's, as you >> mentioned, and also so anther Terminal would know which Rx's are >> free, if there are multiple with the same function. Or... > > > Reading this, something sprung to mind, and that is that we could have > 'transmission' or 'Link' object that conceptually represents the > linking between any two nodes. This was how I originally envisaged > implementing a BE type design, before seeing what you had in mind. In > the current design, Terminal declares an Rx and pass's samples via the > Rx interface. I am not sure if my suggestion would have any > advantages, but thought I would put it on the discussion table so it > can be considered and rejected or adopted as we decide. Conceptually > it does make some sense. > > This 'Link' object could have dynamic Tx's, and could graphically > represent itself as the single output node of a terminal object. When > you add a link in a BE design window, you effectively create a 'link' > and wouldn't be suprised if BE actually has such an object. Eventually > we will need a graphical linking shape of some sort, and this link > object could manage that. Otherwise the creation of such a linking > line (or whatever) would need to assigned to either Tx or Rx. This has > to be plus for considering a seperate 'link' object. I think there is no difference in implementation between what you are describing and what we have now. 
The correlation between real-life objects and software "objects" is limited. Objects are "linked" by method calls. And any type of visual representation is only indirectly related to actual object relationships. |
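Jeremy's point that objects are "linked" by method calls can be made concrete with a small sketch. Everything below is illustrative only (the class and method names are assumptions, not the actual MindIO API): a "link" is nothing more than a Tx holding a reference to an Rx and invoking a method on it, which is also why a Tx can fan out to several Rx's without a separate Link object.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the Tx/Rx idea under discussion.
// Names (Rx, Tx, connect, send) are illustrative assumptions.
interface Rx {
    void receive(double sample);
}

class Tx {
    private final List<Rx> receivers = new ArrayList<>();

    // "Linking" two Terminals is just registering the downstream Rx.
    void connect(Rx rx) { receivers.add(rx); }

    // A Tx can output to more than one Rx, so sending fans out to
    // every registered receiver; no pass-through Terminal is needed.
    void send(double sample) {
        for (Rx rx : receivers) rx.receive(sample);
    }
}
```

Under this reading, a 'Link' object would merely wrap the `connect()` call, which is why the two designs end up equivalent in implementation.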
From: Jeremy W. <jjw...@ub...> - 2003-12-28 04:59:49
|
> > >> I would really prefer to refine the API design a little further, then >> develop the basic objects, and only then work on an actual >> application that uses those objects. Kind of a ground-up approach. >> If we set our initial goals too high, we might do a lot of work and >> end up with nothing to show for it. I think this is the problem that >> has plagued other attempts to develop software for ModEEG. > > > I agree, but by also defining a potential overall framework, ie the > 'applicationobject', any resulting API will reflect its needs. This > brings up two points: > 1. There appears to be potentially two API's. One is internal and > defines the communication between the various terminal objects (cells > perhaps) and the other is a potential external API that is what I > refer to as the automation engine approach which defines communication > between the mindio app and another host app, such as the supervisor > app that I proposed. It is potentially possible that we may end up > with both. I personally have no problem with the added conceptual > complexity of this idea, since I know from experience that engineering > problems are often solved by the introduction of additional layers of > complexity, though the corollary of this is that additional complexity > should only be added if no other option is viable. Another design > philosophy holds that sometimes the additional layers are added to > solve problems, but in some later evolution the layers dissolve into a > new approach that solves all of the problems. Such is philosophy. ;-) > Do you get my drift here? > > However I do agree that we should concentrate on defining the > 'internal' API at this stage. I am sure that when we start working on the application, or external API, we will see that changes need to be made to the internal API. 
In fact, I was just thinking about writing the ModEEG object and I realized it would make more sense for the Tx and Rx to be classes instead of interfaces, as most of their functionality will be the same for every Terminal. However, I believe that later changes to the internal API will be small and that, for various reasons, we should concentrate on the internal API first. |
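The class-versus-interface change Jeremy describes could look something like the sketch below. It is a hedged illustration only (the names `AbstractRx`, `onSample`, etc. are assumptions, not the real MindIO API): behaviour shared by every Terminal, here connection tracking, lives in the base class, and a concrete Terminal only supplies what differs. The connection flag also answers the earlier question of how to find out if a specific Rx has been connected to.

```java
// Hedged sketch: Rx as an abstract class instead of an interface, so
// functionality common to every Terminal is written once.
abstract class AbstractRx {
    private boolean connected = false;

    // Lets a Terminal (or the GUI) find out if this Rx has been
    // connected to, e.g. to hide unused Rx's in the layout graphic.
    public boolean isConnected() { return connected; }

    // Called by a Tx when a link is made.
    final void markConnected() { connected = true; }

    // Common entry point: hand the sample on to the subclass.
    public final void receive(double sample) { onSample(sample); }

    // The only part each Terminal implements itself.
    protected abstract void onSample(double sample);
}
```

A Terminal would then subclass `AbstractRx` per input rather than re-implementing the bookkeeping each time.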
From: Jeremy W. <jjw...@ub...> - 2003-12-28 01:38:36
|
Ian Vincent wrote: > I just presume that the connection would be handled by a specialist > terminal object and is conceptually similar to the ones that would > handle data streaming to file. Effectively the 'connection' is to an > external device, just as with audio, file, etc. Basically these > Terminal objects are identifiable by the fact that they have Rx's and > no Tx's. 'End of the line' Terminals, rather than 'pass through' > Terminals. Something like that. You're right, my mistake. They would be 'end of the line', not 'pass through'. There is no need for 'pass through' Terminals, because a Tx can output to more than one Rx. > Yes, this is also my thinking, that MindIO will be minimalist in terms > of clinical NFB function but will contain objects that can be used > elsewhere. It could be a single 'segment' application, much like BE > currently is. Perhaps we can start out with minimal function and plan to expand it over time. > > > >>>> If the supervisor needed the ability to modify the layout at >>>> run-time, that >>>> could be built into the application and would also not require any >>>> change to >>>> our general object relationship design. >>> >>> >>> Since we need the Trainee to be able to train when the supervisor is >>> offline, 24/7 there is a case for there being a supervisor server >>> of some kind where the Trainee can log in to download session >>> designs etc perhaps. Maybe this server could act as a go-between for >>> all Trainee/Supervisor transfers. One factor that I haven't even >>> mentioned is that the Trainee needs to be able to assess the effect >>> of each segment and transfer that info to the supervisor. This would >>> ideally be in the form of a brief questionnaire, and an html server >>> could work well in this role. It's getting complicated, though. Part >>> of Trainee training is the doing of this self assessment, and is an >>> integral part of the self training concept. 
The results of the >>> assessment are used in customising the following sessions. >> >> >> I think you're talking about a server-side web application. You >> could say that's my specialty, as that's the majority of programming >> that I've done recently. That's how I make a living (one of my three >> jobs, anyway). > > > Take for instance the situation where the supervisor had two or more > trainees training concurrently, or where the supervisor was offline or > doing something else, like sleeping. A server side web app could > manage the training process, including the collection and storage of > session EEG data for retrieval by the supervisor as and when required. > It could also manage the transmission of session and layout designs as > required. I actually really like this conception. > > Jeremy..read through this scenario and take a stab at the question at > the end. Assume for this example, that we are using a server side app > rather than a socket connection. The Terminal object transmits EEG > data to the server side app, via http, the server processes this data, > stores it in its database, and if required echoes it to the > supervisor. What would the time delay reasonably be expected to be? I > realise that there are variables in here such as the number of > concurrent trainees, the server cpu speed, the server loading, the > nature of the app, or whatever. Can you give a ballpark? Could it > typically be within, say, 3 seconds? Why I ask is that there is no > absolute need for strict real time monitoring in this situation, and a > few seconds delay is acceptable. If it's implemented as a web-app, then all the server can do is respond to requests by the client. I'm not sure what you're asking with the delay. I think 3 seconds is doable, but if the supervisor might want to change settings, there might need to be a way for the supervisor to initiate a connection at any time, which isn't feasible with a web-app, IMO. 
> I just thought of a new connectivity issue and that is that the > supervisor should somehow know exactly what the Trainee is 'actually' > seeing and hearing so some mechanism needs to be in place to correlate > the two systems. Trainees can change Terminal properties at run time > and the reward response of the system will reflect that, so somehow > these changes need to get through to the supervisor. Reward ratios are > an important factor and Trainees might get them wrong initially, so > the supervisor needs to be able to monitor that. Seems we almost need > some kind of ongoing change log or something. This is very important > and I had not previously thought of it. More thinking needed here. Okay, then we need a way to synchronize the layout between the supervisor's and trainee's computers, so they'll see the same results if they both process the same data. I think Rudi's suggestion of an OpenEEG Server is really good, but maybe it could somehow allow two-way communication between clients. It would need to be general and not specific to a particular application, though, so maybe it isn't feasible. Jeremy |
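One possible shape for the "ongoing change log" idea raised here: every run-time property change on the trainee's side is recorded as a timestamped entry that could later be replayed against the supervisor's copy of the layout. This is a hypothetical sketch; the field and class names are assumptions, not anything in MindIO.

```java
import java.util.ArrayList;
import java.util.List;

// One timestamped record of a Trainee changing a Terminal property
// (e.g. a reward ratio) at run time.
class PropertyChange {
    final long timestampMillis;
    final String terminalId;   // which Terminal was changed
    final String property;     // e.g. "rewardRatio" (illustrative name)
    final String newValue;

    PropertyChange(long t, String id, String prop, String value) {
        this.timestampMillis = t;
        this.terminalId = id;
        this.property = prop;
        this.newValue = value;
    }
}

// Append-only log that could be shipped to the supervisor so both
// layouts stay in step.
class ChangeLog {
    private final List<PropertyChange> entries = new ArrayList<>();
    void record(PropertyChange c) { entries.add(c); }
    List<PropertyChange> entries() { return entries; }
}
```

Replaying the log in timestamp order on the supervisor's machine would then reproduce the state the Trainee is actually seeing.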
From: Ian V. <vi...@ig...> - 2003-12-25 00:47:52
|
>> >>>I've been thinking about how we can implement remote connections, such as >>>for remote supervision. >>> >>>One option I see is that the ModEEG object will have the ability to create a >>>server socket that is listened to by a ModEEG object on the supervisor's >>>computer. It could just pass the raw bytes over the connection to be >>>decoded on the supervisor's computer. >> >> >>Nelo was talking of socket connections too. I know nothing of them, so >>can't comment in an informed way. There is no provision in the original >>design for realtime monitoring of the training session, but that is not >>to say that it would not be useful. However I need to get away from >>locking the supervisor and trainee into a 1:1 realtime relationship since >>this is what makes the current application of NFB expensive. I want to >>focus on Trainee self analysis, and use the supervisor in a revisory >>role. Real time supervision would provide an additional option and is >>interesting and no doubt achievable. >> >>Why would the data need to come in via the ModEEG object? Could it not be >>a separate channelobject customised for this purpose? Like SocketObject >>or something. I previously envisaged that data would transfer by file, at >>the completion of a session, but that's not very technically >>sophisticated, and we can probably do better than that. > >It would be easy to create an object that writes signal data to a file, >and it may be a good option for what you have in mind, not to mention >making a permanent record of a session. I had assumed that this was a given, that data would be streamed to a file for session records. I don't think there is any standard format for EEG data, so we can evolve our own. >I thought it would make sense to implement the remote connection from one >ModEEG object to another, before the data is decoded, because the ModEEG >packet format already encapsulates 6 channels, plus a packet counter, in a >compact format. 
Yes, this does make perfect sense. We should stick to the predefined ModEEG data format/protocol if possible since it was designed for that purpose. >If the connection was too slow, or if only one or two channels were needed, >it may make more sense to write a Terminal object that encodes a single >channel of data, with a packet counter, and sends it along a remote >connection. Another Terminal object on the remote computer would receive >the data and could be considered a "source" object. hmm..I guess a packet counter is essential here to maintain the integrity of the timing. Also the Terminal objects at each end of the connection could be presettable to a channel count, like 1, 2, 4, 6 etc. So long as both ends know what is going on, then anything is possible. >>>In that case, the connectivity would >>>be internal to the ModEEG object and outside the scope of our API design. > >>Can you explain this? It seems to me to be a contradictory statement. Do >>you mean that the socket connection would not go via the API, but via its >>own connection implementation? > >I'm just discussing the options for remote connectivity, to see if it >requires any changes to our design. My above statement just means that, >as far as I see it, no changes are needed, because there could be objects >that fit into the current design that implement remote connections. I just presume that the connection would be handled by a specialist terminal object and is conceptually similar to the ones that would handle data streaming to file. Effectively the 'connection' is to an external device, just as with audio, file, etc. Basically these Terminal objects are identifiable by the fact that they have Rx's and no Tx's. 'End of the line' Terminals, rather than 'pass through' Terminals. Something like that. When I was doing NFB with my supervisor, she was altering settings on the fly, and this would imply a two way connection. 
I hadn't planned on this, but come to think about it, if you have pseudo real-time supervision, then being able to change Terminal object properties mid-segment is fairly essential. >>>Another option is that there can be a Terminal object that passes a signal >>>through but also sends the data over a server socket. One would need to be >>>placed in line of every channel the supervisor wanted to listen to. >>>There >>>would be some kind of listener object on the supervisor's computer. >> >> >>Two channels of data could be combined as a 512 bps signal, could it not? >>That way we only need a single channel. > >No. ModEEG produces 256 samples each second, but each sample is 10 >bits. So, 2 channels would require a minimum of 5120 bps, and that >doesn't take into account a packet counter. The P3 packet format, as an >example, requires 40 bits for 2 channels, though I don't see the need for >an auxiliary channel, which would cut out 8 bits. That means two >channels would require at least 8192 bps, but that's still well within >even dial-up internet connection speed, which is usually around >44,000-50,000 bps. I really had this screwed up, my thinking must have been elsewhere when I wrote that. (??:-) btw, in rural areas, like where I live, such fast dialup speeds are not achievable due to the presence of electric fences. My dialup is restricted to 32k bps since it provides faster throughput than using the higher speed protocols. It might connect OK at something higher, but the error rate can go way up. From a practical NFB pov, it would be absolutely fine to limit transmission to two channels of EEG, but then again I see a case for an additional channel for HRV that runs concurrently with the EEG since that provides the supervisor with very good feedback on the state of relaxation of the Trainee. Since the bandwidth is available for 6 channels, let's stick with that. >>>Neither of these options requires any change to our design. I'm sure there >>>are other options. 
Any ideas? >> >> >>Seems to me that this supervisory channel needs its own protocol. > >I agree, but I think our current API should be independent of any >particular application. That way the objects can be used in various >applications. We can consider the actual application we produce to be the >"default" application of MindIO. Yes, this is also my thinking, that MindIO will be minimalist in terms of clinical NFB function but will contain objects that can be used elsewhere. It could be a single 'segment' application, much like BE currently is. >>>If the supervisor needed the ability to modify the layout at run-time, that >>>could be built into the application and would also not require any change to >>>our general object relationship design. >> >>Since we need the Trainee to be able to train when the supervisor is >>offline, 24/7 there is a case for there being a supervisor server of some >>kind where the Trainee can log in to download session designs etc >>perhaps. Maybe this server could act as a go-between for all >>Trainee/Supervisor transfers. One factor that I haven't even mentioned is >>that the Trainee needs to be able to assess the effect of each segment >>and transfer that info to the supervisor. This would ideally be in the >>form of a brief questionnaire, and an html server could work well in this >>role. It's getting complicated, though. Part of Trainee training is the >>doing of this self assessment, and is an integral part of the self >>training concept. The results of the assessment are used in customising >>the following sessions. > >I think you're talking about a server-side web application. You could say >that's my specialty, as that's the majority of programming that I've done >recently. That's how I make a living (one of my three jobs, anyway). Take for instance the situation where the supervisor had two or more trainees training concurrently, or where the supervisor was offline or doing something else, like sleeping. 
A server side web app could manage the training process, including the collection and storage of session EEG data for retrieval by the supervisor as and when required. It could also manage the transmission of session and layout designs as required. I actually really like this conception. Jeremy..read through this scenario and take a stab at the question at the end. Assume for this example, that we are using a server side app rather than a socket connection. The Terminal object transmits EEG data to the server side app, via http, the server processes this data, stores it in its database, and if required echoes it to the supervisor. What would the time delay reasonably be expected to be? I realise that there are variables in here such as the number of concurrent trainees, the server cpu speed, the server loading, the nature of the app, or whatever. Can you give a ballpark? Could it typically be within, say, 3 seconds? Why I ask is that there is no absolute need for strict real time monitoring in this situation, and a few seconds delay is acceptable. I just thought of a new connectivity issue and that is that the supervisor should somehow know exactly what the Trainee is 'actually' seeing and hearing so some mechanism needs to be in place to correlate the two systems. Trainees can change Terminal properties at run time and the reward response of the system will reflect that, so somehow these changes need to get through to the supervisor. Reward ratios are an important factor and Trainees might get them wrong initially, so the supervisor needs to be able to monitor that. Seems we almost need some kind of ongoing change log or something. This is very important and I had not previously thought of it. More thinking needed here. Time for xmas dinner..yumm Ian >------------------------------------------------------- >This SF.net email is sponsored by: IBM Linux Tutorials. >Become an expert in LINUX or just sharpen your skills. Sign up for IBM's >Free Linux Tutorials. 
Learn everything from the bash shell to sys admin. >Click now! http://ads.osdn.com/?ad_id=1278&alloc_id=3371&op=click >_______________________________________________ >MindIO-devel mailing list >Min...@li... >https://lists.sourceforge.net/lists/listinfo/mindio-devel |
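The bandwidth arithmetic in this exchange is worth checking with actual numbers. Below is a worked version of the figures quoted above (256 samples/s at 10 bits per channel for the raw payload; 40 bits per packet for 2 channels under the P3 format, or 32 bits with the 8-bit auxiliary channel dropped). Class and method names here are illustrative only.

```java
// Worked version of the ModEEG bandwidth figures from the thread.
class EegBandwidth {
    static final int SAMPLES_PER_SEC = 256;  // ModEEG sample rate
    static final int BITS_PER_SAMPLE = 10;   // per channel

    // Raw payload only: no packet counter or framing overhead.
    static int rawBps(int channels) {
        return SAMPLES_PER_SEC * BITS_PER_SAMPLE * channels;  // 2 ch -> 5120
    }

    // P3-style framing: 40 bits per packet for 2 channels, or 32 bits
    // if the 8-bit auxiliary channel is dropped.
    static int p3Bps(boolean withAux) {
        return SAMPLES_PER_SEC * (withAux ? 40 : 32);  // 10240 or 8192
    }
}
```

Even the 8192 bps figure sits comfortably inside a 32 kbps dial-up link, which supports the conclusion that two (or indeed six) channels are feasible over dial-up.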
From: Ian V. <vi...@ig...> - 2003-12-25 00:47:47
|
> >> From all the designs I have seen, six would be enough, or as you suggest >> use an AND as a splitter. Either way the graphical representation would >> not want all six to be displayed if not used because it makes the >> graphic too big and uses up valuable screen real estate. BE's >> implementation is superb in this respect, though I expect the code is tricky. > >If we do it this way, then we need a method of Rx and/or Terminal that can >be used to find out if a specific Rx has been connected to. That's to >keep the graphic clean by hiding unused Rx's, as you mentioned, and also >so another Terminal would know which Rx's are free, if there are multiple >with the same function. Or... Reading this, something sprang to mind, and that is that we could have a 'transmission' or 'Link' object that conceptually represents the linking between any two nodes. This was how I originally envisaged implementing a BE type design, before seeing what you had in mind. In the current design, Terminal declares an Rx and passes samples via the Rx interface. I am not sure if my suggestion would have any advantages, but thought I would put it on the discussion table so it can be considered and rejected or adopted as we decide. Conceptually it does make some sense. This 'Link' object could have dynamic Tx's, and could graphically represent itself as the single output node of a terminal object. When you add a link in a BE design window, you effectively create a 'link' and wouldn't be surprised if BE actually has such an object. Eventually we will need a graphical linking shape of some sort, and this link object could manage that. Otherwise the creation of such a linking line (or whatever) would need to be assigned to either Tx or Rx. This has to be a plus for considering a separate 'link' object. >>>I guess we need to find a balance between program complexity and convenience >>>for the user. >>Always!!!! Let's run with dynamic and see where that leads. 
It is probably >>no harder to add one Tx or Rx dynamically than 25. > >...if we do it this way, we need to modify the design to allow for >additional Rx's to be added dynamically. The only object that I can think of that would require multiple Rx's would be logic objects such as AND or OR, and these could be implemented as a single universal logic object, with programmable inputs. Do they need to be dynamic? If it makes things easier, logic objects could have a fixed number of Rx's (say 5) but could be cascadable if more were needed perhaps. Can you think of any other objects that need multiple Rx's? |
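The "single universal logic object" idea could be sketched roughly as below: a fixed number of boolean inputs, a selectable AND/OR mode, and unconnected inputs ignored (so the graphic can hide them, and two objects can be cascaded when more inputs are needed). All names here are illustrative assumptions, not an agreed design.

```java
// Hypothetical sketch of a universal logic Terminal with a fixed
// input count and programmable AND/OR behaviour.
class LogicTerminal {
    enum Mode { AND, OR }

    private final boolean[] inputs;
    private final boolean[] used;   // tracks which Rx's are connected
    private final Mode mode;

    LogicTerminal(Mode mode, int inputCount) {
        this.mode = mode;
        this.inputs = new boolean[inputCount];
        this.used = new boolean[inputCount];
    }

    // Setting an input marks that Rx as connected.
    void set(int rx, boolean value) { inputs[rx] = value; used[rx] = true; }

    // Unconnected inputs are skipped, so a 5-input object behaves
    // correctly when only 2 or 3 inputs are wired up.
    boolean output() {
        boolean result = (mode == Mode.AND);
        for (int i = 0; i < inputs.length; i++) {
            if (!used[i]) continue;
            result = (mode == Mode.AND) ? (result && inputs[i])
                                        : (result || inputs[i]);
        }
        return result;
    }
}
```

Cascading is then just feeding one `LogicTerminal`'s output into an input of another, which covers layouts needing more than the fixed count.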
From: Ian V. <vi...@ig...> - 2003-12-24 22:58:41
|
At 07:33 PM 12/23/2003 -0700, you wrote: >Ian Vincent wrote: > >>Jeremy- I am way out of my depth here, and never expected to be doing >>this stuff, but it is fun. I am taking all of these ideas from the >>graphic of the CorelDraw object set that sits on the side of my PC, but I >>have also seen this application centered design principle used in other >>object sets like Excel, and Clarion, so I know it is sound from an >>engineering pov. It makes good sense to have a master 'Application >>object', don't you think? > > >Actually, I think we should design the 'engine' to be independent of any >particular application. I have a couple reasons for this. One reason is >that objects that fit into this API could be used in various types of >biofeedback applications, developed by various types of people. That will >make it easier for other people to develop biofeedback software, since >they can use objects that are already written, and it will be beneficial >for us because we might be able to use objects written by other >people. The other reason relates to the so-called "Software Development >Problem". A complete neurofeedback application like you speak of is a >large undertaking, and I think that we should try to avoid a problem most >software projects have, which is that they are totally useless until >they're complete. We don't know what the future may bring. Perhaps this >won't be completed by us. I personally am putting in the time and effort on this project so that we can get the job done. I don't have any 'not completed' in my plans since it simply has to be done, whatever it takes. To me that would be like saying that we don't actually need it. But I do get your point. I think that is the fate of the partly completed OpenEEG library project that has already passed into the land of 'not completed'. Interesting point though. I guess if we look toward each separate object as having its own completion, ie it does its job and functions in its own right. 
Objects are tools, and do a specific job but can be relatively useless on their own. This is why I see tying our objects together under the umbrella of an application object does give them a coherent structural framework in which they become a functioning whole. I don't know if you are familiar with Tony Buzan's MindMapping concepts, but they always rely on a central object from which all other attached concepts relate, so this appears to me to be totally compatible with 'application object'. These days my entire thinking toward systems design uses this principle. > But if we have a refined and well-designed API, and complete objects, it > would be easier for someone else to write a few more objects and tie it > all together into an application. I think the reason nobody is working > on EEGMIR is that they would need to totally understand the inner > workings of the present code. But it will be possible for someone to use > our objects without understanding their inner workings. Yes..this appears to be the very advantage of object oriented design, in that the objects can actually be used generically. ie they are what were originally termed 'black boxes'. >I would really prefer to refine the API design a little further, then >develop the basic objects, and only then work on an actual application >that uses those objects. Kind of a ground-up approach. If we set our >initial goals too high, we might do a lot of work and end up with nothing >to show for it. I think this is the problem that has plagued other >attempts to develop software for ModEEG. I agree, but by also defining a potential overall framework, ie the 'applicationobject', any resulting API will reflect its needs. This brings up two points: 1. There appears to be potentially two API's. 
One is internal and defines the communication between the various terminal objects (cells perhaps) and the other is a potential external API that is what I refer to as the automation engine approach which defines communication between the mindio app and another host app, such as the supervisor app that I proposed. It is potentially possible that we may end up with both. I personally have no problem with the added conceptual complexity of this idea, since I know from experience that engineering problems are often solved by the introduction of additional layers of complexity, though the corollary of this is that additional complexity should only be added if no other option is viable. Another design philosophy holds that sometimes the additional layers are added to solve problems, but in some later evolution the layers dissolve into a new approach that solves all of the problems. Such is philosophy. ;-) Do you get my drift here? However I do agree that we should concentrate on defining the 'internal' API at this stage. Ian |
From: Ian V. <vi...@ig...> - 2003-12-24 21:18:11
|
At 12:14 PM 12/24/2003 -0700, you wrote: >Oops, I accidentally sent that message before I typed my reply. > >Ian Vincent wrote: > >>Any idea how to log in direct to the Mindio site. Currently I log in as >>attrix, which takes me to users. Then have to navigate to mindio, via >>home, then groups, then m, then mi, then mindio. It takes forever and is a >>pia. Often winSCP3 dumps me for one reason or another and I have to go >>through the elongated login again. There has to be an easier way. > >The only solution I've found with WinSCP is to add a bookmark for the >htdocs directory. The far-right icon, which looks like an open folder, >brings up a dialog that you can use for that. When you want to use a >bookmark, you click that same icon. Maybe there's a better way, but I >haven't found it. hmm..why hadn't I seen that?..tks |
From: Jeremy W. <jjw...@ub...> - 2003-12-24 19:14:50
|
Oops, I accidentally sent that message before I typed my reply. Ian Vincent wrote: > Any idea how to log in direct to the Mindio site. Currently I log in > as attrix, which takes me to users. Then have to navigate to mindio, > via home, then groups, then m, then mi, then mindio. It takes forever and > is a pia. Often winSCP3 dumps me for one reason or another and I have > to go through the elongated login again. There has to be an easier way. The only solution I've found with WinSCP is to add a bookmark for the htdocs directory. The far-right icon, which looks like an open folder, brings up a dialog that you can use for that. When you want to use a bookmark, you click that same icon. Maybe there's a better way, but I haven't found it. |
From: Jeremy W. <jjw...@ub...> - 2003-12-24 19:12:10
|
Ian Vincent wrote: > Any idea how to log in direct to the Mindio site. Currently I log in > as attrix, which takes me to users. Then have to navigate to mindio, > via home, then groups, then m, then mi, then mindio. It takes forever and > is a pia. Often winSCP3 dumps me for one reason or another and I have > to go through the elongated login again. There has to be an easier way. > > Ian |
From: Ian V. <vi...@ig...> - 2003-12-24 08:40:21
|
Any idea how to log in direct to the Mindio site. Currently I log in as attrix, which takes me to users. Then have to navigate to mindio, via home, then groups, then m, then mi, then mindio. It takes forever and is a pia. Often winSCP3 dumps me for one reason or another and I have to go through the elongated login again. There has to be an easier way. Ian |
From: Ian V. <vi...@ig...> - 2003-12-24 08:32:09
|
Jeremy I think we are on different tacks here, but it won't hurt to discuss this anyway. We apparently have different views of what an engine is. I am seeing something that is similar to what I have experienced in CorelDraw and Excel etc where the host application declares a single object, the 'applicationobject' rather than your conception where the host application declares the entire object set. Both views are probably valid in their own right I expect. I have drawn a graphic of what I was thinking of and it can be viewed at: http://mindio.sourceforge.net/graphics/mindioeegengine1.html The graphic shows the property relationships rather than the interface relationships, so it is probably not that useful. It also assumes a superclass object called a 'cell', which is my new term for a layoutable, or functional. (cell as in neuron) However I see that you don't consider a superclass as necessary. I respect your more informed judgement on this, but thought I would post up the diagram anyway, since I already had it made. I have no experience of a host application that declares more than the single application object. Hence the design. Basically it mimics the CorelDraw and Excel conceptions of an automation engine. Clarion's object set also has a similar 'application' centered object relationship format so I presumed it was 'normal'. I guess I had in mind a basic mindio application that could be used to run an NFB session, via its inbuilt interface, but that the same application could be run by a supervisor app that drives it via the automation interface, basically replacing the standard interface. Or something like that. Let me know what you think. As before this is only an exploratory idea, for discussion purposes. Ian PS.. I just got my scanner working again so hopefully tomorrow I can post up a graphic of the CorelDraw object set for interest and reference. |
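The CorelDraw/Excel-style "application object" Ian describes can be sketched in miniature: a host declares one root object and reaches everything else through it, rather than instantiating the whole object set itself. This is purely exploratory, like the proposal it illustrates; none of these names exist in MindIO, and `Cell` here stands in for Ian's proposed superclass.

```java
import java.util.ArrayList;
import java.util.List;

// Placeholder for Ian's proposed 'cell' superclass of layoutable objects.
class Cell { }

// Single root object in the automation-engine style: a supervisor app
// driving MindIO would talk only to this object, which in turn owns
// the layout of cells.
class MindIOApplication {
    private final List<Cell> cells = new ArrayList<>();

    Cell addCell(Cell c) { cells.add(c); return c; }
    List<Cell> cells() { return cells; }
}
```

In the alternative design Jeremy favours, the host would construct and wire the Terminal objects directly; the trade-off is the single entry point versus direct access to the object set.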