From: <ben...@id...> - 2004-05-22 12:55:13

Dear Open Source developer,

I am doing a research project on "Fun and Software Development" and I kindly invite you to participate. You will find the online survey at http://fasd.ethz.ch/qsf/. The questionnaire consists of 53 questions and takes about 15 minutes to complete.

With the FASD project (Fun and Software Development) we want to determine the motivational significance of fun when software developers decide to engage in Open Source projects. What is special about our research project is that a similar survey is planned with software developers in commercial firms. This allows a direct comparison between the individuals involved in, and the production conditions of, these two development models. We therefore hope to gain substantial new insights into the phenomenon of Open Source development.

With many thanks for your participation,
Benno Luthiger

PS: The results of the survey will be published at http://www.isu.unizh.ch/fuehrung/blprojects/FASD/. We have set up the mailing list fa...@we... for this study. Please see http://fasd.ethz.ch/qsf/mailinglist_en.html for registration to this mailing list.

_______________________________________________________________________
Benno Luthiger
Swiss Federal Institute of Technology Zurich
8092 Zurich
Mail: benno.luthiger(at)id.ethz.ch
_______________________________________________________________________
From: <si...@sm...> - 2003-06-18 15:23:13

Hi,

> could you please tell us what did you find out about fw-cameras and
> dv-compression? i'm especially interested to know what colorspace the
> result is in? and what libraries allow to decompress dv? is hw
> decompression possible with some firewire cards?

I don't think hw decompression can be done by the firewire card itself (will look into that), because firewire is just the transport protocol, often used for, but not restricted to, DV data. DV compression/decompression is done inside the camera or in software, not by the ieee1394 hardware.

Basically there are two types of cameras:

1. The normal DV camera. These cameras come in several flavors, from consumer to professional, and can use different formats. There is a DV decompression library, libdv, that can decompress into YV12 (planar) and YUY2 (packed). Which one to use also depends on the type of camera: older cameras often used 4:2:0 sampling, so for those it is better to convert to YV12; the newer ones are 4:1:1 and can be upsampled to YUY2 (4:2:2).

2. The DC camera (like the one Stephan used for pfom), which is a digital camera. A digital camera differs from a DV camera in that it sends uncompressed image data, which means we don't need decompression of any kind, only possibly up- or downsampling depending on the hardware. The format of DC cameras can differ per type. They also come in relatively cheap webcam versions that still outperform normal webcams (speed-wise; I don't know about image quality), like the C104T from aplux (http://www.aplux.com). It supports YUV 4:1:1, YUV 4:2:2 and RGB888 at 30fps (nothing about resolution though).

It is possible to use coriander as a capture program and loop it to v4l using vloopback. Then we can just use a v4l module, which is easy to write. But I don't think it is too hard to create a simca module either. I'm now looking into porting parts of the gstreamer and kino DV code to a simca module. I can't test it though, for lack of hardware.

> i'll try to find JPEG or MPEG decompressor that allows that, it'll
> affect the choice of the file formats. You try to figure out if dv
> decompression library allows that. otherwise we'll have to use software
> colorspace conversion.
>
> Comment: what i previously understood as hw-accelerated colorspace
> conversion is in fact ability to display YUV image, meaning there is no
> RGB result available in the frame buffer :(

I think there should be, but it is of no use. There should be a plain RGB image in physical memory during blitting. You should be able to read it out, but at that point it is already shown and you probably don't want that.

Cheers,
--
Simon de Bakker
\/01|)7 workgroup: http://www.void7.org
personal homepage: http://www.josos.org
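[Editor's note: the 4:1:1 to 4:2:2 upsampling mentioned above is cheap because only the chroma planes change. A minimal sketch of the idea in plain Python (sample replication on toy data, not the actual libdv or simca code):]

```python
def upsample_411_to_422(chroma_row):
    """Double a 4:1:1 chroma row horizontally to 4:2:2.

    In 4:1:1 there is one chroma sample per 4 luma pixels; in 4:2:2
    there is one per 2, so each sample is simply repeated once.
    (Sample replication; a real converter might interpolate instead.)
    """
    out = []
    for sample in chroma_row:
        out.append(sample)
        out.append(sample)
    return out


def interleave_yuy2(y_row, u_row, v_row):
    """Pack one row of planar Y plus 4:2:2 chroma into packed YUY2 order.

    YUY2 layout is Y0 U0 Y1 V0  Y2 U1 Y3 V1 ...: two luma pixels
    share one U and one V sample (y_row length must be even).
    """
    packed = []
    for i in range(0, len(y_row), 2):
        packed += [y_row[i], u_row[i // 2], y_row[i + 1], v_row[i // 2]]
    return packed


# one 8-pixel row: 8 luma samples, 2 chroma samples per plane (4:1:1)
y = [16, 32, 48, 64, 80, 96, 112, 128]
u411, v411 = [100, 110], [200, 210]
u422 = upsample_411_to_422(u411)   # -> [100, 100, 110, 110]
v422 = upsample_411_to_422(v411)
row = interleave_yuy2(y, u422, v422)
```

[This only illustrates the sampling arithmetic; in practice libdv/gstreamer would do this per frame in C.]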
From: <si...@sm...> - 2003-06-06 17:17:29

On Fri, Jun 06, 2003 at 12:06:18AM -0700, Antoine van de Ven wrote:
> Oops, sent before I was finished. Again:
>
> Hi, I have a question.
> At http://www.mozilla.org/hacking/coding-introduction.html#interfaces I read:
>
> "In a CORBA environment, life is more restrictive and difficult, because you
> have inter-process and inter-network communication, something which Mozilla
> is not actively using. In a distributed CORBA environment, it is difficult
> to change the components of an interface, because you are usually unable to
> replace all running systems at the same time. If you want to change
> something, you have to define a new version of an interface, but you might
> still be required to support the old one."
>
> Is this a problem for us?

I don't think that is a real problem for us, as the simca interface that is exported through CORBA is very limited and not subject to heavy changes.

> I was also reading about Python and found something similar: Python with XPCOM
> http://www-106.ibm.com/developerworks/webservices/library/co-pyxp1
>
> But I guess you are already far with CORBA etc so lets use that, but I like
> to know more about the choices that were made, also to be able to explain it
> better to others.

We're still just experimenting with CORBA. So if we find a way that fits our needs better, we should have a look at it.

--
Simon de Bakker
\/01|)7 workgroup: http://www.void7.org
personal homepage: http://www.josos.org
From: Antoine v. de V. <an...@v2...> - 2003-06-06 14:03:53

----- Original Message -----
From: "Simon de Bakker" <si...@sm...>
To: <sim...@li...>
Sent: Friday, June 06, 2003 6:24 AM
Subject: Re: [Simca-developers] Corba question

> Hi,
> What is the question?

I think I already know the answer: XPCOM does not support remote objects (unlike CORBA), only indirectly, through HTTP for example.

ps: you don't have to document the advantages/disadvantages of the different software right now, it has no high priority, so only if you have the time

Antoine
From: Antoine v. de V. <an...@v2...> - 2003-06-06 13:44:23

Oops, sent before I was finished. Again:

Hi, I have a question.
At http://www.mozilla.org/hacking/coding-introduction.html#interfaces I read:

"In a CORBA environment, life is more restrictive and difficult, because you have inter-process and inter-network communication, something which Mozilla is not actively using. In a distributed CORBA environment, it is difficult to change the components of an interface, because you are usually unable to replace all running systems at the same time. If you want to change something, you have to define a new version of an interface, but you might still be required to support the old one."

Is this a problem for us?

I was also reading about Python and found something similar: Python with XPCOM
http://www-106.ibm.com/developerworks/webservices/library/co-pyxp1

But I guess you are already far with CORBA etc so let's use that, but I would like to know more about the choices that were made, also to be able to explain it better to others: the advantages and disadvantages of the different options.

I'm also looking at other jamming tools myself, to compare them. But because you have already done that, could you write some more about the advantages and disadvantages of those tools too (compared with Simca)? In Twiki for example: http://twiki.artm.org/cgi-bin/twiki/view/V2jam/JammingTools

Thanks,
Antoine
From: <si...@sm...> - 2003-06-06 13:24:29

Hi,

What is the question?

On Thu, Jun 05, 2003 at 11:52:03PM -0700, Antoine van de Ven wrote:
> Hi, I have a question.
> At http://www.mozilla.org/hacking/coding-introduction.html#interfaces I read:
>
> In a CORBA environment, life is more restrictive and difficult, because you
> have inter-process and inter-network communication, something which Mozilla
> is not actively using. In a distributed CORBA environment, it is difficult
> to change the components of an interface, because you are usually unable to
> replace all running systems at the same time. If you want to change
> something, you have to define a new version of an interface, but you might
> still be required to support the old one.
>
> http://www-106.ibm.com/developerworks/webservices/library/co-pyxp1.html

--
Simon de Bakker
\/01|)7 workgroup: http://www.void7.org
personal homepage: http://www.josos.org
From: Antoine v. de V. <an...@v2...> - 2003-06-06 13:06:55

Hi, I have a question.
At http://www.mozilla.org/hacking/coding-introduction.html#interfaces I read:

In a CORBA environment, life is more restrictive and difficult, because you have inter-process and inter-network communication, something which Mozilla is not actively using. In a distributed CORBA environment, it is difficult to change the components of an interface, because you are usually unable to replace all running systems at the same time. If you want to change something, you have to define a new version of an interface, but you might still be required to support the old one.

http://www-106.ibm.com/developerworks/webservices/library/co-pyxp1.html
From: Madris <ma...@v2...> - 2003-06-03 11:36:29

> On Tue, Jun 03, 2003 at 01:17:14PM +0200, Madris wrote:
> > Hi Artem and Simon,
> >
> > Next week (June 10) we have this V2_Jam presentation @ V2_.
> > I suggest that we do this on the V2_Groundfloor... there we can use
> > all the facilities and so on. Let me know if you agree, so I can see
> > with Lobke what has to be prepared.
> > Oh yes, what time do you think is best to start?
>
> ask people who are more often down there what's the weather conditions
> in the G.F. during the day. i think ~12 is the best time.

Ok, 12:00 h is good, I think.

> > Artem, have you made any progress with speech2text
> > (or speech2somethingelse) and do you think we can
> > use it next Tuesday?
>
> no way.
>
> since the mikes (or cable) i've got don't work i can't work on
> speech2something. sorry. may be i'll fix mikes this week, but i have no
> time to write the objects.

Ok, then we will have to do it without the 'spectacle' of speech triggering the slides.

> > Let us communicate this week about the content(s) of the presentation.
> > I want to do a little intro about the whole project, during which I
> > will refer to the more specific things Artem, Simon and myself want
> > to talk about in the rest of the presentation.
> >
> > Then I want to demonstrate the BlueToothJavaDevice -> Pd -> Simca ->
> > output thing. And of course tell a little bit about it.
> >
> > But let me know what you want to talk about and how you think we can
> > best integrate the whole and make one good presentation out of the
> > pieces we have.
>
> i'd like to talk about media representation in simca (media source
> object). i'd like simon to tell about python integration: embedded
> interpreter, script object, how we're gonna use the script object. a bit
> along the lines of our yesterday's talk with Antoine.
>
> what's antoine's email and how do you spell his name correctly? keep him
> in the loop please.
>
> P.S. how's your headache? hope you feel better.

Yeah. Much, much better. Yesterday I could not even sit in a chair or do anything else, and today it's all gone! So I'm very glad. Thanks for asking. :)
From: Madris <ma...@v2...> - 2003-05-13 10:04:12

V2_Jam meeting, May 12 2003
present: Anne Nigten, Artem Baguinski, Lenno Verhoog, Madris Duric, Simon de Bakker
agenda: no
------------------------------------------------------------------------------

Artem wants to discuss the user perspective of the V2_Jam (project) software:
- what they (the users) will see
- the use of V2_Jam
- inputs to and outputs from V2_Jam

V2_Jam is very similar to Max and Pd. We need to have different 'layers' or 'views' for different types of users, sort of:
1. programmer
2. programmer/user
3. user

Anne suggested doing an interview with Stock and Simon as Max and Pd experts, Madris as somewhere in the middle, and Lenno as a designer. Also talk to Enric to gather more info from him as an interaction design expert.

It's important to unify the process of inputting media into V2_Jam.

The question of 'how' to put several sorts of (G)UIs into one window/interface/Jamka(?) is still very important. Possible sorts of views:
- control view
- process view
- media view
From: Artem B. <ar...@v2...> - 2003-04-15 19:31:10

just testing my sendmail configuration...
From: Simon de B. <si...@z2...> - 2003-04-15 15:52:20

On Tue, Apr 15, 2003 at 04:57:22PM +0200, Artem Baguinski wrote:
> On Tue, 15 Apr 2003, Simon de Bakker wrote:
> > > although, centralized approach isn't good for internet based sporadic
> > > jams or travelling jam artists who want to start up a jam quickly
> > > without changing much in their usual setup. i'll think about this more.
> >
> > I was also thinking about a setup using multicast simca-name-server
> > discovery. For this a client or server broadcasts a request and a
> > name-server responds by sending its IOR and other relevant information.
> > The TTL of the multicast message defines its range and can cover as
> > little as the local domain or as much as the rest of the Internet. So
> > you have some control over where you expect or want to find
> > name-servers. I also think this is nice for name-server chaining: a
> > name-server sends requests and receives responses at periodic intervals
> > (like DNS does) to discover dead or new name-servers. Maybe clients and
> > servers also respond to a client broadcast so the client knows what is
> > out there.
> >
> > The drawback of multicast is that the router needs to support it,
> > although I believe most of them do these days... (?)
> >
> > What do you think?
>
> I like it ;)
>
> but what happens once we've discovered name-servers?

scenario:

- a name-server starts and begins looking for other name-servers. It creates and maintains a cached list of name-servers and of the servers registered with them.
- a server registers with a name-server; the name-server broadcasts the change.
- a client starts and sends a request for available name-servers and servers.
- the client retrieves the cached list of the server that replies first and uses its IOR, or requests a server with a specific name; e.g. it presents the list to the user in a GUI like:

rmr-name-server
|--> rmr-server
|----|--> rmr-page
|----|--> rmr-object
|----|--> ffmpeg-object
in-transit-name-server
|--> video-server
|----|--> ffmpeg-object
|----|--> ...
|--> 3d-server
|----|--> opengl-object
|----|--> ffmpeg-receive-object

Something like that?

I think this should be customizable in some sense (maybe just on/off), as it is not always welcome to know all servers. If you're just running a standalone installation you may only want to know local name-servers and spare the network overhead.

--
Simon de Bakker
\/01|)7 workgroup: http://www.void7.org
personal homepage: http://www.josos.org
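[Editor's note: the broadcast/response handshake described above can be sketched with plain UDP multicast sockets. Everything here (group address, port, message format, function names) is an assumption for illustration, not the actual simca protocol:]

```python
import socket
import struct

# Assumed discovery parameters -- placeholders, not real simca values.
SIMCA_GROUP = "239.255.42.42"   # administratively scoped multicast group
SIMCA_PORT = 9999

def make_request():
    """Datagram a client or server broadcasts to ask who is out there."""
    return b"SIMCA-DISCOVER?"

def make_response(ior):
    """Datagram a name-server answers with: its IOR (more fields could follow)."""
    return b"SIMCA-NS " + ior.encode("ascii")

def parse_response(datagram):
    """Return the advertised IOR, or None if the datagram is not ours."""
    if not datagram.startswith(b"SIMCA-NS "):
        return None
    return datagram[len(b"SIMCA-NS "):].decode("ascii")

def discovery_socket(ttl):
    """UDP socket whose multicast TTL bounds how far the request travels:
    1 = local subnet only, larger values may cross multicast-enabled routers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock

# A client would then do something like:
#   sock = discovery_socket(ttl=1)
#   sock.sendto(make_request(), (SIMCA_GROUP, SIMCA_PORT))
#   data, addr = sock.recvfrom(4096)   # first name-server to answer wins
#   ior = parse_response(data)
```

[Periodic re-discovery, as in the DNS comparison above, would just rerun the sendto/recvfrom loop on a timer and drop cache entries that stop answering.]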
From: Artem B. <ar...@v2...> - 2003-04-15 14:53:04

On Tue, 15 Apr 2003, Simon de Bakker wrote:
> > although, centralized approach isn't good for internet based sporadic
> > jams or travelling jam artists who want to start up a jam quickly
> > without changing much in their usual setup. i'll think about this more.
>
> I was also thinking about a setup using multicast simca-name-server
> discovery. For this a client or server broadcasts a request and a
> name-server responds by sending its IOR and other relevant information.
> The TTL of the multicast message defines its range and can cover as
> little as the local domain or as much as the rest of the Internet. So
> you have some control over where you expect or want to find
> name-servers. I also think this is nice for name-server chaining: a
> name-server sends requests and receives responses at periodic intervals
> (like DNS does) to discover dead or new name-servers. Maybe clients and
> servers also respond to a client broadcast so the client knows what is
> out there.
>
> The drawback of multicast is that the router needs to support it,
> although I believe most of them do these days... (?)
>
> What do you think?

I like it ;)

but what happens once we've discovered name-servers?
From: Simon de B. <si...@z2...> - 2003-04-15 14:34:31

> although, centralized approach isn't good for internet based sporadic
> jams or travelling jam artists who want to start up a jam quickly
> without changing much in their usual setup. i'll think about this more.

I was also thinking about a setup using multicast simca-name-server discovery. For this a client or server broadcasts a request and a name-server responds by sending its IOR and other relevant information. The TTL of the multicast message defines its range and can cover as little as the local domain or as much as the rest of the Internet. So you have some control over where you expect or want to find name-servers. I also think this is nice for name-server chaining: a name-server sends requests and receives responses at periodic intervals (like DNS does) to discover dead or new name-servers. Maybe clients and servers also respond to a client broadcast so the client knows what is out there.

The drawback of multicast is that the router needs to support it, although I believe most of them do these days... (?)

What do you think?

> (/****)
>
> this allows the following scenario of using v2_jam for creating portable
> distributed installations:
>
> * the description of the installation consists of two parts:
>
>   * jam pipeline. the pipeline consists of virtual named servers, which
>     in turn contain pages, and those are made up of connected objects.
>
>   * jam map. the jam map contains information about where the virtual
>     servers run (i.e. it maps server names to network addresses).
>
> * whenever the author wants to recreate his work in a different network
>   environment he only needs to change the map, and not touch the pipeline.
>
> pipeline/map can be represented either by passive documents (e.g. XML
> based) or by active scripts, or by a combination of the two.

--
Simon de Bakker
\/01|)7 workgroup: http://www.void7.org
personal homepage: http://www.josos.org
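[Editor's note: the pipeline/map split quoted above is easy to prototype as data: the pipeline only ever mentions virtual server names, and a separate map binds those names to network addresses. A minimal sketch with invented names, not an actual v2_jam format:]

```python
# Pipeline: virtual named servers -> pages -> connected objects.
# Only symbolic names appear here, never network addresses.
PIPELINE = {
    "audio": {"in": ["mic", "filter"], "out": ["mixer"]},
    "video": {"main": ["ffmpeg", "opengl"]},
}

# Map: binds the virtual server names to concrete hosts.  Moving the
# installation to another network means editing only this part.
MAP_HOME = {"audio": "localhost:9999", "video": "localhost:9998"}
MAP_VENUE = {"audio": "simca-hq:9999", "video": "render-box:9999"}

def resolve(pipeline, jam_map):
    """Pair every virtual server in the pipeline with its address,
    complaining if the map leaves a server unbound."""
    missing = set(pipeline) - set(jam_map)
    if missing:
        raise KeyError(f"unmapped servers: {sorted(missing)}")
    return {name: (jam_map[name], pages) for name, pages in pipeline.items()}

# Same pipeline, two deployments:
home = resolve(PIPELINE, MAP_HOME)
venue = resolve(PIPELINE, MAP_VENUE)
```

[Whether the real representation ends up as XML documents or active scripts, the point is the same: only the map changes between deployments.]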
From: Artem B. <ar...@v2...> - 2003-04-15 14:09:04

resending due to ml problems

---------- Forwarded message ----------
Date: Tue, 08 Apr 2003 17:27:27 +0200
From: Artem Baguinski <ar...@v2...>
To: sim...@li...
Subject: (**) simca and corba

hi simca-dev

here are my thoughts about corba, name-servers etc.

first of all, i'm introducing tech-level sterretjes. they are like the stars in a tv guide, but represent the tech-details level of the mail (or of a part of it): (*) isn't technical at all, (**) is technical but can be read and understood by everybody on this list, (***) is more detailed, read if you want to know more about the v2_jam software, (****) only read if you're artm or simon, (*****) low-level tech documents with program listings and comments in russian :)

the symbols should appear in the subject line, although (*) and (**) can be omitted (because everybody should generally read those mails). if the subject line contains a tech-details code but the letter body contains another code, it means there is an end of the too-tech section below, and you can skip to it if you're not interested in the details. this trick should be used by programmers to attract the attention of the others to some conclusions, like in this mail. the end of a section is (/*), (/**) etc. there shouldn't be more than one detailed section per mail though, otherwise the whole thing becomes too serious :)

(****) if you're not simon, skip through to the (/****)

the evil of the current approach (the name-server is a thread of one of the servers) is that if that server dies (e.g. a client did something very wrong and the server couldn't recover) then the name-server dies as well, and all other servers become inaccessible. why not implement the following strategy:

- the simca-name-server isn't a simca-server; it only implements the corba naming service and the IOR server.
- a simca-server implements the actual simca stuff and registers itself with a name-server. which name-server can be specified on the command line:

  * default: localhost:<simca-port> (we should decide which port)
  * --ns-ior <ior> - use name server by ior, exit if no such server
  * --ns-ior-file <file>|- - read name server IOR from file or stdin
  * --ns-host <host>[:<port>] - use name server by host and possibly port (the default port is the simca-port of course)

this approach allows us, for example, to easily create a setup with a centralized name-server (all simca servers start with --ns-host simca-hq:9999).

now, about server names. several servers can run on the same host (so that when one falls over the others remain running). this means that every server should be registered with a distinguishable name. the server name should be a command line parameter, and there shouldn't be a default - one must be provided.

the corba naming service allows a tree (or even a graph, they say) of names to be created, similar to dns or a filesystem. components of this tree contain not only names but also "kinds". a kind describes what kind of name component we have here; if i'm not mistaken we can come up with our own kinds. the convention (from name-client):

- components of a name are delimited by '/'
- the id and kind of those components are delimited by '.'

examples of a name:

/presentation.folder/servers.folder/audio.server
/presentation.folder/servers.folder/video.server
/presentation.folder/servers.folder/audio.server/in.page
/presentation.folder/servers.folder/audio.server/out.page

i don't know actually if the same name can designate both an object and a context like i've shown.

although, the centralized approach isn't good for internet based sporadic jams or travelling jam artists who want to start up a jam quickly without changing much in their usual setup. i'll think about this more.

(/****)

this allows the following scenario of using v2_jam for creating portable distributed installations:

* the description of the installation consists of two parts:

  * jam pipeline. the pipeline consists of virtual named servers, which in turn contain pages, and those are made up of connected objects.

  * jam map. the jam map contains information about where the virtual servers run (i.e. it maps server names to network addresses).

* whenever the author wants to recreate his work in a different network environment he only needs to change the map, and not touch the pipeline.

pipeline/map can be represented either by passive documents (e.g. XML based) or by active scripts, or by a combination of the two.
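[Editor's note: the '/'- and '.'-delimited convention above corresponds to the (id, kind) pairs a CORBA CosNaming name is built from. A small sketch of that parsing, without any ORB involved; the helper names are made up for illustration:]

```python
def parse_simca_name(path):
    """Split '/presentation.folder/audio.server' into (id, kind) pairs,
    the shape a CosNaming name is built from.

    Components are '/'-delimited; within a component, id and kind are
    '.'-delimited.  A component without a '.' gets an empty kind.
    """
    components = []
    for part in path.strip("/").split("/"):
        ident, _, kind = part.partition(".")
        components.append((ident, kind))
    return components


def format_simca_name(components):
    """Inverse of parse_simca_name."""
    return "/" + "/".join(
        f"{ident}.{kind}" if kind else ident for ident, kind in components
    )


name = parse_simca_name("/presentation.folder/servers.folder/audio.server/in.page")
# [('presentation', 'folder'), ('servers', 'folder'),
#  ('audio', 'server'), ('in', 'page')]
```

[Such a list would then be handed to the naming service to resolve or bind; whether one name can designate both a context and an object, as the email wonders, is left to the ORB.]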
From: Madris D. <ma...@v2...> - 2003-04-14 11:59:48

V2_Jam meeting, Monday, April 14 / Artem, Simon, Madris

Methodology
We need to do some more research on the methodology. Madris suggested earlier to use the GRAPPLE method, which consists of five segments and is intended for object-oriented systems. The segments are: Requirements gathering, Analysis, Design, Development, Deployment. We also need to do some more research on the modeling language(s). The suggested technique is UML, but according to Artem this is not meant for modeling 'tools' like the V2_Jam tool.

Jamka (client)
Madris is working on requirements gathering, functionalities, use cases and flow charts. By the end of the week we will start discussing the user interface with Lenno.

Simca
Artem is going to write a paper with a description of the V2_Jam software.

Deliverables
- V2_Jam presentation on June 16: slides get triggered by key words (speech recognition)
- Herman Maat (???): video mixing and streaming in a public environment
- Marnix de Nijs, September/August?: combined 3D and video environment steered by people running on a fitness-style running track (what is it called :-)
From: Madris D. <ma...@v2...> - 2003-04-07 10:10:33

the text in the last email looks a little bit f?<!*d up, so here it is one more time.. my apologies for this..

Research target groups V2_Jam software

Intro
This document gives an overview of the results that were filtered from the research on the possible user target groups of the V2_Jam software. The participants of the Mediaknitting workshop that took place during DEAF03, from Wednesday February 26 until Friday February 28, 2003, at the V2_ground floor were taken as the representative group for this research.

Collaboration between developers and artists from different disciplines often results in merged media or new media formats. Media Knitting is a three-day hands-on workshop for artists, engineers, and designers working with software to knit various media formats and applications together for live or real-time interactive performances. The scope of the media used for collaboration in Media Knitting will include video, streaming media, audio and 3D modelling. In this workshop thirty participants will work together to discover and patch each other's domains together by means of software and human interaction. Several experts will be brought in from the commercial software field for Mac and Windows as well as from the field of 'open source' and 'free software'. Among the software facilitated for this workshop are Max-MSP, Jitter, V2_Jam, PD, Blender, Cyclops, BigEye, gstreamer, Nato, MoB, FreeJ and Touch 101. Participants are encouraged to bring their own laptop and software as well. The participants and the workshop leaders will work together on the realization of performances or media jam sessions. The end result of the workshop will be presented in an informal media concert open to the DEAF audience.

These are the questions that we are trying to answer:

1. What is/are the (possible) target group(s) of users of the V2_Jam software?
2. Which operating systems do they use?
3. What software and/or hardware are they familiar with and would like to use in combination with the V2_Jam software?
4. What are their qualifications? Do they already have some experience with similar software like Pd or Max, and how does this affect our approach towards the GUI?
5. What/how would they like to use the V2_Jam software for?
6. Mobility of users; working online/offline, moving from one place to another; how does this affect the updating issues?

1. We consider the participants as the target group, because they are all in one way or another active in the field of 'media knitting or media jamming'. They describe themselves as follows:
- instigator, creative director, media designer and system integrator of hybrid, new media projects
- researcher, freelance programmer and web designer
- photographer and performer
- director working in multi-disciplinary layered live performance
- digital architect, researching and developing evolving models and interactive environments
- freelance composer
- independent media artist, curator and writer
- artist and project manager
- freelance consultant and programmer, software development manager
- student
- web designer, visual artist
- interaction designer
- actor, performer and writer

2. Operating systems used by those participants:
- Mac OS 9/X, familiarity with Windows
- Windows / Mac OS / some Linux
- Mac OS X / Linux Debian
- OS X on Mac G4 laptop
- Windows and MacOS
- PC, Linux (basic), Mac
- Mac
- dual boot Mac OS X.2 and Mac OS 9
- Mac OS 9 and X
- Windows XP
- Linux, Mac, Windows
- MacOS classic, OS X, Win98, 2000, XP
- MacOS 9.1
- Mac
- MacOS (X), Windows
- PC, Mac
- Windows XP
- Windows/MacOS
- Windows and Mac OS
- Mac OS 9/10
- Windows, Unix (a little bit)
- Mac OS X + classic
- Mac and PC
- OS 8.6
- Windows 98, little bit of Linux

3. Software/hardware:
- G4 Powerbook, Max, Jitter
- Pure Data / Flash / some programming
- MaxMSP & Jitter, Pure Data & GEM, Blender, Director 8.5, Form Z Radiosity, Premiere, Metasynth, Cubase, Illustrator, Photoshop, Macintosh Powerbook G4 Titanium
- basic knowledge of Isadora and Keystroke
- 3D Studio Max, Director, Premiere, VRML, basic Virtools, camera-based motion capture and pattern recognition systems
- LiSa, csoundAV, commercial sound applications (Sound Forge, Cool Edit, Logic, etc), jMax
- G4 laptop, familiar with Sorenson and Quicktime Broadcaster / Final Cut Pro, some experience of Big Eye / Max MSP and Nato
- Max/Nato/Jitter and an iCube box plus basic sensors
- 3D Max, Adobe Premiere, Adobe Photoshop, Adobe Illustrator, MicroStation (2D, 3D), AutoCad (2D, 3D), Python, BasicX, Microcontroller BasicX-24 and everything that fits in the circuit (sensors included!!!)
- Software: Final Cut, Adobe Premiere, Photoshop, Illustrator, Flash, PureData+GEM, Max-MSP+Jitter, Buzz, Wavelab, Soundforge, Nuendo, etc; Hardware: WJ-MX20 video mixer, VHS/DVD players, miniDV camera, I-CubeX sensor interface / sensors, midi controllers
- iBook, video camera, photo camera, microphone, toy sampler
- Design skills: Adobe Photoshop, Adobe Première, Adobe Illustrator, Macromedia Fireworks, Macromedia Dreamweaver, FinalCutPro, KeyStroke, Macromedia Director, several sound-edit programs; Programming skills: Lingo (little), HTML, Java, Javascript, SQL
- Flash ActionScript, Java, ASP, XML, (D)HTML, CSS, and Javascript; some experience with C, C++, VB, and PHP
- Photoshop, Dreamweaver, Premiere, Wavelab, Flash, online databases, HTML, Javascript, XML/XSLT, Perl, PHP, Delphi, notebooks, digital photo and video camera
- Director, Flash, 3D Studio Max, VRML, Premiere, Visual Basic; Velleman interface card (K8000) and a serial input device (8 box)
- Jitter / PowerBook G4 / mini video camera
- Photoshop, Flash MX + ActionScript, PHP, MySQL, HTML (Homesite), Sonic Foundry Vegas Video, webcam, JVC miniDV digital video camera, Sony Vaio notebook
- Max/MSP/Jitter // nato.0+55
- PD, Blender

4. From an analysis point of view, I've found that about half of the Mediaknitting workshop participants are very much 'visually oriented', using software with conventional GUI elements. During the three-day workshop I had discussions with a lot of people coming from different disciplines (media artists, composers, writers, photographers, programmers, designers, directors, students, architects...) and half of them have worked before with software like Pd or Max/MSP/Jitter. The other half plans to start working with it in the 'near future'.

How does all of this influence our ideas about the GUI? What are the system requirements from a user point of view? A common requirement is "this system shall be user friendly". But how many of us have met a friendly computer?

5. Some ideas for the use of the V2_Jam software:
------------------------
- bringing audience participation into the heart of media works, fostering a 'climate' of participative interaction & collaboration
- supporting new and emerging experiences for audiences through the use of non-traditional interfaces built from custom sensors and code
- natural phenomena in combination with audience interaction to affect and evolve artworks in real time
- performance/dance/sound and film practices framed within the context of site-specific, interactive installation, peer-to-peer networks and web-based practices
------------------------
- experimenting with how sound (taken from the public space) can (directly/technically) influence the dynamic picture
------------------------
- developing the visual presentation of music through new technologies
------------------------
- exploring an application of interactive electronic strategies to generate a real-time Performance Space, in physical and/or digital environments
- an interactive environment performed on a device which in real-time evaluates a range of a human performer's body signals in the space (movement capture, sensor systems, ...) to deliver, along this space, accompanying mutations of the initial conditions (3d models, videos, sound, ...): the responsive changes of the 'Performance Space'
------------------------
- composition and live realization of work using digital means, and actions and installations using sound as a primary element
------------------------
- video conferencing, streaming media, web and digital video technologies
------------------------
- live performance / vjing
------------------------
- interactive performance - interacting with the audience
------------------------
- real-time vj scores
------------------------
- re-imagining elements originating in traditional media within the context of new media
------------------------
- the integration of virtual space and real space
------------------------
- the present moment, exploring the concept of real-time
------------------------
- analyzing the footage and using it as parameters for audio and video synthesis
------------------------

6. Many of these participants travel a lot. This means that they have to take their computer with them. But what if this machine has to be used as a Simca configuration for some online performance where the participation of several clients from elsewhere is a crucial part of the performance? How does this influence the updating issue? And how can this crucial problem be solved?
From: Madris D. <ma...@v2...> - 2003-04-07 10:04:13
|
Hello,

Here are the results of my Media Knitting research... I finished this about a week ago... (though I only just crossed the last t's)

Madris

P.S. Hey Artm, get well soon and all..

======================================================================

Research target groups V2_Jam software

Intro

This document gives an overview of the results that were filtered from the research on the possible user target groups of the V2_Jam software. The participants of the Mediaknitting workshop, which took place during DEAF03 from Wednesday February 26 until Friday February 28, 2003 at the V2_ground floor, were taken as the representative group for this research.

Collaboration between developers and artists from different disciplines often results in merged media or new media formats. Media Knitting is a three-day hands-on workshop for artists, engineers, and designers working with software to knit various media formats and applications together for live or real-time interactive performances. The scope of the media used for collaboration in Media Knitting will include video, streaming media, audio and 3D modelling. In this workshop thirty participants will work together to discover and patch each other's domains together by means of software and human interaction. Several experts will be brought in from the commercial software field for Mac and Windows as well as from the field of 'open source' and 'free software'. Among the software facilitated for this workshop are Max-MSP, Jitter, V2_Jam, PD, Blender, Cyclops, BigEye, gstreamer, Nato, MoB, FreeJ and Touch 101. Participants are encouraged to bring their own laptop and software as well. The participants and the workshop leaders will work together on the realization of performances or media jam sessions. 
The end result of the workshop will be presented in an informal media concert open to the DEAF audience.

These are the questions that we are trying to answer:
1. What is/are the (possible) target group(s) of users of the V2_Jam software?
2. Which operating systems do they use?
3. What software and/or hardware are they familiar with and would like to use in combination with the V2_Jam software?
4. What are their qualifications? Do they already have some experience with similar software like Pd or Max, and how does this affect our approach towards the GUI?
5. What/how would they like to use the V2_Jam software for?
6. Mobility of users; working online/offline, moving from one place to another; how does this affect the updating issues?

--------------------------------------------------------------------------------

1. We consider the participants as the target group, because they are all in one way or another active in the field of 'media knitting or media jamming'. They describe themselves as follows:
- instigator, creative director, media designer and system integrator of hybrid, new media projects
- researcher, freelance programmer and web designer
- photographer and performer
- director working in multi-disciplinary layered live performance
- digital architect, researching and developing evolving models and interactive environments
- freelance composer
- independent media artist, curator and writer
- artist and project manager
- freelance consultant and programmer, software development manager
- student
- web designer, visual artist
- interaction designer
- actor, performer and writer

2. Operating systems used by those participants are:
- Mac OS 9/X, familiarity with Windows
- Windows / Mac OS / some Linux
- Mac OS X / Linux Debian
- OS X on Mac G4 laptop
- Windows and MacOS
- PC, Linux (basic), Mac
- Mac
- dual boot Mac OS X.2 and Mac OS 9
- Mac OS 9 and X
- Windows XP
- Linux, Mac, Windows
- MacOS classic, OS X, Win98, 2000, XP
- MacOS 9.1
- Mac
- MacOS (X), Windows
- PC, Mac
- Windows XP
- Windows/MacOS
- Windows and Mac OS
- Mac OS 9/10
- Windows, Unix (a little bit)
- Mac OS X + classic
- Mac and PC
- OS 8.6
- Windows 98, little bit of Linux

3. Software/hardware:
- G4 Powerbook, Max, Jitter
- Pure Data / Flash / some programming
- MaxMSP & Jitter, Pure Data & GEM, Blender, Director 8.5, Form Z Radiosity, Premiere, Metasynth, Cubase, Illustrator, Photoshop, Macintosh Powerbook G4 Titanium
- basic knowledge of Isadora and Keystroke
- 3D Studio Max, Director, Premiere, VRML, basic Virtools, camera-based motion capture and pattern recognition systems
- LiSa, csoundAV, commercial sound applications (Sound Forge, Cool Edit, Logic, etc.), jMax
- G4 laptop, familiar with Sorenson and QuickTime Broadcaster / Final Cut Pro, some experience of BigEye / Max MSP and Nato
- Max / Nato / Jitter and an iCube box plus basic sensors
- 3D Max, Adobe Premiere, Adobe Photoshop, Adobe Illustrator, MicroStation (2D, 3D), AutoCad (2D, 3D), Python, BasicX, microcontroller BasicX-24 and everything that fits in the circuit (sensors included!!!)
- Software: Final Cut, Adobe Premiere, Photoshop, Illustrator, Flash, PureData+GEM, Max-MSP+Jitter, Buzz, Wavelab, Soundforge, Nuendo, etc.; Hardware: WJ-MX20 videomixer, VHS/DVD players, miniDV camera, I-CubeX sensor interface / sensors, MIDI controllers
- iBook, video camera, photo camera, microphone, toy sampler
- Design skills: Adobe Photoshop, Adobe Première, Adobe Illustrator, Macromedia Fireworks, Macromedia Dreamweaver, FinalCutPro, KeyStroke, Macromedia Director, several sound edit programs; Programming skills: Lingo (little), HTML, Java, Javascript, SQL
- Flash ActionScript, Java, ASP, XML, (D)HTML, CSS, and Javascript. 
Some experience with C, C++, VB, and PHP
- Photoshop, Dreamweaver, Premiere, Wavelab, Flash, online databases, HTML, Javascript, XML/XSLT, Perl, PHP, Delphi, notebooks, digital photo and video camera
- Director, Flash, 3D Studio Max, VRML, Premiere, Visual Basic; Velleman interface card (K8000) and a serial input device (8 box)
- Jitter / PowerBook G4 / mini video camera
- Photoshop, Flash MX + ActionScript, PHP, MySQL, HTML (Homesite), Sonic Foundry Vegas Video, webcam, JVC miniDV digital video camera, Sony Vaio notebook
- Max/MSP/Jitter // nato.0+55
- PD, Blender

4. From an analysis point of view, I've found that about half of the Mediaknitting workshop participants are very much 'visually oriented', using software with conventional GUI elements. During the three-day workshop I have had discussions with a lot of people coming from different disciplines (media artists, composers, writers, photographers, programmers, designers, directors, students, architects...), and one half of them have worked before with software like Pd or Max/MSP/Jitter. The other half has plans to start working with it in the 'near future'.

How does all of this influence our ideas about the GUI? What are the system requirements from a user point of view? A common requirement is "this system shall be user friendly". But how many of us have met a friendly computer?

5. 
Some ideas for the use of V2_Jam software:
------------------------
- bringing audience participation into the heart of media works, fostering a 'climate' of participative interaction & collaboration
- supporting new and emerging experiences for audiences through the use of non-traditional interfaces built from custom sensors and code
- natural phenomena in combination with audience interaction to affect and evolve artworks in real time
- performance/dance/sound and film practices framed within the context of site-specific, interactive installation, peer-to-peer networks and web-based practices
------------------------
- experimenting how sound (taken from the public space) can (directly/technically) influence the dynamic picture
------------------------
- developing the visual presentation of music through new technologies
------------------------
- exploring an application of interactive electronic strategies to generate a real-time Performance Space, in physical or/and digital environments
- an interactive environment performed on a device which in real time evaluates a range of a human performer's body signals along the space (movement capture, sensor systems, ...?) to deliver along this space accompanying mutations of initial conditions... (3d models, videos, sound, ...?), the responsive changes of the 'Performance Space'
------------------------
- composition and live realization of work using digital means, and on actions and installations using sound as a primary element.
------------------------
- video conferencing, streaming media, web and digital video technologies
------------------------
- live performance / vjing
------------------------
- interactive performance - interacting with the audience
------------------------
- real-time vj scores
------------------------
- re-imagining elements originating in traditional media within the context of new media
------------------------
- the integration of virtual space and real space
------------------------
- the present moment, exploring the concept of real-time
------------------------
- analyzing the footage and using it as parameters for audio and video synthesis
------------------------

6. Many of these participants travel a lot. This means that they have to take their computer with them. But what if this machine has to be used as a Simca configuration for some online performance where the participation of several clients from elsewhere is a crucial part of the performance? How does this influence the updating issue? And how can this crucial problem be solved?

====================================================================== |
From: Madris <ma...@v2...> - 2003-04-03 13:03:05
|
I had a short discussion with Simon yesterday where we talked about adding a method to our development process. Perhaps it would be good to use UML. Analysis on the back of a napkin and writing the program from the ground up of course adds an aura of romance and daring to the process. But I think that a well-thought-out plan is crucial. If not for this project, then perhaps it would be good to think about it for future ones.

I think the Guidelines for Rapid APPLication Engineering (GRAPPLE) method would be just fine. It consists of five segments and is intended for object-oriented systems. The segments are:
1. Requirements gathering: processes, domain analysis, identifying cooperating systems, system requirements
2. Analysis: understanding system usage, use cases, class diagrams, analyzing changes of state in objects, defining interactions among objects, integration with cooperating systems
3. Design: develop and refine object diagrams, component diagrams, prototype user interface, design tests, begin documentation
4. Development: constructing code, testing code, constructing user interface(s), connecting them to the code and testing, completing documentation
5. Deployment: plans for backup and recovery, installing hardware, testing the installed system, and if it works: a beer(?)

Why do all this? Bottom line: it's easier to make a change to the blueprint and then make the change to the house than to change the house while you are building the physical structure.

It would also be good if we define a couple of milestones and try to work towards them. I read the latest piece Artem wrote online about the V2_Jam Presentation. I had a similar idea to do something like this (some kind of demo performance) during my final presentation in the week of June 16 at the Media Technology school, where I have to present and defend my work at V2_Lab. I would like to use Pd to control Simca. 
Maybe we can use Artem's ideas about the V2_Jam Presentation and try to realize this before the 16th of June; otherwise I will have to use some old Simca configuration to show something..

Anyway, perhaps we should discuss these things next Monday.

Cya,
Madris |
From: Madris D. <ma...@v2...> - 2003-03-24 14:26:51
|
Here it is.. M. |
From: Simon de B. <si...@z2...> - 2003-03-21 14:01:19
|
> mailing is an alright option, but i'm afraid every time you restart a
> name server the ior changes. i thought the following could be a decent
> system: a script that runs the local name server puts its ior in a well
> known place (say /var/run/name-server.ior). then every instance of simca
> has it. then there should be a module in the first simca that you run
> which will listen for connections on a certain port and answer simple
> requests there. the ior of the name server will be one of the requests.
>
> this way you just type "simca://nerve.v2.nl:5000/server1/foo/bar" in
> your client and it will send a request for the name server's ior to
> something listening on port 5000 on nerve and then use that name server
> to resolve the server1/foo/bar part.

Another scenario: a failing request triggers the name server resolve process (registering / requesting) for both server and client. That way a valid IOR is present even after e.g. an (unlikely) name server crash. The only tricky part is that the simca-server has to be registered before a client can make requests, so requests can still fail even when a valid IOR is used. This also means the name server IOR is only requested the first time, or when it has actually changed.

-- 
Simon de Bakker
\/01|)7 workgroup: http://www.void7.org
personal homepage: http://www.josos.org |
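The bootstrap scheme discussed above — a well-known IOR file plus a small listener that hands the name server's IOR to clients — could be sketched roughly as follows. This is a Python sketch for illustration only: the one-line "IOR?" request protocol and the function names are invented; only the URL shape, the port and the /var/run/name-server.ior idea come from the mails.

```python
import socket
from urllib.parse import urlparse

DEFAULT_PORT = 5000  # the bootstrap port used in the example URL

def parse_simca_url(url):
    """Split simca://host:port/path into (host, port, object path)."""
    u = urlparse(url)
    if u.scheme != "simca":
        raise ValueError("not a simca URL: %r" % url)
    return u.hostname, u.port or DEFAULT_PORT, u.path.lstrip("/")

def fetch_name_server_ior(host, port, timeout=5.0):
    """Ask the bootstrap listener on the first simca instance for the
    name server's IOR.  The single-line 'IOR?' request is made up; the
    mail only says the module answers 'simple requests'."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"IOR?\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode().strip()
```

A client would then parse the URL once, fetch the IOR from the bootstrap port, and hand the server1/foo/bar part to the CORBA name service for resolution.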
From: Artyom B. <ar...@z2...> - 2003-03-21 13:34:04
|
On Wed, Mar 19, 2003 at 03:51:55PM +0100, Simon de Bakker wrote:
> Another thing to think about is how clients will receive the main
> name-server IOR.
>
> The simplest way is for the person setting up the name-server to mail it
> to the people that need it. But this makes 'spontaneous' jamming very
> difficult. So a central name-server might be an option. It's not very
> urgent now, but good to keep in mind.

well, i was thinking about that and thought that many different solutions are possible and should be implemented. that's why i wanted libv2jam to be separated and to contain a nice API hiding all the disgusting CORBA guts.

mailing is an alright option, but i'm afraid every time you restart a name server the ior changes. i thought the following could be a decent system: a script that runs the local name server puts its ior in a well known place (say /var/run/name-server.ior). then every instance of simca has it. then there should be a module in the first simca that you run which will listen for connections on a certain port and answer simple requests there. the ior of the name server will be one of the requests.

this way you just type "simca://nerve.v2.nl:5000/server1/foo/bar" in your client and it will send a request for the name server's ior to something listening on port 5000 on nerve and then use that name server to resolve the server1/foo/bar part.

BTW, i'm glad you've started to code with me again, even though we have a bit of a clash at the moment :) i'll resolve it in a couple of minutes.

-- 
Artem Baguinski: <ar...@v2...> <http://www.artm.org/>
V2_lab: <http://lab.v2.nl/> <http://www.v2.nl/> |
From: Artyom B. <ar...@z2...> - 2003-03-20 10:31:43
|
On Thu, Mar 20, 2003 at 11:01:03AM +0100, Simon de Bakker wrote:
> > but compound objects (or components, or maybe we'll call them
> > servlets? because they run on the server. then their GUIs will be
> > applets) aren't the only way to hide details. "intelligent pipeline
> > construction" is another. this technique can take care of inserting
> > ffmpeg-out / ffmpeg-in objects on both ends of a distributed
> > connector if the types of the properties being connected are
> > supported (or suggest) using ffmpeg-stuff.
>
> So you get intelligent/flexible compound objects (servlets) that are
> able to change in/out types according to the connection. That's, I
> think, a powerful approach for hiding details.

in the mean time i thought the following: CORBA objects should be the "servlets"; they are higher level than the underlying simca_object_t based ones. they are, i guess, what i used to call components, except for the GUI part (the GUI is postponed to better times).

i think on the CORBA level we will need another interface - Link. it will represent either a simple out->in connection between objects or the intelligent connection to a remote object. in the second case the Link object will hide the actual network streaming mechanism, and in any case having such an object will allow a uniform representation of any connection - local or remote.

i'm still doubting whether simca_object_t and friends should know about CORBA or not. i'd prefer them not to bother, but i don't know yet if that's possible.

BTW, i've committed the first .idl files to cvs. they may be oversimplified at the moment, but at least we can start experimenting with this approach.

-- 
Artem Baguinski: <ar...@v2...> <http://www.artm.org/>
V2_lab: <http://lab.v2.nl/> <http://www.v2.nl/> |
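The Link idea above — one uniform interface that hides whether a connection is local or goes over the network — could look something like this as a Python sketch. Class and method names are made up for illustration; the real interface would be specified in the .idl files mentioned in the mail.

```python
class Link:
    """Uniform face of a connection between an out property and an in
    property; subclasses hide whether it is local or remote."""
    def send(self, value):
        raise NotImplementedError

class LocalLink(Link):
    """Simple out -> in connection inside one simca instance."""
    def __init__(self, sink):
        self.sink = sink                  # callable consuming values

    def send(self, value):
        self.sink(value)

class RemoteLink(Link):
    """Connection to an object in another instance; the transport
    (tcp, udp, an ffmpeg stream, ...) is hidden behind send()."""
    def __init__(self, transport):
        self.transport = transport        # anything with write(bytes)

    def send(self, value):
        self.transport.write(repr(value).encode())
```

The point of the design is exactly what the mail says: callers always talk to a Link, so swapping the streaming mechanism never changes the patch-building code.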
From: Simon de B. <si...@z2...> - 2003-03-20 10:05:56
|
> > Isn't that what (ffmpeg) stream in/out objects should take care of?
> > Or will the connector be able to 'recognize' some standard streams?
>
> you might have noticed that i'm trying to avoid creating the objects
> manually. when the user says "i want a videoplayer" he gets a complete
> patch consisting of several "little" objects - media in/out, timer +
> scheduler. it doesn't mean, as anne was afraid, that we (artm and
> simon) will have to invent the most generic and suitable-for-anybody
> compound objects - anybody will be able to do that, anybody who feels
> he has enough expertise and courage :)

yes, that's clear.

> but compound objects (or components, or maybe we'll call them servlets?
> because they run on the server. then their GUIs will be applets) aren't
> the only way to hide details. "intelligent pipeline construction" is
> another. this technique can take care of inserting ffmpeg-out /
> ffmpeg-in objects on both ends of a distributed connector if the types
> of the properties being connected are supported (or suggest) using
> ffmpeg-stuff.

So you get intelligent/flexible compound objects (servlets) that are able to change in/out types according to the connection. That's, I think, a powerful approach for hiding details..

-- 
Simon de Bakker
\/01|)7 workgroup: http://www.void7.org
personal homepage: http://www.josos.org |
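The "intelligent pipeline construction" quoted above — splicing an encoder/decoder pair into a link when it crosses hosts — can be sketched as below. Only the ffmpeg-out / ffmpeg-in object names come from the mail; the dict-based property representation, the streamable-type check and the hop-list return value are invented for illustration.

```python
# Properties are plain dicts here; a link is a list of (source, sink) hops.
STREAMABLE = {"video", "audio"}

def build_distributed_link(out_prop, in_prop, same_host):
    """Connect two properties; across hosts, splice in an encoder and
    a decoder so the media travels as a compressed stream."""
    if same_host:
        return [(out_prop, in_prop)]                 # plain local hop
    if out_prop["type"] in STREAMABLE and out_prop["type"] == in_prop["type"]:
        enc = {"name": "ffmpeg-out", "type": out_prop["type"]}
        dec = {"name": "ffmpeg-in", "type": in_prop["type"]}
        return [(out_prop, enc), (enc, dec), (dec, in_prop)]
    raise ValueError("no automatic transport for type %r" % out_prop["type"])
```

The user never asks for the encoder pair; the pipeline builder notices the connection is remote and inserts it, which is exactly the detail-hiding the mails argue for.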
From: Artyom B. <ar...@z2...> - 2003-03-20 08:47:10
|
hi

the following is the result of my evaluation of the possibilities of using CORBA to implement v2_jam's distributed processing and control functionality.

CORBA is a bunch of standards on distributed object systems. it describes how to create objects that can be accessible to other programs on the same or remote hosts. i find it handier than the earlier proposed XML-over-HTTP way to communicate with client(s), plus it allows uniformity of communication (both server-server and client-server will use the same mechanism).

since CORBA is all that general, its implementations aren't optimized for media streams. hence the actual media/control streams, which need realtime performance, shouldn't go via CORBA but rather via dedicated channels (such as conventional video/audio streams, OSC for control signals, etc).

1. v2jam APIs

- libv2jam: functionality common to client and server (e.g. finding the CORBA name service), used by libjamka/libsimca.

- libjamka: client API - finding the simca-server etc. this library allows creating clients.

- simca corba interfaces: i've come up with two interfaces so far - SimcaObject and SimcaContainer. simca-server itself implements the SimcaContainer interface. a container allows instantiating objects and moving them from one container to another. an object allows its properties to be get/set and commands to be executed. these interfaces are used by the client to control the server(s) and by the server(s) to interoperate (e.g. create the distributed links).

- libsimca: the objects API, like it is now. it isn't CORBA aware. probably in the future some of the functions will cease to exist, because they will be implemented by the simca-server. for now the corba interfaces can simply wrap the corresponding libsimca functions.

2. v2jam applications

- simca (or simca-server): implements the simca corba interfaces, wrapping them around the libsimca objects and the objects provided by modules.

- simca modules: provide libsimca based objects that perform the actual media processing. 
- jamka: the generic v2_jam GUI - connects to simca-servers and allows creating pipelines, starting/stopping them, controlling the objects' parameters etc.

- other clients: e.g. project-specific GUIs, non-GUI controllers: scripts, daemons etc. any application linked against libjamka can be a v2_jam client.

3. plan of implementation

- review the libsimca code and design the simca corba interfaces
- implement the corba interfaces / simca-server + finish the libsimca TODOs and FIXMEs
- start working on media objects
- design and implement:
  - libjamka (client API)
  - distributed connectivity mechanism
  - a simple client (e.g. command line / python based) for testing purposes
  - jamka (generic client)

some design things can happen simultaneously, e.g. the generic client design can start now, even though no underlying functionality is there yet. we'd rather design libjamka based on the jamka description... i'd like madris and lenno to think about that; next monday we can meet and discuss jamka.

cheers,
artm

-- 
Artem Baguinski: <ar...@v2...> <http://www.artm.org/>
V2_lab: <http://lab.v2.nl/> <http://www.v2.nl/> |
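As a plain-Python illustration of the two interfaces described under point 1: only the operations come from the mail (get/set properties, execute commands, instantiate objects, move them between containers); the method names and everything else are invented stand-ins for what the .idl files would define.

```python
class SimcaObject:
    """Stand-in for the proposed SimcaObject CORBA interface:
    properties can be get/set and named commands executed."""
    def __init__(self, kind):
        self.kind = kind
        self._props = {}
        self._commands = {}

    def get_property(self, name):
        return self._props[name]

    def set_property(self, name, value):
        self._props[name] = value

    def add_command(self, name, fn):
        self._commands[name] = fn

    def execute(self, command, *args):
        return self._commands[command](*args)


class SimcaContainer:
    """Stand-in for SimcaContainer: instantiate objects and move
    them from one container to another."""
    def __init__(self):
        self.objects = []

    def instantiate(self, kind):
        obj = SimcaObject(kind)
        self.objects.append(obj)
        return obj

    def move(self, obj, other):
        self.objects.remove(obj)
        other.objects.append(obj)
```

A client would only ever see these two interfaces; whether the container is the local simca-server or a remote one is hidden by CORBA.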
From: Simon de B. <si...@z2...> - 2003-03-19 16:07:45
|
> solution: add the list to your .muttrc and answer with 'L' instead of
> 'r' in mutt. i've just done that :)

It worked! :)

> - open a preview of a message while keeping the list of messages
> partially visible.

you can set e.g. pager_index_lines=10
when viewing a message you can move to the index tree. Not quite what you mean, but I think as close as you can get.

> it's so frustrating to man muttrc over a dialup line :)

can imagine...

> > > > During pipeline creation simca should find out whether the link
> > > > is local or remote, maybe even what type of link the objects
> > > > support (udp, tcp, ..) and ask for hints. Depending on that it
> > > > can decide what way is best for this link.
> > >
> > > yes, this sounds like an interesting strategy. i was thinking to
> > > make it more automagic - like every server has a number of
> > > connectors that listen for incoming data, and the server decides
> > > which connector to use for every incoming remote link, balancing
> > > the load of every connector.
> > >
> > > but hinting can help - actually at connection-establishing time the
> > > server knows nothing about the load - that's what one could hint.
> >
> > The hinting I introduced because different objects may favor
> > different types of connections. So a stream object needs a reliable
> > tcp connection and support for larger packet sizes than does e.g. a
> > realtime sensor input. For the latter it might not be a problem if it
> > loses some packets along the way, as long as they are delivered as
> > fast as possible. This is known by the object, which can hint the
> > connection mechanism. The object itself actually doesn't care how it
> > is done; it might use tcp because the connection mechanism finds the
> > other end needs it, but if needed it can demand. This means objects
> > demanding different connection types cannot connect (unless used with
> > a converter object?). 
> we should have "properties properties" (because what kind of connector
> to use depends on the connected properties, not on the object).

true,

> i was thinking, connectors can be even more complicated when they are
> across the network: imagine we have an ffmpeg streaming server
> available and find it handier to stream media using it instead of
> sending via tcp "manually". all the other side has to do is capture the
> stream. the same mechanism will allow feeding in external (non-simca)
> media streams, as long as the ffmpeg client can deal with them.

Isn't that what (ffmpeg) stream in/out objects should take care of? Or will the connector be able to 'recognize' some standard streams?

-- 
Simon de Bakker
\/01|)7 workgroup: http://www.void7.org
personal homepage: http://www.josos.org |
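The prefer-versus-demand behaviour described in the quoted hinting paragraph could be reconciled roughly like this. It is a pure sketch: the PropHint shape, the names and the tie-breaking rule are all invented; only the tcp/udp examples and the "conflicting demands cannot connect" case come from the mail.

```python
RELIABLE, LOSSY = "tcp", "udp"

class PropHint:
    """Transport hint attached to a property (a 'property property')."""
    def __init__(self, prefer, demand=None):
        self.prefer = prefer      # what this property would like
        self.demand = demand      # non-negotiable requirement, or None

def choose_transport(out_hint, in_hint):
    """Return a transport both ends accept, or None when their hard
    demands conflict (then only a converter object could help)."""
    demands = {h.demand for h in (out_hint, in_hint) if h.demand}
    if len(demands) > 1:
        return None                               # e.g. tcp vs udp
    if demands:
        return demands.pop()                      # honor the hard demand
    # no hard demand: err on the side of reliability
    return RELIABLE if RELIABLE in (out_hint.prefer, in_hint.prefer) else LOSSY
```

So a stream property would demand tcp, a realtime sensor would merely prefer udp, and the connection mechanism settles each link without either object knowing how the other end is wired.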