From: Sha X. W. <xi...@mi...> - 2003-03-02 15:46:14

image- & video-based interactive animation http://www.mine-control.com/ ex-mathematicians make the best game-developers :)
From: Sha X. W. <xi...@mi...> - 2002-11-26 14:05:45

hi, i'm sorry to hear about martin. if you need a body to help babysit something during the show for a few hours or so, you can toss me into the pot with the other trainees. (who should i ask re. internet access in gt yarmouth ... i don't expect the imperial hotel to provide dsl, yes?)

re travel: SchedulesNov7 says for me

WED 11/27 ARRIVAL: AW, LB & Xin Wei (arr LGW 7:40am)
Travel: Xin Wei meet AW/LB at FP office by London Bridge station (direct train from Gatwick) - Train from Liverpool Street dep. 10.30 arrive Yarmouth 13.02
Travel: SUNDAY, Xin Wei 2 hrs check-in before departure, LGW airport, flight dep 1-dec at 11.50am

What/where is FP office? ciao, soon! abbracci, xinwei
From: erik.conrad <eri...@pe...> - 2002-07-11 21:59:44

> I heard some time ago of a flash plugin which would allow the client to
> control MAX/MSP through osc. Can't recall where I saw it though. I think
> it worked through tcp/ip sockets.

http://www.benchun.net/flosc/

hc
http://www.nervousvision.com
From: Sha X. W. <xi...@mi...> - 2002-06-29 23:24:33

hi, tuggable drawings using simplicial complexes were invented by Tom Ngo et al at Interval Research a few years ago. very clever, and of considerable technical depth. Here are some refs making the case for such a structured approach to giving people more patternful yet tangible manipulation of synthetic images.

http://www.cs.wisc.edu/~kovar/interpSpace.html

Accessible Animation and Customizable Graphics via Simplicial Configuration Modeling
Tom Ngo, Doug Cutrell, Jenny Dana, Bruce Donald, Lorie Loeb, and Shunhui Zhu
http://graphics.stanford.edu/papers/simplicial-animation/SIGGRAPH-2000-ngo-etal-cameraready.pdf

Kovar, Gleicher, Simplicial Drawing
http://www.cs.wisc.edu/graphics/Gallery/Simplicial/simpTest.pdf

one issue that they encountered was that making model = medium made it a challenge for the player to manipulate what was called the space of the observables (or sensor data space), which generally has a complicated geometry, not at all isomorphic to R^n. yes, if people encounter it embodied, the hope is that they'll have a better intuition of the model space, but in any case the internal representation is making (at least as i understand it) a reduction, a submersion, from the space of observables (which has the structure of an algebraic variety), into whatever graph the hmm engine will use.

anyway, the tg "topology"-based representation can be considered a very extensive generalization of simplicial drawing!

xinwei

--
_____________________________________________________
Prof. Sha Xin Wei * 1-404-579-4944 * Atlanta USA
xi...@lc... * http://titanium.lcc.gatech.edu/topologicalmedia
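[ed. note] The basic move behind simplicial configuration modeling -- blending a handful of example configurations with barycentric weights, so that every point of the simplex is a valid in-between -- can be sketched roughly like this. This is only a toy illustration of the convex-combination idea, not Ngo et al.'s actual machinery:

```python
import numpy as np

def barycentric_blend(vertices, weights):
    """Blend example configurations (simplex vertices) by barycentric weights.

    vertices: (k, d) array of k example configurations in R^d
    weights:  length-k nonnegative weights summing to 1
    """
    w = np.asarray(weights, dtype=float)
    v = np.asarray(vertices, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    return w @ v   # convex combination stays inside the simplex

# three example "poses" in R^2; the equal-weight blend is their centroid
poses = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mid = barycentric_blend(poses, [1/3, 1/3, 1/3])
```

"Tugging" a drawing then amounts to moving the weight vector around inside the simplex rather than editing coordinates directly, which is what keeps the manipulation patternful.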
From: Sha X. W. <xi...@mi...> - 2002-06-15 19:20:14

Amici, June 14 - June 30: My USA cellphone will not work in Europe. To contact me, email me OR phone Rosanna: +39-0444-928 663 (Vicenza, Italy). Cheers, Xin Wei
From: nik g. <ni...@f0...> - 2002-06-11 16:17:54

hi xinwei,

> where's the best tech documentation for txOom i may look at before
> flying out this week? if doc = code, just point :)

well, it's the code! and unfortunately i haven't had a chance to sort it all out since we got back from torino. mainly the new media stuff has to go in, i've made a few changes to oz (in tg-code), and yon fixed a bug or two in the dynamics engine. the integration + testing of the dynamics engine is still on the 'to do' list.

when it's done, i'll put everything in cvs, probably tx-code, if there is anything that can't be safely merged with the tg-code stuff.

the docs should appear here -> http://f0.am/cgi-bin/cvsweb.cgi/tx-docs/

some of the notes might make sense, and can be found here -> http://lib.f0.am/cgi-bin/view/Libarynth/ProjectTxoomWorkshopNotes

this stuff will be shaped into some more formal documentation during the next stage of the project. since you had some reservations about the tgarden docs being 'public' perhaps we need to discuss what aspects we can (re)use for the txOom tech docs.

> see you june 27-30 in brussels!

looking forward to seeing you here, will be good to chat in a more synchronous environment :)

knk
From: Sha X. W. <xi...@mi...> - 2002-06-11 15:48:25

hi nik, yon, where's the best tech documentation for txOom i may look at before flying out this week? if doc = code, just point :) see you june 27-30 in brussels! just came back from vancouver -- great trip. talked with thecla by phone, will try to learn more re active garment/fabric work. cheers, xinwei
From: Yon V. <yo...@pr...> - 2002-05-22 11:10:43

Hi XW

I'm waiting for a book on the use of implicit functions in computer graphics & some other items. Let's see.. away from my computer, but two interesting off-the-top-of-my-head items follow. Bye! Yon

-----------------------------------
Kimber, D. and Marcia Bush, "Situated State Hidden Markov Models", Xerox Parc, Palo Alto, 1993
http://citeseer.nj.nec.com/254069.html

Abstract: We introduce a probabilistic model called a Situated State Hidden Markov Model (SSHMM), in which states are `situated' (i.e. assigned positions) and assumed to correspond to regions of an underlying continuous state space. Transition probabilities among states are induced by the assigned state positions in such a way that transitions occur more frequently between nearby states. The model is formally defined, and a maximum likelihood estimation procedure is described. Experiments on synthetic data are described and demonstrate that SSHMMs can learn the structure of an underlying continuous state space even when observed through high dimensional discontinuous functions....

--------------------------------
Stevens, Roger and Rok Sosic, Griffith Univ. Research Report CIT-95-14, "Emergent Behaviour in Slime Mould Environments"
http://citeseer.nj.nec.com/93380.html

Slime moulds are well studied organisms in biology. They are some of the simplest organisms that exhibit complex emergent behaviour. Although individual organisms interact with the environment only by following local rules, slime moulds produce a collective behaviour on a global scale. The modelling of slime mould patterns provides a good testbed for studying emergent behaviour. We have simulated the behaviour of slime moulds on a computer model. In the model, each organism is guided by rules, derived from studies of real organisms. By changing parameters of the simulation, we were able to study some underlying mechanisms behind the emergent behaviour. We have concentrated on the strength of the communication signal and the distance at which the signal is effective. Results show, rather unexpectedly, that a stronger communication signal does not necessarily reinforce the emergent behaviour and that the effective signal distance does not depend on the density of organisms.
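[ed. note] The SSHMM idea in the first abstract -- transition probabilities induced by assigned state positions so that nearby states transition more often -- might be sketched as follows. The Gaussian kernel here is my own stand-in chosen for illustration, not Kimber & Bush's actual construction:

```python
import numpy as np

def situated_transitions(positions, scale=1.0):
    """Build a transition matrix from assigned state positions.

    positions: (n, d) array of state positions in a continuous space.
    Each row is a probability distribution; nearby states get higher
    transition probability via a Gaussian kernel of the distance.
    """
    p = np.asarray(positions, dtype=float)
    d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(axis=-1)
    a = np.exp(-d2 / (2.0 * scale ** 2))
    return a / a.sum(axis=1, keepdims=True)   # row-normalise

# three states on a line: state 0 is near state 1, far from state 2
A = situated_transitions(np.array([[0.0], [1.0], [5.0]]))
```

The point of the construction is that the learned discrete chain inherits the geometry of the continuous space it is situated in, which is what makes it attractive for sensor-data spaces.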
From: Sha X. W. <xi...@mi...> - 2002-05-22 03:59:30

For the record: this work by colleagues here in GVU may be interesting for some pattern deformations in sound+image. They recycled old calculus of variations, etc. reminded me of Morse theory. Pontrjagin (http://www.math.harvard.edu/~elkies/M55b.99/pontrjagin.html) may be a source of music beh, i don't know, ask yon or joel.

anyway, turk & obrien's notion of influence shapes is quite suggestive. i'd like to think of a third player as an influence shape on mappings between two other players.... in some sense.

http://www.gvu.gatech.edu/people/faculty/greg.turk/morph/morph.html

Shape Transformation Using Variational Implicit Functions
Greg Turk and James F. O'Brien
Georgia Institute of Technology (To appear in SIGGRAPH 99)

Abstract: Traditionally, shape transformation using implicit functions is performed in two distinct steps: 1) creating two implicit functions, and 2) interpolating between these two functions. We present a new shape transformation method that combines these two tasks into a single step. We create a transformation between two N-dimensional objects by casting this as a scattered data interpolation problem in N+1 dimensions. For the case of 2D shapes, we place all of our data constraints within two planes, one for each shape. These planes are placed parallel to one another in 3D. Zero-valued constraints specify the locations of shape boundaries and positive-valued constraints are placed along the normal direction in towards the center of the shape. We then invoke a variational interpolation technique (the 3D generalization of thin-plate interpolation), and this yields a single implicit function in 3D. Intermediate shapes are simply the zero-valued contours of 2D slices through this 3D function. Shape transformation between 3D shapes can be performed similarly by solving a 4D interpolation problem. To our knowledge, ours is the first shape transformation method to unify the tasks of implicit function creation and interpolation.

The transformations produced by this method appear smooth and natural, even between objects of differing topologies. If desired, one or more additional shapes may be introduced that influence the intermediate shapes in a sequence. Our method can also reconstruct surfaces from multiple slices that are not restricted to being parallel to one another.

Don't get me wrong, I'm less interested in purely visual stuff, more in integrated pattern transformations. Cheers, Xin Wei
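[ed. note] The scattered-data interpolation at the heart of the Turk & O'Brien abstract can be sketched minimally. This uses the 3D biharmonic kernel phi(r) = r with no polynomial term and no shape-morph setup, so it is a bare stand-in for their variational technique, not a reimplementation of the paper:

```python
import numpy as np

def fit_biharmonic(points, values):
    """Solve for RBF weights with the 3D biharmonic kernel phi(r) = r.

    points: (n, 3) constraint locations; values: (n,) constraint values.
    A tiny ridge keeps the (zero-diagonal) kernel matrix well behaved.
    """
    p = np.asarray(points, dtype=float)
    r = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    return np.linalg.solve(r + 1e-9 * np.eye(len(p)), np.asarray(values, float))

def evaluate(points, w, q):
    """Evaluate the interpolated implicit function at query point q."""
    d = np.linalg.norm(np.asarray(q, float) - np.asarray(points, float), axis=-1)
    return d @ w

# zero-valued boundary constraints plus one positive interior constraint,
# as in the paper's constraint scheme (toy data, made up here)
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.3, 0.3, 0.0]])
vals = np.array([0.0, 0.0, 0.0, 1.0])
w = fit_biharmonic(pts, vals)
```

Intermediate shapes in the paper are then just zero-level contours of 2D slices through one such 3D function fitted to both shapes' constraints at once.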
From: Sha X. W. <xi...@sp...> - 2002-05-19 00:27:52

hi nik,

> > Hey, how did you guys track? (softVNS? + IR ?)
> > How well did it work?
>
> we couldn't get the registration key for the beta of softvns2 to work,
> so no vns.
> we tested differencing in nato, using ir illumination + ir camera,
> and it looked promising. (making it into a well lit vision tracking problem).
> i really can't remember why you and yifan were so against this idea.

hmm i can't either... oh, it wouldn't have solved the body-id problem as posed. but at this point i'd be quite happy with blob tracking using an ir lamp + ir camera solution :)

i asked david demumbrun -- student -- here to make a tracking system using softvns2 + cameras as his summer project. i can ask him to give it to txOom. if you tell us what camera make/model you use, we'll buy the same gear.

> unfortunately, our ir camera suffered a brief dose of reverse polarity voltage
> + stopped working

how did that happen? is there a way to protect against that? (a non-ee person wants to know ;)

> we then tried differencing, regulated by feedback from the
> visual plane and 'twitchiness' of the activity.
> this was enough to give an approximation of position, but no identity.
> we didn't rely on tracking data so much.

i'm also still somewhat interested in ultralight radio beacons as locators -- but that's farther off (research gear ;) and costs more $. if you hear about O(10 gram) radio-beacons for broadcasting sensor data that we can use, pls alert? thanks, xinwei
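[ed. note] The differencing approach described in this thread -- an approximation of position, but no identity -- reduces to something like the following toy numpy sketch. It is an illustration of the principle only, not the nato patch that was actually used:

```python
import numpy as np

def difference_centroid(prev, curr, thresh=30):
    """Frame-differencing 'tracker': centroid of pixels that changed.

    prev, curr: same-shape uint8 grayscale frames.
    Returns an (x, y) position estimate, or None if nothing moved.
    Gives position but no identity -- exactly the limitation discussed.
    """
    moved = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    ys, xs = np.nonzero(moved)
    if xs.size == 0:
        return None              # no pixel changed enough
    return (xs.mean(), ys.mean())

prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[3:5, 3:5] = 255             # a small bright blob appears
pos = difference_centroid(prev, curr)
```

IR illumination + an IR camera turn this into a well lit vision problem precisely because the difference image is then dominated by the bodies rather than by projected imagery.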
From: nik g. <ni...@xd...> - 2002-05-18 16:29:46

> Hey, how did you guys track? (softVNS? + IR ?)
> How well did it work?

we couldn't get the registration key for the beta of softvns2 to work, so no vns. we tested differencing in nato, using ir illumination + ir camera, and it looked promising. (making it into a well lit vision tracking problem). i really can't remember why you and yifan were so against this idea.

unfortunately, our ir camera suffered a brief dose of reverse polarity voltage + stopped working. we then tried differencing, regulated by feedback from the visual plane and 'twitchiness' of the activity. this was enough to give an approximation of position, but no identity. we didn't rely on tracking data so much.
From: Sha X. W. <xi...@mi...> - 2002-05-18 03:01:34

Hi, hey, how did you guys track? (softVNS? + IR ?) How well did it work? Xin Wei
From: Sha X. W. <xi...@sp...> - 2002-05-11 23:13:54

hey nik, thanks -- we did that. swapping to another sensor we got good data. so i guess i'm down to two good sensor assemblies here.... anyway, i guess we'll have to go to new wearable hw pretty soon.

wireless beamer -- ucb's mote, ... ?
pic -- phidget, steim, ...?
sensor-garment -- soundlogic outfit, designed per garment for tracking

we have options:
radio-location tracker (expensive but out of the box)
rokeby's new softVNS
a sound-based system (canada - waiting for word)

cheers, xinwei

> .: Thursday 09 May 2002 20:12 ::
> > Thanks Nik! this helped
> > we swapped sensors, one accelerometer may be going bad -- high floor
> > on the data like a bad hum.
> > but now it's fast.
>
> glad it works now.
>
> you might also want to try recalibrating the sensor:
> > calibrate -c <sensor-channel>
>
> and make sure this happens in the ~/sensor directory (where route.conf can
> be found also, if the sensor send needs tweaking)
>
> if it still doesn't work, send it back to stock :)
From: nik g. <ni...@f0...> - 2002-05-09 18:25:04

.: Thursday 09 May 2002 20:12 ::
> Thanks Nik! this helped
> we swapped sensors, one accelerometer may be going bad -- high floor
> on the data like a bad hum.
> but now it's fast.

glad it works now.

you might also want to try recalibrating the sensor:

calibrate -c <sensor-channel>

and make sure this happens in the ~/sensor directory (where route.conf can be found also, if the sensor send needs tweaking)

if it still doesn't work, send it back to stock :)
From: nik g. <ni...@f0...> - 2002-05-09 15:36:22

NOTE: http://libarynth.f0.am/cgi-bin/view/Libarynth/ProjectTxoomWearables

> it seems that the data coming into oz from one of our wearable
> assemblies is too stepped: as we rotate the sensor slowly, the data
> does not change smoothly with the sensor, but instead jumps at
> approximately 1 second intervals to successive values. i have to
> look more carefully at the std out from operate on the ipaq, but it
> could be that

this sounds like a sensor connection problem. do you have TWO sensors connected to the stamp/pic-box?

check the sensor input with the following command on the ipaq

cat /dev/ttySA0

you should see a stream of raw sensor bytes + channel numbers.

see previous post on this topic -> (quote w/ spelling corrections!)

> > our gvu lab rates seem slow -- only
> > a few vectors / sec from the one single! ipaq. what factors control
> > or limit the frequency of data from the accelerometer to 802.11b?
> > (wasn't that a rate fixed at the pic, or does that depend on
> > available 802.11b speeds/bandwidth?
>
> we had some very unusual fluctuations in network behaviour in torino.
> generally it worked better at night, perhaps excessive radiation during the
> day? badly configured cell network? if you are getting only a few samples
> per second, it might be a sensor or pic problem since stock's version waits
> quite a while (tens of milliseconds) between sensor channels if no data is
> sent from the sensor. make sure both sensors are plugged into the pic, and
> that they are both working sensors.
>
> > our little 802.11b network runs only the ipaqs and a mac airport on a
> > pc. there's a second mac airport card but that's built into the oz
> > mac in order for it to get around some draconian sysad restrictions
> > on our wireless usage.
>
> have a look at some of the wireless monitoring throughput/frame dropping rates

more at http://libarynth.f0.am/cgi-bin/view/Libarynth/ProjectTxoomWearables
please consult/append/modify

> (1) the data stream is interrupted somewhere on the
> accelerometer-pic-ipaq chain,
> and
> (2) some code (in the ipaq?) is repeating fossil data (which would
> be wrong! -- we want the sensor to push data)
>
> Q1. i recall that the data acquisition and rebroadcast code (on the
> ipaq) was built to poll at a regular clock rate. where is that
> rate set, and what was it?
>
> Q2. what could possibly affect this slow, delayed episodic
> response to changes in the sensor?
>
> we put every mac on a hub disconnected from the ambient net, so they
> see only one another. the only difference is that i'm using an
> Airport base station to send data through its titanium.
>
> and
>
> Q3. what can we do to speed up the frequency?
>
> please help, any suggestions?
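[ed. note] For sanity-checking the stream from `cat /dev/ttySA0`, something like the following could split a chunk into per-channel frames and estimate the data rate. The two-byte framing here (channel number, then one sensor byte) is purely a guess for illustration, since the actual wire format isn't documented in this thread:

```python
def parse_sensor_stream(data):
    """Split a raw byte stream into (channel, value) pairs.

    ASSUMED framing: alternating [channel byte, sensor byte] --
    a hypothetical layout, not the real pic/stamp protocol.
    """
    return [(data[i], data[i + 1]) for i in range(0, len(data) - 1, 2)]

# e.g. two accelerometer channels 0 and 1 interleaved
frames = parse_sensor_stream(bytes([0, 128, 1, 130, 0, 127, 1, 131]))
frames_per_channel = len(frames) / 2   # frames seen per channel in this chunk
```

Divided by the wall-clock time over which the chunk was captured, this gives the vectors/sec figure being debated above, independently of whatever oz reports downstream.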
From: Sha X. W. <xi...@mi...> - 2002-05-09 03:59:21

hi, greetings from atlanta! joel, i, and some students are playing with our version of tg here.

it seems that the data coming into oz from one of our wearable assemblies is too stepped: as we rotate the sensor slowly, the data does not change smoothly with the sensor, but instead jumps at approximately 1 second intervals to successive values. i have to look more carefully at the std out from operate on the ipaq, but it could be that

(1) the data stream is interrupted somewhere on the accelerometer-pic-ipaq chain, and
(2) some code (in the ipaq?) is repeating fossil data (which would be wrong! -- we want the sensor to push data)

Q1. i recall that the data acquisition and rebroadcast code (on the ipaq) was built to poll at a regular clock rate. where is that rate set, and what was it?

Q2. what could possibly affect this slow, delayed episodic response to changes in the sensor? we put every mac on a hub disconnected from the ambient net, so they see only one another. the only difference is that i'm using an Airport base station to send data through its titanium.

and Q3. what can we do to speed up the frequency?

please help, any suggestions? Thanks! xinwei
From: Sha X. W. <xi...@mi...> - 2002-05-08 12:52:46

Hi Jehan, Sungmee delivered the garment with conducting fiber woven in! We should figure out how to wear the thing when Joey's here :)

It seems that when I boot the two iPAQs I have in the GVU lab near the Airport, they both get the same IP. ifconfig shows inet address 10.0.0.11 on both machines. In Oz I see data from only the older one, but not the iPAQ that Sungmee returned.... But they have the same install -- and DHCP client settings, I hope. Did you ever test > 1 iPAQ simultaneously on the Airport?

Xin Wei
From: Sha X. W. <xi...@mi...> - 2002-05-05 20:42:20

Dear TGardeners,

So, we need better wireless, tracking, body-borne sensing... Who can do what? I'd like to track down a solution to only one of those problems.

Who can follow up on the wireless, beaming data (preferably by 802.11?)? Maybe the "mote"? (Gatech has a bunch of them)

On tracking -- here's a somewhat comprehensive survey: Rolland, J.P., Davis, L. D., and Y. Baillot, "A survey of tracking technology for virtual environments", in Augmented Reality and Wearable Computers, ed. Barfield and Caudell (Mahwah, NJ), (2000).

Don has a radio system that'll track finely: 2" spatial, 4 ms or 17 ms temporal resolution. Costs $15K to $3K, tho. Shall we buy a solution? I may form a consortium to share gear here in N. Am (and maybe V2) for tgardens -- tg, tx? whisper?

Such devices could be worn like jewelry, on wrists, ankles, around the neck, ... pocketed (not so good since locations would vary too much from body to body), ...

Underlining Joel and nodding with Nik's observation, causality and connection are constrained by system latency. In addition to reducing local latencies, we can also look to musical phrasing and temporal textures.

Xin Wei
From: nik g. <ni...@xd...> - 2002-05-04 18:03:03

> questions:
>
> (0) what data rates were we able to get from the ipaq's in ars and
> rotterdam?

we didn't use the ipaq setup in linz. as i remember from rdam it was "reasonable" (sorry for the lack of detail)

> how about txOom?

we measured delay times (which seem to have a more drastic effect on coupling of the space to the player than sampling freq) anywhere from 3-4 ms to thirty-seven seconds!!!

> our gvu lab rates seem slow -- only
> a few vectors / sec from the one single! ipaq. what factors control
> or limit the frequency of data from the accelerometer to 802.11b?
> (wasn't that a rate fixed at the pic, or does that depend on
> available 802.11b speeds/bandwidth?

we had some very unusual fluctuations in network behaviour in torino. generally it worked better at night, perhaps excessive radiation during the day? badly configured cell network? if you are getting only a few samples per second, it might be a sensor or pic problem since stock's version waits quite a while (tens of milliseconds) between sensor channels if no data is sent from the sensor. make sure both sensors are plugged into the pic, and that they are both working sensors.

> our little 802.11b network runs only the ipaqs and a mac airport on a
> pc. there's a second mac airport card but that's built into the oz
> mac in order for it to get around some draconian sysad restrictions
> on our wireless usage.

have a look at some of the wireless monitoring throughput/frame dropping rates

> (1) why do (at least 2 of) the ipaq's not stay on when detached from
> the ac charger? in one case, switching to a different "phong" ac
> adapter/charger fixed that, but here at gatech gvu lab this no longer
> seems to work.

we had a similar problem with the ipaq + power, maybe some flaw in the batteries? maybe we are just being too harsh on the poor office accessories?!

> (2) one ipaq has a "broken" serial port.
> is that under warranty? how should one go about diagnosing and
> having it fixed? (compaq factory, i presume -- but does anyone know
> exactly?)

i'd say warranty, phone compaq + find out the terms

> (3) now at least 2 of the pic's seem broken. what next?
> need diagnosis + fix.

we have decided to build better (more accurate, sensitive, robust) units.

> i have this week, may 6-11 to dedicate to these problems, so am
> standing by, ready to entertain all reasonable suggestions.

hope some of this helps
nk
From: ;) <xi...@sp...> - 2002-05-04 00:52:13

dear wearables folk,

this coming monday joel comes to play with the tg system, and a cheerful artist-hacker (& ex-mathematician) from boston is coming to work with our new sensor-embedded garment.

questions:

(0) what data rates were we able to get from the ipaq's in ars and rotterdam? how about txOom? our gvu lab rates seem slow -- only a few vectors / sec from the one single! ipaq. what factors control or limit the frequency of data from the accelerometer to 802.11b? (wasn't that a rate fixed at the pic, or does that depend on available 802.11b speeds/bandwidth?)

our little 802.11b network runs only the ipaqs and a mac airport on a pc. there's a second mac airport card but that's built into the oz mac in order for it to get around some draconian sysad restrictions on our wireless usage.

(1) why do (at least 2 of) the ipaq's not stay on when detached from the ac charger? in one case, switching to a different "phong" ac adapter/charger fixed that, but here at gatech gvu lab this no longer seems to work.

(2) one ipaq has a "broken" serial port. is that under warranty? how should one go about diagnosing and having it fixed? (compaq factory, i presume -- but does anyone know exactly?)

(3) now at least 2 of the pic's seem broken. what next? need diagnosis + fix.

i have this week, may 6-11 to dedicate to these problems, so am standing by, ready to entertain all reasonable suggestions.

cheers, xin wei

--
_____________________________________________________
Prof. Sha Xin Wei * 1-404-579-4944 * Atlanta USA
xi...@lc... * http://titanium.lcc.gatech.edu/topologicalmedia
From: maja k. <ma...@f0...> - 2002-04-22 18:08:26

struggles, attempts, discussions, challenges, cyclic reconstruction, de_re_(...), grinding teeth and old stuff: http://f0.am/txoom/torino/ hopefully transforming later this week from a pile of unread emails, tech specs and contracts to a public experiment in Cavalerizza Reale in Torino.

txOom oscillates between being an autonomous wilderness and forming symbiotic alliances with its inhabitants. It is a thick + spiky + dense environment whose gelatinous morphology is perpetually pierced by unexpected events, evolutionary glitches and penetrating human realities, thereby growing or decaying, expanding or contracting. txOom absorbs [gesture, behaviour, movement] -> recycles [experience] -> transmits [media, matter, pattern], conversing with its visitors, agreeing or contradicting them, becoming the embodiment of their imagination, charging itself or fading away in the process.

txOom @ BIG Torino: Via Verdi 9 (Cavalerizza Reale), Torino, Italy

... and as the city has made us wait for too long, we have made it into our waiting room -- with Sia, our coffee-lady, staining herself with the fortunes of the passersby, carrying them back into the txOom space in la Rotonda. Let's hope all the fortunes of the city will be enough to make txOom happen before we leave Torino on Sunday.
From: nik g. <ni...@f0...> - 2002-04-22 12:16:06

> how has it been playing with those torinesi? wish i could dive into
> the maelstrom with you :) meanwhile, back in the lab...here's the

i'm not sure you would want to be here... more maelstrom, not enough diving. it's starting to come together, stay tuned -> http://f0.am/txoom/torino

> first live test of tgarden 2002 as deployed in the gvu:
> http://titanium.lcc.gatech.edu/tgarden/video/m3/tgarden/gvu/KTSP_garment_4.2002.mov
>
> people were dressed in costume of kat tejavanija's design, made by
> elizabeth adams.
> sungmee park will put in the sensate liner + sensors next week.
> erik conrad wrote the nato code.
>
> thanks to my gvu colleagues, especially blair macintyre, who's been
> tremendously generous in opening his lab and contributing so much
> resource & expertise for this experiment.
>
> i'd like to make a sweeping bow to
>
> erik conrad
> jehan moghazy
> jonathan shaw
>
> for transplanting tgarden back to gvu atlanta. amidst a million
> other demands on their talents and energy this year, they've managed
> to learn to drive and even extend this rhizome.
>
> baci a tutti,
> xinwei
From: Sha X. W. <xi...@mi...> - 2002-04-21 00:26:07

dear euro tgamici, how has it been playing with those torinesi? wish i could dive into the maelstrom with you :) meanwhile, back in the lab... here's the first live test of tgarden 2002 as deployed in the gvu:

http://titanium.lcc.gatech.edu/tgarden/video/m3/tgarden/gvu/KTSP_garment_4.2002.mov

people were dressed in costume of kat tejavanija's design, made by elizabeth adams. sungmee park will put in the sensate liner + sensors next week. erik conrad wrote the nato code.

thanks to my gvu colleagues, especially blair macintyre, who's been tremendously generous in opening his lab and contributing so much resource & expertise for this experiment.

i'd like to make a sweeping bow to

erik conrad
jehan moghazy
jonathan shaw

for transplanting tgarden back to gvu atlanta. amidst a million other demands on their talents and energy this year, they've managed to learn to drive and even extend this rhizome.

baci a tutti, xinwei

--
_____________________________________________________
Prof. Sha Xin Wei * 1-404-579-4944 * Atlanta USA
xi...@lc... * http://titanium.lcc.gatech.edu/topologicalmedia
From: nik g. <ni...@xd...> - 2002-04-10 16:06:10

> > the flexible circuit boards are very nice, if only the components could be
> > more rubbery also!
>
> That's OK if they are small enough, though, isn't it? :-)

cellular-components.com ;)

> So it's obsolete indeed :), but keep in mind that the PIC still has to
> alternate power to the sensors when 'reading' more than one ADXL,
> because of the input-wiring!

i think it's possible to just switch chips/inputs, wait for the end of the pulse cycle (up to 1ms), to synchronise the read from the next chip if all the adxl remain powered.

nk
From: Stock <st...@v2...> - 2002-04-10 15:53:26

At 01:28 PM 9-4-02 +0200, nik gaffney wrote:
> > By the way:
> > There is another upcoming project here at the Lab where we MAY want to use
> > ADXL's again... it could happen that i re-design the sensorsets,
> > hopefully being able to integrate 2 or 4 sensors & the pic on ONE, flexible
> > circuitboard...
>
> the flexible circuit boards are very nice, if only the components could be
> more rubbery also!

That's OK if they are small enough, though, isn't it? :-)

> we have recently gone through the stamp code in the process of making some new
> sensor sets, and found a few problems. (i'll update the cvs when i get a
> chance) there are a few things which could be called 'bugs' and a few which are
> more likely 'design decisions'
>
> the most obvious change is to leave the sensors continuously powered, rather
> than cycling them, and having to wait 6ms for each chip to stabilise (which
> is not always long enough). considering each chip draws 1mA when powered (vs a
> few microamps when off) it's a power trade-off. accuracy + data rate are more
> important at the moment (considering the stamp takes anywhere from 20-60mA)

Oh, i C :) yeah, quite right, this 'design decision' was made when i still thought it would be possible to power PIC + sensors from the iPAQ, before we had the PIC-batteries... So it's obsolete indeed :), but keep in mind that the PIC still has to alternate power to the sensors when 'reading' more than one ADXL, because of the input-wiring!

OkDan! StK
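[ed. note] Whichever way the power cycling is handled, each axis reading from these parts is a duty-cycle decode of a PWM output. A sketch of the nominal conversion follows; the scaling constants are the ADXL202's nominal datasheet values as I recall them (50% duty at 0 g, 12.5% duty per g), so verify them against the part in hand before trusting the numbers:

```python
def adxl_duty_to_g(t1_us, t2_us):
    """Convert an ADXL202-style PWM output to acceleration in g.

    t1_us: high time of the pulse, t2_us: full period (microseconds).
    Nominal scaling (ASSUMED from the datasheet, uncalibrated):
    50% duty cycle = 0 g, and 12.5% duty cycle per g.
    """
    duty = t1_us / t2_us
    return (duty - 0.5) / 0.125

# 62.5% duty over a 1 ms period reads as +1 g under the nominal scaling
g = adxl_duty_to_g(t1_us=625.0, t2_us=1000.0)
```

In practice the per-chip zero-g offset and sensitivity drift, which is why the `calibrate -c <sensor-channel>` step discussed earlier in the thread matters.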