From: Ran S. <ran...@gm...> - 2018-02-23 22:18:33
On Fri, Feb 23, 2018 at 10:40 PM, Tim Roberts <ti...@pr...> wrote:
> Ran Shalit wrote:
>>
>> I apologize, I should have said before that it is not just any camera,
>> but an exact camera (USB3 Vision) from the exact manufacturer/model
>> which we use. The USB3 Vision standard is based on the GenICam
>> standard (XML features). So, as I understand it, we can check the
>> exact descriptors of that exact camera model, i.e. we must check the
>> descriptors and can't "automatically fill" gadget descriptors, right?
>
> Theoretically, yes.
>
>> I first thought that the USB3 Vision standard should define these
>> things, because "generic" GenICam applications can talk to any USB3
>> Vision camera.
>
> Well, yes, but that's at a much higher level. Their application deals
> with the Windows capture APIs, and should work with any camera
> anywhere. The UVC stuff all happens at the driver level.
>
>> So, I assume that as long as the "another PC" (which understands the
>> GenICam standard) knows all the camera features as configured
>> (resolution, framerate, etc.), then it (the other PC) can handle the
>> image data. The solution tries to make host and camera talk as if
>> there were no medium in between.
>
> You're not literally going to be looking at images, right? You're just
> handling packets.

Right.

> If you have to assemble an entire image before you can forward
> everything along, that adds to the latency even more.
>
> You're going to need to create some sequence diagrams with realistic
> timing estimates to figure out whether this can be made to work.
>
>> Yet, I think that it might be a problem that both the other PC and
>> the host are allowed to send control messages. That is probably not
>> allowed, right?
>
> I'm not sure what you mean. The "another PC" will be talking to the
> camera. The "naive host" will send control messages through the
> embedded passthrough to the "another PC", which will have to forward
> them to the camera, and return the response. That adds latency, of course.
The other PC just receives the duplicated data (from camera or from
host), that's all. It's not that there are messages that pass through
the "another PC" to the camera. Things only pass through the embedded
board from camera to host and vice versa (and the embedded board always
duplicates everything to the other PC).

>>> Even more, if your camera is producing a full-bandwidth device (say
>>> 300MB/s), then you have completely filled the bus.
>>
>> Can't USB 3.0 get up to 5Gbit/sec?
>
> No. The USB 3.0 clock is 5Gbps, but the protocol has redundancy and
> overhead for reliability. The most you can achieve is about 350MB/s.
> Once you have a device sending that much, you can't bring another device
> online. You would need a second host controller -- another separate bus.
>
>> The initial idea is to duplicate each message from host to device and
>> device to host, so that each message is transferred to the other PC
>> too (both control and data).
>
> There certainly are "two-headed" USB cables that connect two hosts
> together, usually for transferring files. They are not generic
> passthroughs. A USB hub acts like a passthrough, but there is a long
> list of timing and functionality requirements that a USB hub must
> follow. It's not trivial.
>
>> But I think that as a first stage, I shall try to do only a USB
>> passthrough (without any duplication and without using the "another
>> PC"), i.e. the embedded board only passes the bulk/iso/control from
>> one side to the other.
>
> I don't think that helps you. You always need to remember that a USB
> connector is either HOST or DEVICE. It cannot be both (except USB OTG,
> which PCs do not support). Your design calls for this:
>
> Naive host --> embedded device <-- Aware host --> camera
>
> Where the arrows point from a host to a device, so the embedded device
> acts like a USB device to both hosts.
>
> But the passthrough case is different:
>
> Naive host --> embedded device --> camera
>
> So the embedded device has to be a device to the host, but a host to
> the camera. That's different.
>
>> If I assume that we know the descriptors, then doing a passthrough,
>> as I understand it, means that the "embedded host" and "embedded
>> gadget" just send the bulk/control/iso from one side to the other,
>> i.e. a sort of loopback with not much logic, or do I underestimate
>> this challenge?
>
> Passthrough, not loopback. Remember that you can't just arbitrarily
> send packets through. A USB device is only allowed to send data when
> the host asks it to send. That presents some serious buffering and
> timing issues for you, especially with isochronous data, and the
> low-level timing is not under your control. You have to respond within
> a very few microseconds, or you lose your shot in the frame. When the
> naive host does an isochronous read to get the next video packets, you
> can't just put it on hold while you go out and do an isochronous read
> to the real camera.

I've reviewed the USB3 Vision standard:
https://www.visiononline.org/userAssets/aiaUploads/file/USB3_Vision_Specification_v1-0-1.pdf

It uses bulk endpoints. So, if the pipeline is as follows:

host --<-- embedded --<-- camera

it probably means that I must use buffering in the embedded board in
order to always be ready for the host's requests, right? I understand
that it adds latency, as you said (but it seems that buffering is a
must).

Are you familiar with any passthrough example (probably an example
which requires both libusb and gadgetfs)?

Thanks a lot for the pointers, it helps,
Ran

> --
> Tim Roberts, ti...@pr...
> Providenza & Boekelheide, Inc.
>
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org!
> http://sdm.link/slashdot
> _______________________________________________
> libusb-devel mailing list
> lib...@li...
> https://lists.sourceforge.net/lists/listinfo/libusb-devel