From: Jose L. A. <jl...@mi...> - 2024-03-05 13:28:40
Hi there, Do we have a specific iterator in OpenIMAJ to walk over FImages and MBFImages? Thanks, JL
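[In practice, most OpenIMAJ code walks pixels directly rather than using a dedicated iterator. A minimal sketch, assuming only the public pixels array that FImage exposes and the public bands list on MBFImage:

    import org.openimaj.image.FImage;
    import org.openimaj.image.MBFImage;
    import org.openimaj.image.colour.ColourSpace;

    public class PixelWalk {
        public static void main(String[] args) {
            // FImage stores its data as a public row-major float[][] (pixels[y][x]),
            // so a pair of nested loops visits every pixel
            FImage grey = new FImage(320, 240);
            for (int y = 0; y < grey.height; y++)
                for (int x = 0; x < grey.width; x++)
                    grey.pixels[y][x] = 0.5f;

            // an MBFImage is a list of FImage bands; walk each band the same way
            MBFImage colour = new MBFImage(320, 240, ColourSpace.RGB);
            for (FImage band : colour.bands)
                for (int y = 0; y < band.height; y++)
                    for (int x = 0; x < band.width; x++)
                        band.pixels[y][x] *= 0.9f;
        }
    }
]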
From: Hare J.S. <js...@ec...> - 2018-12-17 14:42:11
Have a look here for a pom.xml file that has dependencies and exclusions for just the things required for face detection on a video stream: https://github.com/jonhare/SimpleFaceDetector/blob/master/pom.xml You’ll have to convert that to Gradle format, but I don’t think that should be difficult. You should also be able to remove the core-video-capture dependency. Jon

> On 17 Dec 2018, at 00:17, Roger Littin <ro...@wo...> wrote:
> [snip]
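[For readers converting that pom by hand, a rough sketch of the Gradle (Groovy DSL) equivalent. The version number and the single exclusion shown are illustrative placeholders only; copy the real dependency list and exclusions from the linked pom.xml:

    dependencies {
        // assumption: the faces module's Maven coordinates; check the linked pom for the current version
        implementation('org.openimaj:faces:1.3.10') {
            // each <exclusion> in the pom becomes an exclude like this one
            exclude group: 'org.openimaj', module: 'core-video-capture'
        }
    }
]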
From: Roger L. <ro...@wo...> - 2018-12-17 12:55:03
Hi Jon, Thanks for that. I will check it out. Roger. Sent from my Samsung Galaxy smartphone.

-------- Original message --------
From: "Hare J.S." <js...@ec...>
Date: 17/12/18 22:07 (GMT+12:00)
To: Roger Littin <ro...@wo...>
Cc: ope...@li...
Subject: Re: [Openimaj-discuss] Compile and runtime dependencies for Faces module.
[snip]
From: Roger L. <ro...@wo...> - 2018-12-17 02:52:00
Hi, I want to put together a face detection example that takes a source video that is streamed to Wowza Streaming Engine and outputs the video with an overlay that draws a rectangle around each face. Because Wowza Streaming Engine already has transcoding capabilities, I want to use this where possible. Using the Wowza transcoder Thumbnailer API and Wowza transcoder Overlay API, I can extract the raw video frames in a 4-bytes-per-pixel BGRA format and then apply overlays later in the process using the same format. My plan is to extract the frame data, convert it to an FImage, pass that into the detector and then, using the resulting list of detected faces, create an overlay rectangle for each face to add to the output video. The code I’m using is pretty straightforward, but when I try to build the project I get a bunch of failed dependencies for quite a few third-party packages that don’t seem to be related in any way to what I’m trying to do. I’m using Gradle rather than Maven because it is a lot easier to integrate Wowza with Gradle; however, I get the same result with a Maven project that just includes the faces module as a dependency. If I build the project manually in Eclipse, it compiles OK, so it appears to be finding what it needs. The problem seems to be with the package managers wanting to resolve all of these extra dependencies. I guess the question is: what actual jar files are required for compiling my project, and at runtime? It appears that, even though my class only has 5 OpenIMAJ imports, the package managers seem to think it’s necessary to include over 140 jar files in the class path. I realise that face detection is a fairly intense process, but this does seem like overkill. Below is a snippet of the code I’m using.

FaceDetector<DetectedFace, FImage> fd = new HaarCascadeDetector(40);
:
:
FImage fImage = getNextImage(frameCount);
List<DetectedFace> faces = fd.detectFaces(fImage);
for (DetectedFace face : faces) {
    Rectangle rect = face.getBounds();
    addOverlay(frameCount, rect.x, rect.y, rect.width, rect.height);
}

Thanks, Roger.
From: Azahel C. A. <aco...@em...> - 2018-10-21 21:00:58
Hello, I am currently working on a VR-oriented project and I stumbled upon OpenIMAJ. I need to do barrel distortion correction and split the image so it is usable in a VR headset. The video input I am working with is a live video feed coming in through an HDMI input. Does OpenIMAJ have the tools needed to accomplish this? Best regards, -- Azahel Córdova Arvizu University of Arizona Electrical and Computer Engineering Major SHPE UA | President
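[No reply is archived, but the core of barrel correction is a per-pixel inverse remap, which can be written directly against FImage. A minimal sketch with a single-coefficient radial model; the model and the coefficient k are illustrative assumptions for this sketch, not a specific OpenIMAJ API, and splitting the corrected frame for the headset could then be done with extractROI:

    import org.openimaj.image.FImage;

    public class BarrelCorrection {
        // For every output pixel, sample the input at the radially distorted
        // position; k compensates barrel distortion and would be found by
        // calibrating against the lens.
        public static FImage correct(FImage in, float k) {
            final FImage out = new FImage(in.width, in.height);
            final float cx = in.width / 2f, cy = in.height / 2f;
            for (int y = 0; y < in.height; y++) {
                for (int x = 0; x < in.width; x++) {
                    final float nx = (x - cx) / cx, ny = (y - cy) / cy; // normalised coords
                    final float f = 1 + k * (nx * nx + ny * ny);       // radial factor
                    final float sx = cx + nx * f * cx, sy = cy + ny * f * cy;
                    if (sx >= 0 && sx < in.width - 1 && sy >= 0 && sy < in.height - 1)
                        out.pixels[y][x] = in.getPixelInterp(sx, sy);  // bilinear sample
                }
            }
            return out;
        }
    }
]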
From: Cristian L. <cri...@gm...> - 2018-05-02 22:38:02
DoubleFV testFeature = eigen.extractFeature(face);

I could save the face feature for each face as a double array interpreted as a polygon. I could then search for a face in the database by finding the minimum Euclidean distance between the polygons in the database and the polygon testFeature. I thought of using MongoDB to store the polygons and of using its embedded geolocation queries to find the closest face. Do you think it could work?

2018-04-29 12:04 GMT+02:00 Cristian Lorenzetto <cri...@gm...>:
> [snip]
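[No reply is archived; for reference, the in-memory version of the nearest-face comparison is a one-liner on DoubleFV. A sketch in which the two vectors stand in for features extracted with eigen.extractFeature(face):

    import org.openimaj.feature.DoubleFV;
    import org.openimaj.feature.DoubleFVComparison;

    public class FaceDistance {
        public static void main(String[] args) {
            // stand-ins for features extracted with eigen.extractFeature(face)
            DoubleFV stored = new DoubleFV(new double[] { 0.12, 0.40, 0.23 });
            DoubleFV query = new DoubleFV(new double[] { 0.15, 0.38, 0.20 });

            // smaller distance = more similar, so a database search amounts to
            // a nearest-neighbour query over these vectors
            double d = stored.compare(query, DoubleFVComparison.EUCLIDEAN);
            System.out.println("Euclidean distance: " + d);
        }
    }
]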
From: Cristian L. <cri...@gm...> - 2018-04-29 10:47:00
Hi, when I load an image containing a face, I'd like to create a hash identifying that face, or at least a set of parameters that allow easy querying in a database. I thought of using FacialKeypoint, but it seems to be a 2D point related to the image matrix. If I load different images containing the same face (for example at a different scale), I'm not sure these facial keypoints will be identical or even in a similar range. Is there a way, using OpenIMAJ, to create a set of parameters independent of scale and other spatial transformations, so as to permit face identification (or at least to reduce the candidates to a few similar cases)? I thank you in advance for any suggestion. Cristian L.
From: Francisco J. G. <jg...@ya...> - 2016-12-13 13:52:54
Dear friends, I have a question: I could not figure out how I can detect objects in a picture like this one: https://dl.dropboxusercontent.com/u/8292330/20161207_135741alpha1.jpg Do you have any suggestions for me? Thanks Javier
From: Jonathon H. <js...@ec...> - 2016-10-27 07:24:18
Yes - something like this:

public void beforeUpdate(MBFImage frame) {
    FImage grey = Transforms.calculateIntensity(frame);
    grey = ResizeProcessor.halfSize(grey);

    final FaceDetector<DetectedFace, FImage> fd = new HaarCascadeDetector(50);
    final List<DetectedFace> faces = fd.detectFaces(grey);

    for (final DetectedFace face : faces) {
        final Rectangle bounds = face.getBounds();
        bounds.scale(2);
        frame.drawShape(bounds, 4, RGBColour.GREEN);
    }
}

Potentially you could scale further than halving the size using the other ResizeProcessor methods & change the bounds rectangle scaling accordingly. Jon

> On 27 Oct 2016, at 03:58, Ben Markham <ben...@gm...> wrote:
> [snip]
From: Ben M. <ben...@gm...> - 2016-10-27 02:58:39
Hi Jon, Would I use ResizeProcessor to lower the resolution of the frame? Thanks, Ben.

> On Oct 26, 2016, at 5:39 PM, Jonathon Hare <js...@ec...> wrote:
> [snip]
From: Jonathon H. <js...@ec...> - 2016-10-26 08:39:58
Lowering res is quite common for things like this - there is little point in working at high res if you don’t need the information or it is largely redundant. Implementations like the HaarCascadeDetector can also be sped up a lot by playing with the parameters (e.g. by setting the minimum and maximum detection sizes to match your expectations). Jon

> On 26 Oct 2016, at 09:33, Ben Markham <ben...@gm...> wrote:
> [snip]
From: Ben M. <ben...@gm...> - 2016-10-26 08:33:38
Oh wow. I just implemented it into my code and the performance went straight up. Thank you so much for this. I will try to build from this and do some more digging in the docs. Just curious though, what other ways are there to process this? Would you say lowering the resolution is the most common method? -Ben

> On Oct 26, 2016, at 5:26 PM, Jonathon Hare <js...@ec...> wrote:
> [snip]
From: Jonathon H. <js...@ec...> - 2016-10-26 08:26:39
Ok, lowering the resolution will help, but that code isn’t really doing tracking; it’s just running the detector over the whole frame every time, which can be rather computationally expensive. Take a look at the more advanced face tracker implementations, KLTHaarFaceTracker and CLMFaceTracker, if you want something that takes advantage of prior knowledge about where a face was detected in the past to speed up detection now. There are demos showing how these work in the demos module: https://github.com/openimaj/openimaj/tree/master/demos/demos/src/main/java/org/openimaj/demos/faces Jon

> On 26 Oct 2016, at 09:12, Ben Markham <ben...@gm...> wrote:
> [snip]
From: Ben M. <ben...@gm...> - 2016-10-26 08:13:07
Hi John, Thanks for the reply. Right now I’m just following the tutorial for face tracking. Here’s a small code snippet:

Video<MBFImage> video = new XuggleVideo(new File("/Users/ben//Downloads/Whisper.mp4"));
VideoDisplay<MBFImage> display = VideoDisplay.createVideoDisplay(video);

display.addVideoListener(new VideoDisplayListener<MBFImage>() {
    public void beforeUpdate(MBFImage frame) {
        FaceDetector<DetectedFace, FImage> fd = new HaarCascadeDetector(50);
        List<DetectedFace> faces = fd.detectFaces(Transforms.calculateIntensity(frame));

        for (DetectedFace face : faces) {
            frame.drawShape(face.getBounds(), 4, RGBColour.GREEN);
        }
    }

It’s tracking faces well, just FPS issues. -Ben

> On Oct 26, 2016, at 5:07 PM, Jonathon Hare <js...@ec...> wrote:
> [snip]
From: Jonathon H. <js...@ec...> - 2016-10-26 08:07:44
Hi Ben, Parallel processing generally won’t help with video unless the operation that you are performing on each frame is independent of the previous frame(s) - generally speaking tracking doesn’t fall into this category as it relies on using the position of the object in the previous frame as a starting point for updating detection in the current frame. What method are you using for tracking? For most methods an easy way of speeding up processing is to lower the resolution of the frame the tracking is being applied to & then scale the result back to the original frame size. Jon

> On 26 Oct 2016, at 08:58, Ben Markham <ben...@gm...> wrote:
> [snip]
From: Ben M. <ben...@gm...> - 2016-10-26 07:58:18
Hello, I’m fairly new to the concept of processing video using OpenIMAJ and just processing video using Java in general. In the tutorial, it shows an example of a dataset of MBFImages to do parallel processing. Would I just treat the Video as a Dataset? If so, do I put the parallel processing in the beforeUpdate method? I’m wondering because I’m trying to do face tracking on an mp4 file and I’m losing about 12 FPS. Is parallel processing not the way to go about increasing FPS? Should I be using another method? Thank you, Ben.
From: Jonathon H. <js...@ec...> - 2016-06-17 09:30:25
Hi Mark, Have you looked at the tutorial (development version with some typos fixed: http://openimaj.github.io/openimaj/tutorial/)? Chapters 4 & 5 show different ways of comparing images (although there are many more). Chapter 12 shows one way of training classifiers from image features, although you should note that whilst this can be effective, much of the current research in this area has moved to using "deep-learning" technologies to get higher performance. Jon

> On 13 Jun 2016, at 15:51, Altaweel, Mark <m.a...@uc...> wrote:
> [snip]
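[As a concrete taste of the Chapter 4 approach Jon points to, comparing two images by global colour histogram looks roughly like this. A sketch following the tutorial; the file names are placeholders:

    import java.io.File;
    import org.openimaj.feature.DoubleFV;
    import org.openimaj.feature.DoubleFVComparison;
    import org.openimaj.image.ImageUtilities;
    import org.openimaj.image.MBFImage;
    import org.openimaj.image.pixel.statistics.HistogramModel;

    public class CompareImages {
        public static void main(String[] args) throws Exception {
            MBFImage im1 = ImageUtilities.readMBF(new File("image1.jpg"));
            MBFImage im2 = ImageUtilities.readMBF(new File("image2.jpg"));

            // 4x4x4-bin RGB histogram as a global descriptor for each image
            HistogramModel model = new HistogramModel(4, 4, 4);
            model.estimateModel(im1);
            DoubleFV h1 = model.histogram.clone();
            model.estimateModel(im2);
            DoubleFV h2 = model.histogram.clone();

            // lower score = more similar images
            System.out.println(h1.compare(h2, DoubleFVComparison.EUCLIDEAN));
        }
    }
]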
From: Altaweel, M. <m.a...@uc...> - 2016-06-13 14:51:37
Hi, I was evaluating OpenIMAJ for my work and was wondering what tools are available for comparing like images; that is, if you have one image, seeing how similar another image is to it through some kind of score. I was also interested in your machine learning tools, where training images are given and then unknown images are submitted to be evaluated, which could then be classified based on the training images. Which tools are best suited for this in the application? Thank you for your help. Mark
From: Jonathon H. <js...@ec...> - 2016-03-09 12:32:44
Hi Charlie, The reason that you’re seeing a white image is that the result of applying the SWT is an image in which each pixel value is set to the corresponding stroke width; in a normal FImage valid pixel values are between 0 and 1, however stroke widths will inevitably fall into a much larger range. When you display the image, pixel values outside the 0-1 range will be clipped (hence why you’re seeing a white image). To display the resultant image correctly do the following:

StrokeWidthTransform swt = new StrokeWidthTransform(true, new CannyEdgeDetector()); // canny sigma=1; thresholds automatically selected using same algorithm as the Matlab implementation
swt.processImage(input);
DisplayUtilities.display(swt.normaliseImage(input));

The stroke width transform itself doesn’t do text detection - it’s literally an image processing operator that constructs a map of stroke widths at every pixel. The classes to actually find text regions are in the current 1.4-SNAPSHOT versions of OpenIMAJ (in the org.openimaj.image.text.extraction.swt package of the image-feature-extraction dependency). In particular you want to use the SWTTextDetector class (javadoc here: http://openimaj.github.io/openimaj/apidocs/org/openimaj/image/text/extraction/swt/SWTTextDetector.html). There is example code in the sandbox that demonstrates finding letters, words and lines: https://github.com/openimaj/openimaj/blob/master/demos/sandbox/src/main/java/org/openimaj/demos/image/text/extraction/swt/SWTTest.java Hope that helps, Jon

> On 8 Mar 2016, at 21:07, Charlie Picorini <x.p...@fr...> wrote:
> [snip]
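[Following the pointers in Jon's reply, a minimal use of SWTTextDetector might look roughly like this. This is a sketch based on the linked SWTTest example; the analyseImage/getLines/getRegularBoundingBox calls are assumptions to be checked against the linked javadoc, and the file name is a placeholder:

    import java.io.File;
    import org.openimaj.image.DisplayUtilities;
    import org.openimaj.image.FImage;
    import org.openimaj.image.ImageUtilities;
    import org.openimaj.image.text.extraction.swt.LineCandidate;
    import org.openimaj.image.text.extraction.swt.SWTTextDetector;

    public class TextRegions {
        public static void main(String[] args) throws Exception {
            FImage image = ImageUtilities.readF(new File("scan.jpg")); // placeholder path

            SWTTextDetector detector = new SWTTextDetector();
            detector.analyseImage(image);

            // draw a box around each detected line of text
            for (LineCandidate line : detector.getLines())
                image.drawShape(line.getRegularBoundingBox(), 1f);

            DisplayUtilities.display(image);
        }
    }
]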
From: Charlie P. <x.p...@fr...> - 2016-03-08 21:08:01
Dear Team, First of all thanks for sharing this java library (with the great tutorials and documentation) with which I'm discovering the world of computer vision! Indeed I'd like to see if there are any noticeable performance (and accuracy) improvements by applying text detection before OCR (the corpus I am using is made of scanned images). To practice I am using http://i.stack.imgur.com/EingC.jpg and I'd like to apply the Stroke Width Transform algorithm on it, but first, the image I get now after applying SWT is all white (I can't find the right parameter values, although it had worked just beforehand), and second, I am lost on how to get the text regions. Here is the code I used (inspired by https://sourceforge.net/p/openimaj/discussion/general/thread/a56f1a03/):

Main():

URL u = new URL("http://i.stack.imgur.com/EingC.jpg");
FImage grayImage = ImageUtilities.readF(u).normalise().process(new ResizeProcessor(620));
DisplayUtilities.display(grayImage);
applySWTOnImage(grayImage);

applySWTOnImage():

private void applySWTOnImage(FImage input) {
    float threshCannyLow = (float) 0.1;
    StrokeWidthTransform swt = new StrokeWidthTransform(true, threshCannyLow, Math.min(threshCannyLow * 3, 1), (float) 0.9);
    swt.processImage(input);
    DisplayUtilities.display(input);
}

Regarding the parameters, somewhere in the OpenCV documentation it was written that the higher threshold in Canny is recommended to be 3 times the lower (I can't find it anymore), which is why I set them as such in the constructor. As it worked and does not anymore, I could only see the result of the SWT algorithm a limited number of times, and I am stuck on what to do next to get the text boundaries as shown in the SWT paper (http://yoni.wexlers.org/papers/2010TextDetection.pdf) or wherever people show their results with SWT. So my questions are: why has the result of applying SWT become all white, or, as that may be hard to answer, how should the SWT parameters be chosen to get something, and could you give me some hints on the steps to follow after the SWT? Thanks a lot for helping and keep up this great work! Regards, CP
From: Joseph M. <mui...@gm...> - 2015-06-26 20:55:35
I want to align several faces I have at my disposal here using OpenIMAJ. I want to read a JPEG face photo, align it, and finally save it as a JPEG after alignment. Here is where I am stuck; see below.

public class FaceImageAlignment {
    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws IOException {
        // TODO code application logic here
        BufferedImage img = null;
        img = ImageIO.read(new File("D:/face_test.jpg"));
        // How to align the face image using OpenIMAJ?
        // This is where I am stuck on doing face alignment. I tried doing the following:
        AffineAligner imgAlign = new AffineAligner();
        // but I could not figure out how to do face alignment with it
        BufferedImage imgAligned = new BufferedImage(/* I will need to put the aligned image here as a BufferedImage */);
        File f = new File("D:\\face_aligned.jpg");
        ImageIO.write(imgAligned, "JPEG", f);
    }
}

What code do I need to have there to align face_test.jpg into face_aligned.jpg? I actually posted this question to Stack Exchange here <http://stackoverflow.com/questions/31074580/face-alignment-using-openimaj-api-libraries> but I haven't got any help yet since then. I discovered this mailing list and decided to post it here.
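[No reply is archived, but one way to make this work is to stay inside OpenIMAJ's own image types rather than BufferedImage. A sketch, assuming the FKEFaceDetector/AffineAligner pairing (AffineAligner aligns faces detected with facial keypoints); the file paths follow the original post:

    import java.io.File;
    import java.util.List;
    import org.openimaj.image.FImage;
    import org.openimaj.image.ImageUtilities;
    import org.openimaj.image.processing.face.alignment.AffineAligner;
    import org.openimaj.image.processing.face.detection.keypoints.FKEFaceDetector;
    import org.openimaj.image.processing.face.detection.keypoints.KEDetectedFace;

    public class FaceImageAlignment {
        public static void main(String[] args) throws Exception {
            FImage img = ImageUtilities.readF(new File("D:/face_test.jpg"));

            // detect faces along with their facial keypoints; the keypoints
            // are what the aligner uses to warp the face to a canonical pose
            List<KEDetectedFace> faces = new FKEFaceDetector().detectFaces(img);

            if (!faces.isEmpty()) {
                FImage aligned = new AffineAligner().align(faces.get(0));
                ImageUtilities.write(aligned, new File("D:/face_aligned.jpg"));
            }
        }
    }
]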
From: Jonathon H. <js...@ec...> - 2015-06-16 13:21:21
Try this:

ByteArrayInputStream bais = new ByteArrayInputStream(decodedBytes);
LocalFeatureList<ByteDSIFTKeypoint> decodedFeatures = MemoryLocalFeatureList.read(bais, ByteDSIFTKeypoint.class);

(obviously this will only work for lists of features & is heavily optimised for this use case)

> On 16 Jun 2015, at 14:15, chalitha udara Perera <cha...@gm...> wrote:
> [snip]
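[Putting the two halves of this thread together, the complete round trip looks like this. A sketch; the features parameter stands for the list returned by the dense SIFT extractor's getByteKeypoints():

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import org.openimaj.feature.local.list.LocalFeatureList;
    import org.openimaj.feature.local.list.MemoryLocalFeatureList;
    import org.openimaj.image.feature.dense.gradient.dsift.ByteDSIFTKeypoint;
    import org.openimaj.io.IOUtils;

    public class FeatureRoundTrip {
        public static byte[] serialize(LocalFeatureList<ByteDSIFTKeypoint> features) throws IOException {
            // writes the whole list in OpenIMAJ's binary format in one call
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            IOUtils.writeBinary(baos, features);
            return baos.toByteArray();
        }

        public static LocalFeatureList<ByteDSIFTKeypoint> deserialize(byte[] bytes) throws IOException {
            // reads the list back; note this is OpenIMAJ's own binary format,
            // not java.io object serialization (hence the StreamCorruptedException
            // when ObjectInputStream is used instead)
            return MemoryLocalFeatureList.read(new ByteArrayInputStream(bytes), ByteDSIFTKeypoint.class);
        }
    }
]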
From: chalitha u. P. <cha...@gm...> - 2015-06-16 13:15:21
Thanks for the quick response, Sir. It works fine for serialization, but how can I retrieve the list of local features again by deserializing the byte array? I tried the following code, but it gives a corrupted stream exception: java.io.StreamCorruptedException: invalid stream header: 4B505400

ByteArrayInputStream bais = new ByteArrayInputStream(decodedBytes);
ObjectInputStream ois = new ObjectInputStream(bais);
LocalFeatureList<ByteDSIFTKeypoint> decodedFeatures = (LocalFeatureList<ByteDSIFTKeypoint>) ois.readObject();

Looking at the OpenIMAJ API, there is a method to deserialize binaries, but how do I use it to get a LocalFeatureList<ByteDSIFTKeypoint> from the binary? Thanks, Chalitha

On Tue, Jun 16, 2015 at 5:59 PM, Jonathon Hare <js...@ec...> wrote:
> [snip]

-- J.M Chalitha Udara Perera, Department of Computer Science and Engineering, University of Moratuwa, Sri Lanka
From: Jonathon H. <js...@ec...> - 2015-06-16 12:29:27
Do something like this (it will automatically write the complete list of features without the need to iterate through them):

ByteArrayOutputStream baos = new ByteArrayOutputStream();
org.openimaj.io.IOUtils.writeBinary(baos, features);
byte[] bytes = baos.toByteArray();

> On 16 Jun 2015, at 11:19, chalitha udara Perera <cha...@gm...> wrote:
> [snip]
From: chalitha u. P. <cha...@gm...> - 2015-06-16 10:19:26
Hi All, I have extracted a LocalFeatureList<ByteDSIFTKeypoint> using pyramid dense SIFT in OpenIMAJ. As indicated in the API documentation, ByteDSIFTKeypoint implements the Java Serializable interface, and therefore I would like to serialize the keypoints for later use. I tried to serialize the keypoints to get a byte array, but when retrieving the keypoints from the serialized object, it loses all the field values associated with them. I am using the following code for serialization:

ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(baos);
LocalFeatureList<ByteDSIFTKeypoint> features = psift.getByteKeypoints();
for (ByteDSIFTKeypoint byteDSIFTKeypoint : features) {
    oos.writeObject(byteDSIFTKeypoint);
    byte[] bytes = baos.toByteArray();
}

Any help regarding how to correctly serialize keypoints is greatly appreciated. Thanks, Chalitha

-- J.M Chalitha Udara Perera, Department of Computer Science and Engineering, University of Moratuwa, Sri Lanka