From: Jinman K. <jin...@sy...> - 2013-05-01 10:01:29
|
Falk,

The ability to dynamically select the features in the visualisation is a core requirement for any image retrieval. Although the feature selection can be done per feature, this should not be the rule - the ability to select N features at once will be important.

A 100K dataset is a realistic sample size for our intended medical image retrieval, and indeed the LIDC is ~1K. However, your numbers are highly dependent on the number of features. A single patient's image data in the LIDC consists of 100s of image slices; therefore, if we want raw features, we could be looking at a massive feature space. Your approach with arrays would work well under a reasonable data size but may not scale well and could restrict our feature extraction options. Nevertheless, your performance sounds promising for our application.

The features that can be extracted from CT are vast; see for example L. Dettori and L. Semler, 'A comparison of wavelet, ridgelet, and curvelet-based texture classification algorithms in computed tomography', Computers in Biology and Medicine, 2007.

There are many ways to compare varying vector sizes; we can derive rules and a penalty function, for example. The current VAMIR used a graph-matching algorithm to represent the features as nodes; varying node sizes were compared via a penalty function.

Now, a novel approach would be to have an automated feature extraction process such that the most relevant features, based on the query, could be represented in the information visualisation output. This can be further improved with dynamic, user-driven feature selection, so that feature selection becomes a visual analytics problem!

Hope this helps,

Jinman

-----Original Message----- From: ne...@in... [mailto:ne...@in...] Sent: Friday, 26 April 2013 7:35 PM To: sca...@li... 
Subject: Re: [Scaffoldhunter-devel] GSoC: Scaffold Hunter for medical image retrieval

Hi,

from what I understand, the motivation for the project is that a user should be able to dynamically choose which features should be visualized, as he might be interested in only a few aspects while deeming others unimportant with regard to his question. I thought about some key points that need to be considered.

The visualization relies on absolute distances between the query's and the returned images' features, so these distances have to be calculated whenever a new feature is chosen to be included in the visualization (as opposed to only sorting the results). However, in the majority of cases this will only result in a single subtraction per database entry (e.g. query_feature_5 - result_feature_5 for all result images). In order to do some rough performance testing, I have simulated a database of 100,000 entries that mimics the structure of the current prototype, which essentially stores the feature values as doubles in nested ArrayLists. Comparing a random query to all other entries in terms of one random feature never took more than 70 milliseconds on a below-average machine. Substituting the inner ArrayList for an actual array led to a more than tenfold decrease, so there is already a simple way to optimize the underlying data structures. Also, whereas I used 100,000 entries, the LIDC collection you proposed only comprises about 1000 images, and a simulated query for this number dropped to less than 0.1 milliseconds. I am aware that this kind of benchmarking is not likely to be 100% precise when taken out of context, but still this issue does not really seem like a bottleneck at the moment. However, I am not sure how to handle a "classic" retrieval task that uses all features. In the past these values were not computed during runtime.

I think vector-based methods could only be applied when disregarding the fact that cases in the database contain a different number of tumours (unlike a graph-based approach), is that right? Comparing an entry with 5 tumours to a query with only 3 tumours could then be done by averaging the respective tumour features (e.g. their homogeneity). Still, this would result in a high number of operations if we take into account around 20 different features per image. Again, I would first have a look at how to improve the current ad-hoc data structure and test optimised implementations like the Trove library (since the underlying data is homogeneous and stored as primitive types).

Furthermore, the possibility to select features would have to be integrated in the user interface in a smart way (this may sound trivial, but looking at the code that's hard to say), and one has to think about what to display in the beginning or which feature combinations to propose to a user, although the latter is technically not about coding (?).

I had a look at the LIDC database. The annotations are present in XML files, for which I have started to write a simple reader. It should be possible to integrate them in VAMIR easily if features can be extracted. I am not sure about the meaning of the features used in the annotations or how to normalize them; I guess a corresponding publication will shed light on that. But in general I think this project will be justified by the use of more extensive data than the 50 images that are used now.

Best,
Falk

On 23.04.2013 14:46, Jinman Kim wrote: > Hi Falk, very happy to hear - can you tell us more about how you want > to include 'visual features' to your project? > > For this project, we want to use the public Lung cancer database (LIDC) which already has many visual features such as ROIs, types of lymph nodes, sphericity, volume, etc. Will you use these or expand them? 
> > http://imaging.cancer.gov/programsandresources/informationsystems/lidc > > > Jinman > > > -----Original Message----- > From: Karsten Klein [mailto:kar...@ud...] > Sent: Tuesday, April 23, 2013 2:33 PM > To: sca...@li... > Subject: Re: [Scaffoldhunter-devel] GSoC: Scaffold Hunter for medical image retrieval > > Am 21.04.2013 22:19, schrieb ne...@in...: >> Hi, >> >> I am currently a graduate student of medical engineering at the >> University of Lübeck, Germany. From October 2012 to March 2013, I >> conducted an internship at the University of Sydney with the task of >> introducing methods of visual analytics for medical image retrieval >> into Scaffold Hunter. Under the supervision of the now GSoC mentors >> Dr. Jinman Kim and Dr. Karsten Klein, I built a prototype of a plugin >> that will now serve as a starting point for the GSoC project ideas >> related to image retrieval. As the first results were very promising >> and the approach is novel, I am highly motivated to continue my >> participation in this project in the scope of this year's Summer of >> Code and plan to apply for developing efficient techniques of visual >> feature selection. Hope to work together with you soon! >> >> Best regards, >> Falk Nette >> >> >> >> --------------------------------------------------------------------- >> - >> -------- Precog is a next-generation analytics platform capable of >> advanced analytics on semi-structured data. The platform includes >> APIs for building apps and a phenomenal toolset for data science. >> Developers can use our toolset for easy data analysis & visualization. >> Get a free account! >> http://www2.precog.com/precogplatform/slashdotnewsletter >> _______________________________________________ >> Scaffoldhunter-devel mailing list >> Sca...@li... >> https://lists.sourceforge.net/lists/listinfo/scaffoldhunter-devel >> > > Hi Falk, > > thanks for your interest. 
Good to see you were not too bored with the work on Scaffold Hunter and you are still interested in improving the tool ;-). > If you have any further questions please post them here. > > Best, > Karsten > > > ------------------------------------------------------------------------------ Try New Relic Now & We'll Send You this Cool Shirt New Relic is the only SaaS-based application performance monitoring service that delivers powerful full stack analytics. Optimize and monitor your browser, app, & servers with just a few lines of code. Try New Relic and get this awesome Nerd Life shirt! http://p.sf.net/sfu/newrelic_d2d_apr _______________________________________________ Scaffoldhunter-devel mailing list Sca...@li... https://lists.sourceforge.net/lists/listinfo/scaffoldhunter-devel |
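A minimal sketch of the penalty-function comparison Jinman describes above (matched features contribute their difference; the size mismatch between two cases is penalised). This is an illustrative assumption of how such a rule might look, not VAMIR's actual graph-matching code; the flat per-node penalty and all names are hypothetical:

```java
// Hypothetical sketch of a penalty-based distance between feature sets of
// different sizes (e.g. cases with different numbers of tumours).
// Not VAMIR's actual algorithm; the flat penalty constant is illustrative.
public class PenaltyDistance {

    /**
     * L1 distance over the features both sets share, plus a fixed penalty
     * for every feature that has no counterpart in the other set.
     */
    public static double distance(double[] a, double[] b, double unmatchedPenalty) {
        int common = Math.min(a.length, b.length);
        double d = 0.0;
        for (int i = 0; i < common; i++) {
            d += Math.abs(a[i] - b[i]);                        // matched pairs
        }
        d += unmatchedPenalty * Math.abs(a.length - b.length); // unmatched remainder
        return d;
    }

    public static void main(String[] args) {
        double[] query = {0.2, 0.5, 0.9};           // query case, 3 tumour features
        double[] entry = {0.1, 0.5, 0.8, 0.7, 0.3}; // database case, 5 tumour features
        System.out.println(distance(query, entry, 1.0)); // ~2.2: 0.1 + 0 + 0.1 + 2*1.0
    }
}
```

A real graph-matching variant would also search over node correspondences instead of matching by position, but the penalty term plays the same role.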
From: Ashnil K. <as...@it...> - 2013-04-30 02:32:07
|
Hi Dushyant, The intent of these projects is to develop a method for creating queries and for analysing retrieved images using different feature priorities. In 'Interactive Image Query Formulation' we could create a tool, as an extension to Scaffold Hunter, that allows the user to apply different image processing algorithms to construct his query, e.g. segmentation to select different regions of interest, interactively building a graph-based query. In 'Image Visual Feature Selection', we wish to extend Scaffold Hunter to analyse the retrieved images based on different feature sets/graph structures that are interactively adjusted to best fit the user's intent. The idea behind both of these projects is that we do not assume that there is a "best" feature set; instead, the query and features are user-dependent. As such, an interaction method is necessary for analysis and feedback to the retrieval algorithm. We intend to use the LIDC data set (link below). You will be dealing mainly with 3D CT images. The feature set will vary depending on the user, but you can expect to deal with standard image features for describing texture, region-of-interest properties, etc. We are also interested in extracting spatial relationships between regions of interest and representing these as graphs. Hope this helps. Regards, Ashnil |
From: Dushyant G. <goy...@gm...> - 2013-04-26 21:15:59
|
Dear Sir, Thanks a lot for your reply. Yes, my major interest is in the Computer Vision domain. However, pattern recognition and machine learning also go together with it, so I have acquired experience in these related fields as well. I have joined the mailing lists. In the meantime I am reading about the Scaffold Hunter project. Could you tell me more about the two projects - "Interactive Image Query Formation" and "Image Visual Feature Selection"? I have some questions: What kind of medical image data is this? What type of visual features are to be extracted? Are the features to be extracted automatically (fixed), or does the user select features using the mouse? Hope to hear from you soon. Regards, Dushyant

On Thu, Apr 18, 2013 at 8:26 PM, <ash...@sy...> wrote: Dear Dushyant, Thank you for your interest in our project. I apologise for the lateness of the reply, as I am currently on leave. Your resume and skills, especially in the field of computer vision, are impressive. The projects you referenced are more focused on the human aspect of image retrieval, e.g., visualisation, how best to formulate a query that produces the best results, how to communicate to users the most discriminative features. This will involve a combination of data analysis (using clustering etc.) and visualisation using Scaffold Hunter. I would be happy to discuss specific project ideas with you, especially if you have any of your own. I suggest that if you are interested, you read a bit more about Scaffold Hunter at http://scaffoldhunter.sourceforge.net/ . I would also recommend that you sign up for the mailing lists at http://sourceforge.net/mail/?group_id=261360 if you have questions about the projects. Thank you very much for your interest and I look forward to hearing from you soon. 
Kind regards, Ashnil Kumar

Begin forwarded message: From: Dushyant Goyal <goy...@gm...> Date: 17 April 2013 5:41:08 AEST To: jin...@sy..., as...@it... Subject: GSoC'13 proposal discussion

Dear Mentors, I am Dushyant Goyal, from India. I am very interested in working on "Interactive Image Query Formation" or "Image Visual Feature Selection". For the past three years I have been learning and working in the field of image and video processing, and I have six international publications in these fields. You can find my details and other important information in the attached document "Application.pdf". I successfully completed the Google Summer of Code 2012; my project was "Multithreaded implementation of Image Similarity Metrics using GPU", for which I used OpenCV and OpenCL. My team won "The 2nd competition on counter measures to 2D facial spoofing attacks", and my algorithm placed 3rd in a Kitchen Scene Context based Gesture Recognition contest. In both of these contests I used OpenCV for programming, along with machine learning tools like SVM and HMM for training and classification of extracted features. Feature selection was an inherent part of these projects. I am very interested to extend my knowledge of image processing and machine learning to one of the projects for Scaffold Hunter from the "Interfaces and Interaction Track". Looking forward to further discussion with you about it. Please reply to this email for further communication. Thank you. Sincerely, Dushyant Goyal

---------------------------------------------------------------- This message was sent using IMP, the Internet Messaging Program. |
From: <as...@it...> - 2013-04-26 13:50:35
|
Hi Girish, As you mentioned, running different classifiers depending on user input could be one approach to real-time input. Image retrieval suffers from a problem called the "semantic gap": the retrieved images may not meet the intent of the user. In addition, different users may have different ideas of what is similar, e.g. by tumour location as opposed to tumour volume. The idea behind the feedback loop is to refine the search parameters in an iterative manner until the user is satisfied with what is retrieved. Iteration 0 would be the initial query formulation in which, using your suggestion as an example, the user would select the initial set of classifiers to apply. We would also like to include as part of the query formulation process the ability for the user to select regions of interest, or to exclude parts of the region from the query (e.g., limit image similarity to the lung region). I would be interested to hear your thoughts on this. Regards, Ashnil |
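One concrete way to picture the iterative refinement Ashnil describes is a per-feature weight vector that the user adjusts between iterations, with results re-scored each time. A minimal sketch under that assumption (feature names and weights are illustrative, not part of any existing Scaffold Hunter API):

```java
// Sketch of user-driven feature weighting in a feedback loop: each iteration
// the user re-weights features (e.g. tumour location vs. volume) and the
// retrieved results are re-scored. All names and values are illustrative.
public class WeightedQuery {

    /** Weighted L1 distance; a weight of 0 deselects that feature entirely. */
    public static double distance(double[] query, double[] entry, double[] weights) {
        double d = 0.0;
        for (int i = 0; i < query.length; i++) {
            d += weights[i] * Math.abs(query[i] - entry[i]);
        }
        return d;
    }

    public static void main(String[] args) {
        double[] q = {0.4, 0.9}; // {normalised location score, normalised volume}
        double[] e = {0.5, 0.1};
        // Iteration 0: the user only cares about tumour location
        System.out.println(distance(q, e, new double[]{1.0, 0.0}));
        // After feedback: volume is weighted in as well, changing the ranking
        System.out.println(distance(q, e, new double[]{1.0, 1.0}));
    }
}
```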
From: <ne...@in...> - 2013-04-26 09:35:53
|
Hi,

from what I understand, the motivation for the project is that a user should be able to dynamically choose which features should be visualized, as he might be interested in only a few aspects while deeming others unimportant with regard to his question. I thought about some key points that need to be considered.

The visualization relies on absolute distances between the query's and the returned images' features, so these distances have to be calculated whenever a new feature is chosen to be included in the visualization (as opposed to only sorting the results). However, in the majority of cases this will only result in a single subtraction per database entry (e.g. query_feature_5 - result_feature_5 for all result images). In order to do some rough performance testing, I have simulated a database of 100,000 entries that mimics the structure of the current prototype, which essentially stores the feature values as doubles in nested ArrayLists. Comparing a random query to all other entries in terms of one random feature never took more than 70 milliseconds on a below-average machine. Substituting the inner ArrayList for an actual array led to a more than tenfold decrease, so there is already a simple way to optimize the underlying data structures. Also, whereas I used 100,000 entries, the LIDC collection you proposed only comprises about 1000 images, and a simulated query for this number dropped to less than 0.1 milliseconds. I am aware that this kind of benchmarking is not likely to be 100% precise when taken out of context, but still this issue does not really seem like a bottleneck at the moment. However, I am not sure how to handle a "classic" retrieval task that uses all features. In the past these values were not computed during runtime.

I think vector-based methods could only be applied when disregarding the fact that cases in the database contain a different number of tumours (unlike a graph-based approach), is that right? Comparing an entry with 5 tumours to a query with only 3 tumours could then be done by averaging the respective tumour features (e.g. their homogeneity). Still, this would result in a high number of operations if we take into account around 20 different features per image. Again, I would first have a look at how to improve the current ad-hoc data structure and test optimised implementations like the Trove library (since the underlying data is homogeneous and stored as primitive types).

Furthermore, the possibility to select features would have to be integrated in the user interface in a smart way (this may sound trivial, but looking at the code that's hard to say), and one has to think about what to display in the beginning or which feature combinations to propose to a user, although the latter is technically not about coding (?).

I had a look at the LIDC database. The annotations are present in XML files, for which I have started to write a simple reader. It should be possible to integrate them in VAMIR easily if features can be extracted. I am not sure about the meaning of the features used in the annotations or how to normalize them; I guess a corresponding publication will shed light on that. But in general I think this project will be justified by the use of more extensive data than the 50 images that are used now.

Best,
Falk

On 23.04.2013 14:46, Jinman Kim wrote: > Hi Falk, very happy to hear - can you tell us more about how you want to include 'visual features' to your project? > > For this project, we want to use the public Lung cancer database (LIDC) which already has many visual features such as ROIs, types of lymph nodes, sphericity, volume, etc. Will you use these or expand them? > > http://imaging.cancer.gov/programsandresources/informationsystems/lidc > > > Jinman > > > -----Original Message----- > From: Karsten Klein [mailto:kar...@ud...] > Sent: Tuesday, April 23, 2013 2:33 PM > To: sca...@li... 
> Subject: Re: [Scaffoldhunter-devel] GSoC: Scaffold Hunter for medical image retrieval > > On 21.04.2013 22:19, ne...@in... wrote: >> Hi, >> >> I am currently a graduate student of medical engineering at the >> University of Lübeck, Germany. From October 2012 to March 2013, I >> conducted an internship at the University of Sydney with the task of >> introducing methods of visual analytics for medical image retrieval >> into Scaffold Hunter. Under the supervision of the now GSoC mentors >> Dr. Jinman Kim and Dr. Karsten Klein, I built a prototype of a plugin >> that will now serve as a starting point for the GSoC project ideas >> related to image retrieval. As the first results were very promising >> and the approach is novel, I am highly motivated to continue my >> participation in this project in the scope of this year's Summer of >> Code and plan to apply for developing efficient techniques of visual >> feature selection. Hope to work together with you soon! >> >> Best regards, >> Falk Nette >> > > Hi Falk, > > thanks for your interest. Good to see you were not too bored with the work on Scaffold Hunter and you are still interested in improving the tool ;-). > If you have any further questions please post them here. 
> > Best, > Karsten > |
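Falk's micro-benchmark above (one subtraction per database entry when a new feature is selected) can be re-created roughly as follows. The 100,000 × 20 layout mirrors his description; exact timings will of course vary by machine, and the class and method names are illustrative:

```java
// Rough re-creation of the described benchmark: selecting a new feature for
// the visualization costs one subtraction per database entry. Sizes mirror
// the description (100,000 entries, ~20 features per image).
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class FeatureBench {

    /** Distance of every entry to the query on a single feature index f. */
    public static double[] featureDistances(List<double[]> db, double queryValue, int f) {
        double[] out = new double[db.size()];
        for (int i = 0; i < db.size(); i++) {
            out[i] = Math.abs(db.get(i)[f] - queryValue); // one subtraction per entry
        }
        return out;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        List<double[]> db = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {         // simulated database
            double[] features = new double[20];     // plain array, not ArrayList<Double>
            for (int f = 0; f < features.length; f++) features[f] = rnd.nextDouble();
            db.add(features);
        }
        long t0 = System.nanoTime();
        double[] d = featureDistances(db, 0.5, 5);  // user selects feature 5
        System.out.printf("%d distances in %.2f ms%n", d.length,
                          (System.nanoTime() - t0) / 1e6);
    }
}
```

Using an inner `double[]` instead of `ArrayList<Double>` (as above) avoids per-element unboxing, which is presumably where the reported tenfold speed-up comes from; primitive-collection libraries such as Trove target the same overhead.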
From: Girish M. <gir...@gm...> - 2013-04-25 18:20:29
|
Hi Ashnil, Sorry for the confusion. I am interested in the "Interactive Image Query Formulation" project <http://scaffoldhunter.sourceforge.net/wiki/doku.php?id=project_ideas#interactive_image_query_formation>. Regarding the real-time input, I was thinking we could train different classifiers for the various features, and at run time, depending on the user's input, we run only the necessary classifiers. Do you think this would work? I don't think I understood the "expert user feedback loop" aspect. Would the information/annotations provided by the expert users not be considered a part of the dataset ground truth labels on which the system would be trained? Thanks, Girish On Thu, Apr 25, 2013 at 11:22 PM, <sca...@li...> wrote: > > Message: 2 > Date: Thu, 25 Apr 2013 11:47:18 +1000 > From: as...@it... > Subject: Re: [Scaffoldhunter-devel] GSOC project: Interactive Image > Query Formation > To: sca...@li... > Message-ID: <201...@ww...> > Content-Type: text/plain; charset=UTF-8; DelSp="Yes"; format="flowed" > > Hi Girish, > > Thank you for your interest in this project and apologies for the > lateness of the response. > > Your subject line referred to the "Interactive Image Query > Formulation" project while the link in the body of your message was to > the "Image Visual Feature Selection" project. Could you please clarify > which of the two you were interested in? > > In both of these projects we are particularly interested in how the > knowledge of expert users (such as radiologists in medical image > retrieval) could be incorporated as part of the query formulation and > feature selection processes. This could potentially be part of a > feedback loop. As such, we would be dealing with traditional image > features alongside semantic features/interpretation as specified by an > individual. We are interested in using ScaffoldHunter as a platform > for enabling such interactions with the retrieval process. 
As such, it > would be interesting to see if your suggested approach could be > adapted to incorporate real-time input. > > For this work we would be looking at using public data sets like LIDC > (link below). > > Please let me know if you have any further questions or would like > extra information about other aspects of the project. > > Regards, > Ashnil > > LIDC: > http://imaging.cancer.gov/programsandresources/informationsystems/lidc > > ---------------------------------------------------------------- > This message was sent using IMP, the Internet Messaging Program. > > > > |
From: Girish M. <gir...@gm...> - 2013-04-25 17:52:43
|
Hi Jinman, Thanks for the feedback! :) Regarding retrieval vs. classification, I meant we could perform retrieval the same way as done in nearest neighbor (either exact or approximate) based classification, where we first find the most similar image. We can skip the classification step (where we would label the query image with the class of the most similar image) as we are interested only in finding the most similar image(s). I missed your point about it being impossible to translate classification to retrieval. Can you explain a bit more? Regarding the real-time feature selection, I was thinking we could perhaps train different classifiers for the various features, and at run time, depending on the user's input, we run only the necessary classifiers. Since each of the classifiers would be based on a super fast binary hashing (Hamming distance) based calculation, we can guarantee real-time performance for any of the classifiers or a combination of them (if the user selects similarity based on 2 features, say). What do you think? Thanks, Girish On Thu, Apr 25, 2013 at 11:15 AM, Jinman Kim <jin...@sy...> wrote: > Hi Girish, > > Thanks for your interest and sharing your funny slides - gold membership > is some achievement! It's good to see you applying state-of-the-art > algorithms. I think you can build up to a strong proposal. Your proposal > is interesting - it is impossible to translate classification to retrieval > - may I suggest you explore these two points? > > * In this project, we want to in real-time change the feature to use in > the retrieval, e.g. a dynamic feature selection / weight > * Your idea of coarse-to-fine filtering could work - one could use a > classifier to narrow the large database and then use more conventional > image-to-image matching algorithms.... > > best > > Jinman > ------------------------------ > *From:* Girish Malkarnenkar [gir...@gm...] > *Sent:* Wednesday, 24 April 2013 2:06 AM > *To:* sca...@li... 
> *Subject:* [Scaffoldhunter-devel] GSOC project: Interactive Image Query > Formation > > Hi, > > I found the description of this project<http://scaffoldhunter.sourceforge.net/wiki/doku.php?id=project_ideas#image_visual_feature_selection>pretty interesting. During my summer internship last year, I implemented a > supervised binary hashing based approach for a database retrieval system > for pose estimation using simple nearest neighbors. This project involved > implementing state-of-the-art ideas from 3 IEEE CVPR 2012 (Computer > Vision and Pattern Recognition) papers which ranged from the state of the > art methods for locality sensitive hashing to fast nearest neighbor search > by non-linear embedding and fast search in hamming space with multi index > hashing. A short presentation about my implementation can be seen here<https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxnaXJpc2htYWxrYXJuZW5rYXJ8Z3g6N2I4YjZkYTQ1YjA0NTFkMQ> > . > > I was wondering if a similar approach could be used here where the > problem is formulated as a image retrieval task rather than a > classification task. Since we want a real time querying experience, a > binary hashing method (such as Spherical Hashing<http://sglab.kaist.ac.kr/Spherical_Hashing/>) > might be very useful. Assuming the task to be finding the most similar > image from a database of N images, we can train the binary hashing > functions on the features (we can use either GIST<http://people.csail.mit.edu/torralba/code/spatialenvelope/>, > SIFT <http://www.cs.ubc.ca/~lowe/keypoints/>, SURF<http://www.vision.ee.ethz.ch/~surf/>or a high dimensional combination of these using a bag of visual words > approach). The binary hashing training algorithm will take care of figuring > out which are the important dimensions/features from this combination of > features. 
Given the high dimensional representation of each image, apart > from using binary hashing which is a form of approximate k-Nearest > Neighbors, we can also use an exact fast k-NN method such as this<http://research.yoonho.info/fnnne/>, > which would still be much faster than a normal k-NN approach). The > approximate k-NN & the exact k-NN can be used in a filter & refine method > where we first obtain the X most similar image to the query image from the > N images in the database using the (super fast) binary hashing approach and > then re-rank these using the exact k-NN algorithm. > > I would appreciate any comments/suggestions on this proposal. > > Thanks, > Girish > http://www.girishmalkarnenkar.com/ > > |
From: <as...@it...> - 2013-04-25 07:54:02
|
Hi Girish, Thank you for your interest in this project and apologies for the lateness of the response. Your subject line referred to the "Interactive Image Query Formulation" project while the link in the body of your message was to the "Image Visual Feature Selection" project. Could you please clarify which of the two you were interested in? In both of these projects we are particularly interested in how the knowledge of expert users (such as radiologists in medical image retrieval) could be incorporated as part of the query formulation and feature selection processes. This could potentially be part of a feedback loop. As such, we would be dealing with traditional image features alongside semantic features/interpretation as specified by an individual. We are interested in using ScaffoldHunter as a platform for enabling such interactions with the retrieval process. As such, it would be interesting to see if your suggested approach could be adapted to incorporate real-time input. For this work we would be looking at using public data sets like LIDC (link below). Please let me know if you have any further questions or would like extra information about other aspects of the project. Regards, Ashnil LIDC: http://imaging.cancer.gov/programsandresources/informationsystems/lidc |
From: Jinman K. <jin...@sy...> - 2013-04-25 06:04:59
|
Hi Girish, Thanks for your interest and for sharing your funny slides - gold membership is some achievement! It's good to see you applying state-of-the-art algorithms. I think you can build this up into a strong proposal. Your proposal is interesting - translating classification to retrieval is not straightforward - may I suggest you explore these two points? * In this project, we want to change, in real time, the features used in the retrieval, e.g. dynamic feature selection / weighting * Your idea of coarse-to-fine filtering could work - one could use a classifier to narrow down the large database and then use more conventional image-to-image matching algorithms.... best Jinman ________________________________ From: Girish Malkarnenkar [gir...@gm...] Sent: Wednesday, 24 April 2013 2:06 AM To: sca...@li... Subject: [Scaffoldhunter-devel] GSOC project: Interactive Image Query Formation Hi, I found the description of this project<http://scaffoldhunter.sourceforge.net/wiki/doku.php?id=project_ideas#image_visual_feature_selection> pretty interesting. During my summer internship last year, I implemented a supervised binary hashing based approach for a database retrieval system for pose estimation using simple nearest neighbors. This project involved implementing state-of-the-art ideas from 3 IEEE CVPR 2012 (Computer Vision and Pattern Recognition) papers, ranging from state-of-the-art methods for locality sensitive hashing to fast nearest neighbor search by non-linear embedding and fast search in Hamming space with multi-index hashing. A short presentation about my implementation can be seen here<https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxnaXJpc2htYWxrYXJuZW5rYXJ8Z3g6N2I4YjZkYTQ1YjA0NTFkMQ>. I was wondering if a similar approach could be used here, where the problem is formulated as an image retrieval task rather than a classification task. 
Since we want a real-time querying experience, a binary hashing method (such as Spherical Hashing<http://sglab.kaist.ac.kr/Spherical_Hashing/>) might be very useful. Assuming the task is to find the most similar image from a database of N images, we can train the binary hashing functions on the features (we can use GIST<http://people.csail.mit.edu/torralba/code/spatialenvelope/>, SIFT<http://www.cs.ubc.ca/~lowe/keypoints/>, SURF<http://www.vision.ee.ethz.ch/~surf/>, or a high-dimensional combination of these using a bag-of-visual-words approach). The binary hashing training algorithm will take care of figuring out which are the important dimensions/features from this combination of features. Given the high-dimensional representation of each image, apart from using binary hashing, which is a form of approximate k-Nearest Neighbors, we can also use an exact fast k-NN method such as this<http://research.yoonho.info/fnnne/>, which would still be much faster than a normal k-NN approach. The approximate k-NN and the exact k-NN can be used in a filter & refine method where we first obtain the X most similar images to the query image from the N images in the database using the (super fast) binary hashing approach and then re-rank these using the exact k-NN algorithm. I would appreciate any comments/suggestions on this proposal. Thanks, Girish http://www.girishmalkarnenkar.com/ |
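The real-time, dynamic feature selection/weighting Jinman asks for in the reply above can be pictured as a per-feature weight vector applied at query time, so the user can re-weight or disable features without re-indexing the database. A minimal sketch, with illustrative function and variable names that are not part of Scaffold Hunter:

```python
import math

# Query-time feature weighting: the distance used for ranking takes a
# weight per feature dimension. Setting a weight to 0 removes that
# feature from the comparison entirely; a UI slider could drive these
# weights interactively. The vectors below are toy two-feature examples.

def weighted_distance(a, b, weights):
    """Weighted Euclidean distance between two feature vectors."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

vec_query, vec_db = [1.0, 5.0], [4.0, 5.0]
d_all   = weighted_distance(vec_query, vec_db, [1.0, 1.0])  # both features active
d_only1 = weighted_distance(vec_query, vec_db, [0.0, 1.0])  # feature 0 disabled
```

Because only the weight vector changes between queries, the stored feature vectors stay fixed, which is what makes this kind of interactive re-weighting cheap enough for real-time use.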
From: Girish M. <gir...@gm...> - 2013-04-23 16:07:08
|
Hi, I found the description of this project<http://scaffoldhunter.sourceforge.net/wiki/doku.php?id=project_ideas#image_visual_feature_selection> pretty interesting. During my summer internship last year, I implemented a supervised binary hashing based approach for a database retrieval system for pose estimation using simple nearest neighbors. This project involved implementing state-of-the-art ideas from 3 IEEE CVPR 2012 (Computer Vision and Pattern Recognition) papers, ranging from state-of-the-art methods for locality sensitive hashing to fast nearest neighbor search by non-linear embedding and fast search in Hamming space with multi-index hashing. A short presentation about my implementation can be seen here<https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxnaXJpc2htYWxrYXJuZW5rYXJ8Z3g6N2I4YjZkYTQ1YjA0NTFkMQ>. I was wondering if a similar approach could be used here, where the problem is formulated as an image retrieval task rather than a classification task. Since we want a real-time querying experience, a binary hashing method (such as Spherical Hashing<http://sglab.kaist.ac.kr/Spherical_Hashing/>) might be very useful. Assuming the task is to find the most similar image from a database of N images, we can train the binary hashing functions on the features (we can use GIST<http://people.csail.mit.edu/torralba/code/spatialenvelope/>, SIFT<http://www.cs.ubc.ca/~lowe/keypoints/>, SURF<http://www.vision.ee.ethz.ch/~surf/>, or a high-dimensional combination of these using a bag-of-visual-words approach). The binary hashing training algorithm will take care of figuring out which are the important dimensions/features from this combination of features. 
Given the high-dimensional representation of each image, apart from using binary hashing, which is a form of approximate k-Nearest Neighbors, we can also use an exact fast k-NN method such as this<http://research.yoonho.info/fnnne/>, which would still be much faster than a normal k-NN approach. The approximate k-NN and the exact k-NN can be used in a filter & refine method where we first obtain the X most similar images to the query image from the N images in the database using the (super fast) binary hashing approach and then re-rank these using the exact k-NN algorithm. I would appreciate any comments/suggestions on this proposal. Thanks, Girish http://www.girishmalkarnenkar.com/ |
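The filter & refine scheme proposed above can be sketched in a few lines: a cheap Hamming-distance pass over compact binary codes shortlists X candidates, and only those are re-ranked with an exact distance in the original feature space. The codes and vectors below are toy stand-ins for whatever a real binary hashing scheme (e.g. Spherical Hashing) would produce:

```python
# Filter & refine for image retrieval. Binary codes are small ints so
# Hamming distance is a single XOR + popcount; the exact re-ranking
# only ever touches the shortlisted candidates.

def hamming(a, b):
    """Hamming distance between two binary codes stored as ints."""
    return bin(a ^ b).count("1")

def euclidean_sq(a, b):
    """Squared Euclidean distance in the original feature space."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def filter_and_refine(query_code, query_vec, codes, vectors, x):
    # Filter: shortlist the x nearest database codes in Hamming space.
    shortlist = sorted(range(len(codes)),
                       key=lambda i: hamming(query_code, codes[i]))[:x]
    # Refine: exact re-ranking of the shortlist only.
    return sorted(shortlist, key=lambda i: euclidean_sq(query_vec, vectors[i]))

codes   = [0b0000, 0b0111, 0b0001]                  # toy binary codes
vectors = [[0.0, 0.0], [5.0, 5.0], [0.5, 0.0]]      # original features
ranked  = filter_and_refine(0b0001, [0.4, 0.0], codes, vectors, x=2)
```

In this sketch the exact k-NN step could be replaced by a fast exact method like the one linked in the proposal; the structure (cheap filter, expensive refine on X << N items) stays the same.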
From: Jinman K. <jin...@sy...> - 2013-04-23 05:08:01
|
Hi Falk, very happy to hear it - can you tell us more about how you want to include 'visual features' in your project? For this project, we want to use the public lung cancer database (LIDC), which already has many visual features such as ROIs, types of lymph nodes, sphericity, volume, etc. Will you use these or expand them? http://imaging.cancer.gov/programsandresources/informationsystems/lidc Jinman -----Original Message----- From: Karsten Klein [mailto:kar...@ud...] Sent: Tuesday, April 23, 2013 2:33 PM To: sca...@li... Subject: Re: [Scaffoldhunter-devel] GSoC: Scaffold Hunter for medical image retrieval Am 21.04.2013 22:19, schrieb ne...@in...: > Hi, > > I am currently a graduate student of medical engineering at the > University of Lübeck, Germany. From October 2012 to March 2013, I > conducted an internship at the University of Sydney with the task of > introducing methods of visual analytics for medical image retrieval > into Scaffold Hunter. Under the supervision of the now GSoC mentors > Dr. Jinman Kim and Dr. Karsten Klein, I built a prototype of a plugin > that will now serve as a starting point for the GSoC project ideas > related to image retrieval. As the first results were very promising > and the approach is novel, I am highly motivated to continue my > participation in this project in the scope of this year's Summer of > Code and plan to apply for developing efficient techniques of visual > feature selection. Hope to work together with you soon! > > Best regards, > Falk Nette 
Hi Falk, thanks for your interest. Good to see you were not too bored with the work on Scaffold Hunter and you are still interested in improving the tool ;-). If you have any further questions please post them here. Best, Karsten _______________________________________________ Scaffoldhunter-devel mailing list Sca...@li... https://lists.sourceforge.net/lists/listinfo/scaffoldhunter-devel |
From: Karsten K. <kar...@ud...> - 2013-04-23 04:32:45
|
Am 21.04.2013 22:19, schrieb ne...@in...: > Hi, > > I am currently a graduate student of medical engineering at the University > of Lübeck, Germany. From October 2012 to March 2013, I conducted an > internship at the University of Sydney with the task of introducing > methods of visual analytics for medical image retrieval into Scaffold > Hunter. Under the supervision of the now GSoC mentors Dr. Jinman Kim and > Dr. Karsten Klein, I built a prototype of a plugin that will now serve as > a starting point for the GSoC project ideas related to image retrieval. As > the first results were very promising and the approach is novel, I am > highly motivated to continue my participation in this project in the scope > of this year's Summer of Code and plan to apply for developing efficient > techniques of visual feature selection. Hope to work together with you > soon! > > Best regards, > Falk Nette Hi Falk, thanks for your interest. Good to see you were not too bored with the work on Scaffold Hunter and you are still interested in improving the tool ;-). If you have any further questions please post them here. Best, Karsten |
From: <ne...@in...> - 2013-04-21 12:55:42
|
Hi, I am currently a graduate student of medical engineering at the University of Lübeck, Germany. From October 2012 to March 2013, I conducted an internship at the University of Sydney with the task of introducing methods of visual analytics for medical image retrieval into Scaffold Hunter. Under the supervision of the now GSoC mentors Dr. Jinman Kim and Dr. Karsten Klein, I built a prototype of a plugin that will now serve as a starting point for the GSoC project ideas related to image retrieval. As the first results were very promising and the approach is novel, I am highly motivated to continue my participation in this project in the scope of this year's Summer of Code and plan to apply for developing efficient techniques of visual feature selection. Hope to work together with you soon! Best regards, Falk Nette |
From: Prateek G. <pra...@gm...> - 2013-04-11 16:53:52
|
I tried exploring the web to understand the substructure search method implemented in the Scaffold Hunter project. I was only able to read the abstract of the IEEE link provided on the ideas page. From these sources, I was able to understand that: 1) the idea is basically an alternative way to approximate the NP-complete problem of exact structure matching for chemical compound structures, i.e. testing for their presence within a particular drug; 2) the approach is to filter part of the structure (a substructure) of the compound and to analyze it for maximum isomorphism to the cited query; 3) the alternative way implements a hash-key technique. My doubts: I am not able to understand the "fingerprint technique with a combination of tree and cycle features" used in the filtering. Is it that the subgraph is broken into a tree and used as the key, and the corresponding cyclic link is matched as the value? Some more doubts still: I am also starting to get acquainted with the structure editor JChemPaint, but I need more links for a better understanding of the concept. Also, can you provide me with a link to the source code of the present module, or does this idea propose the development of a new module? On Thu, Apr 11, 2013 at 10:22 PM, Prateek Gupta <pra...@gm... > wrote: > hello, > I am Prateek Gupta, a second-year student pursuing a Bachelors in > engineering with majors in computer science from BITS Pilani Goa, India. > I am interested in participating in GSoC 2013 and in the Scaffold > Hunter organization. > My skill set includes C, Java, Python, MySQL, and Android application > development. > I am interested in the substructure search idea from the ideas list. > > Thank you. > |
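The fingerprint filtering asked about above generally works as a bitwise containment test rather than a key/value lookup: each structural feature (a path, tree, or cycle fragment) hashes to one or more bits of a fixed-length fingerprint, and a molecule can only contain the query substructure if every bit set in the query's fingerprint is also set in the molecule's. A toy sketch of that screening step; the fingerprints are made-up integers, not Scaffold Hunter's actual encoding:

```python
# Fingerprint screening before exact substructure matching. The test
# "query & mol == query" is a necessary (not sufficient) condition for
# a substructure match, so it cheaply discards most of the database
# before the expensive subgraph-isomorphism check runs.

def passes_screen(query_fp: int, mol_fp: int) -> bool:
    """True if every bit set in query_fp is also set in mol_fp."""
    return query_fp & mol_fp == query_fp

database = {"mol_a": 0b1011, "mol_b": 0b0101}   # toy fingerprints
query_fp = 0b0011

# Only the surviving candidates go on to the exact matcher.
candidates = [name for name, fp in database.items()
              if passes_screen(query_fp, fp)]
```

False positives are possible (two different fragments can hash to the same bits), which is exactly why the exact isomorphism test still runs afterwards; false negatives are not, so the filter never loses a real match.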
From: Prateek G. <pra...@gm...> - 2013-04-11 16:52:52
|
hello, I am Prateek Gupta, a second-year student pursuing a Bachelors in engineering with majors in computer science from BITS Pilani Goa, India. I am interested in participating in GSoC 2013 and in the Scaffold Hunter organization. My skill set includes C, Java, Python, MySQL, and Android application development. I am interested in the substructure search idea from the ideas list. Thank you. |