From: Genevieve M G. <g.g...@sh...> - 2015-02-23 09:58:41
I would suggest you split your corpus into training and testing parts and run your own evaluation. You can then inspect the result using Corpus QA on the test corpus, which will give you more options.

On 19 February 2015 at 16:36, Abduladem Eljamel <a_e...@ya...> wrote:
> Hi all,
> I am trying to use GATE Embedded to evaluate the ML model with a
> K-Fold value of 10. I am using it to extract relations from unstructured
> data. I want to get a piece of information to draw the relation between
> sensitivity and specificity. If I understood the evaluation measures
> produced by the learning processing resource correctly, sensitivity is
> the Recall measure. But I couldn't get the specificity measure because
> it requires these values:
> specificity = TN/(TN+FP)
> TN = True Negative
> FP = False Positive (Spurious)
>
> To extract the TN value I would have to get the number of examples in
> every fold run. Is there a way to extract all this information during
> the evaluation run?
>
> Thanks
> Abdul
>
> _______________________________________________
> GATE-users mailing list
> GAT...@li...
> https://lists.sourceforge.net/lists/listinfo/gate-users
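
For anyone who does get hold of the per-fold counts, the two measures are straightforward to compute by hand. A minimal sketch (the counts below are made up purely for illustration, not taken from any GATE run):

```python
def sensitivity(tp, fn):
    # Sensitivity (= Recall in GATE's terms) = TP / (TP + FN)
    return tp / (tp + fn)

def specificity(tn, fp):
    # Specificity = TN / (TN + FP)
    return tn / (tn + fp)

# Hypothetical counts for one evaluation fold
tp, fp, tn, fn = 80, 10, 95, 15
print(sensitivity(tp, fn))  # 80 / 95  ≈ 0.842
print(specificity(tn, fp))  # 95 / 105 ≈ 0.905
```

Note that TN (and hence specificity) only makes sense once you know the total number of candidate examples per fold, which is exactly the number the original question asks how to extract.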