Eduction returns entities based on the extraction rules in the grammars and dictionaries. Eduction provides a test mode to measure the relevance of extraction results, expressed as precision and recall. Precision and recall are based on a comparison between human-marked results and engine-marked results. The following terms describe result relevance as used in Eduction:
True positive (TP): an entity that the engine extracts and that the human also marked.
False negative (FN): an entity that the human marked but that the engine did not extract.
False positive (FP): an entity that the engine extracts but that the human did not mark.
From these relevance terms, you can determine precision and recall as follows:
Recall is the percentage of true entities that the extraction rule extracts, that is:
TP / (TP + FN) * 100
Precision is the percentage of extracted entities that are true entities, that is:
TP / (TP + FP) * 100
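For example, the following Python sketch (not part of Eduction; the counts are hypothetical) shows how the two measures are computed from the relevance counts:

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall (as percentages) from relevance counts."""
    precision = tp / (tp + fp) * 100 if (tp + fp) else 0.0
    recall = tp / (tp + fn) * 100 if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts: 80 entities correctly extracted, 5 spurious
# extractions, and 20 true entities missed by the rule.
precision, recall = precision_recall(tp=80, fp=5, fn=20)
print(f"Precision: {precision:.1f}%")  # 94.1%
print(f"Recall: {recall:.1f}%")        # 80.0%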