Assignment-based metrics
`ie_eval.metrics.assignment_based`
Compute the reading-order-independent ECER/EWER/Nerval metrics from a label/prediction dataset.
Attributes
`logger` (module attribute)
`logger = logging.getLogger(__name__)`
Functions
compute_oiecerewer
```python
compute_oiecerewer(
    label_dir: Path,
    prediction_dir: Path,
    by_category: bool = False,
    print_table: bool = True,
) -> PrettyTable
```
Compute reading-order-independent ECER and EWER metrics.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `label_dir` | `Path` | Path to the directory containing BIO label files. | *required* |
| `prediction_dir` | `Path` | Path to the directory containing BIO prediction files. | *required* |
| `by_category` | `bool` | Whether to compute the metric globally or for each category. Not implemented yet. | `False` |
| `print_table` | `bool` | Whether to print the table. | `True` |
Returns:

| Name | Type | Description |
|---|---|---|
| `PrettyTable` | `PrettyTable` | The evaluation table formatted in Markdown. |
Source code in `ie_eval/metrics/assignment_based.py`, lines 24-75.
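A minimal usage sketch based on the signature above. The directory paths are placeholders, and the layout (one BIO file per document, with matching filenames across labels and predictions) is an assumption, not taken from this page:

```python
from pathlib import Path

from ie_eval.metrics.assignment_based import compute_oiecerewer

# Placeholder directories; assumed to hold one BIO file per document,
# with matching filenames between labels and predictions.
label_dir = Path("data/labels")
prediction_dir = Path("data/predictions")

# Returns the evaluation table; print_table=False suppresses stdout output.
table = compute_oiecerewer(label_dir, prediction_dir, print_table=False)
print(table)
```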
compute_oinerval
```python
compute_oinerval(
    label_dir: Path,
    prediction_dir: Path,
    nerval_threshold: float = 30.0,
    by_category: bool = False,
    print_table: bool = True,
) -> PrettyTable
```
Compute reading-order-independent Nerval Precision, Recall and F1 scores.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `label_dir` | `Path` | Path to the directory containing BIO label files. | *required* |
| `prediction_dir` | `Path` | Path to the directory containing BIO prediction files. | *required* |
| `nerval_threshold` | `float` | Threshold on the character error tolerated during the computation, as a percentage in the range [0.0, 100.0]. | `30.0` |
| `by_category` | `bool` | Whether to compute the metric globally or for each category. Not implemented yet. | `False` |
| `print_table` | `bool` | Whether to print the table. | `True` |
Returns:

| Name | Type | Description |
|---|---|---|
| `PrettyTable` | `PrettyTable` | The evaluation table formatted in Markdown. |
Source code in `ie_eval/metrics/assignment_based.py`, lines 78-122.
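Following the same pattern, a hedged sketch for `compute_oinerval`. The threshold of `10.0` is illustrative; the documented default is `30.0`, i.e. up to 30% character error is tolerated:

```python
from pathlib import Path

from ie_eval.metrics.assignment_based import compute_oinerval

label_dir = Path("data/labels")            # placeholder directories, as above
prediction_dir = Path("data/predictions")

# Stricter matching than the default: tolerate at most 10% character
# error between a predicted entity and its assigned reference entity.
table = compute_oinerval(
    label_dir,
    prediction_dir,
    nerval_threshold=10.0,
    print_table=False,
)
print(table)
```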