
Table 6 Performance measures of our alignment algorithms with varying thresholds (micro-average on both reference sets)

From: Alignment of vaccine codes using an ontology of vaccine descriptions

| Method     | Measure   | 0.0   | 0.1   | 0.2   | 0.3   | 0.4   | 0.5   | 0.6   | 0.7   | 0.8   | 0.9   | 1.0   |
|------------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Classes    | F-score   | 0.643 | 0.756 | 0.755 | 0.744 | 0.719 | 0.715 | 0.734 | 0.722 | 0.728 | 0.717 | 0.713 |
|            | Precision | 0.592 | 0.745 | 0.745 | 0.746 | 0.747 | 0.756 | 0.923 | 0.939 | 0.967 | 0.970 | 0.968 |
|            | Recall    | 0.816 | 0.771 | 0.770 | 0.751 | 0.716 | 0.709 | 0.644 | 0.629 | 0.624 | 0.604 | 0.599 |
| Metamap    | F-score   | 0.504 | 0.638 | 0.492 | 0.453 | 0.441 | 0.432 | 0.377 | 0.367 | 0.370 | 0.370 | 0.370 |
|            | Precision | 0.487 | 0.793 | 0.798 | 0.800 | 0.818 | 0.814 | 0.861 | 0.926 | 0.943 | 0.943 | 0.943 |
|            | Recall    | 0.828 | 0.590 | 0.426 | 0.381 | 0.367 | 0.357 | 0.285 | 0.279 | 0.277 | 0.277 | 0.277 |
| Properties | F-score   | 0.847 | 0.932 | 0.932 | 0.932 | 0.920 | 0.920 | 0.879 | 0.875 | 0.870 | 0.857 | 0.857 |
|            | Precision | 0.812 | 0.944 | 0.944 | 0.946 | 0.947 | 0.947 | 0.949 | 0.958 | 0.962 | 0.968 | 0.968 |
|            | Recall    | 0.952 | 0.923 | 0.923 | 0.921 | 0.897 | 0.897 | 0.828 | 0.813 | 0.802 | 0.781 | 0.781 |
| Tokens     | F-score   | 0.557 | 0.646 | 0.631 | 0.634 | 0.370 | 0.346 | 0.249 | 0.160 | 0.160 | 0.160 | 0.160 |
|            | Precision | 0.538 | 0.788 | 0.845 | 0.895 | 0.877 | 0.888 | 0.952 | 0.950 | 0.933 | 0.933 | 0.933 |
|            | Recall    | 0.834 | 0.618 | 0.591 | 0.576 | 0.269 | 0.244 | 0.156 | 0.094 | 0.094 | 0.094 | 0.094 |
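For reference, the F-score reported here is the usual harmonic mean of precision and recall; a minimal sketch follows. Note that because the table's figures are micro-averaged over both reference sets and rounded to three decimals, recomputing the F-score from a row's printed precision and recall will not exactly reproduce the printed F-score.

```python
def f_score(precision: float, recall: float) -> float:
    """Standard F1: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0  # convention: F1 is 0 when both inputs are 0
    return 2 * precision * recall / (precision + recall)

# Example: balanced precision and recall give the same F-score.
print(round(f_score(0.5, 0.5), 3))  # → 0.5
```

The harmonic mean penalizes imbalance: as the threshold rises, precision climbs but recall falls, so the F-score peaks at an intermediate threshold (around 0.1 for most methods in the table).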