Fig. 5 | Journal of Biomedical Semantics

From: BioBLP: a modular framework for learning on multimodal biomedical knowledge graphs

The impact of our proposed pretraining strategy. Left: increase in MRR achieved by the pretraining strategy over base models trained from scratch. Asterisks indicate significance at p-values below 0.05. Right: comparison of the runtime required until convergence on the validation set when training BioBLP-D from scratch versus with pretraining. During pretraining, attributes are ignored; after pretraining, we begin optimizing the attribute encoders.
