{"id":9935,"date":"2023-06-06T16:26:32","date_gmt":"2023-06-06T15:26:32","guid":{"rendered":"https:\/\/www.blopig.com\/blog\/?p=9935"},"modified":"2023-06-22T19:38:03","modified_gmt":"2023-06-22T18:38:03","slug":"machine-learning-strategies-to-overcome-limited-data-availability","status":"publish","type":"post","link":"https:\/\/www.blopig.com\/blog\/2023\/06\/machine-learning-strategies-to-overcome-limited-data-availability\/","title":{"rendered":"Machine learning strategies to overcome limited data availability"},"content":{"rendered":"\n<p>Machine learning (ML) for biological\/biomedical applications is very challenging \u2013 in large part due to limitations in publicly available data (<a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2023.05.17.541222v1\" target=\"_blank\" rel=\"noreferrer noopener\">something we recently published about [1]<\/a>). However, substantial amounts of time and resources may be required to generate the types of data (e.g. protein structures, protein-protein binding affinities, microscopy images, gene expression values) needed to train ML models.<\/p>\n\n\n\n<p>In cases where there is enough data to provide signal, but not enough for the desired performance, several ML strategies can be employed:<\/p>\n\n\n\n<!--more-->\n\n\n\n<p><strong>1. Pre-training and transfer learning<\/strong><\/p>\n\n\n\n<p>One solution to limited data availability is to turn to a different, related problem with more data. Pre-training on a data-rich problem can be followed by transfer learning on the desired task (by initializing the weights from the pre-trained model) (Figure 1). The underlying concept is that the pre-trained model will learn the fundamentals of the system \u2013\u00a0for example, the physics governing protein-protein interactions \u2013\u00a0and the pre-trained weights will be closer to the optimum for the desired task. 
As such, less data would be required to calibrate the weights for the desired task.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"625\" height=\"447\" loading=\"lazy\" src=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?resize=625%2C447&#038;ssl=1\" alt=\"\" class=\"wp-image-9968\" srcset=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?resize=1024%2C733&amp;ssl=1 1024w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?resize=300%2C215&amp;ssl=1 300w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?resize=768%2C550&amp;ssl=1 768w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?resize=1536%2C1100&amp;ssl=1 1536w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?resize=2048%2C1466&amp;ssl=1 2048w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?resize=624%2C447&amp;ssl=1 624w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?w=1250&amp;ssl=1 1250w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/transfer_learning-1.png?w=1875&amp;ssl=1 1875w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/a><figcaption class=\"wp-element-caption\">Figure 1. Schematic of transfer learning.<\/figcaption><\/figure>\n\n\n\n<p>There are many considerations for pre-training. The initial pre-training task can be supervised or unsupervised, although the latter is common due to the limitations in supervised (labelled) data availability. 
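The weight-initialization idea can be made concrete with a deliberately minimal, hypothetical sketch: a toy linear model trained by gradient descent on made-up data (real pre-training uses large networks and datasets, but the mechanics are the same):

```python
import numpy as np

# Minimal illustration of pre-training + transfer learning with a toy
# linear model y = w*x + b. All data here is synthetic and hypothetical.
rng = np.random.default_rng(0)

def train(x, y, w=0.0, b=0.0, lr=0.01, steps=500):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        pred = w * x + b
        w -= lr * 2 * np.mean((pred - y) * x)
        b -= lr * 2 * np.mean(pred - y)
    return w, b

# Data-rich related task: y ~ 3x + 1 (1000 points)
x_big = rng.uniform(-1, 1, 1000)
y_big = 3 * x_big + 1 + rng.normal(0, 0.1, 1000)
w_pre, b_pre = train(x_big, y_big)

# Data-poor target task: y ~ 3.2x + 0.8 (only 10 points)
x_small = rng.uniform(-1, 1, 10)
y_small = 3.2 * x_small + 0.8 + rng.normal(0, 0.1, 10)

# Transfer learning: initialise from the pre-trained weights, so far
# fewer steps (and data points) are needed than training from scratch
w_ft, b_ft = train(x_small, y_small, w=w_pre, b=b_pre, steps=50)
```

Because the two tasks share structure, the fine-tuned model starts near the target optimum; initialised at zero, the same 50 steps on 10 points would land much further away.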
The training hyperparameters \u2013 such as learning rate, learning rate scheduler, weight decay, optimizer, etc.\u00a0\u2013 can also be adjusted for transfer learning. Additionally, the weights of some layers can be frozen (left unchanged during further training), and layers can be added\/removed.<\/p>\n\n\n\n<p>A note\u00a0about terminology \u2013\u00a0for tasks in the same dataset domain (the same data type\/format) as the original pre-training task, updating the model weights is typically referred to as fine-tuning (e.g. pre-training on general protein data followed by fine-tuning on antibody data).<\/p>\n\n\n\n<p>Examples where pre-training followed by transfer learning\/fine-tuning has been applied include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Fine-tuning protein structure prediction networks for peptide binding specificity prediction\u00a0<a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2022.07.12.499365v1\">[2]<\/a><\/li>\n\n\n\n<li>A geometric encoder trained to reconstruct perturbed protein structures, followed by transfer learning to predict changes in binding affinity\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/journals.plos.org\/ploscompbiol\/article?id=10.1371\/journal.pcbi.1009284\" target=\"_blank\">[3]<\/a><\/li>\n\n\n\n<li>Protein family-specific scoring functions for small molecule virtual screening\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/pubs.acs.org\/doi\/10.1021\/acs.jcim.8b00350\" target=\"_blank\">[4]<\/a><\/li>\n\n\n\n<li>Transfer learning to make predictions about network biology\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/www.nature.com\/articles\/s41586-023-06139-9\" target=\"_blank\">[5]<\/a><\/li>\n<\/ul>\n\n\n\n<p>Pre-trained unsupervised models (e.g. 
protein language models) can also be used for &#8220;zero-shot&#8221; prediction, where no further transfer learning\/fine-tuning is done&nbsp;\u2013 e.g.:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Inverse folding model (ESM-IF1) applied to tasks such as protein stability prediction\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2022.04.10.487779v2\" target=\"_blank\">[6]<\/a><\/li>\n\n\n\n<li>Codon language model (CaLM) applied to tasks such as transcript abundance prediction\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2022.12.15.519894v1\" target=\"_blank\">[7]<\/a><\/li>\n<\/ul>\n\n\n\n<p><strong>2. Synthetic data<\/strong><\/p>\n\n\n\n<p>Synthetic data is generated computationally rather than experimentally. To be useful, synthetic data must be representative of biological data, or at least contain sufficient signal for the model to learn.<\/p>\n\n\n\n<p>Synthetic data has been used to augment experimental training data for ML model development. For example, ESM-IF1\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2022.04.10.487779v2\" target=\"_blank\">[6]<\/a>, an inverse folding model, was trained on both experimental protein structures (thousands) and AlphaFold2 models of protein sequences (millions) (Figure 2). 
The modelled structures expanded their training dataset drastically: the authors employed a 1:80 experimental:predicted structure ratio in training.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/ESM-IF1.jpg?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"625\" height=\"197\" loading=\"lazy\" src=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/ESM-IF1.jpg?resize=625%2C197&#038;ssl=1\" alt=\"\" class=\"wp-image-9969\" srcset=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/ESM-IF1.jpg?resize=1024%2C323&amp;ssl=1 1024w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/ESM-IF1.jpg?resize=300%2C95&amp;ssl=1 300w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/ESM-IF1.jpg?resize=768%2C242&amp;ssl=1 768w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/ESM-IF1.jpg?resize=624%2C197&amp;ssl=1 624w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/ESM-IF1.jpg?w=1280&amp;ssl=1 1280w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/a><figcaption class=\"wp-element-caption\">Figure 2. ESM-IF1 training scheme with solved (CATH) and predicted (AlphaFold2) structures. 
Figure reproduced from [6].<\/figcaption><\/figure>\n\n\n\n<p>Synthetic datasets have also been created specifically for ML development:\u00a0for example, Absolut for antibody specificity prediction\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/www.nature.com\/articles\/s43588-022-00372-4\" target=\"_blank\">[8]<\/a> and an antibody-antigen ddG dataset to investigate the amount and diversity of experimental data required for robust prediction <a rel=\"noreferrer noopener\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2023.05.17.541222v1\" target=\"_blank\">[1]<\/a>.<br><br>A synthetic dataset could also be used for pre-training followed by fine-tuning on experimental data (for example, as presented by Sam Gelman at a recent ML4ProteinEngineering seminar).<\/p>\n\n\n\n<p><strong>3. Semi-supervised learning<\/strong><\/p>\n\n\n\n<p>In addition to supervised and unsupervised ML, there is also semi-supervised learning. Semi-supervised approaches involve training on a moderate amount of labeled data and a large amount of unlabeled data (Figure 3). 
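One common semi-supervised recipe is pseudo-labelling. As a hedged, self-contained sketch (a nearest-centroid classifier on made-up 2-D clusters, not a real biological pipeline):

```python
import numpy as np

# Hypothetical sketch of pseudo-labelling with a nearest-centroid
# classifier on synthetic 2-D data: train on the few labelled points,
# label the unlabelled pool, then retrain on the combined set.
rng = np.random.default_rng(1)

def fit_centroids(X, y):
    """One centroid per class (classes 0 and 1)."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    """Assign each point to its nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two well-separated clusters; only 5 labelled points per class
X0 = rng.normal([0, 0], 0.5, (200, 2))
X1 = rng.normal([3, 3], 0.5, (200, 2))
X_lab = np.vstack([X0[:5], X1[:5]])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([X0[5:], X1[5:]])

# Step 1: model from the labelled data only
centroids = fit_centroids(X_lab, y_lab)
# Step 2: pseudo-label the unlabelled pool
y_pseudo = predict(centroids, X_unlab)
# Step 3: retrain on labelled + pseudo-labelled data
centroids = fit_centroids(np.vstack([X_lab, X_unlab]),
                          np.concatenate([y_lab, y_pseudo]))
```

In practice the pseudo-labels are only trusted where the model is confident, and the loop may be repeated; this sketch shows a single pass.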
A model trained on the labeled data is used to provide labels for the unlabeled data, which can then be combined with the original labeled dataset for further training.<\/p>\n\n\n\n<p>Semi-supervised learning has been applied to, for example, single-cell multi-omics\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/academic.oup.com\/pnasnexus\/article\/1\/4\/pgac165\/6672590\" target=\"_blank\">[9]<\/a>\u00a0and Hidden Markov Models for biological sequence analysis\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/academic.oup.com\/bioinformatics\/article\/35\/13\/2208\/5184961\" target=\"_blank\">[10]<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/semi-supervised-learning.png?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"625\" height=\"313\" loading=\"lazy\" src=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/semi-supervised-learning.png?resize=625%2C313&#038;ssl=1\" alt=\"\" class=\"wp-image-9971\" srcset=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/semi-supervised-learning.png?w=900&amp;ssl=1 900w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/semi-supervised-learning.png?resize=300%2C150&amp;ssl=1 300w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/semi-supervised-learning.png?resize=768%2C384&amp;ssl=1 768w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/semi-supervised-learning.png?resize=624%2C312&amp;ssl=1 624w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/a><figcaption class=\"wp-element-caption\"> Figure 3. Schematic of semi-supervised learning. Figure reproduced from [11].<\/figcaption><\/figure>\n\n\n\n<p><strong>4. 
Meta-learning<\/strong><\/p>\n\n\n\n<p>In a similar vein, meta-learning is applicable to cases with a small amount of clean data (&#8220;meta set&#8221;) and a large amount of noisy data. Meta-learning approaches include learning to reweight, in which data points are weighted by importance (for example, by down-weighting suspected noisy data points), and meta label correction, which attempts to correct data labels.<\/p>\n\n\n\n<p>The application of meta-learning to an antibody binder dataset (antibodies binding HER2 with varying affinities) greatly improved robustness to noise and performance on small amounts of training data\u00a0<a rel=\"noreferrer noopener\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2023.01.30.526201v1\" target=\"_blank\">[12]<\/a> (Figure 4).<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/Meta-Learning.jpg?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"625\" height=\"403\" loading=\"lazy\" src=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/Meta-Learning.jpg?resize=625%2C403&#038;ssl=1\" alt=\"\" class=\"wp-image-9972\" srcset=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/Meta-Learning.jpg?resize=1024%2C660&amp;ssl=1 1024w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/Meta-Learning.jpg?resize=300%2C193&amp;ssl=1 300w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/Meta-Learning.jpg?resize=768%2C495&amp;ssl=1 768w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/Meta-Learning.jpg?resize=624%2C402&amp;ssl=1 624w, https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/Meta-Learning.jpg?w=1253&amp;ssl=1 1253w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/a><figcaption class=\"wp-element-caption\"> Figure 4. Meta-learning approach for antibody binder data. 
Figure reproduced from [12].<\/figcaption><\/figure>\n\n\n\n<p><strong>Extra 1: Active learning<\/strong><\/p>\n\n\n\n<p>If you do not have enough data for the model you are trying to train \u2013 but have the ability to conduct further experiments \u2013 active learning can be used to identify which data would most improve model performance. Active learning involves iterative cycles (Figure 5) of:<\/p>\n\n\n\n<ol class=\"wp-block-list\" type=\"1\">\n<li>Model training \u2013&nbsp;on available labeled data<\/li>\n\n\n\n<li>Querying \u2013&nbsp;identifying areas in the data space where the model has high uncertainty and\/or low coverage<\/li>\n\n\n\n<li>Data collection\/labeling \u2013\u00a0for data points in the high-uncertainty and\/or low-coverage space<\/li>\n\n\n\n<li>Appending \u2013 adding the new labeled data to the training data<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/active-learning.webp?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"625\" height=\"367\" loading=\"lazy\" src=\"https:\/\/i0.wp.com\/www.blopig.com\/blog\/wp-content\/uploads\/2023\/06\/active-learning.webp?resize=625%2C367&#038;ssl=1\" alt=\"\" class=\"wp-image-9974\" \/><\/a><figcaption class=\"wp-element-caption\"> Figure 5. Schematic of the iterative steps in active learning. Figure reproduced from [13].<\/figcaption><\/figure>\n\n\n\n<p><strong>Extra 2: Machine learning-grade data<\/strong><\/p>\n\n\n\n<p>To overcome challenges in data availability \u2026 we will need more data! But not just any kind of data. It will be essential to consider machine learning model development when generating data (to produce &#8220;machine learning-grade data&#8221;). 
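The four-step active-learning cycle can be sketched with a toy uncertainty-sampling loop; the 1-D threshold "model" and the oracle below are invented stand-ins for a real model and real follow-up experiments:

```python
import numpy as np

# Toy sketch of the active-learning cycle with uncertainty sampling.
# The 1-D threshold model and the oracle are hypothetical stand-ins.
rng = np.random.default_rng(2)

TRUE_THRESHOLD = 0.62                 # unknown to the model

def oracle(x):
    """Stand-in for running a new experiment to label point x."""
    return int(x > TRUE_THRESHOLD)

pool = rng.uniform(0, 1, 200)         # candidate experiments
X, y = [0.0, 1.0], [oracle(0.0), oracle(1.0)]   # tiny seed dataset

for _ in range(10):                   # iterative cycles
    # 1. Train: boundary halfway between the closest labelled 0 and 1
    lo = max(x for x, lab in zip(X, y) if lab == 0)
    hi = min(x for x, lab in zip(X, y) if lab == 1)
    threshold = (lo + hi) / 2
    # 2. Query: pool point nearest the boundary (highest uncertainty)
    query = pool[np.argmin(np.abs(pool - threshold))]
    # 3. Collect/label the queried point, and 4. append it
    X.append(query)
    y.append(oracle(query))
```

Each cycle queries where the model is least certain, so the boundary estimate homes in on the true threshold with only a handful of "experiments", rather than labelling the whole pool.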
This will include using standardized generation processes, estimates of uncertainty, and assessments of bias and dataset diversity.<\/p>\n\n\n\n<p>\u2013\u2013\u2013<\/p>\n\n\n\n<p><strong>References<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\" type=\"1\">\n<li>Hummer <em>et al.<\/em>, <em>bioRxiv<\/em>, 2023 \u2013\u00a0<a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2023.05.17.541222v1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.biorxiv.org\/content\/10.1101\/2023.05.17.541222v1<\/a><\/li>\n\n\n\n<li>Motmaen <em>et al.<\/em>, <em>bioRxiv<\/em>, 2022 \u2013 <a rel=\"noreferrer noopener\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2022.07.12.499365v1\" target=\"_blank\">https:\/\/www.biorxiv.org\/content\/10.1101\/2022.07.12.499365v1<\/a><\/li>\n\n\n\n<li>Liu <em>et al.<\/em>, <em>PLoS Comp Bio<\/em>, 2021 \u2013 <a rel=\"noreferrer noopener\" href=\"https:\/\/journals.plos.org\/ploscompbiol\/article?id=10.1371\/journal.pcbi.1009284\" target=\"_blank\">https:\/\/journals.plos.org\/ploscompbiol\/article?id=10.1371\/journal.pcbi.1009284<\/a><\/li>\n\n\n\n<li>Imrie <em>et al.<\/em>, <em>J Chem Inf Model<\/em>, 2018 \u2013 <a rel=\"noreferrer noopener\" href=\"https:\/\/pubs.acs.org\/doi\/10.1021\/acs.jcim.8b00350\" target=\"_blank\">https:\/\/pubs.acs.org\/doi\/10.1021\/acs.jcim.8b00350<\/a><\/li>\n\n\n\n<li>Theodoris <em>et al.<\/em>, <em>Nature<\/em>, 2023 \u2013 <a rel=\"noreferrer noopener\" href=\"https:\/\/www.nature.com\/articles\/s41586-023-06139-9\" target=\"_blank\">https:\/\/www.nature.com\/articles\/s41586-023-06139-9<\/a><\/li>\n\n\n\n<li>Hsu <em>et al.<\/em>, <em>bioRxiv<\/em>, 2022 \u2013 <a rel=\"noreferrer noopener\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2022.04.10.487779v2\" target=\"_blank\">https:\/\/www.biorxiv.org\/content\/10.1101\/2022.04.10.487779v2<\/a><\/li>\n\n\n\n<li>Outeiral and Deane, <em>bioRxiv<\/em>, 2022 \u2013&nbsp;<a rel=\"noreferrer noopener\" 
href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2022.12.15.519894v1\" target=\"_blank\">https:\/\/www.biorxiv.org\/content\/10.1101\/2022.12.15.519894v1<\/a><\/li>\n\n\n\n<li>Robert <em>et al.<\/em>, <em>Nat Comp Sci<\/em>, 2022 \u2013&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/www.nature.com\/articles\/s43588-022-00372-4\" target=\"_blank\">https:\/\/www.nature.com\/articles\/s43588-022-00372-4<\/a><\/li>\n\n\n\n<li>Wang <em>et al.<\/em>, <em>PNAS Nexus<\/em>, 2022 \u2013&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/academic.oup.com\/pnasnexus\/article\/1\/4\/pgac165\/6672590\" target=\"_blank\">https:\/\/academic.oup.com\/pnasnexus\/article\/1\/4\/pgac165\/6672590<\/a><\/li>\n\n\n\n<li>Tamposis <em>et al.<\/em>, <em>Bioinformatics<\/em>, 2019 \u2013&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/academic.oup.com\/bioinformatics\/article\/35\/13\/2208\/5184961\" target=\"_blank\">https:\/\/academic.oup.com\/bioinformatics\/article\/35\/13\/2208\/5184961<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/teksands.ai\/blog\/semi-supervised-learning\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/teksands.ai\/blog\/semi-supervised-learning<\/a><\/li>\n\n\n\n<li>Minot and Reddy, <em>bioRxiv<\/em>, 2023 \u2013&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2023.01.30.526201v1\" target=\"_blank\">https:\/\/www.biorxiv.org\/content\/10.1101\/2023.01.30.526201v1<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/blogs.nvidia.com\/blog\/2020\/01\/16\/what-is-active-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/blogs.nvidia.com\/blog\/2020\/01\/16\/what-is-active-learning\/<\/a><\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Machine learning (ML) for biological\/biomedical applications is very challenging \u2013 in large part due to limitations in publicly available data (something we recently published about [1]). 
Substantial amounts of time and resources may be required to generate the types of data (eg protein structures, protein-protein binding affinity, microscopy images, gene expression values) required to train [&hellip;]<\/p>\n","protected":false},"author":78,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","wikipediapreview_detectlinks":true,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"ngg_post_thumbnail":0,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[633,361,341,189],"tags":[172],"ppma_author":[484],"class_list":["post-9935","post","type-post","status-publish","format-standard","hentry","category-ai","category-data-science","category-databases","category-machine-learning","tag-machine-learning"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"authors":[{"term_id":484,"user_id":78,"is_guest":0,"slug":"alissa","display_name":"Alissa 
Hummer","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/65ab670c32312147bd0e263ed45bf1216bc05ec0fe26e475a8c139455701c5ae?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/posts\/9935","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/comments?post=9935"}],"version-history":[{"count":5,"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/posts\/9935\/revisions"}],"predecessor-version":[{"id":10041,"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/posts\/9935\/revisions\/10041"}],"wp:attachment":[{"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/media?parent=9935"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/categories?post=9935"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/tags?post=9935"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.blopig.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=9935"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}