{"id":449,"date":"2016-01-24T23:48:26","date_gmt":"2016-01-24T23:48:26","guid":{"rendered":"http:\/\/www.marekrei.com\/blog\/?p=449"},"modified":"2019-09-27T23:34:08","modified_gmt":"2019-09-27T23:34:08","slug":"online-representation-learning-in-recurrent-neural-language-models","status":"publish","type":"post","link":"https:\/\/www.marekrei.com\/blog\/online-representation-learning-in-recurrent-neural-language-models\/","title":{"rendered":"Online Representation Learning in Recurrent Neural Language Models"},"content":{"rendered":"<p>In a basic neural language model, we optimise a fixed set of parameters based on a training corpus, and predictions on an unseen test set are a direct function of these parameters. What if instead of a static model\u00a0we constantly measured the types of errors the model is making and adjust the parameters\u00a0accordingly? It would potentially be more closer to how humans operate, constantly making small adjustments in their decisions based on feedback.<\/p>\n<p>The necessary information is already\u00a0available\u00a0&#8211;\u00a0language models use the previous word in the sequence as context, which means they know the correct answer for the previous time step (or at least need to assume they know). We can use this\u00a0to calculate error derivatives at each time step and update parameters even during testing.\u00a0This sounds like it would\u00a0require loads of extra computation at test time, but by updating only a small part\u00a0of the model we can actually get\u00a0better results with faster execution and fewer parameter.<\/p>\n<p>This post is a summary\u00a0of my EMNLP 2015 paper &#8220;<a href=\"https:\/\/aclweb.org\/anthology\/D\/D15\/D15-1026.pdf\">Online Representation Learning in Recurrent Neural Language Models<\/a>&#8220;.<\/p>\n<h2><strong>RNNLM<\/strong><\/h2>\n<p>First a short description of the RNN language model that I use as a baseline. 
<p>It follows the implementation by <a href="http://research.microsoft.com/pubs/175562/ASRU-Demo-2011.pdf">Mikolov et al. (2011)</a> in the <a href="http://rnnlm.org/">RNNLM Toolkit</a>.</p>
<p><a href="https://www.marekrei.com/blog/wp-content/uploads/2015/09/rnnlm.png"><img class="aligncenter wp-image-451 size-thumbnail" src="https://www.marekrei.com/blog/wp-content/uploads/2015/09/rnnlm-150x141.png" alt="rnnlm" width="150" height="141" /></a></p>
<p>The previous word goes into the network as a one-hot vector, which is multiplied with a weight matrix to give us the corresponding word embedding. This embedding, together with the previous hidden state, acts as input to the current hidden state of the network:</p>
<p style="text-align: center;">\(hidden_t = \sigma(E \cdot input_t + W_h \cdot hidden_{t-1})\)</p>
<p style="text-align: left;">The hidden state is connected to the output layer, which predicts the next word in the sequence.</p>
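To make the recurrence concrete, here is a minimal numpy sketch of one hidden-state update. The toy dimensions, the seeded random weights, and the word-id sequence are all illustrative assumptions, not the toolkit's actual code; \(\sigma\) is taken to be the logistic sigmoid, and multiplying \(E\) with a one-hot vector is implemented as a direct column lookup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical; the paper uses a 100-dimensional hidden layer).
# Embedding and hidden sizes must match here, since the two terms are summed.
vocab_size, embedding_dim, hidden_dim = 10, 8, 8
E = rng.normal(scale=0.1, size=(embedding_dim, vocab_size))  # word embeddings
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))   # recurrent weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_step(word_id, hidden_prev):
    """hidden_t = sigma(E . input_t + W_h . hidden_{t-1})"""
    # E multiplied with a one-hot input vector just selects one column,
    # so we index the embedding matrix directly.
    embedding = E[:, word_id]
    return sigmoid(embedding + W_h @ hidden_prev)

hidden = np.zeros(hidden_dim)
for w in [3, 1, 4]:            # a toy word-id sequence
    hidden = rnn_step(w, hidden)
```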
<p style="text-align: left;">In order to avoid performing a softmax operation over the whole vocabulary, all words are divided between classes, and the probability of the next word is factored into the probability of its class and the probability of the word given the class:</p>
<p style="text-align: center;">\(P(w_{t+1} | w_{1}^{t}) \approx classes_c \cdot output_{w_{t+1}}\)</p>
<p style="text-align: center;">\(classes = softmax(W_c \cdot hidden_t)\)</p>
<p style="text-align: center;">\(output = softmax(W_o^{(c)} \cdot hidden_t)\)</p>
<p>The words are divided into classes by frequency-based bucketing (following Mikolov et al., 2011), and the learning rate is halved whenever the improvement is not sufficient. The RNNLM Toolkit treats the training data as a continuous stream of tokens and performs backpropagation through time for a fixed number of steps &#8211; the text is essentially split into fixed-size chunks for optimisation. Instead, we perform sentence splitting and backpropagate errors from the end of each sentence to its beginning.</p>
<h2>RNNLM with online learning</h2>
<p>Let&#8217;s introduce a special vector into the model, which will represent the current unit of text being processed (a sentence, a paragraph, or a document).</p>
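Before adding the document vector, the baseline's class-factored prediction can be sketched as follows. This is a hedged illustration only: the toy sizes, the fixed number of words per class, and all variable names are assumptions (the toolkit's frequency-based classes actually have variable sizes).

```python
import numpy as np

rng = np.random.default_rng(1)

hidden_dim, n_classes, words_per_class = 8, 4, 5  # toy sizes, hypothetical
W_c = rng.normal(scale=0.1, size=(n_classes, hidden_dim))
# One output weight matrix per class, each covering only that class's words.
W_o = rng.normal(scale=0.1, size=(n_classes, words_per_class, hidden_dim))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def word_probability(hidden, class_id, word_in_class):
    """P(w_{t+1}) ~= classes_c * output_{w_{t+1}}"""
    class_probs = softmax(W_c @ hidden)             # softmax over all classes
    output_probs = softmax(W_o[class_id] @ hidden)  # softmax only within the class
    return class_probs[class_id] * output_probs[word_in_class]

hidden = rng.random(hidden_dim)
p = word_probability(hidden, class_id=2, word_in_class=3)
```

The benefit is that each prediction only touches one small within-class softmax plus the class softmax, instead of a softmax over the full vocabulary.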
<p>We can then update this document vector after each prediction, based on the errors the model has made on that document.</p>
<p><a href="https://www.marekrei.com/blog/wp-content/uploads/2015/09/bprnnlm.png"><img class="aligncenter wp-image-463 size-thumbnail" src="https://www.marekrei.com/blog/wp-content/uploads/2015/09/bprnnlm-150x140.png" alt="bprnnlm" width="150" height="140" /></a></p>
<p style="text-align: left;">The output probabilities over classes and words are then conditioned on this new document vector:</p>
<p style="text-align: center;">\(classes = softmax(W_c \cdot hidden_t + W_{dc} \cdot doc)\)<br />
\(output = softmax(W_o^{(c)} \cdot hidden_t + W_{do}^{(c)} \cdot doc)\)</p>
<p style="text-align: left;">Notice that there is no input going into the document vector. Instead of constructing it iteratively, like the values in a hidden layer, we treat it as a vector of parameters and optimise these both during training and testing. After predicting each word, we calculate the error in the output layer, backpropagate it into the document vector, and adjust the values. While the main language model is a smoothed, static representation of the training data, the document vector contains information about how a specific sentence or document differs from this main language model.</p>
<p style="text-align: left;">The document vector is connected directly to the output layers of the RNNLM, in parallel to the hidden layer.</p>
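The test-time update just described &#8211; backpropagating the output error only into the document vector &#8211; might be sketched as follows. This is a simplified illustration under stated assumptions: the class factorisation is omitted (one softmax over a toy vocabulary), the hidden state is held fixed, and the dimensions, learning rate, and word ids are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

vocab_size, hidden_dim, doc_dim = 12, 8, 4  # toy sizes, hypothetical
W_o = rng.normal(scale=0.1, size=(vocab_size, hidden_dim))   # hidden -> output
W_do = rng.normal(scale=0.1, size=(vocab_size, doc_dim))     # doc vector -> output

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_and_update_doc(hidden, doc, target_word, learning_rate=0.1):
    # Prediction conditioned on both the hidden state and the document vector
    # (class factorisation omitted for clarity).
    probs = softmax(W_o @ hidden + W_do @ doc)
    # Cross-entropy error at the output layer: probs - one_hot(target).
    error = probs.copy()
    error[target_word] -= 1.0
    # Backpropagate only into the document vector; W_o, W_do and the
    # recurrent weights stay fixed at test time.
    doc = doc - learning_rate * (W_do.T @ error)
    return probs, doc

doc = np.zeros(doc_dim)          # reset at each document boundary
hidden = rng.random(hidden_dim)
for target in [5, 2, 5]:         # the "correct answers" read from the running text
    probs, doc = predict_and_update_doc(hidden, doc, target)
```

Because only the small `doc` vector is updated, the per-word overhead is one matrix-vector product, which is why the method stays cheap at test time.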
<p style="text-align: left;">Connecting it there, rather than to the input layer, allows us to update the document vector after every step, instead of waiting until the end of the sentence to perform backpropagation through time.</p>
<p><a href="http://arxiv.org/abs/1405.4053">Le and Mikolov (2014)</a> used a related approach for learning vector representations of sentences and achieved good results on the sentiment detection task. They added a vector for each sentence into a feedforward language model, stepped through the sentence, and used the values at the last step as a representation of that sentence. While they connected the vector as part of the input layer, we have connected it directly to the output layer &#8211; in an RNNLM, the input layer only gets updated at the end of the sentence (during backpropagation through time), whereas we want to update the document vector after each time step.</p>
<h2 style="text-align: left;">Experiments</h2>
<p style="text-align: left;">We constructed a dataset from English Wikipedia to evaluate the language modelling performance of the two models. The text was tokenised, sentence-split and lowercased. The sentences were shuffled, in order to minimise any transfer effects between consecutive sentences, and then split into training, development and test sets. The final sentences were sampled randomly, in order to obtain reasonable training times for the experiments. Dataset sizes are as follows:</p>
<table>
<tr><th></th><th>Train</th><th>Dev</th><th>Test</th></tr>
<tr><td>Words</td><td>9,990,782</td><td>237,037</td><td>4,208,847</td></tr>
<tr><td>Sentences</td><td>419,278</td><td>10,000</td><td>176,564</td></tr>
</table>
<p style="text-align: left;">The regular RNNLM with a 100-dimensional hidden layer (M=100) and no document vector (D=0) is the baseline.</p>
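Perplexity, the metric reported in these experiments, is the exponential of the average negative log-probability the model assigns to each correct word, so lower is better. A minimal implementation:

```python
import math

def perplexity(word_probs):
    """Perplexity of a sequence, given the probability the model
    assigned to each correct word."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)

# A model assigning uniform probability over a 100-word vocabulary
# gets perplexity 100, regardless of sequence length:
assert round(perplexity([0.01] * 50)) == 100
```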
<p style="text-align: left;">In the experiments, we increase the capacity of the model using different methods and measure how that affects the perplexity on the datasets.</p>
<table>
<tr><th></th><th>Train PPL</th><th>Dev PPL</th><th>Test PPL</th></tr>
<tr><td>Baseline M=100</td><td>92.65</td><td>103.56</td><td>102.51</td></tr>
<tr><td>M=120</td><td>88.60</td><td>98.78</td><td>97.79</td></tr>
<tr><td>M=100, D=20</td><td><strong>87.28</strong></td><td><strong>95.36</strong></td><td><strong>94.39</strong></td></tr>
<tr><td>M=135</td><td>85.17</td><td>96.33</td><td>95.71</td></tr>
<tr><td>M=100, D=35</td><td><strong>80.11</strong></td><td><strong>91.05</strong></td><td><strong>90.29</strong></td></tr>
</table>
<p style="text-align: left;">Increasing the hidden layer size M does improve the model, and test perplexity decreases from 102.51 to 95.71. However, adding the same number of neurons to the actively-updated document vector instead gives an even lower perplexity of 90.29.</p>
<h2 style="text-align: left;">Experiments with semantic similarity</h2>
<p style="text-align: left;">The resulting document vector can also be used for calculating semantic similarity between texts. We sampled random sentences from the development data, processed them with the language model, and used the resulting document vectors to find the 3 most similar sentences in the development set.</p>
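Assuming the model exposes a learned document vector for each sentence, this nearest-neighbour search can be sketched as cosine similarity over those vectors. The 2-dimensional vectors below are purely hypothetical stand-ins for the model's output.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_vec, sentence_vecs, k=3):
    """Return the indices of the k sentences whose document vectors
    have the highest cosine similarity to the query vector."""
    scores = [(cosine_similarity(query_vec, v), i)
              for i, v in enumerate(sentence_vecs)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

# Hypothetical document vectors for four candidate sentences:
vecs = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),
        np.array([0.0, 1.0]), np.array([0.5, 0.5])]
query = np.array([1.0, 0.05])
top = most_similar(query, vecs, k=3)
```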
<p style="text-align: left;">Below are some examples.</p>
<p style="text-align: left;"><strong>Input:</strong> Both Hufnagel and Marston also joined the long-standing technical death metal band Gorguts.</p>
<ul>
<li style="text-align: left;">The band eventually went on to become the post-hardcore band Adair.</li>
<li style="text-align: left;">The band members originally came from different death metal bands, bonding over a common interest in d-beat.</li>
<li style="text-align: left;">The proceeds went towards a home studio, which enabled him to concentrate on his solo output and songs that were to become his debut mini-album &#8220;Feeding The Wolves&#8221;.</li>
</ul>
<p><strong>Input:</strong> The Chiefs reclaimed the title on September 29, 2014 in a Monday Night Football game against the New England Patriots, hitting 142.2 decibels.</p>
<ul>
<li>He played in twenty-four regular season games for the Colts, all off the bench.</li>
<li>In May 2009 the Warriors announced they had re-signed him until the end of the 2011 season.</li>
<li>The team played inconsistently throughout the campaign from the outset, losing the opening two matches before winning four consecutive games during September 1927.</li>
</ul>
<p style="text-align: left;"><strong>Input:</strong> He was educated at Llandovery College and Jesus College, Oxford, where he obtained an M.A. degree.</p>
<ul>
<li style="text-align: left;">He studied at the Orthodox High School, then at the Faculty of Mathematics.</li>
<li style="text-align: left;">Kaigama studied for the priesthood at St. Augustine&#8217;s Seminary in Jos, with further study in theology in Rome.</li>
<li style="text-align: left;">Under his stewardship, Zahira College became one of the leading schools in the country.</li>
</ul>
<h2>Summary</h2>
<p>There has been a lot of work on developing static models for machine learning &#8211; we train the model parameters on the training data and then apply the model to the test data. However, there is a lot of potential in dynamic models, which take advantage of immediate feedback signals and are able to continuously adjust their parameters. Our experiments showed that, at least for language modelling, such a model is indeed a viable option.</p>