{"id":310,"date":"2015-06-22T00:17:48","date_gmt":"2015-06-22T00:17:48","guid":{"rendered":"http:\/\/www.marekrei.com\/blog\/?p=310"},"modified":"2019-09-27T23:35:38","modified_gmt":"2019-09-27T23:35:38","slug":"transforming-images-to-feature-vectors","status":"publish","type":"post","link":"https:\/\/www.marekrei.com\/blog\/transforming-images-to-feature-vectors\/","title":{"rendered":"Transforming Images to Feature Vectors"},"content":{"rendered":"<p>I&#8217;m keen to explore some challenges in multimodal learning, such as jointly learning visual and textual semantics. However, I would rather not start by attempting to train an image recognition system from scratch, and prefer to leave this part to researchers who are more experienced in vision and image analysis.<\/p>\n<p>Therefore, the goal is to use an existing image recognition system, in order to extract useful features for a dataset of images, which can then be used as input to a separate machine learning system or neural network. We start with a directory of images, and create a text file containing feature vectors for each image.<\/p>\n<h2>1. Install Caffe<\/h2>\n<p>Caffe is an open-source neural network library developed\u00a0in Berkeley, with a focus on image recognition. It can be used to construct and train your own network, or load one of the pretrained models. A <a href=\"http:\/\/demo.caffe.berkeleyvision.org\/\">web demo<\/a> is available if you want to test it out.<\/p>\n<p>Follow the <a href=\"http:\/\/caffe.berkeleyvision.org\/installation.html\">installation\u00a0instructions<\/a> to compile Caffe. 
You will need to install quite a few dependencies (Boost, OpenCV, ATLAS, etc.), but at least for Ubuntu 14.04 they were all available in public repositories.<\/p>\n<p>Once you&#8217;re done, run<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\nmake test\r\nmake runtest\r\n<\/pre>\n<p>This will run the tests and make sure the installation is working properly.<br \/>\n<!--more--><\/p>\n<h2>2. Prepare your dataset<\/h2>\n<p>Put all the images you want to process into one directory. Then generate a file containing the path to each image, one image per line. We will use this file to read the images, and it will help you map images to the correct vectors later.<\/p>\n<p>You can run something like this:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\nfind `pwd`\/images -type f -exec echo {} \\; &gt; images.txt\r\n<\/pre>\n<p>This will find all files in a subdirectory called &#8220;images&#8221; and write their paths to images.txt.<\/p>\n<h2>3. Download the model<\/h2>\n<p>There are a number of pretrained models publicly available for Caffe. Four main models are part of the original Caffe distribution, but more are available on the <a href=\"https:\/\/github.com\/BVLC\/caffe\/wiki\/Model-Zoo\">Model Zoo wiki page<\/a>, provided by community members and other researchers.<\/p>\n<p>We&#8217;ll be using the <strong>BVLC GoogLeNet<\/strong> model, which is based on the model described in <a href=\"http:\/\/arxiv.org\/abs\/1409.4842\">Going Deeper with Convolutions<\/a> by Szegedy et al. (2014). It is a 22-layer deep convolutional network, trained on ImageNet data to recognise 1,000 different object classes. 
Just for fun, here&#8217;s a diagram of the network, rotated 90 degrees:<\/p>\n<p><a href=\"https:\/\/www.marekrei.com\/blog\/wp-content\/uploads\/2015\/06\/googlenet_diagram.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-318\" src=\"https:\/\/www.marekrei.com\/blog\/wp-content\/uploads\/2015\/06\/googlenet_diagram.png\" alt=\"googlenet_diagram\" width=\"1984\" height=\"584\" srcset=\"https:\/\/www.marekrei.com\/blog\/wp-content\/uploads\/2015\/06\/googlenet_diagram.png 1984w, https:\/\/www.marekrei.com\/blog\/wp-content\/uploads\/2015\/06\/googlenet_diagram-150x44.png 150w, https:\/\/www.marekrei.com\/blog\/wp-content\/uploads\/2015\/06\/googlenet_diagram-300x88.png 300w, https:\/\/www.marekrei.com\/blog\/wp-content\/uploads\/2015\/06\/googlenet_diagram-1024x301.png 1024w\" sizes=\"auto, (max-width: 1984px) 100vw, 1984px\" \/><\/a><\/p>\n<p>The Caffe models consist of two parts:<\/p>\n<ol>\n<li>A description of the model (in the form of *.prototxt files)<\/li>\n<li>The trained parameters of the model (in the form of a *.caffemodel file)<\/li>\n<\/ol>\n<p>The prototxt files are small, and they come included with the Caffe code. But the parameters are large and need to be downloaded separately. Run the following command in your main Caffe directory to download the parameters for the GoogLeNet model:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npython scripts\/download_model_binary.py models\/bvlc_googlenet\r\n<\/pre>\n<p>This will find out where to download the caffemodel file, based on information already in the models\/bvlc_googlenet\/ directory, and will then place it into the same directory.<\/p>\n<p>In addition, run this command:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n.\/data\/ilsvrc12\/get_ilsvrc_aux.sh\r\n<\/pre>\n<p>It will download some auxiliary files for the ImageNet dataset, including the file of class labels, which we will use later.<\/p>\n<h2>4. 
Process images and print vectors<\/h2>\n<p>Now is the time to load the model into Caffe, process each image, and print a corresponding vector into a file. I created a script for that (see below, also available as a <a href=\"https:\/\/gist.github.com\/marekrei\/7adc87d2c4fde941cea6\">Gist<\/a>):<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nimport numpy as np\r\nimport os, sys, getopt\r\n\r\n# Main path to your caffe installation\r\ncaffe_root = '\/path\/to\/your\/caffe\/'\r\n\r\n# Model prototxt file\r\nmodel_prototxt = caffe_root + 'models\/bvlc_googlenet\/deploy.prototxt'\r\n\r\n# Model caffemodel file\r\nmodel_trained = caffe_root + 'models\/bvlc_googlenet\/bvlc_googlenet.caffemodel'\r\n\r\n# File containing the class labels\r\nimagenet_labels = caffe_root + 'data\/ilsvrc12\/synset_words.txt'\r\n\r\n# Path to the mean image (used for input processing)\r\nmean_path = caffe_root + 'python\/caffe\/imagenet\/ilsvrc_2012_mean.npy'\r\n\r\n# Name of the layer we want to extract\r\nlayer_name = 'pool5\/7x7_s1'\r\n\r\nsys.path.insert(0, caffe_root + 'python')\r\nimport caffe\r\n\r\ndef main(argv):\r\n    inputfile = ''\r\n    outputfile = ''\r\n\r\n    try:\r\n        opts, args = getopt.getopt(argv, &quot;hi:o:&quot;, &#x5B;&quot;ifile=&quot;, &quot;ofile=&quot;])\r\n    except getopt.GetoptError:\r\n        print 'caffe_feature_extractor.py -i &lt;inputfile&gt; -o &lt;outputfile&gt;'\r\n        sys.exit(2)\r\n\r\n    for opt, arg in opts:\r\n        if opt == '-h':\r\n            print 'caffe_feature_extractor.py -i &lt;inputfile&gt; -o &lt;outputfile&gt;'\r\n            sys.exit()\r\n        elif opt == &quot;-i&quot;:\r\n            inputfile = arg\r\n        elif opt == &quot;-o&quot;:\r\n            outputfile = arg\r\n\r\n    print 'Reading images from', inputfile\r\n    print 'Writing vectors to', outputfile\r\n\r\n    # Setting this to CPU, but feel free to use GPU if you have CUDA installed\r\n    
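# (To switch to GPU mode instead, assuming pycaffe was built with CUDA,\r\n    # you could call caffe.set_mode_gpu() here, and optionally\r\n    # caffe.set_device(0) to pick a specific GPU.)\r\n    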
caffe.set_mode_cpu()\r\n    # Loading the Caffe model, setting preprocessing parameters\r\n    net = caffe.Classifier(model_prototxt, model_trained,\r\n                           mean=np.load(mean_path).mean(1).mean(1),\r\n                           channel_swap=(2,1,0),\r\n                           raw_scale=255,\r\n                           image_dims=(256, 256))\r\n\r\n    # Loading class labels\r\n    with open(imagenet_labels) as f:\r\n        labels = f.readlines()\r\n\r\n    # This prints information about the network layers (names and sizes)\r\n    # Uncomment it to have a look inside the network and choose which layer to print\r\n    #print &#x5B;(k, v.data.shape) for k, v in net.blobs.items()]\r\n    #exit()\r\n\r\n    # Processing one image at a time, printing predictions and writing the vector to a file\r\n    with open(inputfile, 'r') as reader:\r\n        with open(outputfile, 'w') as writer:\r\n            writer.truncate()\r\n            for image_path in reader:\r\n                image_path = image_path.strip()\r\n                input_image = caffe.io.load_image(image_path)\r\n                prediction = net.predict(&#x5B;input_image], oversample=False)\r\n                print os.path.basename(image_path), ' : ' , labels&#x5B;prediction&#x5B;0].argmax()].strip() , ' (', prediction&#x5B;0]&#x5B;prediction&#x5B;0].argmax()] , ')'\r\n                np.savetxt(writer, net.blobs&#x5B;layer_name].data&#x5B;0].reshape(1,-1), fmt='%.8g')\r\n\r\nif __name__ == &quot;__main__&quot;:\r\n    main(sys.argv&#x5B;1:])\r\n<\/pre>\n<p>You will first need to set the caffe_root variable to point to your Caffe installation. 
Then run it with:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npython caffe_feature_extractor.py -i &lt;inputfile&gt; -o &lt;outputfile&gt;\r\n<\/pre>\n<p>It will first print out a lot of model-specific debugging information, and will then print a line for each input image containing the image name, the label of the most probable class, and the class probability.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\nflower.jpg  :  n11939491 daisy  ( 0.576037 )\r\nhorse.jpg  :  n02389026 sorrel  ( 0.996444 )\r\nbeach.jpg  :  n09428293 seashore, coast, seacoast, sea-coast  ( 0.568305 )\r\n<\/pre>\n<p>At the same time, it will also print vectors into the output file. By default, it will extract the layer pool5\/7x7_s1 after processing each image. This is the last layer before the final softmax, and it contains 1024 elements. I haven&#8217;t experimented with choosing different layers yet, but this seemed like a reasonable place to start &#8211; it should contain all the high-level processing done in the network, but before forcing it to choose a specific class. Feel free to choose a different layer though; just change the corresponding parameter in the script. If you find that specific layers work better, let me know as well.<\/p>\n<p>The output file will contain one line of values for each input image, and every line will contain 1024 values (if you printed the default layer). Mission accomplished!<\/p>\n<h2>Troubleshooting<\/h2>\n<p>Below are some tips for when you run into problems.<\/p>\n<p>First, it&#8217;s worth making sure you have compiled the python bindings in the Caffe directory:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\nmake pycaffe\r\n<\/pre>\n<p>I was getting some unusual errors when this code was in a subdirectory of the main Caffe folder. 
After some googling I found that others had similar problems with other projects, and apparently overlapping library names were causing the wrong dependencies to be included. The simple solution was to move this code out of the Caffe directory, and put it somewhere else.<\/p>\n<p>I installed Caffe with CUDA support, and even though I turned GPU support off in the script, it was still complaining when I didn&#8217;t set the CUDA path. For example, I run the code like this (you may need to change the paths to match your system):<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\nLD_LIBRARY_PATH=\/usr\/local\/cuda-7.0\/lib64\/:$LD_LIBRARY_PATH PYTHONPATH=$PYTHONPATH:\/path\/to\/caffe\/python python caffe_feature_extractor.py -i images.txt -o vectors.txt\r\n<\/pre>\n<p>Finally, Caffe is compiled against a specific version of CUDA. I initially had CUDA 6.5 installed, but after upgrading to CUDA 7.0 the Caffe library had to be recompiled.<\/p>\n<h2>Epilogue<\/h2>\n<p>There you have it &#8211; going from images to vectors. Now you can use these vectors to represent your images in various tasks, such as classification, multi-modal learning, or clustering. Ideally, you will probably want to train the whole network on a specific task, including the visual component, but for starters these pretrained vectors should be quite helpful as well.<\/p>\n<p>These instructions and the script are loosely based on Caffe examples on <a href=\"http:\/\/nbviewer.ipython.org\/github\/BVLC\/caffe\/blob\/master\/examples\/00-classification.ipynb\">ImageNet classification and filter visualisation<\/a>. 
If the code here isn&#8217;t doing quite what you want it to, it&#8217;s worth looking at these other similar applications.<\/p>\n<p>If you have any suggestions or fixes, let me know and I&#8217;ll be happy to incorporate them in this post.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;m keen to explore some challenges in multimodal learning, such as jointly learning visual and textual semantics. However, I would rather not start by attempting&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-310","post","type-post","status-publish","format-standard","hentry","category-uncategorized"]}