The Robots Exclusion Protocol (REP), or robots.txt, is a text file that webmasters create to instruct robots (typically search engine crawlers) how to crawl and index pages on their website. Although it is not an official standard, robots.txt is commonly used on Internet-facing websites to exclude parts of a site from crawling. Find out how you can easily create and manage the robots.txt file with Mavention Robots.txt for Sitecore.
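To illustrate the protocol, here is a minimal sketch of what such a file might look like; the paths are placeholders, not part of any particular site:

```text
# robots.txt — served from the site root, e.g. https://example.com/robots.txt
# Exclude a hypothetical admin area from all crawlers
User-agent: *
Disallow: /sitecore/

# Allow everything else (an empty Disallow permits all paths)
User-agent: *
Disallow:
```

Crawlers that honor the protocol request this file before crawling and skip any path matched by a `Disallow` rule for their user agent.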
Retrieving an image from an image library in SharePoint 2013 is easily done with a server-side control. But what if you are working on a public-facing website? Another way to achieve the same goal is to use the SharePoint 2013 Search REST API.
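As a rough sketch of that approach, the snippet below builds a Search REST query URL that restricts results to items in a picture library. The site and library URLs are placeholder assumptions, and the exact managed properties you select would depend on your search schema:

```typescript
// Sketch: construct a SharePoint 2013 Search REST query URL that returns
// items from an image library. Site and library URLs are placeholders.
function buildImageSearchUrl(siteUrl: string, libraryUrl: string): string {
  // KQL: STS_ListItem_PictureLibrary targets picture library items;
  // Path scopes the query to the given library.
  const queryText = `ContentClass:STS_ListItem_PictureLibrary Path:"${libraryUrl}"`;
  // Limit the payload to the properties we actually need.
  const selectProps = "Title,Path,PictureThumbnailURL";
  return (
    `${siteUrl}/_api/search/query` +
    `?querytext='${encodeURIComponent(queryText)}'` +
    `&selectproperties='${encodeURIComponent(selectProps)}'`
  );
}

// Usage with placeholder URLs; issue the request with any HTTP client,
// sending "Accept: application/json;odata=verbose" for a JSON response.
const url = buildImageSearchUrl(
  "https://contoso.example",
  "https://contoso.example/PublishingImages"
);
console.log(url);
```

The resulting JSON contains the result rows under the query response, from which the image path (or thumbnail URL) can be read and bound to an `img` element client-side, avoiding server-side controls entirely.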
SharePoint Search crawls the contents of all site columns by default so that they are available for search queries. While most would consider that a good thing, there are times when it can be a hindrance, especially on public-facing websites where column content may surface in search results even though it is not relevant to the page or its metadata.