Is it possible to restrict which documents in DMX bots are allowed to index? We have several files that need to stay "public" (to avoid requiring authentication), but we don't want bots to be able to index them.

There are numerous articles about restricting bots in various ways, but the most commonly recommended approach seems to be setting the X-Robots-Tag in the HTTP response header:

    HTTP/1.1 200 OK
    ...
    X-Robots-Tag: noindex

There are a few ways to set this (e.g. in web.config), but since the PDFs are served by DMX, I assume those methods won't work, as I believe DMX writes its own response headers.

https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag?hl=de
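For reference, the web.config approach I mean is something like the following sketch (the `Documents` path is just a placeholder for wherever the files live; this applies the header via IIS for requests under that path, which is presumably bypassed when DMX serves the files through its own handler):

```xml
<!-- Sketch: have IIS add X-Robots-Tag to responses for a given path.
     "Documents" is a placeholder folder name. Since DMX streams PDFs
     through its own handler and writes its own headers, this likely
     never takes effect for DMX-served documents. -->
<configuration>
  <location path="Documents">
    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <add name="X-Robots-Tag" value="noindex" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>
  </location>
</configuration>
```

So the question is whether DMX itself offers a way to add or control this header per document.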