The Lumar website intelligence platform is used by SEO, engineering, marketing, and digital operations teams to monitor their site’s technical health and ensure a high-performing, revenue-driving website.
The cornerstone of our platform is a website crawler, which crawls a site’s pages to collect data for sophisticated website analytics.
We obey the robots.txt protocol and will not crawl your site if you exclude the Lumar user-agent token, e.g.
User-agent: deepcrawl
Disallow: /
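To check that a robots.txt file excludes the crawler as intended, the directives above can be verified with Python's standard urllib.robotparser (a minimal sketch; the file contents and paths here are illustrative):

```python
# Sketch: confirm that a robots.txt excludes the "deepcrawl" user-agent
# while leaving other crawlers unaffected.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: deepcrawl
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The deepcrawl token is blocked everywhere; agents with no matching
# rule (and no default "*" group) remain allowed.
print(parser.can_fetch("deepcrawl", "/any/page"))  # False
print(parser.can_fetch("googlebot", "/any/page"))  # True
```

Note that robots.txt user-agent matching is case-insensitive, so the lowercase token shown above is sufficient.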
The IP addresses and user-agents used by Lumar will change from time to time depending on user settings, as this is sometimes required for specific testing processes. Please note that our user-agent currently references our previous company name (Deepcrawl) to avoid creating issues for our customers. This will be updated to Lumar in due course.
Please also note that we do not support the crawl-delay directive. Our aim is to match the way Google crawls as closely as possible, and Google does not support crawl-delay. The crawl-delay directive can also make it difficult to enforce domain-level crawl rate limits, which is why most DevOps teams use a bot management system to give them complete control.
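As an illustration of that server-side approach, a crawl rate limit can be enforced at the web-server layer rather than via crawl-delay. The sketch below assumes nginx; the zone name, rate, and burst values are illustrative, not a recommendation:

```nginx
# Hedged sketch: rate-limit only requests whose user-agent contains
# "deepcrawl", leaving all other traffic unlimited.
map $http_user_agent $crawler_limit_key {
    default        "";                    # empty key = not rate-limited
    "~*deepcrawl"  $binary_remote_addr;   # limit per client IP
}

limit_req_zone $crawler_limit_key zone=crawlers:10m rate=5r/s;

server {
    location / {
        limit_req zone=crawlers burst=10 nodelay;
    }
}
```

This gives full control over the effective crawl rate per domain, independent of what any individual crawler supports.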
If you have already excluded the Lumar user-agent but your site is still being crawled and you would like it to stop, please contact us at support@lumar.io.