How do you set up robots.txt to allow crawling of a website except for a couple of directories?
Google Webmaster Tools has a section called "Crawler access".
This section lets you create your robots.txt very easily.
For example, to allow everything except a folder called Test, your robots.txt would look something like:
User-agent: *
Disallow: /Test
Allow: /
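If you want to sanity-check rules like these before publishing them, one option is Python's standard-library `urllib.robotparser`. The sketch below (the `example.com` URLs and the `/Private` directory are placeholders, not from the original question) parses the rules and tests which paths a crawler may fetch:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block two directories, allow everything else
rules = """\
User-agent: *
Disallow: /Test
Disallow: /Private
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Paths under a disallowed directory are blocked for all user agents
print(rp.can_fetch("*", "https://example.com/Test/page.html"))  # False
# Everything else falls through to Allow: /
print(rp.can_fetch("*", "https://example.com/index.html"))      # True
```

Note that `Disallow: /Test` is a prefix match, so it also blocks paths like `/Testing`; use `Disallow: /Test/` if you only want to block the directory itself.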