Google search engine crawling experiment
Recently I ran an experiment on the way Google crawls a site. I had a client site that had not been spidered in spite of being submitted to Google a good while back. I looked at the site and saw nothing amiss. There was plenty of text on the page, and everything looked good. I found the page in Google, but there was no cached text.
After a bit of searching I found others with similar problems, and the speculation was that the site had been penalized for “invisible text”. Some developers have used that technique to load keywords into pages to take advantage of the search engines. Now, I hadn’t done anything of the sort, but I did write the pages in PHP, and I had a few PHP comments to remind myself what I had done and why I had done it.
I removed the comment text, and within a few days the main index page was spidered with searchable text and the summary was showing up in Google. So, I went about building a site map and submitting it to get the rest of the pages crawled. Now, the pages were all in the main directory with .php file extensions. I submitted the sitemap and waited and watched. A couple of weeks went by and the sub-pages still hadn’t been crawled. In that time, I had been doing frequent posts on this site and found Google spidering all over the place. WordPress uses easy-to-remember permalinks that look like directory paths, for instance http://www.averyjparker.com/2005/08/15/pcbsd-configuration/ instead of http://www.averyjparker.com/20050815pcbsd-configuration.php or something similar. Google seemed to like this, as it was spidering and caching the various posts with about a two- or three-day delay.
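For anyone curious what the sitemap itself looks like, it’s just an XML list of URLs. Here’s a minimal sketch in the sitemap protocol format Google accepts; the domain and page names are placeholders, not the actual client site:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want crawled -->
  <url>
    <loc>http://www.example.com/contact.php</loc>
    <lastmod>2005-08-15</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/aboutus.php</loc>
  </url>
</urlset>
```

The `<lastmod>` element is optional; the only required pieces are the `<urlset>` wrapper and a `<loc>` for each page.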
About this time I found an interesting writeup on search engine optimization, specifically about Google. Among other things, the author had noticed this same behavior. No one at Google could explain it to him, but it seemed that directories got spidered more quickly than individual document files. So contact.php, aboutus.php, and skills.php might not get indexed before domain.com/contact/, domain.com/aboutus/, and domain.com/skills/.
I tested this theory out on the site in question. I moved each sub-page of the site to its own directory and renamed it index.php (so that the server would display it automatically when the directory was requested), then updated my sitemap (and the site’s menu) to reflect the change. Within two days the Googlebot spidered each of the sub-directories, after a wait of several weeks previously.
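The restructuring step is simple enough to script. This is a sketch using the page names from the example above (contact, aboutus, skills); the web root here is a temporary directory standing in for your actual document root:

```shell
#!/bin/sh
# Stand-in for the web root; substitute your actual document root.
cd "$(mktemp -d)" || exit 1

# Simulate the original flat layout of .php pages.
touch contact.php aboutus.php skills.php

# Give each page its own directory and rename it index.php,
# so domain.com/contact/ serves what contact.php used to.
for page in contact aboutus skills; do
    mkdir -p "$page"
    mv "$page.php" "$page/index.php"
done

ls contact/index.php aboutus/index.php skills/index.php
```

This assumes the server is configured to serve index.php as the directory index (the common default for PHP hosts); remember to update any internal links and the sitemap to point at the directory URLs.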
So my best advice is, if you’re eager to get Google spidering your site, go ahead and plan on giving separate pages their own directories with relevant names.