Author: Wes

After my small firewall problem, I began looking for ways to raise my Google ranking back to where it was earlier in the year. While searching for solutions to my original problem, it was suggested that my site would be more efficient if the root of the website (wesg.ca) were the opening page, instead of redirecting to wesg.ca/blog. I set out to make the blog appear at the root of the site instead of in a folder, and in the process came across this article on WordPress about giving WordPress its own directory.
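
The heart of that setup is a small index.php at the root of the site that loads WordPress from the subdirectory. As a rough sketch of the idea, assuming the WordPress files stay in /blog/ (wp-blog-header.php is the standard WordPress loader; the subdirectory path is the only part you change):

<?php
// Root index.php: load the WordPress environment from the /blog/
// subdirectory so the site answers at the root of the domain.
define( 'WP_USE_THEMES', true );
require( dirname( __FILE__ ) . '/blog/wp-blog-header.php' );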

This is great for accessing the website, but if you’ve built up Google rankings over time, you will lose them if you leave it at that. My problem was that Google had indexed my site with the folder /blog/ in front of each entry. Without changing those URLs, people would get a 404 error and never reach the proper page. That would not be good for traffic or rankings.

My solution was to use a mod_rewrite rule to redirect all traffic with /blog/ to the root of the site. To do this, I added a rule to the .htaccess file at the root of my server. This file controls the behaviour of your web server and is available from nearly all web hosts. If your host doesn’t provide access to it, you may want to look for a new host.

To redirect all traffic, I added these lines at the top of my .htaccess file:

# Permanently redirect anything under /blog to the same path at the root
RewriteEngine On
RewriteRule ^blog/?(.*) /$1 [R=301,L]

What this bit of code does is turn any URL that begins with /blog/ into the same URL at the root of the website. You can see this in action if you go to https://www.wesg.ca/blog and watch the address change to www.wesg.ca. The R=301 portion at the end tells Google and other spiders visiting the site that the page they originally requested has permanently moved to a new location. This causes Google to transfer the PageRank of the original page to the new one, and thus, your website is saved.
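
If you want to verify the redirect for yourself, a quick header check will show it. Here is one way to do it with curl; any header checker will report the same thing:

# Fetch only the response headers; the status line should read
# HTTP/1.1 301 Moved Permanently
curl -sI https://www.wesg.ca/blog/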
[tags]blog, website, Google, SEO, Apache, rankings[/tags]

I started this blog/website in December of 2007. For a month and a half it was crawled regularly by Google, and I slowly climbed the Google ranks for searches like wesg, mac cool apps and MacBook 5.1 surround sound. Typically I would be on the first result page, and if not, I’d be at the top of the second page.

I thought this was great, because it brought in good traffic, but in the middle of January I noticed that I had slipped in the rankings. To investigate, I logged into Google Webmaster Tools and looked for the cause. What I found turned out to be a much larger issue than I had thought.

Under the Tools tab, I found that Google could not download the robots.txt file for my website. Basically, this is a file that tells search engines like Google what they can and can’t crawl, or index. The catch is that Google checks it immediately on arriving at a site, and if there is an error retrieving the file, it stops crawling and moves on. If this happens repeatedly, it can prevent Google from crawling your site indefinitely.
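
For reference, a permissive robots.txt is only a few lines. This is a sketch of the simplest possible version, which allows everything and points crawlers at a sitemap (the sitemap URL here is just an example):

# Allow every crawler to index everything
User-agent: *
Disallow:

# Tell crawlers where the sitemap lives
Sitemap: https://www.wesg.ca/sitemap.xml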

This was happening to me, and if I couldn’t figure out what was wrong, I would drop out of the search results and get very little traffic. I started Googling around and found many people who had run into similar issues and eventually tracked down the cause. Following some of the advice I found, I did this:

  • Checked the robots.txt and sitemap.xml files to see if they were accessible.
  • Checked the files again using an HTTP header checker.
  • Checked the .htaccess file to see if my server was blocking Googlebot’s known IP range, which begins with 66.249 (see the sketch after this list).
  • Posted to several forums to see if others could help me.
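
For reference, the kind of .htaccess rule I was checking for looks like this. To be clear, this is a hypothetical example of Apache’s old access-control syntax that would block Googlebot, not something that was actually in my file:

# With Order Allow,Deny, a matching Deny overrides the Allow:
# every address starting with 66.249 would be refused
Order Allow,Deny
Allow from all
Deny from 66.249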

After doing all of this and resolving nothing, I came to the conclusion that my web host, webserve.ca, was blocking Googlebot. I opened many support tickets, but they seemed to disagree, basically denying that anything they were doing was preventing Google from viewing my site. I kept at it, though, and finally they admitted that yes, they had some Google IPs in a blocked area of the firewall. Finally! So this morning when I checked Google Webmaster Tools, I found that the robots.txt file had been successfully downloaded and the sitemap had been accessed. Now we wait for the rankings to return.
[tags]Google, SEO, blog, robots.txt, timeout, web hosting[/tags]

For the most part my MacBook has been a great machine, but recently it decided to become evil. It all started when I downloaded and installed the most recent software updates: Front Row, Security Update, iTunes and QuickTime. When I restarted the computer, it failed to reach the Apple loading screen, and instead left me with a blank grey background.

I forced a restart, and it continued to fail. So here I present the other steps I took to resolve the problem, in the hope that they can save someone else some headache. Read More »

When Apple moved to Intel chips and introduced the MacBook, MacBook Pro and Mac Pro (and now, the MacBook Air), people were excited to find a way to run both operating systems on one machine. That would allow the best of both worlds: you could use OS X for your daily work, and when you hit an application that had no Mac equivalent or was Windows-only, you could switch to Windows.

Several months later, you now have several options to bring Redmond’s software to your Mac. Read More »

At Macworld 2008 today, Steve Jobs unveiled a brand new notebook called the MacBook Air. There is no shortage of information about it, including the official site, Engadget and Macworld.com. This 0.78″ wedge-shaped computer – billed as the “world’s thinnest notebook” – indicates a new approach to computing from Apple.
Read More »