Broken Link Destruction for Better Rankings

Like most of my posts, this is not worth implementing in any manner. This is unlimited-budget SEO. Works-in-theory SEO. This is almost make-work. There are no brakes on the marginal gains train. The theory: we believe that broken links leak link equity. We also believe that pages provide a finite amount of link equity, and anything hitting a 404 is wasted rather than diverted to the live links. The standard practice is to swoop in and suggest a resource you have a vested interest in to replace the one that's died. There is a small industry dedicated to doing just this. It works, but requires some resource. If we instead get the broken links removed, the theory goes, we increase the value of all the links remaining on the page. You can increase the value of external links to your site [...]
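
For illustration only, here's a minimal sketch of the prerequisite step (finding a page's dead outbound links before asking for their removal), assuming the requests and beautifulsoup4 packages are available; the audited URL is a placeholder.

```python
# Sketch: list outbound links on a page that now return 404.
# Assumes the requests and beautifulsoup4 packages; the target URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE = "https://example.com/resources"  # hypothetical page to audit

html = requests.get(PAGE, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for a in soup.find_all("a", href=True):
    url = urljoin(PAGE, a["href"])
    if not url.startswith("http"):
        continue
    try:
        # HEAD is cheaper than GET; some servers mishandle it, so treat this as approximate
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    if status == 404 or status is None:
        print(f"DEAD? {url} ({status})")
```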

Speeding Up Default WordPress Part 2 – Images

You can read Part One (Speeding Up Default WordPress) here. Image files are still the bulk of page weight for most blogs. They are the majority of page weight for the average page on the internet. They will account for an even higher proportion of this page's weight, given it's full of screenshots about image weight. Although it's possible to squeeze the most speed out by delving into the guts of WordPress and cutting the chaff, for now we're sticking to the things we can control ourselves, with an emphasis on ease. This article mostly lists mistakes I wish I hadn't made with the images on this website, and plugins that do things for us. So how do we speed up our images on WordPress? We can make the images we use smaller, so that it takes less time to transfer them. We [...]
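
For illustration, a minimal sketch of the "make them smaller" step, assuming the Pillow package; the maximum width, quality setting and filenames are placeholders (in practice a WordPress plugin does this for you on upload).

```python
# Sketch: shrink an oversized screenshot before uploading it to WordPress.
# Assumes the Pillow package; max width, quality and filenames are placeholders.
from PIL import Image

MAX_WIDTH = 1200   # wider than most blog content columns
QUALITY = 82       # a common "visually fine, much smaller" JPEG setting

def shrink(src: str, dst: str) -> None:
    img = Image.open(src)
    if img.width > MAX_WIDTH:
        ratio = MAX_WIDTH / img.width
        img = img.resize((MAX_WIDTH, round(img.height * ratio)))
    # optimize=True tightens the encoding on top of the quality drop
    img.convert("RGB").save(dst, "JPEG", quality=QUALITY, optimize=True)

shrink("screenshot.png", "screenshot.jpg")  # hypothetical filenames
```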

Preserve Link Equity With File Aliasing

The standard 'SEO friendly' way to change a URL is with a 301 'moved permanently' redirect. Search engines then attribute value to the destination page. If we believe redirects are lossy, that value is nearly as much as the original (assume 85–95%), but not all of it. If we want optimal squeezing-every-last-drop-out SEO, we're better off updating a resource on the same URL instead of redirecting that URL to a new location. Stay with me. But what if the resources are fundamentally different? Say I've enthusiastically converted a PDF to HTML. The filetypes are different. I've got to move from /resources/my-guide.pdf to /resources/my-guide, right? Not so. Someone requests a .pdf file we have painstakingly converted into HTML. We serve them the .html version on the original (.pdf) URL. We retain [...]
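
In practice this aliasing is usually a rewrite or alias rule in the web server config; as a rough, stdlib-only sketch of the idea, here's a tiny Python server that answers the original .pdf URL with the converted HTML. The filenames come from the example above; the port is a placeholder.

```python
# Sketch: serve converted HTML on the original .pdf URL.
# In production this would be a rewrite/alias rule in Apache or nginx;
# this stdlib-only version just illustrates the idea.
from http.server import BaseHTTPRequestHandler, HTTPServer

# map old .pdf paths to their converted .html files (illustrative)
ALIASES = {"/resources/my-guide.pdf": "my-guide.html"}

class AliasHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in ALIASES:
            with open(ALIASES[self.path], "rb") as f:
                body = f.read()
            self.send_response(200)
            # the crucial part: the URL says .pdf, the response says HTML
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

HTTPServer(("", 8000), AliasHandler).serve_forever()
```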

PDF to HTML (and SEO)

Last week I read Emma Barnes' post on the Branded3 blog. It got me thinking. Essentially, PDFs rank fine but are a pain to track properly in Analytics, so translating them into a friendlier format like HTML is preferable. Before you start reading the post, please note: This is a curiosity (or dead end). This is not a viable ranking strategy. This is a waste of your time. I initially thought that translating PDF files to complete webpages probably wasn't worth the time expenditure for developers in most cases. The resource already ranks, right? Well, if it's not ranking 1st we're probably tempted to fiddle with it. And we do want decent tracking information if it's for a term with search volume. And we have a lot more options for fiddling with it if we convert the file to HTML. This [...]
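
The post doesn't prescribe a conversion method, but as a crude sketch, here's a text-only conversion assuming the pdfminer.six package; filenames are placeholders, and a layout-faithful conversion would need a dedicated tool such as pdf2htmlEX.

```python
# Sketch: crude text-only PDF-to-HTML conversion.
# Assumes the pdfminer.six package; filenames are placeholders.
# Layout-faithful conversion needs a dedicated tool (e.g. pdf2htmlEX).
from html import escape
from pdfminer.high_level import extract_text

text = extract_text("my-guide.pdf")

# treat blank lines as paragraph breaks and wrap each chunk in <p>
paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
body = "\n".join(f"<p>{escape(p)}</p>" for p in paragraphs)

with open("my-guide.html", "w", encoding="utf-8") as f:
    f.write(f"<!doctype html>\n<title>My Guide</title>\n{body}\n")
```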

Recursively Optimise Images

We know that images account for a lot of the internet. We know that speed is good, and that page weight is not good for speed. We also know that lossless image optimisation exists; that smart people have made it possible to get smaller images of the same perceivable quality at the cost of processing power. Unfortunately, our standalone content (I have pure "Content Marketing" content in mind here) is often fragmented over a number of directories. Image compression tools (there are many) are often drag-and-drop affairs, set by default to process single images and filetypes. This is not good if we're trying to bake image optimisation into an organisation. When our images live in multiple folders within a project, it's disheartening for anyone to have to seek them out to process. This post [...]
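
As a sketch of the recursive approach, this walks a directory tree and hands every PNG or JPEG to a lossless optimiser; it assumes the optipng and jpegtran binaries are installed, and the root directory is a placeholder.

```python
# Sketch: recursively find images in a project and losslessly optimise them.
# Assumes the optipng and jpegtran binaries are installed; ROOT is a placeholder.
import os
import subprocess
import tempfile

ROOT = "content"  # hypothetical project directory

for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        ext = os.path.splitext(name)[1].lower()
        if ext == ".png":
            # optipng rewrites the file in place, losslessly
            subprocess.run(["optipng", "-o2", path], check=True)
        elif ext in (".jpg", ".jpeg"):
            # jpegtran can't write in place, so go via a temp file in the same folder
            fd, tmp = tempfile.mkstemp(suffix=ext, dir=dirpath)
            os.close(fd)
            subprocess.run(
                ["jpegtran", "-optimize", "-copy", "none", "-outfile", tmp, path],
                check=True,
            )
            os.replace(tmp, path)
```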

Sift: Grep On Steroids?

I've been playing around with Sift this weekend as a potentially friendly and faster alternative to grep (not that grep is slow). Although the tool clearly has broader applications than parsing server logs, it's very suited to that purpose. If you're currently using grep for this purpose, I'd recommend checking Sift out. Overview: No dependencies. Easy, cross-platform install. Familiar to grep users. Really fast. Handles huge files easily. Perl-style regex, supporting multiple patterns simultaneously. A single flag for searching both inside and outside gzipped files (e.g. sift -z Googlebot access*); grep requires two separate invocations (e.g. one grep pass plus one zgrep pass) to do this. Considering audits for larger sites with multiple servers, this is more of a time saver than you'd think. The published speed benchmarks are impressive. Installation: To [...]