If one thing has become clear to me while studying digital marketing, it is that having a beautiful website is not of much use if no one finds it. You can have the best content in the world, but if Google doesn't understand your site well, it simply won't show it. With that idea in mind, I decided to do something that I had always left for "tomorrow": sit down and review the SEO of my own website.
What I found was a bit embarrassing, honestly. Basic errors that I myself explain in my articles on SEO and that, paradoxically, I was not applying on my own site. So this post is a kind of confession and, at the same time, a practical guide to what I did to correct it.
The starting point: What was wrong?
I analyzed the eight pages of my website (home, about me, blog, contact and the four articles) looking for the basic elements any site needs to be crawlable and indexable by search engines. The result was roughly this: of the ten key points I reviewed, only two or three were in good shape. The rest either didn't exist or were half done.
The main problems were those that are not visible to the naked eye. The website worked perfectly for a human visitor, but for Google's bots it lacked a lot of information to understand what each page was, how they related to each other and which ones should be prioritized.
Meta descriptions: the first thing the user sees on Google
None of my eight pages had a meta description. None. This means that when someone searches on Google and my website appears (if it appears), the text shown below the title is a fragment Google picks on its own. Sometimes it's coherent, but often it makes no sense and the user simply moves on.
The solution was simple: write a custom description for each page, something that summarizes the content well and invites the user to click. For the "About me" page, for example, I wrote a short description of who I am and what the site is about.
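As a sketch, the tag goes in the page's head; the wording below is an illustrative placeholder rather than my exact copy:

```html
<!-- In the <head> of the "About me" page; description text is illustrative -->
<meta name="description" content="Digital marketing student sharing hands-on SEO audits and lessons learned from building my own website.">
```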
One line of code per page. Five minutes of work that can make a difference in the CTR (click-through rate) of each result.
Sitemap and robots.txt: the files that speak to Google
Another big mistake: my website had neither a sitemap.xml nor a robots.txt. For those who don't know, the sitemap is basically a list of all the pages on your site so that Google can find them quickly, and robots.txt is a file that tells bots what they may crawl and where the sitemap is.
On a small website like mine, Google will probably end up finding all the pages anyway, but having these files is good practice. It's like leaving a map for the postman instead of waiting for him to find your mailbox on his own.
I created the sitemap.xml including all site URLs with their last modified dates, and the robots.txt with the most basic configuration: allow access to everything and point to the sitemap.
User-agent: *
Allow: /
Sitemap: https://zisquito.github.io/sitemap.xml
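The sitemap follows the same spirit. A minimal sitemap.xml for a site like this looks like the following; the URLs follow my site's pattern, but the dates and the selection of entries here are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://zisquito.github.io/</loc>
    <lastmod>2024-05-01</lastmod> <!-- illustrative date -->
  </url>
  <url>
    <loc>https://zisquito.github.io/blog</loc>
    <lastmod>2024-05-01</lastmod>
  </url>
  <!-- one <url> entry per page, eight in total -->
</urlset>
```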
Canonical URLs: keeping Google from getting confused
This is a more subtle problem. On my website, each page can be reached both with the .html extension and without it. For example, zisquito.github.io/blog and zisquito.github.io/blog.html lead to the same place. For a human they are the same page, but Google can interpret them as two different pages with identical content. That's what's called duplicate content, and it can hurt your rankings.
The solution is to add a canonical tag on every page that tells Google: "Hey, the official URL for this page is this one; ignore the others."
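In practice it's one line in the head of each page. Assuming the extensionless URL is the one chosen as official, the blog page would carry:

```html
<!-- Tells Google the extensionless URL is the official version of this page -->
<link rel="canonical" href="https://zisquito.github.io/blog">
```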
Open Graph: what your website looks like when you share it
Have you ever pasted a link on WhatsApp or LinkedIn and a preview appeared with an image, title and description? Open Graph tags (and their Twitter Cards equivalents) control that. Only three of my eight pages had them, and on top of that none included the description.
I completed the OG tags on all pages. It's a detail that doesn't directly affect ranking in Google, but it does affect the traffic you get when someone shares your website: a link with an attractive preview gets far more clicks than one that shows up as plain text.
Other smaller adjustments
In addition to the main changes, I took the opportunity to correct several minor things that I had been missing:
- Alt text for images: the alt attributes existed but were too generic. I rewrote them to be more descriptive and more useful for SEO.
- Author meta tag: I added it on all pages. It's a small detail, but it helps Google associate the content with a specific person.
- Inconsistent CV paths: some pages linked to the CV with a relative path (CV_FranciscoRobles.pdf) and others with an absolute one (/CV_FranciscoRobles.pdf). I unified them all to the absolute path from the root.
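These last fixes are all one-liners. The image filename and the alt wording below are illustrative examples of the pattern, not my exact markup:

```html
<!-- Author meta tag in the <head> of every page -->
<meta name="author" content="Francisco Robles">

<!-- Descriptive alt text instead of a generic one (filename and text illustrative) -->
<img src="/images/seo-audit.png" alt="Checklist of the ten SEO points reviewed on the site">

<!-- CV link unified to the absolute path from the root -->
<a href="/CV_FranciscoRobles.pdf">Download my CV</a>
```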
What was already good
It wasn't all mistakes. The audit also confirmed that some fundamentals were done well: the language declared correctly with lang="en", a single h1 per page, use of semantic HTML tags, descriptive and unique titles, responsive design, a favicon, Google Analytics in place and reasonable image weights.
Which shows that it's not that I didn't know what I was doing. It's just that when you build a website you tend to focus on the visuals and the content, and the technical details end up as a permanent "I'll do it tomorrow."
The lesson
If there is one thing I take away from this exercise, it's that basic technical SEO is not difficult, but it is easy to forget. These are small changes, almost all of them one or two lines of code, but together they can make a real difference in whether Google shows your website or ignores it.
My recommendation for anyone with their own website: spend an afternoon reviewing it. You don't need expensive tools or expert knowledge. With a basic checklist and a willingness to look under the hood, you'll probably find things to improve. I did it with mine and, although the result was a bit embarrassing, at least now I know the foundations are solid.