Semrush Site Audit Notices: Common Issues & How to Fix Them


In the ever-evolving world of SEO, maintaining a well-optimized website is crucial for ranking higher on search engines and ensuring a seamless user experience. One of the most effective tools for identifying website issues is Semrush’s Site Audit, a comprehensive tool that scans websites for technical errors, performance issues, and optimization gaps.

Semrush categorizes the issues it detects into three levels:

  • Errors – Critical issues that require immediate attention, such as broken links or missing meta tags.
  • Warnings – Less severe but still impactful problems, like duplicate content or large image files.
  • Notices – Informational alerts that highlight potential areas for improvement, such as multiple H1 tags or long URLs.

While Errors and Warnings demand urgent fixes, Notices provide insights that can further enhance a website’s crawlability, usability, and SEO performance. Ignoring these notices might not lead to severe penalties, but addressing them can significantly improve a site’s search engine visibility, page speed, and overall user experience.

Common Notices in Semrush Site Audit Report


Pages Have More Than One H1 Tag

What is an H1 Tag?

An H1 tag is an HTML element that represents the main heading of a webpage. It serves as the primary title and helps both search engines and users understand the main topic of the page. Typically, the H1 tag should summarize the content in a clear, concise, and keyword-rich manner.

For example, in HTML, an H1 tag is written as:

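<h1>Digital Marketing Strategies</h1>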

This tells search engines and readers that the page is focused on digital marketing strategies.

Why Multiple H1 Tags Create SEO Confusion

While some modern HTML5 frameworks allow multiple H1 tags on a page, sticking to one H1 per page is generally recommended. Here’s why:

  • Search Engine Confusion – Search engines might struggle to determine the page’s main topic.
  • SEO Dilution – Multiple H1 tags can weaken keyword focus, reducing the page’s ability to rank for the primary topic.
  • Accessibility Issues – Screen readers rely on a single H1 tag to navigate content easily, and multiple H1s may confuse visually impaired users.

For example, if a page has two H1 tags:

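<h1>Digital Marketing</h1>
<h1>SEO Tips</h1>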

Google may have difficulty determining whether the page is about digital marketing or SEO tips, which could affect its ranking potential.

Best Practices for H1 Tag Usage

To optimise your H1 tags effectively, follow these best practices:
Use Only One H1 Tag Per Page – Keep your content focused on a single topic.
Make It Unique and Descriptive – Ensure it accurately represents the page’s content.
Include Target Keywords Naturally – Place relevant keywords without keyword stuffing.
Keep It Concise – Aim for 50-70 characters to enhance readability.
Use Proper Hierarchy – Maintain a structured format by using H2, H3, and H4 tags for subheadings.

Subdomains Don’t Support HSTS

What is HSTS?

HTTP Strict Transport Security (HSTS) is a web security policy that ensures a website can only be accessed over HTTPS and never through an unsecured HTTP connection. By forcing browsers to load a secure version of the site, HSTS helps prevent man-in-the-middle (MITM) attacks, protocol downgrades, and cookie hijacking.

When HSTS is enabled, browsers remember that a website should always use HTTPS, even if a user manually types “http://” in the address bar. The implementation of HSTS is done via an HTTP header like this:

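Strict-Transport-Security: max-age=31536000; includeSubDomains; preload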

  • max-age=31536000 – Enforces HTTPS for one year (in seconds).
  • includeSubDomains – Ensures all subdomains also enforce HTTPS.
  • preload – Allows browsers to always default to HTTPS, even for the first visit.

How HSTS Enhances Website Security

Enabling HSTS offers several security benefits, including:
Prevents HTTPS Downgrade Attacks – Stops hackers from forcing a website to load in HTTP instead of HTTPS.
Protects Against Cookie Hijacking – Ensures cookies are transmitted securely, reducing data interception risks.
Enhances User Trust – Visitors see a secure HTTPS connection at all times, improving credibility.
Improves SEO Rankings – Google prioritizes secure websites in search rankings, making HSTS an indirect SEO booster.

Why is Missing HSTS Flagged as a Notice?

Semrush flags this issue when a website’s subdomains do not support HSTS, leaving them vulnerable to security risks. Here’s why it matters:

  • Even if the main domain enforces HTTPS, subdomains without HSTS can still be exploited.
  • Cybercriminals can use SSL stripping attacks to force users onto an insecure HTTP version.
  • Google Chrome and other browsers warn users when accessing non-HSTS sites, leading to trust issues.

How to Fix the Issue?

To enable HSTS on your main domain and subdomains, follow these steps:

  1. Enable HTTPS on all pages and subdomains.
  2. Add the HSTS header to your server’s configuration file:
    • For Apache (in .htaccess or httpd.conf):

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"

  • For Nginx (in nginx.conf)

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

  3. Test Your Configuration using tools like hstspreload.org.
  4. Submit to the HSTS Preload List for added security.

Pages Are Blocked from Crawling

The Role of robots.txt and Meta Tags in Crawling

Search engines like Google use crawlers (bots) to navigate and index web pages. However, website owners can control which pages search engines can or cannot access using:

  1. robots.txt – A file that instructs search engine bots on which pages or directories they should or shouldn’t crawl.
  2. Meta Robots Tag – An HTML tag placed in a page’s <head> section to restrict indexing or following links.

Examples:

  • Blocking a page using robots.txt:

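User-agent: *
Disallow: /private-page/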

This prevents all search engine bots from crawling /private-page/.

  • Blocking a page using the meta robots tag:

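<meta name="robots" content="noindex, nofollow">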

This tells search engines not to index or follow links on that page.

How Blocking Important Pages Affects Indexing

If critical pages (such as product pages, blog posts, or landing pages) are blocked from crawling, it can lead to:
🚫 Pages Not Appearing in Search Results – If a page is disallowed, Google won’t be able to crawl or rank it.
🚫 SEO Performance Issues – If internal links are blocked, link equity won’t be passed to other pages.
🚫 Lost Traffic & Conversions – Users won’t find your pages on Google, reducing organic traffic and potential leads.

For example, if you accidentally block your blog directory in robots.txt:

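User-agent: *
Disallow: /blog/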

Google won’t crawl any blog posts, leading to zero search engine visibility.

How to Fix This Issue

To ensure your important pages are crawlable, follow these steps:

Check robots.txt File:

  • Visit yourwebsite.com/robots.txt to see if important pages are blocked.
  • If a page shouldn’t be blocked, remove the Disallow rule.

Remove Noindex Meta Tags:

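<meta name="robots" content="noindex">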

  • Check the <head> section of your pages for this tag
  • If a page should be indexed, remove the noindex directive.

Use Google Search Console:

  • Go to Google Search Console > Coverage Report to identify blocked pages.
  • If a page is incorrectly blocked, modify robots.txt or remove the noindex tag.

Use “Allow” Rules for Important Pages:


  • Instead of blocking entire sections, allow specific pages:
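# Illustrative rules – block a directory but allow one page inside it
User-agent: *
Disallow: /private/
Allow: /private/important-page/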

URLs Are Longer Than 200 Characters


Why Long URLs Are Problematic?

A URL (Uniform Resource Locator) serves as a website’s address, helping users and search engines understand the content of a page. However, when URLs exceed 200 characters, they can create several SEO and usability issues, such as:

🚫 Poor User Experience – Long, complex URLs are difficult to read and share, reducing click-through rates.
🚫 Lower Search Engine Crawling Efficiency – Google and other search engines may truncate or ignore excessively long URLs.
🚫 Diluted Keyword Relevance – Important keywords in long URLs lose visibility and impact rankings.
🚫 Higher Chances of Broken Links – Long URLs increase the risk of copying, pasting, or linking mistakes.

For example, an SEO-friendly URL looks like this:
https://example.com/best-seo-practices

Whereas an overly long and problematic URL might be:
🚫 https://example.com/category/1234567890/subcategory/best-seo-practices-to-rank-higher-on-google-and-improve-website-performance-2024?utm_source=tracking123

Ideal URL Structure and Best Practices


To keep URLs short, clean, and effective, follow these best practices:

Keep URLs Under 75 Characters – While Google can process long URLs, shorter URLs rank and perform better.
Use Descriptive and Relevant Keywords – Ensure URLs reflect the page content.
Avoid Special Characters and Dynamic Parameters – Use hyphens (-) instead of underscores (_) or spaces (%20).
Remove Unnecessary Words – Keep URLs simple by eliminating stop words like “and,” “the,” or “of.”
Use a Logical Hierarchy – Maintain a clear folder structure for better organisation.

Example of an optimized URL:
https://example.com/seo-tips-for-beginners

🚫 Avoid URLs like this:

  • https://example.com/index.php?id=987654321&ref=xyz&utm_campaign=abc
  • https://example.com/this-is-a-very-long-url-with-too-many-keywords-and-unnecessary-words-that-nobody-will-remember

Tools to Analyze and Shorten URLs

To check and optimize URL length, use these tools:

🔹 Google Search Console – Review indexed URLs and check for any lengthy or redundant links.
🔹 Semrush Site Audit – Identifies URLs exceeding recommended length.
🔹 URL Shorteners (for sharing) – Use Bitly or TinyURL for shortening links while keeping them readable.
🔹 Screaming Frog SEO Spider – Helps analyze URL structures and identify excessively long ones.

Outgoing External Links Contain Nofollow Attributes

What is the ‘nofollow’ Attribute?

The nofollow attribute is an HTML link attribute (rel="nofollow") that instructs search engines not to pass link equity (PageRank) to the linked page. It is primarily used to prevent search engines from following and crawling certain external links.

A standard nofollow link looks like this:

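<a href="https://example.com" rel="nofollow">Visit Example Site</a>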

This tells Google not to consider this link for ranking purposes and prevents it from passing any SEO value to the external site.

When to Use and When to Avoid It?

Use ‘nofollow’ When:

  • Linking to untrusted or sponsored content (to avoid spammy backlinks).
  • Adding affiliate links or promotional links.
  • Linking to user-generated content (such as forum posts or blog comments) to prevent spam abuse.
  • Preventing crawling of login pages or internal sections (though robots.txt is a better alternative).

🚫 Avoid ‘nofollow’ When:

  • Linking to reputable sources (Google values quality external links as a trust signal).
  • Internal links (Using nofollow on internal pages blocks link flow within your site).
  • Your goal is to boost the SEO value of a linked page.

For example, DO NOT use nofollow for:

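<a href="https://en.wikipedia.org/wiki/Search_engine_optimization" rel="nofollow">SEO on Wikipedia</a>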

If this link leads to a high-authority site (like Wikipedia or a government page), using nofollow unnecessarily prevents link juice from being passed.

How It Affects Link Equity and SEO

🔹 Nofollow Prevents PageRank Flow – Search engines ignore nofollow links when calculating ranking signals.
🔹 Not Useful for Internal Links – If used internally, the attribute disrupts your site’s natural link structure.
🔹 Affects Site Authority – Overusing nofollow reduces the overall value of external linking, which can impact Google’s trust in your site.
🔹 Google Now Treats ‘nofollow’ as a Hint – Since 2020, Google may still crawl and index nofollow links, but they don’t pass ranking power.

How to Fix This Issue?

  1. Audit External Links – Use Semrush Site Audit or Screaming Frog to identify nofollow links.
  2. Remove nofollow from Trusted Links – If linking to authoritative sites, remove rel="nofollow" to allow link equity transfer.
  3. Use sponsored or ugc Attributes – Google prefers these for paid and user-generated links (see the example after this list):
    • rel="sponsored" for paid links
    • rel="ugc" for user-generated content
  4. Keep Internal Links ‘Dofollow’ – Ensure all internal site links pass authority naturally.
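For example, a paid placement and a blog-comment link could be marked up like this (the URLs are illustrative):

<a href="https://partner.example.com/offer" rel="sponsored">Partner offer</a>
<a href="https://commenter.example.com" rel="ugc">Commenter's website</a>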

Robots.txt Not Found

What is robots.txt?

A robots.txt file is a simple text file that gives instructions to search engine crawlers on how to navigate and index a website. It is placed in the root directory of a website and tells search engines which pages or sections they can or cannot access.

Example of a basic robots.txt file:

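User-agent: *
Disallow: /private/
Allow: /public-content/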

  • User-agent: * – Applies the rule to all search engines.
  • Disallow: /private/ – Blocks search engines from crawling the /private/ directory.
  • Allow: /public-content/ – Allows crawling of /public-content/ while restricting other parts of the site.

To check if your site has a robots.txt file, visit:
🔹 https://yourwebsite.com/robots.txt

Why Search Engines Need robots.txt?

Search engines like Google, Bing, and Yahoo use robots.txt to determine which pages should be crawled. If robots.txt is missing, it could lead to:

🚫 Unwanted Pages Getting Indexed – Admin panels, test pages, or duplicate content may appear in search results.
🚫 Overloading of Server Requests – Crawlers may consume excessive resources, slowing down site performance.
🚫 SEO Issues – Without crawl directives, search engines may focus on less important pages, impacting rankings.

How to Create and Optimize a robots.txt File

If robots.txt is missing, follow these steps to create one:

Step 1: Create a Plain Text File

  • Open a text editor (Notepad, VS Code) and name the file “robots.txt”.

Step 2: Add Basic Rules


Here’s an SEO-friendly robots.txt file:
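# Illustrative paths – adjust them to your own site structure
User-agent: *
Disallow: /wp-admin/
Disallow: /checkout/
Allow: /blog/
Sitemap: https://yourwebsite.com/sitemap.xml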

  • Blocks sensitive or non-public areas (like admin pages, checkout).
  • Allows crawling of important pages (like blog content).
  • Sitemap directive helps search engines discover all pages faster.

✅ Step 3: Upload robots.txt to the Root Directory

  • Place the robots.txt file in /public_html/ or your root folder.

✅ Step 4: Test Using Google Search Console

  • Go to Google Search Console > Robots.txt Tester.
  • Check for errors and validate that important pages are crawlable.

✅ Step 5: Monitor & Update Regularly

  • As your website grows, update robots.txt to optimise crawl efficiency.

Hreflang Language Mismatch Issues

What is Hreflang?

The hreflang attribute is an HTML tag that helps search engines understand which language and regional version of a webpage to display for users in different locations. It ensures that users see the correct localised version of a website based on their language preferences and geographic region.

Example of a proper hreflang implementation:

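<!-- URL paths are illustrative -->
<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/" />
<link rel="alternate" hreflang="fr" href="https://example.com/fr/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />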

  • en-us – For U.S. English users.
  • en-gb – For U.K. English users.
  • fr – For French users.
  • x-default – Default version for users without a specific language setting.

Common Mistakes and Their Impact on International SEO

🚫 Incorrect Language Codes – Using “eng” instead of “en” or “fr-france” instead of “fr”.
🚫 Missing Self-Referencing Hreflang – Each page should include a hreflang reference to itself.
🚫 Conflicting or Overlapping Hreflang Tags – Multiple hreflang tags pointing to the same URL confuse search engines.
🚫 Hreflang Tags Without Canonical Tags – Google prefers pages to have both canonical and hreflang tags for clarity.
🚫 Not Using Bidirectional Annotations – If en-us links to fr, the fr version must also reference en-us.

How These Mistakes Affect SEO

🔹 Wrong Page Displayed in Search Results – A user in France might see the English version instead of the French one.
🔹 Duplicate Content Issues – Without proper hreflang, Google may consider similar pages as duplicate content, affecting rankings.
🔹 Higher Bounce Rates – Users might leave immediately if they land on a page in the wrong language.

Correct Implementation to Avoid Penalties

✅ Use Correct Language and Region Codes – Follow ISO 639-1 for languages and ISO 3166-1 Alpha-2 for country codes.
✅ Add Hreflang Tags to Every Language Version – Ensure bidirectional tagging between versions.
✅ Use the x-default Tag – Direct users to a fallback version if no specific language version exists.
✅ Check for Hreflang Errors in Google Search Console – Monitor issues under the “International Targeting” section.

Using Relative URLs in Place of Absolute URLs

Difference Between Relative and Absolute URLs

A relative URL is a shortened web address that only includes the path to a page, assuming the base domain is already known. In contrast, an absolute URL contains the full web address, including the domain name.

Example of an Absolute URL:

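https://example.com/blog/seo-tips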

  • Includes the protocol (https), domain name (example.com), and full path (/blog/seo-tips).
  • Works correctly on all platforms, including search engines, external links, and different domains.

🚫 Example of a Relative URL:

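/blog/seo-tips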

  • Only specifies the path (/blog/seo-tips), without the domain name.
  • Works only within the same website but may cause issues when shared externally.

Why Absolute URLs Are Preferred for SEO?

🚀 Better for Indexing – Search engines rely on absolute URLs to understand page location and avoid confusion.
🚀 Prevents Duplicate Content Issues – If relative URLs are used inconsistently, search engines might treat the same page as multiple versions, affecting rankings.
🚀 Improves Link Equity (SEO Value) – Internal links using absolute URLs help pass PageRank effectively, ensuring proper authority distribution.
🚀 Prevents Scraper Abuse – Absolute URLs make it harder for content scrapers to duplicate your content and gain rankings unfairly.

Best Practices for Internal Linking

✅ Always Use Absolute URLs in Canonical Tags – Ensure proper indexing by specifying the preferred version of a page:

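<link rel="canonical" href="https://example.com/blog/seo-tips" />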

Use Absolute URLs in Sitemaps – Help search engines crawl your site efficiently:

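<!-- Excerpt from sitemap.xml -->
<url>
  <loc>https://example.com/blog/seo-tips</loc>
</url>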

Ensure Consistency in URL Structure – Keep URLs uniform across the site to avoid broken links.

Use Relative URLs Only for Development Environments – Relative URLs can be useful for staging sites but should be converted to absolute URLs before going live.

Your Site Has Orphaned Pages


What Are Orphaned Pages?

Orphaned pages are web pages that are not linked from any other page on the website. Since these pages lack internal links, search engine crawlers and users cannot easily discover them, making them invisible in search results unless accessed directly via a URL.

For example, if a blog post on your site has no internal links pointing to it, Google may have difficulty crawling and indexing that page.

Why Orphaned Pages Are Bad for SEO?

🚫 Poor Indexing and Ranking – Search engines use internal links to discover new pages. If a page has no links, Google might not index it.
🚫 Wasted SEO Value – Orphaned pages do not receive link juice from other pages, making ranking difficult.
🚫 High Bounce Rates – If users land on an orphaned page and cannot navigate to other pages, they leave quickly, increasing bounce rates.
🚫 Missed Conversion Opportunities – Important product pages, blogs, or landing pages may not drive traffic or conversions if they remain isolated.

How to Fix Orphaned Pages with Internal Linking?

✅ Identify Orphaned Pages – Use tools like Google Search Console, Semrush Site Audit, Screaming Frog, or Ahrefs to find pages with zero internal links.
✅ Link to Orphaned Pages from Relevant Content – Add links to these pages from high-ranking pages, category pages, or navigation menus.
✅ Add Orphaned Pages to the Sitemap – Ensure that your XML sitemap includes these pages so search engines can crawl them faster.
✅ Use Breadcrumbs and Related Posts – If the orphaned page is a blog post, add “Related Articles” sections to help users discover it naturally.
✅ Optimize Navigation Structure – Ensure important pages are linked from the homepage, sidebar, or main menu.

Example Fix:
Before:
https://example.com/hidden-page (Not linked anywhere)

After:
Link it within a relevant blog post:

<a href="https://example.com/hidden-page">Learn more about this topic here</a>

Pages Take More Than 1 Second to Become Interactive

Importance of Core Web Vitals in SEO

Google’s Core Web Vitals are a set of performance metrics that evaluate user experience based on page speed, interactivity, and visual stability. One of the most crucial metrics is First Input Delay (FID), now succeeded by Interaction to Next Paint (INP), which measures how quickly a page responds after a user’s first interaction.

📌 Google prioritises fast, responsive websites, and slow pages may suffer in rankings.

Factors Causing Slow Interactivity

🚫 Heavy JavaScript Execution – Too many scripts block rendering, delaying interactivity.
🚫 Large Image & Video Files – Unoptimized media slows down page loading.
🚫 Unnecessary Third-Party Scripts – Excessive analytics, tracking codes, or ads can delay responsiveness.
🚫 Slow Server Response Time – Poor hosting or lack of caching increases load times.
🚫 Too Many HTTP Requests – Multiple CSS, JS, and font requests can overwhelm the browser.

Tips to Improve Page Speed

Minimize JavaScript (JS) Execution – Defer or asynchronously load non-essential scripts (see the snippet after these tips).
Use Lazy Loading for Images & Videos – Load content only when needed to reduce initial load time.
Enable Browser Caching – Store website assets in the browser for faster repeat visits.
Compress & Optimize Images – Use formats like WebP and compression tools like TinyPNG.
Use a Content Delivery Network (CDN) – Distribute website resources globally for faster access.
Reduce Third-Party Scripts – Remove unnecessary plugins, widgets, or excessive tracking codes.
Optimize Server Response Time – Upgrade hosting, enable Gzip compression, and use HTTP/2 or HTTP/3 for faster requests.
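For example, deferring a non-critical script and lazy-loading an image can be handled directly in the HTML (file names here are illustrative):

<script src="/assets/js/analytics.js" defer></script>
<img src="/images/hero-banner.webp" loading="lazy" alt="Hero banner">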

📌 Check Your Page Speed – Use Google PageSpeed Insights, Lighthouse, or GTmetrix to measure and optimize performance.

Pages Blocked by X-Robots-Tag: Noindex HTTP Header

What is X-Robots-Tag?

The X-Robots-Tag is an HTTP header directive that controls how search engines index and follow web pages. Unlike the meta robots tag, which is placed inside the HTML <head> section, the X-Robots-Tag is implemented at the server level.

Example of an X-Robots-Tag in an HTTP response header:

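HTTP/1.1 200 OK
X-Robots-Tag: noindex, nofollow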

This tells search engines not to index or follow links on the page.

How It Affects Search Engine Indexing?

🚫 Prevents Pages from Appearing in Search Results – If important pages are mistakenly tagged with noindex, Google won’t rank them.
🚫 Affects Internal Link Flow – Using nofollow prevents link juice from passing to other pages.
🚫 Might Block Valuable Content – Some pages (like product pages or blog posts) may accidentally get excluded from indexing.
🚫 Difficult to Detect – Unlike meta robots tags (which can be seen in the HTML), X-Robots-Tag directives are only visible by inspecting the HTTP response headers.

How to Correctly Use the Noindex Directive?

Use It Only for Non-Public or Low-Value Pages


  • Login pages, checkout pages, thank-you pages, admin areas.
  • Example in Apache .htaccess:
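# Illustrative: requires mod_headers; adjust the file name to your page
<Files "thank-you.html">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>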

Check for Accidental Noindex Tags

  • Use Google Search Console → Coverage Report → Look for “Excluded by ‘noindex’ tag”.
  • Test specific pages with Google’s URL Inspection Tool.

Allow Indexing for Important Pages

  • If mistakenly blocked, remove the directive in server settings.
  • In Apache, allow indexing with:

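# Remove a previously set noindex header (requires mod_headers)
Header unset X-Robots-Tag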

Use Meta Robots Tags for Page-Specific Control – If only certain pages need noindex, use this instead:

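<meta name="robots" content="noindex">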

Blocked External Resources in Robots.txt

Why Blocking CSS and JS Can Cause Issues?

Search engines need access to CSS and JavaScript (JS) files to properly render and understand web pages. If these resources are blocked in the robots.txt file, search engines may:

🚫 Fail to Render the Page Correctly – Googlebot won’t see the layout, styles, or interactive elements.
🚫 Misinterpret Page Content – If CSS and JS are blocked, Google may see a broken or incomplete version of your site.
🚫 Lower SEO Rankings – A poorly rendered site may result in lower user experience scores, Core Web Vitals issues, and ranking drops.

Example of a wrong robots.txt blocking CSS/JS:

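# These rules block WordPress theme styles and scripts
User-agent: *
Disallow: /wp-content/themes/
Disallow: /wp-includes/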

  • This blocks essential WordPress resources (like theme styles and interactive elements), affecting page rendering.

How to Allow Important Resources for Rendering?

Modify robots.txt to Allow CSS and JS Files
Instead of blocking entire directories, allow Googlebot access to necessary resources:

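# Wildcard rules like these are supported by Googlebot
User-agent: Googlebot
Allow: /*.css$
Allow: /*.js$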

Use Google Search Console to Identify Blocked Resources

  • Go to Google Search Console > Coverage Report and look for pages affected by blocked resources.
  • Use Google’s URL Inspection Tool to check how Googlebot renders your page.

Test Rendering with “Fetch as Google”

  • Visit Google Search Console > URL Inspection > Test Live URL.
  • If styles or interactive elements fail to load, check the robots.txt file.

Use the Noindex Alternative


  • Instead of blocking CSS/JS files, use noindex meta tags to prevent indexing without affecting rendering:
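<meta name="robots" content="noindex">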

Broken External JavaScript and CSS Files

How Broken JS and CSS Files Impact SEO and UX

JavaScript (JS) and CSS files are essential for website design, interactivity, and functionality. When these files are broken (due to missing, incorrect, or inaccessible URLs), they can cause serious SEO and user experience (UX) issues, such as:

🚫 Poor Page Rendering – If a CSS file is broken, the page may load without styles and appear unformatted or incomplete.
🚫 Broken Functionality – If JavaScript fails to load, important features like navigation menus, forms, or pop-ups may stop working.
🚫 Lower Core Web Vitals Scores – Google measures Largest Contentful Paint (LCP) and Interaction to Next Paint (INP), which suffer if CSS and JS don’t load properly.
🚫 Higher Bounce Rates – Users will abandon a site that appears broken or doesn’t function as expected.
🚫 Crawling and Indexing Problems – Google may struggle to interpret and rank pages with missing resources.

Tools to Detect and Fix Broken Resources

Google Search Console – Coverage Report

  • Go to Google Search Console > Coverage > Excluded Pages to check for resources that failed to load.

Google PageSpeed Insights

  • Enter your URL in PageSpeed Insights to identify slow-loading or broken JS/CSS files.

Chrome DevTools (Network Tab)

  • Open your website in Google Chrome, press F12, and check the Network and Console tabs.
  • Look for 404 errors indicating missing JavaScript or CSS files.

Screaming Frog SEO Spider

  • Scan your website to detect broken links, missing resources, and HTTP errors related to CSS and JS files.

How to Fix Broken JS and CSS Files?

🔹 Ensure Correct File Paths – Check if files are located in the correct directory and linked properly in HTML:

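<!-- Paths are illustrative – they must match where the files actually live -->
<link rel="stylesheet" href="/css/style.css">
<script src="/js/main.js"></script>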

🔹 Use Absolute URLs for External Resources – If referencing external scripts, always use full URLs instead of relative paths:

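<script src="https://cdn.example.com/scripts/library.min.js"></script>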

🔹 Replace or Restore Missing Files – If a file has been deleted or moved, upload the correct version.

🔹 Minify and Optimize Resources – Reduce file size using CSS Minifier and JS Minifier to improve loading speed.

🔹 Implement a CDN (Content Delivery Network) – A CDN ensures faster delivery of CSS and JS files worldwide.

🔹 Remove Unused JavaScript and CSS – Avoid unnecessary scripts that slow down your site.

Pages Need More Than Three Clicks to Be Reached

Why Deep Pages Are Harder to Index?

Search engines and users both prefer websites with a clear, shallow navigation structure. If a page requires more than three clicks from the homepage, it becomes harder for:

🚫 Search Engines to Crawl – Googlebot prioritizes easily accessible pages, and deeply buried pages may be crawled less frequently.
🚫 Users to Find Important Content – Visitors lose interest when navigating through multiple layers to find relevant information.
🚫 Page Authority to Flow – Link equity from high-ranking pages doesn’t effectively pass to deep pages, reducing their SEO value.
🚫 Bounce Rates to Decrease – If a page is hard to find, users may leave your site sooner, affecting engagement metrics.

For example:

  • Good URL structure (2 clicks max):
    Home > Services > SEO
  • Bad URL structure (4+ clicks):
    🚫 Home > About Us > Solutions > Digital Marketing > SEO > On-Page Optimization

Best Practices for Website Navigation

Follow the 3-Click Rule – Ensure every page is reachable within three clicks from the homepage.
Use a Logical Site Hierarchy – Organize pages into broad categories with subcategories when needed.
Optimize Navigation Menus – Include dropdowns, sidebars, or breadcrumbs for easy access.
Add an Internal Search Bar – Let users quickly find deep content instead of clicking through multiple layers.

Internal Linking Strategies to Improve Accessibility

🔹 Link Deep Pages from High-Authority Pages – Pass link juice by linking deep pages from the homepage or top-ranking blogs.
🔹 Create Hub Pages – Organize related content into pillar pages that link to subtopics.
🔹 Use Breadcrumb Navigation – Helps users and search engines understand page hierarchy:

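<!-- A simple breadcrumb trail (markup is illustrative) -->
<nav class="breadcrumbs">
  <a href="https://example.com/">Home</a> &gt; <a href="https://example.com/blog/">Blog</a> &gt; SEO Tips
</nav>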

🔹 Update the Sitemap – Ensure deep pages are included in your XML sitemap for search engine discovery.

🔹 Use Contextual Links – Add relevant internal links within blog posts or landing pages.

Pages Have Only One Incoming Internal Link

Importance of Internal Linking in SEO

Internal links connect different pages within your website, helping both users and search engines navigate your content efficiently. When a page has only one incoming internal link, it receives less link equity, making it harder to rank in search results.

🚀 Why Internal Links Matter?
Boosts SEO Rankings – Google considers internal links as ranking signals, helping distribute PageRank (link juice) across your site.
Improves Crawling & Indexing – Pages with multiple internal links are discovered and indexed faster.
Enhances User Navigation – More links mean easier access to important content.
Increases Page Authority – A page with more incoming links from authoritative pages gets higher visibility in search results.

How to Increase Internal Links for Better Ranking?

🔹 Link to Important Pages from High-Authority Pages – If a blog post or landing page receives strong traffic, link to underperforming pages to pass SEO value.
🔹 Use Contextual Internal Links in Blog Posts – Naturally incorporate internal links within your content:

<a href="https://example.com/seo-tips">Check out our SEO optimization guide</a>.

🔹 Add Links to the Main Navigation – Ensure key pages appear in menus, sidebars, or footers.

🔹 Use Related Posts & Category Links – At the end of blog posts, suggest related content:

<div class="related-posts">
  <h3>Related Articles</h3>
  <ul>
    <li><a href="https://example.com/on-page-seo">On-Page SEO Guide</a></li>
    <li><a href="https://example.com/technical-seo">Technical SEO Explained</a></li>
  </ul>
</div>

🔹 Create Content Hubs (Topic Clusters) – Organize content into pillar pages that link to subtopics, improving site structure and SEO rankings.
🔹 Update Older Content with New Links – Regularly review and add internal links to underlinked pages.

Tools to Analyze Internal Link Structure

📌 Google Search Console – Navigate to Links > Internal Links to find pages with low internal link counts.
📌 Semrush Site Audit – Identifies pages with insufficient internal links.
📌 Screaming Frog SEO Spider – Analyzes internal link structure and link distribution.
📌 Ahrefs Site Explorer – Use the Best by Links report to identify pages that need more internal links.

URLs Have a Permanent Redirect

Difference Between Temporary and Permanent Redirects

Redirects forward users and search engines from one URL to another. The two most common types are:

301 Redirect (Permanent)

  • Tells search engines that a page has moved permanently to a new location.
  • Passes nearly 100% of link equity (ranking power) to the new URL.
  • Ideal for merging pages, fixing broken URLs, or rebranding a website.
  • Example in .htaccess (Apache):

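# Illustrative paths
Redirect 301 /old-page/ https://example.com/new-page/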

302 Redirect (Temporary)

  • Informs search engines that the move is temporary, and the original URL will be back.
  • Does not pass full link equity, meaning rankings might be affected.
  • Best for A/B testing, promotions, or seasonal pages.
  • Example in .htaccess:

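# Illustrative paths
Redirect 302 /summer-sale/ https://example.com/current-offers/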

How Excessive Redirects Affect Page Speed and SEO

🚫 Slow Page Load Time – Every redirect adds extra HTTP requests, increasing load time.
🚫 Wasted Crawl Budget – Google spends more resources following multiple redirects, reducing efficiency.
🚫 Broken Link Chains – Too many redirects (redirect chains) confuse search engines and users.
🚫 Lost Link Equity – Each additional redirect reduces SEO value, affecting rankings.

Example of a Bad Redirect Chain:
https://example.com/old-page/ → https://example.com/intermediate-page/ → https://example.com/new-page/

Solution: Redirect directly to the final destination.
https://example.com/old-page/ → https://example.com/new-page/

Best Practices for Using 301 Redirects

Redirect to the Closest Relevant Page – Instead of pointing to the homepage, redirect to a similar content page.
Avoid Redirect Loops – Test redirects using tools like Google Search Console or Screaming Frog to detect infinite loops.
Minimize Redirect Chains – Ensure one-step redirections instead of multiple hops.
Use Canonical Tags When Needed – If you have multiple versions of the same content, use:

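<link rel="canonical" href="https://example.com/preferred-page/" />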

Update Internal Links – Instead of relying on redirects, update links to point directly to the final URL.

📌 Check Redirect Issues Using:

  • Google Search Console → Coverage Report for redirect errors.
  • Screaming Frog SEO Spider → Find redirect chains and loops.
  • Ahrefs or Semrush → Audit your site’s redirect structure.

Resources Are Formatted as a Page Link

What This Notice Means

This Semrush notice appears when non-HTML resources such as images, PDFs, or media files are linked as standard webpages instead of being properly formatted as downloadable or embeddable files.

🚫 Why Is This a Problem?

  • Search engines may index resources incorrectly – Google might interpret an image or PDF as a web page instead of a media file.
  • Poor User Experience (UX) – Clicking on a PDF or image might open it directly in the browser instead of downloading or embedding it correctly.
  • SEO Issues – If search engines treat media files as HTML pages, they may not properly rank or display them in search results.

For example, a problematic link:

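<a href="https://example.com/resources/brochure.pdf">View Our Brochure</a>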

This link directly opens the PDF in the browser instead of downloading it.

How to Properly Format Images, PDFs, and Media Files

Use the Correct HTML Markup for Embedding

  • For images, use the <img> tag:

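<img src="https://example.com/images/product.jpg" alt="Product name">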

  • For videos, use the <video> tag:

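<video src="https://example.com/media/demo.mp4" controls width="640"></video>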

  • For audio files, use the <audio> tag:

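<audio src="https://example.com/media/podcast.mp3" controls></audio>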

Force PDFs and Documents to Download Instead of Opening in the Browser

Use the download attribute in links:

<a href="https://example.com/resources/brochure.pdf" download="brochure.pdf">Download Our Brochure</a>

Use Embed Tags for PDFs (If Necessary)

To display PDFs without making them appear as standard pages:

<embed src="https://example.com/resources/brochure.pdf" type="application/pdf" width="600" height="500">

Ensure Media Files Are Not Orphaned

  • Include descriptive anchor text when linking to media resources.
  • Internally link to important media assets from relevant content pages.
  • Use structured data (Schema.org) for media content to improve visibility in search results.

📌 How to Audit and Fix This Issue?

  • Google Search Console → Coverage Report → Look for non-HTML resources indexed incorrectly.
  • Semrush Site Audit → Check for improperly formatted resources.
  • Screaming Frog SEO Spider → Identify image, PDF, or media files incorrectly categorized as pages.

Links Have No Anchor Text

Importance of Descriptive Anchor Text for SEO

Anchor text is the clickable text in a hyperlink that tells both users and search engines what the linked page is about. When links have no anchor text (i.e., empty or generic links), they create SEO and accessibility issues, such as:

🚫 Poor User Experience (UX) – Users won’t understand where the link leads.
🚫 SEO Impact – Search engines use anchor text to determine page relevance, so missing text weakens rankings.
🚫 Accessibility Problems – Screen readers rely on anchor text for navigation, making empty links unusable for visually impaired users.

Example of a problematic link (no anchor text):

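<a href="https://example.com/seo-guide"></a>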

Or

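<!-- An image-only link with no alt text also has no readable anchor text -->
<a href="https://example.com/seo-guide"><img src="https://example.com/images/icon.png"></a>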

How to Optimize Anchor Text for Better Usability and Ranking

Use Descriptive, Keyword-Rich Anchor Text
Instead of “Click here”, use a relevant phrase:

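<a href="https://example.com/seo-guide">Complete SEO guide for beginners</a>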

This improves SEO rankings and click-through rates.

Keep Anchor Text Concise and Natural
Avoid long, keyword-stuffed phrases:
🚫 The best SEO strategies for ranking on Google in 2024 and increasing organic traffic
✅ SEO strategies for 2024

Ensure Links Are Contextual
Place links within meaningful sentences:

For better ranking strategies, check out our <a href="https://example.com/seo-guide">SEO guide</a>.

Use Anchor Text Variations
Instead of always linking with the same text, use different phrases:

  • “SEO techniques”
  • “Guide to ranking higher”
  • “Improve organic traffic”

Check for Empty or Broken Links
Use Google Search Console, Semrush Site Audit, or Screaming Frog to detect and fix:

  • Missing anchor text in <a> tags.
  • Broken links leading to 404 errors.

Links to External Pages or Resources Returned a 403 HTTP Status Code

What Is a 403 Error?

A 403 HTTP status code means “Forbidden”, indicating that access to a webpage or resource is denied by the server. This typically happens when:

🚫 The external website has restricted access – The target site blocks visitors based on IP addresses, authentication, or firewall settings.
🚫 Broken or outdated URLs – The linked page no longer exists, or its permissions have changed.
🚫 Hotlink Protection Enabled – If you’re linking to an image, CSS, or JavaScript file, the external server might block unauthorized direct access.
🚫 Incorrect Link Permissions – Some private resources (e.g., Google Drive, Dropbox links) require login access, leading to a 403 error.

Example of a problematic link returning a 403 error:

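<!-- Illustrative external URL that responds with 403 Forbidden -->
<a href="https://external-site.com/private-report/">Read the full report</a>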

Clicking this may show:
“403 Forbidden – You don’t have permission to access this resource.”

How Broken External Links Impact UX and SEO


🚫 Poor User Experience (UX) – Visitors get frustrated when they click a link and see an error.
🚫 Negative SEO Impact – Google considers too many broken links a sign of poor site maintenance, affecting rankings.
🚫 Increased Bounce Rates – Users who encounter broken external links may leave your site immediately.

Fixing and Monitoring Broken Links

Check for 403 Errors Using SEO Tools

  • Google Search Console → Coverage Report – Identify pages with external link errors.
  • Semrush Site Audit → Broken External Links Report – Detect and fix broken links.
  • Screaming Frog SEO Spider – Scan for 403 errors in external links.

Update or Remove Broken External Links

  • Find Alternative Sources – If a linked page no longer works, link to a similar authoritative source.
  • Contact Website Owners – Request access or ask if the page has moved to a new URL.
  • Use Redirected URLs – If the external site has moved, update the link with the correct redirect.

Prevent Future 403 Errors

  • Use Trusted, Reliable Sources – Avoid linking to sites that frequently change permissions.
  • Test External Links Regularly – Set up automated link monitoring using Ahrefs, Broken Link Checker, or Screaming Frog.
  • Avoid Hotlinking Resources – Instead of embedding an image or script directly from another website, host your own copy or serve it through a CDN.