Generate XML Sitemap for Blogger Website in Google Search Console for Fast Indexing

Generate an XML sitemap for your Blogger blog or WordPress website for free and rank higher on Google, Microsoft Bing, Yahoo, MSN, and Yandex search engines.
Creating a sitemap for a Blogger website typically involves gathering all the pages, posts, categories, and tags from your Blogger site and generating an XML file that can be submitted to search engines like Google for better indexing.
To generate an HTML and JavaScript sitemap, I'll provide an approach that dynamically lists all your posts and categories. We can't pull a ready-made sitemap page from Blogger using just HTML/JS, but Blogger does expose a JSON feed of posts, so we can fetch the post URLs dynamically with JavaScript.
Below is a basic implementation of an HTML and JavaScript-based sitemap generator for Blogger.
Step 1: HTML Structure
Create a simple HTML page that can display a list of blog posts. The JavaScript will fetch the list of posts and categories from the Blogger site and populate the sitemap.
How it works (technically):
- A website generates a sitemap (often “sitemap.xml”) listing its pages in structured XML.
- The sitemap is made discoverable — e.g. by placing it in the root folder, referencing it in robots.txt, or submitting it via Search Console.
- Search engine bots (crawlers) fetch the sitemap, parse it, and see what URLs are listed.
- Bots then crawl (visit) those URLs (and perhaps follow internal links), retrieve page content, and pass it into the indexer. The sitemap metadata (e.g. lastmod) helps bots prioritize which pages to crawl sooner.
- Even though a sitemap gives hints, inclusion in the sitemap does not guarantee indexing — the search engine uses its own algorithms and signals to decide what to index.
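The structured XML file described in the steps above typically looks like this; a minimal example with placeholder URLs (replace them with your own pages):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourblog.blogspot.com/2024/01/first-post.html</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Each `<url>` entry describes one page; the individual tags are explained in the table further down this page.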
How does a sitemap file help in search engine optimisation?
Sitemaps are powerful SEO tools: they guide bots to fetch your website’s URLs so no page is left unseen. By listing all pages (plus metadata like last modified date), sitemaps let crawlers know which content to prioritize and when it was changed.
When bots read the sitemap, they parse the <loc>, <lastmod>, <changefreq>, and <priority> tags, then enqueue those URLs for crawling, helping deeper, orphaned, or freshly updated pages get discovered.
How long does it take to fetch the URLs in Google Search Console?
First, understand how bots crawl and identify URLs. There are two types of crawling:
- Automatic crawling
- Manual crawling
In automatic crawling, once the sitemap file is submitted to Google Search Console, it takes some time to process. The time varies depending on the accessibility of the URLs on your website, and may take up to two weeks.
In manual crawling, you submit the URLs from your website that you want crawled yourself (for example, via the URL Inspection tool).
NOTE: First, generate and add the sitemap.xml file using the tool below before checking the robots.txt file.
How to check whether the sitemap is added properly in Google Search Console
The sitemap can be checked in two ways:
- By Website
- By Search Console
In the browser, simply enter your website URL and append "/sitemap.xml" at the end. That's it! You can see all the listed URLs right in the browser.
In Google Search Console, navigate to the Indexing section, then Sitemaps > Submitted Sitemaps. That's it! You can see all the submitted sitemaps and their discovered URLs there.
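If you want to check a sitemap's contents programmatically rather than by eye, one quick approach is to pull out its <loc> entries. A minimal sketch (the sample XML string below is hypothetical; in practice you would fetch your real /sitemap.xml):

```javascript
// Extract the URLs listed in a sitemap's <loc> tags.
function extractSitemapUrls(xmlText) {
  const matches = xmlText.match(/<loc>(.*?)<\/loc>/g) || [];
  return matches.map(m => m.replace(/<\/?loc>/g, ''));
}

// Example with a hypothetical sitemap snippet:
const sample = `
<urlset>
  <url><loc>https://yourblog.blogspot.com/p/about.html</loc></url>
  <url><loc>https://yourblog.blogspot.com/2024/01/post.html</loc></url>
</urlset>`;
console.log(extractSitemapUrls(sample));
```

This is only a rough check; a proper XML parser (e.g. DOMParser in the browser) is more robust for real-world sitemaps.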

How to remove URLs / permalinks from Google search results in Google Search Console?
To remove URL(s) in Google Search Console, follow the steps below:
- Navigate to Removals under the Indexing section.
- Under "Temporary Removals", click "New Request", enter the URL you want to remove, and click "Next".
- That's all. The URL will be removed within a few hours.


Terms and tags used in a sitemap and their definitions.

| Tag | Description |
| --- | --- |
| <urlset> | Encapsulates the file and references the current protocol standard. |
| <url> | Parent tag for each URL entry. The remaining tags are children of this tag. |
| <loc> | URL of the page. This URL must begin with the protocol (such as http) and end with a trailing slash, if your web server requires it. This value must be less than 2,048 characters. |
| <lastmod> | The date of last modification of the page, in W3C Datetime format. This format allows you to omit the time portion and use YYYY-MM-DD. Note that the date must be set to the date the linked page was last modified, not when the sitemap was generated. Note also that this tag is separate from the If-Modified-Since (304) header the server can return, and search engines may use the information from both sources differently. |
| <changefreq> | How frequently the page is likely to change. This value provides general information to search engines and may not correlate exactly to how often they crawl the page. Valid values are: always, hourly, daily, weekly, monthly, yearly, never. The value is a hint, not a command: crawlers may crawl pages marked "hourly" less frequently than that, crawl pages marked "yearly" more frequently than that, and periodically crawl pages marked "never" to handle unexpected changes. |
| <priority> | The priority of this URL relative to other URLs on your site. Valid values range from 0.0 to 1.0. This value does not affect how your pages are compared to pages on other sites; it only lets the search engines know which pages you deem most important. The default priority of a page is 0.5. The priority you assign is not likely to influence the position of your URLs in search results; search engines may use it when selecting between URLs on the same site, so you can use this tag to increase the likelihood that your most important pages are present in a search index. Assigning a high priority to all of your URLs is not likely to help, since priority is relative and only used to select between URLs on your own site. |
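Putting the tags above together, here is a sketch of how a sitemap file could be assembled from a list of page entries (the field values are purely illustrative):

```javascript
// Build a minimal sitemap.xml string from a list of page entries.
// Optional fields (lastmod, changefreq, priority) are emitted only when present.
function buildSitemap(entries) {
  const urls = entries.map(e => [
    '  <url>',
    `    <loc>${e.loc}</loc>`,
    e.lastmod ? `    <lastmod>${e.lastmod}</lastmod>` : null,
    e.changefreq ? `    <changefreq>${e.changefreq}</changefreq>` : null,
    e.priority != null ? `    <priority>${e.priority}</priority>` : null,
    '  </url>',
  ].filter(Boolean).join('\n')).join('\n');
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    urls + '\n</urlset>';
}

const xml = buildSitemap([
  { loc: 'https://yourblog.blogspot.com/', changefreq: 'daily', priority: 1.0 },
  { loc: 'https://yourblog.blogspot.com/2024/01/post.html', lastmod: '2024-01-15' },
]);
console.log(xml);
```

Note that this sketch does not escape special XML characters in URLs (such as &), which a production generator should handle.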
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta name="description" content="Sitemap of [Your Blogger Site]">
  <title>Sitemap - [Your Blogger Site]</title>
  <style>
    body {
      font-family: Arial, sans-serif;
      background-color: #f5f5f5;
      padding: 20px;
    }
    h1 {
      text-align: center;
    }
    ul {
      list-style-type: none;
      padding: 0;
    }
    li {
      margin: 8px 0;
    }
    a {
      text-decoration: none;
      color: #2C3E50;
    }
    a:hover {
      text-decoration: underline;
    }
  </style>
</head>
<body>
  <h1>Blog Sitemap</h1>
  <div>
    <h2>Posts</h2>
    <ul id="post-list"></ul>
  </div>
  <div>
    <h2>Categories</h2>
    <ul id="category-list"></ul>
  </div>
  <script>
    // Fetch the Blogger JSON feed and populate the post and category lists
    function generateSitemap() {
      const postList = document.getElementById('post-list');
      const categoryList = document.getElementById('category-list');
      // Replace with your Blogger JSON feed endpoint
      const rssUrl = 'https://yourblog.blogspot.com/feeds/posts/default?alt=json';
      // Fetch the posts
      fetch(rssUrl)
        .then(response => response.json())
        .then(data => {
          // The feed omits "entry" entirely when there are no posts
          const posts = data.feed.entry || [];
          const categories = new Set();
          // Loop through the posts and create list items
          posts.forEach(post => {
            const postTitle = post.title.$t;
            const postUrl = post.link.find(link => link.rel === 'alternate').href;
            // Add the post to the sitemap
            const postItem = document.createElement('li');
            const postLink = document.createElement('a');
            postLink.href = postUrl;
            postLink.textContent = postTitle;
            postItem.appendChild(postLink);
            postList.appendChild(postItem);
            // Extract categories (labels) from each post
            if (post.category) {
              post.category.forEach(category => categories.add(category.term));
            }
          });
          // Each category links to its Blogger label search page
          categories.forEach(category => {
            const categoryItem = document.createElement('li');
            const categoryLink = document.createElement('a');
            categoryLink.href = `https://yourblog.blogspot.com/search/label/${encodeURIComponent(category)}`;
            categoryLink.textContent = category;
            categoryItem.appendChild(categoryLink);
            categoryList.appendChild(categoryItem);
          });
        })
        .catch(error => {
          console.error('Error fetching the posts:', error);
        });
    }
    // Build the sitemap once the page has loaded
    window.onload = generateSitemap;
  </script>
</body>
</html>
Explanation of the Code:
1. HTML Structure: The page contains two sections, one for posts and one for categories.
   - Post list: displays all the blog posts dynamically.
   - Category list: displays all the categories available on the Blogger site.
2. JavaScript Logic:
   - Fetching Posts: The script fetches the blog posts via the JSON feed endpoint (https://yourblog.blogspot.com/feeds/posts/default?alt=json).
   - Processing Posts: For each post, it extracts the title, URL, and categories (if any) and adds them to the respective lists.
   - Handling Categories: Categories are collected from the posts' labels and listed separately.
3. Styling: Simple CSS is used to make the sitemap look neat. Feel free to adjust the styles as you see fit.

Step 2: Replace the RSS URL
- Replace the URL in const rssUrl with the actual feed URL of your Blogger site.
- For Blogger, the format is: https://yourblog.blogspot.com/feeds/posts/default?alt=json.
- Make sure to replace yourblog.blogspot.com with your actual Blogger URL.

Step 3: Customization
- Modify the feed URL: if you want to pull additional data (such as comments or more fields), adjust the fetch URL accordingly.
- Error handling: if fetching fails (e.g., due to network issues or an invalid URL), the catch block will log the error.
- Pagination: if your blog has many posts, you may need to loop over multiple feed URLs to gather all posts.

How It Works:
- Once you open this page in a browser, it fetches the post data from your Blogger feed.
- The posts and categories are then displayed in an organized list on the page.
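On the pagination point: Blogger feeds return a limited number of posts per request, which you can page through with the start-index and max-results query parameters. A sketch of how the paged feed URLs could be built (the blog URL and post count are placeholders):

```javascript
// Build one feed URL per page of posts, stepping start-index by pageSize.
function buildFeedUrls(blogUrl, totalPosts, pageSize = 150) {
  const urls = [];
  for (let start = 1; start <= totalPosts; start += pageSize) {
    urls.push(`${blogUrl}/feeds/posts/default?alt=json` +
              `&start-index=${start}&max-results=${pageSize}`);
  }
  return urls;
}

// For a hypothetical blog with 400 posts, three requests are needed:
console.log(buildFeedUrls('https://yourblog.blogspot.com', 400));
```

You would then fetch each URL in turn and concatenate the data.feed.entry arrays before rendering the lists.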
To add the sitemap to Blogger:
- Log in to your Blogger account.
- Select the blog you want to create a sitemap for.
- Copy your blog's root URL, paste it into the textbox provided above, and click "Generate XML Sitemap".
- Go to Settings > Crawlers and indexing > Custom robots.txt.
- Add the above generated sitemap code to the Custom robots.txt area.
- Click Update to save.
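For reference, a custom robots.txt entry with a sitemap reference typically looks like this (the domain and disallowed path are illustrative; use the code generated above for your own blog):

```
User-agent: *
Disallow: /search
Allow: /
Sitemap: https://yourblog.blogspot.com/sitemap.xml
```

The Sitemap line is what makes the sitemap discoverable to crawlers that read robots.txt.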
To add the sitemap to WordPress:
- Save the above generated sitemap to a text file with the .txt extension.
- Log in to your WordPress hosting dashboard.
- Go to Softaculous > WordPress > htdocs.
- Place the generated sitemap file there.
- Save changes.
NOTE: This is a simple sitemap for human use (HTML-based). If you're looking for an XML sitemap for SEO purposes, you'd typically generate an XML file instead of an HTML one, either with server-side code or through tools like xml-sitemap-generator.
If you have any doubts, please don't hesitate to leave a comment.