Problem: My blog, Adventures in Automation, has collected over 11,000 spam comments over the past ten years, and unfortunately bare-bones Blogger.com does not have a bulk delete function. Through the Blogger UI, you can only delete a hundred at a time.
Pair-programming with Claude.ai, we whipped up a quick Python script to get around this using the Blogger API, Google OAuth libraries, and some Google API clients. Errors that appeared after running the code, I fed back to Claude, which then fixed the issues and added some setup documentation I was able to muddle through.
- Blogger Spam Bulk Deleter: https://github.com/tjmaher/blogger-spam-bulk-deleter
So, now I have a Python project that works somehow, but one I don't really understand. Since becoming an automation developer, I have worked on-the-job with Java, Ruby, JavaScript, and TypeScript, but not yet with Python.
Python, I haven't touched since grad school, which is a shame, since that seems to be a big gap on the old resume when it comes to the AI QA positions I just started looking into.
Solution: To close the gap -- on top of the Kaggle Learn classes I am planning to take on Python, Pandas, Data Visualization, and Intro to Machine Learning -- for this blog post I am going to do a code walkthrough of Python projects like this one.
Maybe after I complete everything listed above and create a few more toy Python projects, it would be good enough for a future hiring manager? Who knows?
App Platform: Python + Windows 11 PC
Blogger Spam Bulk Deleter was developed by T.J. using GitHub Copilot with Claude.ai in VS Code on a Windows 11 PC and PowerShell as a Terminal.
This app expects Python 3.8 or later. It was tested on Python 3.14 on Windows.
- Python Windows Installer: https://www.python.org/downloads/windows/
About Python:
- Website: https://www.python.org/
- Docs: https://docs.python.org/3/
- Tutorial: https://docs.python.org/3/tutorial/index.html
... If you wish to read more about Python, see my blog post, Becoming AI QA: Why Python? How AI and Python Became Linked.
Setup #1: Install Google Dependencies and Python-DotEnv
Install the dependencies using Python's package installer, pip, fetching packages from the Python Package Index (PyPI):
- Pip Documentation: https://pip.pypa.io/en/stable/
- Pip: Getting Started: https://pip.pypa.io/en/stable/getting-started/
- PyPI, the Python Package Index: https://pypi.org/
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib python-dotenv
google-api-python-client:
- The official Google client library for Python. Provides access to Google's REST APIs -- including the Blogger API v3 -- handling HTTP requests, response parsing, and service discovery.
- PyPI: https://pypi.org/project/google-api-python-client/
- GitHub: https://github.com/googleapis/google-api-python-client
- Official docs: https://googleapis.github.io/google-api-python-client/docs/
google-auth-httplib2:
- Adapter that connects Google's authentication library (google-auth) to httplib2, the HTTP client used by google-api-python-client, so the API client can attach credentials to outgoing requests.
- PyPI: https://pypi.org/project/google-auth-httplib2/
- GitHub: https://github.com/googleapis/google-auth-library-python-httplib2
google-auth-oauthlib:
- Handles the OAuth 2.0 authorization flow for Google APIs. Manages the browser-based consent screen, token exchange, and token refresh.
- The first time the script runs, it opens a browser window; logging in and clicking "Allow" grants the script permission to access only what it needs -- in this case, reading and deleting Blogger comments. Google then issues an access token (valid for roughly one hour) and a refresh token (long-lived), which are saved locally to a token.json file. On every subsequent run, the library reads that file and silently renews the access token in the background, so the browser prompt only ever appears once.
- PyPI: https://pypi.org/project/google-auth-oauthlib/
- GitHub: https://github.com/googleapis/google-cloud-python/tree/main/packages/google-auth-oauthlib
- Google Developers: OAuth 2.0 overview
python-dotenv:
- Loads environment variables from a .env file into your script's runtime environment.
- Used to keep sensitive values -- like API keys or client secrets -- out of source code and version control.
- PyPI: https://pypi.org/project/python-dotenv/
- GitHub: https://github.com/theskumar/python-dotenv
Claude says: "In the context of the Blogger API spam comment deletion work: google-api-python-client makes the API calls, google-auth-oauthlib + google-auth-httplib2 handle authenticating as your Google account, and python-dotenv keeps your OAuth credentials out of the script itself".
Setup #2: Create a Google Cloud Project and Enable the Blogger API
- Go to the Google Cloud Console and create a new project (or reuse an existing one).
- Enable the Blogger API v3: console.cloud.google.com/apis/library/blogger.googleapis.com
Setup #3: Store Credentials in Client Secrets JSON
- Navigate to APIs & Services > Credentials.
- Click + Create Credentials > OAuth 2.0 Client ID.
- Choose Desktop app as the application type.
- Download the generated JSON file and rename it to client_secrets.json.
- Place client_secrets.json in the same directory as the script.
Example client_secrets.json structure:
{
"installed": {
"client_id": "1234.apps.googleusercontent.com",
"project_id": "maps-api-project-1234",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_secret": "ABC123",
"redirect_uris": ["http://localhost"]
}
}
Claude says: The client_id and client_secret values will be unique to your project. The "installed" key indicates this is for a desktop/installed application rather than a web application. Never share this file or commit it to version control - treat it like a password that identifies your specific application to Google's servers.
Setup #4: Configure OAuth Consent
Configure the consent screen at APIs & Services > OAuth consent screen:
| Field | Value |
|---|---|
| User type | External |
| App name | Blogger Comment Deleter |
| User support email | your Gmail address |
| App domain | your blog URL |
| Developer contact email | your Gmail address |
| Scopes | https://www.googleapis.com/auth/blogger |
| Test users | your Gmail address |
| Publishing status | Testing (no verification needed) |
Leave the app in Testing status. Google verification is only required when the app will be used by people outside the test users list. Since you are the only one running this script, Testing is sufficient indefinitely.
Setup #5: Get Your Blog ID
Create an API key at APIs & Services > Credentials > + Create Credentials > API key. Restrict it to the Blogger API v3 only. Then call this URL in your browser:
https://www.googleapis.com/blogger/v3/blogs/byurl?url=https://YOUR-BLOG-URL&key=YOUR_API_KEY
The id field in the returned JSON is your Blog ID. You will store it in your
.env file in the next step.
Tip: How To Keep Secrets Out of GitHub
Why this matters: Credentials committed to a public GitHub repository are exposed to anyone on the internet. Automated bots scan GitHub continuously for API keys, OAuth secrets, and tokens. A leaked credential can be used to make API calls on your behalf, exhaust your quotas, or access your account data. GitHub's own documentation warns that once a secret is pushed, it should be considered compromised, even if you delete it immediately, because the git history retains it.
This project uses three files that must never be committed:
| File | Why it is sensitive |
|---|---|
| client_secrets.json | Contains your OAuth client ID and client secret, downloaded from Google Cloud Console. Anyone with this file can impersonate your application. |
| token.json | Written automatically after first authentication. Contains your OAuth access and refresh tokens, which grant direct access to your Blogger account. |
| .env | Contains your Blog ID and file paths. Less sensitive than the above, but keeping all configuration out of source control is the correct habit. |
The industry standard approach for managing this kind of configuration is the twelve-factor app methodology, which states that config should be stored in the environment, strictly separated from code.
Sidenote: Originally created by Heroku (2011), it's been adopted by cloud platforms (AWS, Google Cloud, Azure), major tech companies (Netflix, Spotify, Uber), and enterprise software providers (Salesforce, GitHub). It focuses on deployment and configuration practices rather than language-specific syntax.
The python-dotenv library
(pypi.org/project/python-dotenv)
implements this pattern for local development by loading values from a .env file
into environment variables at runtime.
Step 1 — Create a .env file
In the same folder as the script, create a file named .env:
BLOG_ID=your_blog_id_here
CLIENT_SECRETS_FILE=client_secrets.json
No quotes around the values. No spaces around the = sign.
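The format is simple enough to sketch a toy parser for. This is a rough illustration only, not how python-dotenv actually works; the real library also handles quoting and variable expansion:

```python
def parse_dotenv(text):
    """Toy .env parser: one KEY=VALUE per line; blanks and '#' comments skipped."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

sample = """
# Blogger settings
BLOG_ID=your_blog_id_here
CLIENT_SECRETS_FILE=client_secrets.json
"""
print(parse_dotenv(sample))
# {'BLOG_ID': 'your_blog_id_here', 'CLIENT_SECRETS_FILE': 'client_secrets.json'}
```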
Step 2 — Create a .gitignore file
In the same folder, create a file named .gitignore:
.env
client_secrets.json
token.json
Git reads .gitignore before staging files and silently excludes anything listed
there. GitHub maintains a
collection of recommended .gitignore templates
for common languages. The
Python template
is a useful starting point for any Python project and represents community consensus on what Python files should never be committed (virtual environments, __pycache__ directories, .pyc files, etc.).
Before your first commit, run git status and confirm that .env, client_secrets.json, and token.json do not appear in the list of files to be staged. If they do appear, your .gitignore is not in the right place or has a typo.
Step 3 — Create a .env.example file
This file is safe to commit. It tells anyone who clones the repository which values they need to supply, without exposing yours:
BLOG_ID=your_blog_id_here
CLIENT_SECRETS_FILE=client_secrets.json
The .env.example convention is documented in the
python-dotenv documentation
and serves as both a template and a record of required configuration.
Step 4 — Update the script
The script reads from the .env file using python-dotenv. The relevant
block near the top of delete_blogger_comments.py looks like this:
import os
from dotenv import load_dotenv

load_dotenv()
YOUR_BLOG_ID = os.getenv("BLOG_ID", "")
CLIENT_SECRETS_FILE = os.getenv("CLIENT_SECRETS_FILE", "client_secrets.json")
TOKEN_FILE = "token.json"
load_dotenv() reads the .env file and populates os.environ with its values.
os.getenv() then reads those values by name. The second argument is a fallback default
used if the variable is not set. See the
os.getenv documentation
for the full signature.
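A stdlib-only sketch of the fallback behavior (no .env file involved; the values here are made up for illustration):

```python
import os

# Pretend load_dotenv() already populated the environment from .env.
os.environ["BLOG_ID"] = "3868566217808655382"
os.environ.pop("CLIENT_SECRETS_FILE", None)  # make sure this one is unset

blog_id = os.getenv("BLOG_ID", "")                                 # set -> value from env
secrets = os.getenv("CLIENT_SECRETS_FILE", "client_secrets.json")  # unset -> default
missing = os.getenv("NO_SUCH_VARIABLE")                            # unset, no default -> None

print(blog_id, secrets, missing)  # 3868566217808655382 client_secrets.json None
```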
First Run — Authentication
The first time the script runs, it opens a browser window to Google's login page. After you log in and grant the Blogger scope, Google redirects back to the local server the script started, and you will see:
The authentication flow has completed. You may close this window.
Close that browser tab. The script writes your credentials to token.json and proceeds.
Every subsequent run reads token.json directly and skips the browser step unless the
token has expired.
Usage
Dry run (recommended first step)
Prints every comment that would be deleted. Touches nothing.
python delete_blogger_comments.py --dry-run
Live run
python delete_blogger_comments.py
Live run with a longer delay between deletions
python delete_blogger_comments.py --delay 2.0
The --delay value is the number of seconds to sleep between each DELETE
request. The default is 0.5. A longer delay reduces the chance of hitting per-minute
quota limits during large runs.
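The flags above can be declared with argparse roughly like this. This is a sketch based on the flags documented in this post, not the script's exact code:

```python
import argparse

parser = argparse.ArgumentParser(description="Bulk-delete Blogger comments.")
parser.add_argument("--dry-run", action="store_true",
                    help="List comments that would be deleted without deleting anything.")
parser.add_argument("--delay", type=float, default=0.5,
                    help="Seconds to sleep between DELETE requests (default: 0.5).")

# argparse converts --dry-run to the attribute name args.dry_run.
args = parser.parse_args(["--dry-run", "--delay", "2.0"])
print(args.dry_run)  # True
print(args.delay)    # 2.0
```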
Rate Limits
The Blogger API v3 has two quota constraints to be aware of.
Per-minute limits apply to all API calls including list operations. The script handles this automatically: every list call has a 0.5-second pause after it completes, and both list and delete operations use exponential backoff (5s, 10s, 20s… up to 120s) with up to 6 retries on HTTP 429 and 5xx responses. This follows Google's official recommendation for exponential backoff algorithms, which state that when you receive HTTP 403 or 429 responses, you should retry using exponentially increasing wait times.
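The retry schedule described above (5s, 10s, 20s... capped at 120s) can be sketched as a small helper. This is my own illustration of the doubling-with-a-cap idea, not the script's actual implementation:

```python
def backoff_schedule(base=5.0, cap=120.0, retries=6):
    """Wait times for exponential backoff: base doubles each retry, capped at cap."""
    waits = []
    delay = base
    for _ in range(retries):
        waits.append(min(delay, cap))
        delay *= 2
    return waits

print(backoff_schedule())  # [5.0, 10.0, 20.0, 40.0, 80.0, 120.0]
```

Google's guidance also recommends adding random jitter to each wait so that many clients retrying at once do not synchronize their requests.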
Daily quotas vary by project and can be viewed in your Google Cloud Console → APIs & Services → Blogger API v3 → Quotas. The default limits are typically sufficient for small-scale personal use, but large-scale comment deletion may require requesting quota increases through the console. Google's quota documentation confirms that "not all projects have the same quotas" and that quota values can be increased based on resource usage patterns.
Performance expectations: The actual deletion speed depends on your quota allocation, the 0.5-second default delay between operations, and API response times. For large comment volumes, expect the process to take multiple sessions across several days unless you have increased quotas.
Expected Output
=== Blogger Comment Deletion [DRY RUN] ===
Blog ID : 3868566217808655382
Delay   : 0.5s between deletes

[WOULD DELETE] Post: 'Some post title' | Comment #12345 by SomeUser (2021-03-15)
[WOULD DELETE] Post: 'Some post title' | Comment #12346 by AnotherUser (2021-03-16)
-> Post 2702750598806372610: 2 comment(s) processed.

=== Summary ===
Posts scanned  : 847
Comments found : 11243
No comments were deleted (dry-run mode).
File Layout
Blogger comment deleter/
delete_blogger_comments.py ✅ commit this
README.md ✅ commit this
.gitignore ✅ commit this
.env.example ✅ commit this
.env 🚫 never commit (listed in .gitignore)
client_secrets.json 🚫 never commit (listed in .gitignore)
token.json 🚫 never commit (listed in .gitignore)
Code Walkthrough: Python Language Features
Rather than copy-pasting the entire script here, let's walk through the key Python language features used in delete_blogger_comments.py. The complete source is available at github.com/tjmaher/blogger-spam-bulk-deleter.
The script starts with importing all required modules and libraries:
import os
import argparse
import time
import sys
from dotenv import load_dotenv
Python offers many standard library modules and third-party packages, and Claude selected a handful for this app. Python's module system allowed it to break functionality into reusable components and access external libraries. (Python 3 Docs / Tutorial: Python Modules)
Libraries Used:
- argparse — Command-line argument parsing. Used for the --dry-run, --delay, and --debug flags
- time — Time-related functions. Used for time.sleep() rate limiting
- sys — System-specific parameters. Used for sys.exit() error handling
Claude says: The --dry-run flag follows a Unix convention dating back to the late 1980s, first popularized by rsync (1996) and make utilities. The term comes from "dry run" — a rehearsal without the actual performance, like aircraft training flights. It lets users preview destructive operations before committing, which is essential for bulk deletion scripts like this one.
load_dotenv()
YOUR_BLOG_ID = os.getenv("BLOG_ID")
CLIENT_SECRETS_FILE = os.getenv("CLIENT_SECRETS_FILE", "client_secrets.json")
Claude says: Configuration values come from environment variables rather than being hardcoded. This separates credentials from source code and follows the twelve-factor app methodology. The second parameter to os.getenv() provides a fallback default.
Claude says: Hard-coding sensitive values like blog IDs directly in the script creates security risks and makes the code less portable. By using .env files (which are git-ignored) and environment variables, the same script can work across different environments without code changes.
- os.getenv() — Read environment variables with optional defaults
- Default Parameters — The second argument provides a fallback value if the environment variable isn't set
DEBUG_MODE = False
def debug_log(message):
if DEBUG_MODE:
print(f"🔍 {message}")
def main():
global DEBUG_MODE
DEBUG_MODE = True
Claude says: A global flag controls debug output throughout the script. The global keyword tells Python that assignments to DEBUG_MODE inside functions should modify the module-level variable rather than creating a new local variable.
- Python Scopes and Namespaces — How Python resolves variable names across different scopes
- global Statement — Declares that a variable assignment refers to the global scope
- Local vs Global Variables — When Python creates a new local variable vs accessing a global one
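The effect of the global statement is easiest to see in a tiny standalone example (names invented for illustration):

```python
FLAG = False

def set_without_global():
    FLAG = True  # binds a new *local* FLAG; the module-level one is untouched

def set_with_global():
    global FLAG  # assignments below now rebind the module-level FLAG
    FLAG = True

set_without_global()
print(FLAG)  # False

set_with_global()
print(FLAG)  # True
```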
def list_all_posts(service, blog_id):
"""Generator: yields every post dict for the given blog."""
page_token = None
while True:
# ... API call logic ...
for post in resp.get("items", []):
yield post # Generator yields one post at a time
Claude says: Python generators provide memory-efficient iteration over large datasets. Instead of loading all blog posts into memory at once, this function yields one post at a time, pausing execution between yields and resuming when the next item is requested.
- Generators — Functions that use yield instead of return. Memory-efficient for large datasets
- yield Statement — Pauses function execution and returns a value, resuming where it left off on next call
- while Loop — Continues until page_token is None (no more pages)
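To see the yield-and-paginate pattern without the Blogger API, here is a self-contained sketch where fake_fetch_page stands in for the real service.posts().list(...).execute() call:

```python
def fake_fetch_page(page_token):
    """Stand-in for the API call: returns one of two fake pages of posts."""
    pages = {
        None: {"items": [{"title": "Post A"}, {"title": "Post B"}], "nextPageToken": "p2"},
        "p2": {"items": [{"title": "Post C"}]},  # no nextPageToken -> last page
    }
    return pages[page_token]

def list_all_posts():
    """Generator: yields every post dict, fetching one page at a time."""
    page_token = None
    while True:
        resp = fake_fetch_page(page_token)
        for post in resp.get("items", []):
            yield post  # caller receives one post at a time; no full list in memory
        page_token = resp.get("nextPageToken")
        if not page_token:
            break

print([p["title"] for p in list_all_posts()])  # ['Post A', 'Post B', 'Post C']
```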
for post in resp.get("items", []):
post_title = post.get("title", "(no title)")
author = post.get("author", {}).get("displayName", "unknown")
page_token = resp.get("nextPageToken")
Claude says: This code demonstrates safe navigation through nested JSON data structures from API responses. Let me break down each line:
Line 1: for post in resp.get("items", []) - Iterates through the "items" array in the API response. If "items" doesn't exist, it defaults to an empty list [] to avoid errors.
Line 2: post_title = post.get("title", "(no title)") - Extracts the post title. If a post lacks a "title" field, it falls back to the string "(no title)" rather than crashing.
Line 3: author = post.get("author", {}).get("displayName", "unknown") - Demonstrates chained .get() calls for nested objects. First gets the "author" object (defaulting to empty dict {}), then gets "displayName" from that object (defaulting to "unknown").
Line 4: page_token = resp.get("nextPageToken") - Extracts the pagination token for loading the next batch of results. Returns None if there are no more pages.
Claude says: This defensive programming pattern prevents KeyError exceptions when API responses have missing or optional fields. It's especially important when working with external APIs that might change their response structure or when dealing with user-generated content that has inconsistent data.
- dict.get() Method — Safe dictionary access with default values, prevents KeyError exceptions
- Dictionary Operations — Working with key-value pairs and nested dictionaries
- Mapping Types — Complete reference for dictionary methods and operations
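Here is the same defensive pattern run against a fabricated response with deliberately missing fields:

```python
# A fake API response; some posts are missing "title" or "displayName".
resp = {
    "items": [
        {"title": "Hello", "author": {"displayName": "T.J."}},
        {"author": {}},   # author present but no displayName
        {},               # nothing at all
    ]
}

summaries = []
for post in resp.get("items", []):
    title = post.get("title", "(no title)")
    author = post.get("author", {}).get("displayName", "unknown")
    summaries.append(f"{title} by {author}")

print(summaries)
# ['Hello by T.J.', '(no title) by unknown', '(no title) by unknown']
```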
try:
resp = service.posts().list(...).execute()
except HttpError as e:
if e.resp.status == 429:
print("Daily API quota exceeded")
sys.exit(1)
else:
raise
Claude says: Exception handling makes the script resilient to API failures. Rather than crashing on errors, the code catches specific HTTP status codes and responds appropriately:
- 429 errors: Daily quota exceeded → exits gracefully with helpful message about quota increases
- 404 errors: Comment already deleted → treats as success and continues
- 500/503 errors: Server problems → waits 60 seconds and retries once before failing
The raise statement re-throws exceptions the script doesn't know how to handle, preserving the original error information.
- Exception Handling — Python's try/except mechanism for error handling
- try Statement — Complete reference for try, except, else, and finally
- raise Statement — Re-raises exceptions you don't want to handle
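The catch-retry-or-re-raise shape can be shown without Google's libraries at all. FakeHttpError and call_with_retry below are my own stand-ins for googleapiclient's HttpError and the script's retry logic, simplified to the bare pattern:

```python
import time

class FakeHttpError(Exception):
    """Stand-in for googleapiclient.errors.HttpError, carrying just a status code."""
    def __init__(self, status):
        self.status = status

def call_with_retry(func, retries=3, delay=0.0):
    """Retry func() on 5xx errors; re-raise anything else for the caller."""
    for attempt in range(retries):
        try:
            return func()
        except FakeHttpError as e:
            if e.status >= 500 and attempt < retries - 1:
                time.sleep(delay)  # the real script backs off exponentially here
                continue
            raise  # 4xx, or retries exhausted: preserve the original error

attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise FakeHttpError(503)  # transient server hiccup, worth retrying
    return "ok"

print(call_with_retry(flaky))  # ok
print(attempts["n"])           # 3
```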
import logging
import httplib2
# Enable HTTP debugging
httplib2.debuglevel = 1
logging.basicConfig(level=logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
Claude says: When troubleshooting API issues, the script can enable comprehensive debugging that shows all HTTP requests and responses. This is controlled by the --debug command-line flag and helps understand quota consumption patterns or investigate failures.
- Logging Documentation — Python's built-in logging framework for structured output
- Logging HOWTO — Practical guide to using logging in Python applications
- logging.basicConfig() — Simple configuration for basic logging setup
- logging.getLogger() — Get a logger instance for specific modules or libraries
print(f"Blog ID : {YOUR_BLOG_ID}")
print(f" [{action}] Post: {post_title[:50]!r:52} | Comment #{cid} by {author[:20]}")
Claude says: String formatting uses f-strings (formatted string literals) for readable variable interpolation. The format specification mini-language inside the braces provides precise control: [:50] slices strings to 50 characters, !r uses the repr() representation, and :52 sets field width. F-strings were introduced in Python 3.6 via PEP 498 and are now the preferred Python community standard for string formatting.
- f-strings — Formatted string literals for clean variable interpolation
- PEP 498 — The Python Enhancement Proposal that introduced f-string literals
- Format Specification — The mini-language inside f-string braces (!r for repr, :50 for width)
- String Slicing — post_title[:50] takes the first 50 characters
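You can reproduce the formatting line with made-up data to see each piece in action:

```python
post_title = "10 Ways To Win Free Money Online Right Now!!!"
cid = 12346
author = "AnotherUser"

# [:50] slices to at most 50 characters, !r shows the repr() (quoted),
# and :52 pads the result to a 52-character field (strings align left).
line = f"[WOULD DELETE] Post: {post_title[:50]!r:52} | Comment #{cid} by {author[:20]}"
print(line)
```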
def get_credentials():
"""Returns valid OAuth2 credentials, refreshing or prompting as needed."""
Claude says: Docstrings document function behavior using triple-quoted strings immediately after the function definition. Python IDEs can display these when hovering over function calls, and they're accessible via the help() function at runtime.
- Docstrings — Function documentation accessible via help() and IDEs
- PEP 257 — Docstring conventions for Python code
with open(TOKEN_FILE, "w") as token:
token.write(creds.to_json())
Claude says: File operations use context managers (the with statement) to ensure proper resource cleanup. Even if an exception occurs during file operations, the context manager guarantees the file will be closed properly. This prevents resource leaks and file corruption. Context managers were formalized in PEP 343 and are considered a Python best practice for resource management.
- File I/O — Reading and writing files in Python
- PEP 343 — The Python Enhancement Proposal that introduced context managers
- with Statement — Context managers that automatically close files
- open() Function — File opening modes and parameters
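A runnable miniature of the token-saving step. The fake_creds dict here stands in for the real credentials object from google-auth, whose to_json() method produces a JSON string much like json.dumps() does:

```python
import json
import os
import tempfile

# Stand-in for creds.to_json(); values are invented for illustration.
fake_creds = {"token": "ya29.example", "refresh_token": "1//example"}

token_path = os.path.join(tempfile.mkdtemp(), "token.json")

# The with-statement guarantees the file is closed, even if writing fails.
with open(token_path, "w") as token:
    token.write(json.dumps(fake_creds))

with open(token_path) as token:
    print(json.load(token)["token"])  # ya29.example
```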
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
Claude says: Boolean logic in Python uses and, or, and not operators with short-circuit evaluation. Python objects have "truthiness" — empty collections, None, zero, and empty strings are falsy, while most other values are truthy. This enables concise conditional logic.
- Boolean Operations — and, or, not operators
- Truth Value Testing — How Python determines if objects are "truthy" or "falsy"
- if Statement — Conditional execution and elif/else chains
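A quick self-contained demonstration of truthiness and short-circuiting. Here creds is just None, standing in for the real google-auth credentials object:

```python
# Falsy values: None, 0, "", [], {}. Most other objects are truthy.
print([bool(v) for v in (None, 0, "", [], {}, "token", [1], 42)])
# [False, False, False, False, False, True, True, True]

# Short-circuit evaluation: creds is None (falsy), so "not creds" is True
# and the right-hand side -- which would raise AttributeError on
# None.valid -- is never evaluated.
creds = None
needs_auth = not creds or not creds.valid
print(needs_auth)  # True
```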
if __name__ == "__main__":
main()
Claude says: The main guard pattern ensures code only runs when the script is executed directly, not when imported as a module. Python sets __name__ to "__main__" for the script being run and to the module name when imported. This allows Python files to work both as standalone scripts and importable libraries. This is a fundamental Python idiom documented in the Python Programming FAQ and widely considered essential for proper Python module design.
- __main__ Module — How Python determines if a script is being run directly
- Script Execution — The difference between importing and executing Python files
- Python Programming FAQ — Official documentation on the main guard pattern
Claude says: This script demonstrates idiomatic Python patterns that make code robust, readable, and maintainable:
- Generators handle large datasets without loading everything into memory
- Context managers ensure files are properly closed even if exceptions occur
- Exception handling makes the script robust against API failures
- F-strings provide readable string formatting
- Docstrings document function behavior for other developers
- Environment variables separate configuration from code
For an experienced programmer learning Python, these patterns show how Python's design philosophy ("readable, explicit code") translates into practical solutions. The complete source at github.com/tjmaher/blogger-spam-bulk-deleter demonstrates these concepts in a working application.
The combination of Google's well-documented APIs, Python's excellent HTTP libraries, and defensive programming creates a tool that's both powerful and safe to use. It demonstrates several key practices:
- Security: Credentials in environment variables, not source code
- Robustness: Handles rate limits, network errors, and edge cases
- Safety: Dry-run mode prevents accidents
- Transparency: Shows exactly what it's doing in real-time
- Resumability: If quota runs out, just run again tomorrow - it picks up where it left off
For someone new to Python, this script showcases how modern Python development works: leveraging well-maintained libraries, following security best practices, and building in safety mechanisms from the start.
Further Reading
Environment Variables and .env Files
- python-dotenv — Official Docs — The authoritative reference for load_dotenv(), file format rules, variable expansion, and the CLI interface.
- python-dotenv — GitHub Repository — Source code, issue tracker, and the most current usage examples, including recommended .gitignore guidance.
- python-dotenv — PyPI — Install page with version history and dependency information.
- The Twelve-Factor App: Config — The methodology that established storing config in the environment as a best practice. Written by Heroku engineers in 2011.
- GitHub Secret Scanning: About Secret Scanning — GitHub’s official explanation of why committed secrets must be treated as permanently compromised, even after deletion.
- GitHub: gitignore templates — The official collection of .gitignore templates. The Python template covers virtual environments, build artifacts, and local config files.
Blogger API v3
- Blogger API v3 — Introduction — Overview of what the API can do and links to getting started materials.
- Blogger API v3 — Getting Started — Covers the five core resource types (Blogs, Posts, Comments, Pages, Users), supported operations, and URI structure.
- Blogger API v3 — Using the API — Practical guide to making requests, authenticating, and working with collections.
- Blogger API v3 — Reference — Full reference for all resource types and methods.
- Blogger API v3 — Comments: delete — The specific endpoint this script calls. Documents required parameters (blogId, postId, commentId) and the required OAuth scope.
HTTP Request/Response Examples
Here are actual HTTP calls and responses from the Blogger API v3 that demonstrate how the script interacts with Google's servers:
Retrieving Comments from a Post
GET https://www.googleapis.com/blogger/v3/blogs/2399953/posts/6069922188027612413/comments?key=YOUR_API_KEY
HTTP/1.1 200 OK
Content-Type: application/json
{
"kind": "blogger#commentList",
"nextPageToken": "CgkIFBDwjvDXlyYQ0b2SARj9mZe9n8KsnlQ",
"items": [
{
"kind": "blogger#comment",
"id": "9200761938824362519",
"post": {
"id": "6069922188027612413"
},
"blog": {
"id": "2399953"
},
"published": "2011-07-28T19:19:57.740Z",
"updated": "2011-07-28T21:29:42.015Z",
"selfLink": "https://www.googleapis.com/blogger/v3/blogs/2399953/posts/6069922188027612413/comments/9200761938824362519",
"content": "<span>Great article! Thanks for sharing.</span>",
"author": {
"id": "530579030283",
"displayName": "Example User",
"url": "http://www.blogger.com/profile/530579030283"
}
}
]
}
Deleting a Comment (What the Script Does)
DELETE https://www.googleapis.com/blogger/v3/blogs/2399953/posts/6069922188027612413/comments/9200761938824362519
Authorization: Bearer ya29.a0AfH6SMC...
Content-Length: 0

HTTP/1.1 200 OK
Content-Length: 0
Date: Sun, 30 Mar 2026 15:42:33 GMT
Claude says: Notice that comment deletion returns an empty response body with HTTP 200 OK status. This is typical for DELETE operations in REST APIs - success is indicated by the status code, not response content. The script checks for specific HTTP status codes like 404 (already deleted) or 429 (quota exceeded) to handle different scenarios gracefully.
Blogger API Integration Guide
- Blogger API Essential Guide — Rollout.com — Comprehensive third-party integration guide covering REST API basics, authentication patterns, rate limits (10,000 requests/day, 100 per 100 seconds), and supported operations. Includes practical examples for working with Blogs, Posts, Comments, Pages, and Users resources. The API uses standard REST endpoints, for example: GET https://www.googleapis.com/blogger/v3/blogs/{blogId} to retrieve a blog, and GET https://www.googleapis.com/blogger/v3/blogs/{blogId}/posts to retrieve its posts.
- Client Libraries — Google Developers — Official client libraries for multiple languages, including the google-api-python-client used in this script.
Rate Limits and Quotas
- Google Cloud Console — Quotas — The live view of your project’s current quota usage and the interface for requesting increases. Filter by “Blogger” to find the relevant limits.
- Google APIs — Handling Errors — The Blogger API’s error handling guide, covering 400, 401, 403, and 503 responses and recommended retry behavior.
- Google APIs — Exponential Backoff — Google’s own recommendation for handling quota errors with exponential backoff.
Python Standard Library
- os.getenv — Reads an environment variable by name, with an optional default value if the variable is not set.
- time.sleep — Pauses execution for a given number of seconds. Used here to throttle request rate.
- argparse — The standard library module used to parse --dry-run and --delay from the command line.
- Exception Handling (try/except) — Python's official tutorial on catching and handling exceptions, including the pattern used in api_call_with_retry.
Google Auth Libraries for Python
- google-auth — The base authentication library. Handles token refresh and credential management.
- google-auth-oauthlib — Provides InstalledAppFlow, which opens the browser for the initial OAuth consent and writes token.json.
- google-api-python-client — The client library that wraps the Blogger REST API into Python method calls like service.comments().delete().
And, as always, Happy Testing!
-T.J. Maher
Software Engineer in Test
BlueSky |
LinkedIn |
GitHub |
Articles