Hey Clawgo fam, Jake Morrison here, bringing you another deep dive into the world of AI agents. Today, I want to talk about something that’s been buzzing in my own home lab for the past few weeks: how to actually get an AI agent to do something useful, not just theoretically interesting. Specifically, we’re going to explore how to make an agent proactively monitor a website and report on changes. Forget the fancy “reshaping the industry” talk for a minute; let’s get our hands dirty with a practical problem.
The Problem: Website Change Detection (The Hard Way)
So, here’s the scenario: I’m always on the lookout for new components for my various home automation projects. Sometimes, a specific sensor or a micro-controller goes out of stock, and I need to know the second it’s back. Refreshing a page every hour is tedious and, frankly, I’m too lazy for that. I’ve tried those generic website change notification services, but they’re often too broad, too slow, or they trigger on irrelevant changes like a footer update.
What I really needed was something smarter. Something that understood the *intent* of what I was looking for. This is where an AI agent comes in. Instead of just “diffing” the HTML, I wanted an agent that could read the page, understand the product’s availability status, and tell me specifically when that status changed to “in stock.”
My First Foray: Simple Python Script (The Brute Force Way)
My first attempt, as it often is, involved a simple Python script. I used requests to fetch the page and BeautifulSoup to parse it. It looked something like this (simplified, of course):
import requests
from bs4 import BeautifulSoup
import time

URL = "https://example-store.com/product-x-123"
KEYWORD = "Out of Stock"  # Or "In Stock"

def check_stock():
    try:
        response = requests.get(URL)
        response.raise_for_status()  # Raise an exception for HTTP errors
        soup = BeautifulSoup(response.text, 'html.parser')
        # This is the tricky part: finding the specific element
        # Let's say the stock status is in a div with class "product-status"
        status_div = soup.find('div', class_='product-status')
        if status_div and KEYWORD in status_div.text:
            print(f"[{time.ctime()}] Product is {KEYWORD}.")
            return False  # Still out of stock
        elif status_div:
            print(f"[{time.ctime()}] Product status changed! Current: {status_div.text.strip()}")
            return True  # Stock changed!
        else:
            print(f"[{time.ctime()}] Could not find status div.")
            return False
    except requests.exceptions.RequestException as e:
        print(f"[{time.ctime()}] Error fetching URL: {e}")
        return False

# Basic loop (this would run forever or until stopped)
# while True:
#     if check_stock():
#         print("SENDING ALERT: Product is back in stock!")
#         # Add notification logic here (email, SMS, etc.)
#         break  # Stop checking once found
#     time.sleep(3600)  # Check every hour
This worked, to a point. The problem? If the website updated its HTML structure, even slightly, my specific soup.find('div', class_='product-status') line would break. Or, if the wording changed from “Out of Stock” to “Currently Unavailable,” my KEYWORD check would fail. This required constant maintenance, which defeated the purpose of automation.
Enter the AI Agent: A Smarter Approach
This is where I started thinking about a true AI agent. Not just a script that follows exact instructions, but something that can *interpret* the page. My goal was an agent that could:
- Visit a URL.
- Understand what a “product availability status” looks like.
- Identify if the product is in or out of stock based on the *meaning* of the text, not just exact keywords.
- Report only when the status I care about (e.g., “in stock”) is detected.
Agent Setup: Using a Basic LLM for Interpretation
For this experiment, I decided to keep it relatively simple. I’m using a local LLM (specifically, a fine-tuned Llama 3 model running on my home server, thanks to Ollama) and a Python script to orchestrate the agent’s actions. The agent itself doesn’t “live” in a separate environment; it’s the combination of the script, the LLM, and the tools it can use.
Agent’s “Tools”
- Web Scraper: A function to fetch the HTML content of a URL (similar to my initial Python script, but now just providing raw HTML to the LLM).
- LLM Interface: A function to send prompts to my local Llama 3 instance and get responses.
- Notification System: A simple email sender (or even just a print statement for now).
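To make the LLM interface concrete, here’s a minimal sketch of that tool. It assumes an Ollama server running on its default port (localhost:11434) with a model tagged llama3 already pulled; the endpoint and payload follow Ollama’s /api/generate REST API, and the function names are my own.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL_NAME = "llama3"  # assumes you've run `ollama pull llama3`

def build_payload(prompt, model=MODEL_NAME):
    # stream=False asks Ollama for one complete JSON reply instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def query_llm(prompt):
    """Send a prompt to the local Ollama instance and return its text response."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]
```

Keeping the payload construction in its own function makes it trivial to unit-test without a model running.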
The Agent’s Workflow (Simplified)
Here’s how I designed the agent’s core loop:
- Fetch Page: Use the web scraper tool to get the raw HTML of the target product page.
- Analyze with LLM: Send the HTML to the LLM with a specific prompt.
- Interpret and Decide: The LLM analyzes the HTML and decides if the product is in stock.
- Report Change: If the status has changed to “in stock” (and was previously out), trigger a notification.
- Repeat: Wait for a set interval and repeat the process.
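The decide-and-report step boils down to a tiny state machine: alert only on the transition into “in stock,” stay quiet on repeats and errors. A pure helper (my own naming, not from any library) captures that rule and is easy to test in isolation:

```python
def should_alert(previous_status, current_status):
    """Alert only on a transition INTO in-stock; repeats and errors stay quiet."""
    return current_status == "IN_STOCK" and previous_status != "IN_STOCK"
```

Factoring the transition rule out of the main loop means you can change the notification policy later without touching the fetching or LLM code.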
Prompt Engineering for Stock Detection
This was the crucial part. My prompt to the LLM needed to be clear and robust. Here’s an example of what I used:
"You are an AI assistant designed to detect the stock status of a product on a webpage.
I will provide you with the HTML content of a product page.
Your task is to analyze the HTML and determine if the product is currently 'in stock' or 'out of stock'.
Do not just search for exact phrases; understand the context. Look for common indicators like:
- 'Add to Cart' buttons (usually indicates in stock)
- 'Out of Stock' messages
- 'Currently Unavailable' messages
- 'Pre-order' (consider this out of stock for immediate purchase)
- Any variations of stock status messaging.
Return ONLY one of the following words: 'IN_STOCK' or 'OUT_OF_STOCK'.
Do not provide any other text, explanations, or formatting.
HTML Content:
{html_content}"
The “Return ONLY one of the following words” instruction is key for programmatic parsing of the LLM’s output. This allows my Python script to easily read the LLM’s decision and act upon it.
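Even with that strict instruction, local models occasionally wrap the answer in whitespace, quotes, or a stray sentence, so it’s worth normalizing the reply before acting on it. Here’s a defensive parser along the lines of what I mean (my own addition, not part of the original script):

```python
VALID_STATUSES = ("OUT_OF_STOCK", "IN_STOCK")

def parse_llm_status(raw_response):
    """Normalize an LLM reply to IN_STOCK / OUT_OF_STOCK, or UNKNOWN if off-script."""
    cleaned = raw_response.strip().strip('"\'').upper()
    if cleaned in VALID_STATUSES:
        return cleaned
    # Fall back to scanning for a valid token buried in extra text
    for status in VALID_STATUSES:
        if status in cleaned:
            return status
    return "UNKNOWN"
```

Anything the parser can’t recognize comes back as "UNKNOWN", which the main loop can treat as “no change” rather than crashing.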
Putting It Together (Conceptual Code)
While the full implementation involves more error handling and state management, here’s the core idea in Python:
import requests
import time

# Assume you have a function to interact with your LLM
# e.g., from my_llm_interface import query_llm
# And a function to send notifications
# e.g., from my_notifier import send_alert

PRODUCT_URL = "https://example-store.com/product-x-123"
CHECK_INTERVAL_SECONDS = 3600  # Check every hour

last_known_status = "UNKNOWN"  # To track changes

def get_html(url):
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.text
    except requests.exceptions.RequestException as e:
        print(f"Error fetching URL: {e}")
        return None

def determine_stock_status(html_content):
    if not html_content:
        return "ERROR"
    prompt = f"""
You are an AI assistant designed to detect the stock status of a product on a webpage.
I will provide you with the HTML content of a product page.
Your task is to analyze the HTML and determine if the product is currently 'in stock' or 'out of stock'.
Do not just search for exact phrases; understand the context. Look for common indicators like:
- 'Add to Cart' buttons (usually indicates in stock)
- 'Out of Stock' messages
- 'Currently Unavailable' messages
- 'Pre-order' (consider this out of stock for immediate purchase)
- Any variations of stock status messaging.
Return ONLY one of the following words: 'IN_STOCK' or 'OUT_OF_STOCK'.
Do not provide any other text, explanations, or formatting.
HTML Content:
{html_content}
"""
    # This is where you'd call your LLM:
    # real_llm_response = query_llm(prompt)
    # Simulate the LLM response for testing with a simple heuristic:
    if "Add to Cart" in html_content:
        real_llm_response = "IN_STOCK"
    elif "Out of Stock" in html_content or "Unavailable" in html_content:
        real_llm_response = "OUT_OF_STOCK"
    else:
        real_llm_response = "UNKNOWN"
    return real_llm_response.strip().upper()

# Main agent loop
while True:
    print(f"[{time.ctime()}] Checking product stock...")
    html = get_html(PRODUCT_URL)
    current_status = determine_stock_status(html)
    if current_status == "ERROR":
        print("Could not determine status due to fetching error. Retrying later.")
    elif current_status == "IN_STOCK" and last_known_status != "IN_STOCK":
        print("ALERT: Product is now IN STOCK!")
        # send_alert(f"Product {PRODUCT_URL} is back in stock!")  # Your notification logic
        last_known_status = "IN_STOCK"
    elif current_status == "OUT_OF_STOCK" and last_known_status != "OUT_OF_STOCK":
        print("Product is OUT OF STOCK.")
        last_known_status = "OUT_OF_STOCK"
    else:
        print(f"Product status is {current_status}. No change detected.")
    time.sleep(CHECK_INTERVAL_SECONDS)
This setup allows the agent to be much more resilient to changes in the website’s layout or wording. The LLM, with its understanding of natural language, can interpret variations of “out of stock” without me having to update a keyword list every time.
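One refinement worth adding (not shown in the loop above): raw product pages can run to hundreds of kilobytes, most of it scripts and styles that blow past a local model’s context window. Stripping the page down to its visible text before prompting helps a lot. Here’s a stdlib-only sketch using html.parser; BeautifulSoup’s get_text() would do the same job:

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collect text outside <script>/<style> so the prompt stays small."""
    SKIP = {"script", "style", "svg", "noscript"}

    def __init__(self):
        super().__init__()
        self.depth = 0   # > 0 while inside a skipped tag
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html_content, max_chars=8000):
    """Reduce a page to visible text so the prompt fits a local LLM's context."""
    parser = VisibleTextExtractor()
    parser.feed(html_content)
    return " ".join(parser.chunks)[:max_chars]
```

You’d then pass html_to_text(html) into the prompt instead of the raw HTML; the max_chars cap is a crude but effective guard against enormous pages.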
The Results: A More Robust Monitor
After letting this agent run for a few weeks, I can confirm it’s significantly more reliable than my initial BeautifulSoup script. I’ve seen product pages change their “out of stock” messaging, move elements around, and even switch from buttons to text links for purchasing. My AI agent, relying on the LLM’s interpretive power, has consistently given me accurate stock status updates.
One time, a store changed their “Add to Cart” button to “Notify Me When Available.” My old script would have seen “Add to Cart” disappear and gotten confused. The LLM, however, correctly interpreted “Notify Me When Available” as an “OUT_OF_STOCK” status, because it understands the *meaning* behind those phrases.
This isn’t about building a super-intelligent general AI to run your life. It’s about taking a specific, annoying problem and applying a bit of AI smarts to solve it in a more robust way than traditional scripting allows. It’s practical AI, and that’s what I love about this stuff.
Actionable Takeaways
If you’re looking to start playing with AI agents for practical automation, here are my top tips:
- Start Small, Solve a Real Problem: Don’t try to build Skynet on your first go. Pick a specific task that you do manually and find annoying. My website monitoring is a perfect example.
- Use Local LLMs: Tools like Ollama make it incredibly easy to run powerful LLMs on your own hardware. This keeps costs down and data private, which is great for personal projects.
- Focus on Clear Prompts: The agent’s “intelligence” often comes down to how well you instruct the LLM. Be explicit about what you want it to do and what format you expect the output in.
- Give Your Agent “Tools”: An LLM is powerful, but it needs to interact with the real world. Provide it with functions to fetch data, send messages, or manipulate files.
- Iterate and Refine: Your first agent won’t be perfect. Test it, see where it fails, and refine your prompts or its tools.
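On that last “tools” point, the notification tool I mentioned earlier is only a few lines of stdlib. The SMTP host, port, and addresses below are placeholders you’d swap for your own provider’s settings:

```python
import smtplib
from email.message import EmailMessage

def build_alert(product_url, sender, recipient):
    """Compose the back-in-stock email; kept separate so it's easy to test."""
    msg = EmailMessage()
    msg["Subject"] = "Back in stock!"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"Product {product_url} is back in stock!")
    return msg

def send_alert(product_url, sender="agent@example.com",
               recipient="me@example.com", host="smtp.example.com", port=587):
    msg = build_alert(product_url, sender, recipient)
    with smtplib.SMTP(host, port) as server:
        server.starttls()  # most providers also require server.login(...)
        server.send_message(msg)
```

Splitting message construction from sending means the interesting part gets tested without a mail server in the loop.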
The world of AI agents is still early, but the potential for practical, everyday automation is huge. Don’t wait for some “game-changing” product; start building your own small, useful agents now. You’ll be surprised at what you can automate away from your to-do list.
That’s it for this one, Clawgo crew. Let me know what practical agents you’re building in the comments!
Originally published: March 14, 2026