My client's analytics team was spending 15+ hours every week on manual Power BI tasks — triggering refreshes, documenting datasets, checking data quality, exporting reports, updating stakeholders. I built 8 Python scripts that automated 80% of it. Now 15 hours became 3. Here are all 8 scripts, copy-paste ready, with the exact workflow that saved a 4-person team an entire workday every week.

I need to tell you about the Monday morning ritual at Ravi's manufacturing company.

Every Monday at 7:30 AM, Deepak — the most senior analyst on Ravi's team — would sit down at his desk and begin what he called "The Checklist." His colleagues called it something less polite.

The Checklist looked like this:

  1. Open Power BI Service. Check if overnight refresh succeeded for all 14 datasets. (15 minutes)
  2. If any failed, investigate why. Check gateway status, source connectivity, error logs. (20–45 minutes)
  3. Manually trigger re-refresh for failed datasets. Wait. Check again. (10–30 minutes)
  4. Open each of the 8 production reports. Verify key numbers against ERP. (25 minutes)
  5. Screenshot key dashboard pages. Paste into weekly email to management. (20 minutes)
  6. Update the "Dataset Documentation" spreadsheet with any model changes from last week. (15 minutes)
  7. Check workspace permissions. Did anyone new join who needs access? Did anyone leave who should be removed? (10 minutes)
  8. Respond to overnight Slack messages from stakeholders asking "why is my report showing yesterday's data?" (varies)

Total: approximately 2–3 hours. Every Monday. Fifty-two weeks a year.

And that was just Deepak. The other 3 analysts had their own versions of The Checklist for their domains.

When I calculated it, the team was spending 15.4 hours per week on tasks that were repetitive, predictable, and — most importantly — automatable.

I told Ravi: "Give me two weeks. I'll give your team their Mondays back."

Here are the 8 Python scripts I built. Every single one is real, tested in production, and copy-paste ready.

[Image: The Monday Checklist Problem]

Script 0: The Foundation — Authenticating with Power BI REST API

Every script needs this. I built a reusable authentication module that all 8 scripts share:

# pbi_auth.py — Reusable Power BI authentication module
import msal
import requests

class PowerBIClient:
    """Reusable Power BI REST API client with Service Principal auth"""
    
    def __init__(self, tenant_id, client_id, client_secret):
        self.tenant_id = tenant_id
        self.client_id = client_id
        self.client_secret = client_secret
        self.base_url = "https://api.powerbi.com/v1.0/myorg"
        self.token = self._authenticate()
    
    def _authenticate(self):
        authority = f"https://login.microsoftonline.com/{self.tenant_id}"
        app = msal.ConfidentialClientApplication(
            self.client_id,
            authority=authority,
            client_credential=self.client_secret
        )
        result = app.acquire_token_for_client(
            scopes=["https://analysis.windows.net/powerbi/api/.default"]
        )
        if "access_token" in result:
            return result["access_token"]
        raise Exception(f"Auth failed: {result.get('error_description')}")
    
    def get(self, endpoint):
        headers = {"Authorization": f"Bearer {self.token}"}
        response = requests.get(f"{self.base_url}/{endpoint}", headers=headers)
        response.raise_for_status()
        return response.json()
    
    def post(self, endpoint, data=None):
        headers = {
            "Authorization": f"Bearer {self.token}",
            "Content-Type": "application/json"
        }
        response = requests.post(
            f"{self.base_url}/{endpoint}", 
            headers=headers, json=data
        )
        return response

Setup required (one-time):

  1. Register an app in Microsoft Entra ID (formerly Azure AD)
  2. Generate a client secret
  3. Add the Service Principal as Admin in your Power BI workspaces
  4. Enable "Service principals can use Fabric APIs" in Power BI Admin → Tenant Settings

This takes about 20 minutes. After that, every script works automatically.
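One practical note: the tenant ID, client ID, and secret should never live in the script itself. A minimal sketch of loading them from environment variables before constructing the client (the variable names here are my own convention, not part of any API):

```python
import os

# Hypothetical env-var names; use whatever naming convention your team prefers
REQUIRED_VARS = ["PBI_TENANT_ID", "PBI_CLIENT_ID", "PBI_CLIENT_SECRET"]

def load_pbi_config():
    """Read Service Principal credentials from the environment, failing loudly if any are missing."""
    missing = [k for k in REQUIRED_VARS if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {k: os.environ[k] for k in REQUIRED_VARS}
```

Every script below can then call `load_pbi_config()` instead of hard-coding secrets.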

Script 1: Refresh Monitor — "Did Everything Refresh Successfully?"

Time saved: 35 minutes/day → 0 minutes

This is the script that replaced steps 1–3 of Deepak's Monday Checklist. It runs every morning at 6:30 AM (before anyone opens their laptop), checks every dataset's refresh status, and sends a Slack/email alert if anything failed.

# script1_refresh_monitor.py
from pbi_auth import PowerBIClient
from datetime import datetime, timedelta
import requests

def check_all_refreshes(client, workspace_id):
    """Check refresh status for all datasets in a workspace"""
    datasets = client.get(f"groups/{workspace_id}/datasets")
    
    results = {"success": [], "failed": [], "stale": []}
    
    for dataset in datasets["value"]:
        dataset_id = dataset["id"]
        dataset_name = dataset["name"]
        
        # Get refresh history
        try:
            history = client.get(
                f"groups/{workspace_id}/datasets/{dataset_id}/refreshes?$top=1"
            )
        except Exception:
            results["failed"].append({
                "name": dataset_name, "error": "Could not retrieve refresh history"
            })
            continue
        
        if not history.get("value"):
            results["stale"].append({"name": dataset_name, "reason": "No refresh history"})
            continue
        
        last_refresh = history["value"][0]
        status = last_refresh.get("status", "Unknown")
        end_time = last_refresh.get("endTime", "")
        
        if status == "Completed":
            # Check if refresh is recent (within 24 hours)
            if end_time:
                refresh_dt = datetime.fromisoformat(end_time.replace("Z", "+00:00"))
                if datetime.now(refresh_dt.tzinfo) - refresh_dt > timedelta(hours=24):
                    results["stale"].append({
                        "name": dataset_name,
                        "last_refresh": end_time,
                        "reason": "Last refresh > 24 hours ago"
                    })
                else:
                    results["success"].append({
                        "name": dataset_name, "refreshed_at": end_time
                    })
            else:
                results["success"].append({"name": dataset_name})
        else:
            error_msg = last_refresh.get("serviceExceptionJson", "No details")
            results["failed"].append({
                "name": dataset_name, "status": status, "error": str(error_msg)[:200]
            })
    
    return results

def send_alert(results, webhook_url):
    """Send Slack alert with refresh status summary"""
    failed_count = len(results["failed"])
    stale_count = len(results["stale"])
    success_count = len(results["success"])
    
    if failed_count == 0 and stale_count == 0:
        emoji = "✅"
        text = f"{emoji} All {success_count} datasets refreshed successfully."
    else:
        emoji = "🚨" if failed_count > 0 else "⚠️"
        lines = [f"{emoji} *Refresh Status Report* — {datetime.now().strftime('%Y-%m-%d %H:%M')}"]
        lines.append(f"✅ Succeeded: {success_count}")
        
        if failed_count:
            lines.append(f"❌ Failed: {failed_count}")
            for f in results["failed"]:
                lines.append(f"  → {f['name']}: {f.get('error', 'Unknown')[:100]}")
        
        if stale_count:
            lines.append(f"⏰ Stale: {stale_count}")
            for s in results["stale"]:
                lines.append(f"  → {s['name']}: {s['reason']}")
        
        text = "\n".join(lines)
    
    requests.post(webhook_url, json={"text": text})

# Usage (load these constants from environment variables or a config file, never hard-coded)
client = PowerBIClient(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
results = check_all_refreshes(client, WORKSPACE_ID)
send_alert(results, SLACK_WEBHOOK_URL)

What Deepak used to do: Manually open Power BI Service → navigate to each workspace → check each dataset → read the error → try to fix it.

What happens now: At 6:30 AM, the script checks everything. By 7:30 AM, the team gets a Slack message: either "All clear" or "Dataset X failed: gateway timeout." They fix only what's broken. Zero time spent on the 90% that worked.

[Image: The 8 Scripts Overview]

Script 2: Smart Re-Refresh — "Fix It Automatically"

Time saved: 20 minutes/occurrence → 0 minutes

Some refresh failures are transient — gateway timeouts, temporary network issues. This script automatically retries failed refreshes up to 2 times before alerting a human.

# script2_smart_rerefresh.py
from pbi_auth import PowerBIClient
import time

def auto_retry_failed_refreshes(client, workspace_id, max_retries=2):
    """Automatically retry failed dataset refreshes"""
    datasets = client.get(f"groups/{workspace_id}/datasets")
    retry_results = []
    
    for dataset in datasets["value"]:
        dataset_id = dataset["id"]
        name = dataset["name"]
        
        history = client.get(
            f"groups/{workspace_id}/datasets/{dataset_id}/refreshes?$top=1"
        )
        
        if not history.get("value"):
            continue
        
        last = history["value"][0]
        if last.get("status") != "Completed":
            # Attempt retry
            for attempt in range(1, max_retries + 1):
                print(f"Retrying {name} (attempt {attempt}/{max_retries})...")
                
                response = client.post(
                    f"groups/{workspace_id}/datasets/{dataset_id}/refreshes",
                    data={"notifyOption": "NoNotification"}
                )
                
                if response.status_code == 202:
                    # Poll until the refresh finishes; an in-progress refresh
                    # reports status "Unknown", and triggering another refresh
                    # while one is running returns an error
                    new_status = "Unknown"
                    for _ in range(10):  # up to ~10 minutes
                        time.sleep(60)
                        new_history = client.get(
                            f"groups/{workspace_id}/datasets/{dataset_id}/refreshes?$top=1"
                        )
                        new_status = new_history["value"][0].get("status")
                        if new_status != "Unknown":
                            break
                    
                    if new_status == "Completed":
                        retry_results.append({
                            "name": name, "status": "RECOVERED", 
                            "attempts": attempt
                        })
                        break
                    elif attempt == max_retries:
                        retry_results.append({
                            "name": name, "status": "STILL_FAILED",
                            "attempts": max_retries
                        })
                else:
                    retry_results.append({
                        "name": name, "status": "RETRY_ERROR",
                        "error": response.text[:200]
                    })
                    break
    
    return retry_results

In Deepak's case, 4 out of 5 refresh failures in the first month were transient. The script recovered them automatically. Only 1 required human investigation.

Script 3: Data Quality Validator — "Are the Numbers Right?"

Time saved: 25 minutes/day → 0 minutes

This script executes DAX queries against your semantic models and compares key metrics against expected ranges or external sources.

# script3_data_quality.py
from pbi_auth import PowerBIClient

def validate_data_quality(client, workspace_id, dataset_id, rules):
    """Run DAX validation queries against a dataset"""
    results = []
    
    for rule in rules:
        # Execute DAX query
        response = client.post(
            f"groups/{workspace_id}/datasets/{dataset_id}/executeQueries",
            data={
                "queries": [{"query": rule["dax_query"]}],
                "serializerSettings": {"includeNulls": True}
            }
        )
        
        if response.status_code == 200:
            data = response.json()
            rows = data["results"][0]["tables"][0]["rows"]
            actual_value = list(rows[0].values())[0] if rows else None
            
            # Check against expected
            passed = True
            message = "OK"
            
            if actual_value is None:
                passed = False
                message = "Query returned no rows"
            
            elif rule["check_type"] == "min_threshold":
                if actual_value < rule["threshold"]:
                    passed = False
                    message = f"Below minimum: {actual_value} < {rule['threshold']}"
            
            elif rule["check_type"] == "max_threshold":
                if actual_value > rule["threshold"]:
                    passed = False
                    message = f"Above maximum: {actual_value} > {rule['threshold']}"
            
            elif rule["check_type"] == "range":
                if not (rule["min"] <= actual_value <= rule["max"]):
                    passed = False
                    message = f"Out of range: {actual_value} not in [{rule['min']}, {rule['max']}]"
            
            elif rule["check_type"] == "not_null_pct":
                if actual_value > rule["threshold"]:
                    passed = False
                    message = f"Null rate too high: {actual_value}% > {rule['threshold']}%"
            
            results.append({
                "rule_name": rule["name"],
                "actual_value": actual_value,
                "passed": passed,
                "message": message
            })
        else:
            results.append({
                "rule_name": rule["name"],
                "actual_value": None,
                "passed": False,
                "message": f"Query failed: HTTP {response.status_code}"
            })
    
    return results

# Example usage — Ravi's manufacturing rules
manufacturing_rules = [
    {
        "name": "Daily Revenue > ₹0",
        "dax_query": "EVALUATE ROW(\"Rev\", [Total Revenue])",
        "check_type": "min_threshold",
        "threshold": 0
    },
    {
        "name": "All 12 Plants Reporting",
        "dax_query": "EVALUATE ROW(\"Plants\", DISTINCTCOUNT(Plants[PlantID]))",
        "check_type": "min_threshold",
        "threshold": 12
    },
    {
        "name": "Null Rate in Revenue < 1%",
        "dax_query": """
            EVALUATE ROW("NullPct", 
                DIVIDE(
                    COUNTBLANK(Sales[Revenue]),
                    COUNTROWS(Sales)
                ) * 100
            )
        """,
        "check_type": "not_null_pct",
        "threshold": 1.0
    },
    {
        "name": "Margin Between 15-45%",
        "dax_query": "EVALUATE ROW(\"Margin\", [Gross Margin %])",
        "check_type": "range",
        "min": 15.0,
        "max": 45.0
    }
]

This is a simplified version of the DashGuard agent I built (Blog 10). But even this basic version catches the issues that cost Ravi's company $180K the first time around.
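To wire these validation results into the same Slack alerting as Script 1, a small formatting helper does the job. This function is my own addition (the field names match the `results` dicts returned above):

```python
def summarize_validation(results):
    """Collapse data-quality rule results into a single alert-ready message."""
    failures = [r for r in results if not r["passed"]]
    if not failures:
        return f"✅ All {len(results)} data-quality checks passed."
    lines = [f"🚨 {len(failures)}/{len(results)} checks failed:"]
    for f in failures:
        lines.append(f"  → {f['rule_name']}: {f['message']}")
    return "\n".join(lines)
```

Pass the returned string to the same webhook call used in Script 1's `send_alert`.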

Script 4: Workspace Inventory — "What Do We Actually Have?"

Time saved: 2 hours/month → 5 minutes

Every Power BI admin has been asked: "How many reports do we have? How many datasets? Who owns them? When were they last refreshed?" This script generates a complete workspace inventory in seconds.

# script4_workspace_inventory.py
from pbi_auth import PowerBIClient
import pandas as pd
from datetime import datetime

def generate_workspace_inventory(client, workspace_ids):
    """Generate complete inventory of all PBI assets across workspaces"""
    all_assets = []
    
    for ws_id in workspace_ids:
        # Get workspace info
        ws_info = client.get(f"groups/{ws_id}")
        ws_name = ws_info.get("name", "Unknown")
        
        # Get datasets
        datasets = client.get(f"groups/{ws_id}/datasets")
        for ds in datasets.get("value", []):
            # Get refresh history
            try:
                history = client.get(
                    f"groups/{ws_id}/datasets/{ds['id']}/refreshes?$top=1"
                )
                last_refresh = history["value"][0] if history.get("value") else {}
            except Exception:
                last_refresh = {}
            
            all_assets.append({
                "workspace": ws_name,
                "asset_type": "Dataset",
                "name": ds.get("name"),
                "id": ds.get("id"),
                "configured_by": ds.get("configuredBy", "Unknown"),
                "is_refreshable": ds.get("isRefreshable", False),
                "last_refresh_status": last_refresh.get("status", "N/A"),
                "last_refresh_time": last_refresh.get("endTime", "N/A"),
                "created": ds.get("createdDate", "N/A")
            })
        
        # Get reports
        reports = client.get(f"groups/{ws_id}/reports")
        for rpt in reports.get("value", []):
            all_assets.append({
                "workspace": ws_name,
                "asset_type": "Report",
                "name": rpt.get("name"),
                "id": rpt.get("id"),
                "configured_by": "N/A",
                "is_refreshable": False,
                "last_refresh_status": "N/A",
                "last_refresh_time": "N/A",
                "created": rpt.get("createdDateTime", "N/A")
            })
    
    df = pd.DataFrame(all_assets)
    
    # Save to Excel
    filename = f"PBI_Inventory_{datetime.now().strftime('%Y%m%d')}.xlsx"
    df.to_excel(filename, index=False)
    
    # Print summary
    print(f"\n📊 Power BI Inventory Summary")
    print(f"{'='*40}")
    print(f"Workspaces scanned: {len(workspace_ids)}")
    print(f"Total datasets: {len(df[df['asset_type']=='Dataset'])}")
    print(f"Total reports: {len(df[df['asset_type']=='Report'])}")
    print(f"Refreshable datasets: {len(df[(df['asset_type']=='Dataset') & (df['is_refreshable']==True)])}")
    print(f"\nSaved to: {filename}")
    
    return df

The first time I ran this for Ravi's company, we discovered 23 reports that hadn't been refreshed in over 90 days. 11 of them had zero views in the last quarter. We retired them and reduced the refresh load by 18%.

[Image: Time saved per week, before vs. after]

Script 5: Automated Report Export — "Send the CEO Their Numbers"

Time saved: 20 minutes/day → 0 minutes

Ravi's CEO wanted a PDF of the executive dashboard in his email every morning at 8 AM. Deepak was manually exporting and emailing it. Every. Single. Day.

# script5_report_export.py
from pbi_auth import PowerBIClient
from datetime import datetime
import requests
import time
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email.mime.text import MIMEText
from email import encoders

def export_report_to_pdf(client, workspace_id, report_id):
    """Export a Power BI report to PDF"""
    # Trigger export
    export_response = client.post(
        f"groups/{workspace_id}/reports/{report_id}/ExportTo",
        data={
            "format": "PDF",
            "powerBIReportConfiguration": {
                "defaultBookmark": {"name": ""}  # Export default view
            }
        }
    )
    
    if export_response.status_code != 202:
        raise Exception(f"Export failed: {export_response.text}")
    
    export_id = export_response.json().get("id")
    
    # Poll for completion
    for _ in range(30):  # Max 5 minutes wait
        status = client.get(
            f"groups/{workspace_id}/reports/{report_id}/exports/{export_id}"
        )
        if status.get("status") == "Succeeded":
            # Download the file
            file_url = status.get("resourceLocation")
            headers = {"Authorization": f"Bearer {client.token}"}
            file_response = requests.get(file_url, headers=headers)
            return file_response.content
        elif status.get("status") == "Failed":
            raise Exception("Export failed")
        time.sleep(10)
    
    raise Exception("Export timed out")

def email_report(pdf_content, recipients, subject, body):
    """Email the exported PDF"""
    msg = MIMEMultipart()
    msg["Subject"] = subject
    msg["From"] = SENDER_EMAIL
    msg["To"] = ", ".join(recipients)
    msg.attach(MIMEText(body, "html"))
    
    attachment = MIMEBase("application", "pdf")
    attachment.set_payload(pdf_content)
    encoders.encode_base64(attachment)
    attachment.add_header(
        "Content-Disposition", 
        f"attachment; filename=Executive_Dashboard_{datetime.now().strftime('%Y%m%d')}.pdf"
    )
    msg.attach(attachment)
    
    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as server:
        server.starttls()
        server.login(SENDER_EMAIL, SENDER_PASSWORD)
        server.send_message(msg)

Now the CEO gets his PDF at 8:00 AM sharp. No human involved. Deepak's morning just got 20 minutes shorter.

Script 6: Activity Log Analyzer — "Who's Actually Using Our Reports?"

Time saved: 4 hours/month → 10 minutes

This script pulls Power BI activity logs and tells you exactly which reports are being used, by whom, and how often. The insights are gold for governance.

# script6_activity_analyzer.py
from pbi_auth import PowerBIClient
from datetime import datetime, timedelta
import pandas as pd
import requests

def get_activity_logs(client, days_back=30):
    """Pull Power BI activity logs for the last N days"""
    all_events = []
    
    for day_offset in range(days_back):
        date = (datetime.now() - timedelta(days=day_offset)).strftime("%Y-%m-%dT00:00:00Z")
        end_date = (datetime.now() - timedelta(days=day_offset)).strftime("%Y-%m-%dT23:59:59Z")
        
        try:
            events = client.get(
                f"admin/activityevents?"
                f"startDateTime='{date}'&endDateTime='{end_date}'"
            )
            if events.get("activityEventEntities"):
                all_events.extend(events["activityEventEntities"])
            
            # Handle pagination
            while events.get("continuationUri"):
                events = requests.get(
                    events["continuationUri"],
                    headers={"Authorization": f"Bearer {client.token}"}
                ).json()
                if events.get("activityEventEntities"):
                    all_events.extend(events["activityEventEntities"])
        except Exception as e:
            print(f"Error for {date}: {e}")
    
    return pd.DataFrame(all_events)

def analyze_report_usage(df):
    """Analyze which reports are most/least used"""
    # Filter to report views
    views = df[df["Activity"] == "ViewReport"].copy()
    
    if views.empty:
        return {"message": "No report view data found"}
    
    # Most viewed reports
    top_reports = (views.groupby("ReportName")["UserId"]
                   .count()
                   .sort_values(ascending=False)
                   .head(20))
    
    # Least viewed (candidates for retirement)
    all_reports_with_views = set(views["ReportName"].unique())
    
    # Unique users per report
    users_per_report = (views.groupby("ReportName")["UserId"]
                        .nunique()
                        .sort_values(ascending=False))
    
    # Peak usage hours
    views["Hour"] = pd.to_datetime(views["CreationTime"]).dt.hour
    peak_hours = views["Hour"].value_counts().sort_index()
    
    return {
        "total_views": len(views),
        "unique_users": views["UserId"].nunique(),
        "unique_reports_viewed": len(all_reports_with_views),
        "top_10_reports": top_reports.to_dict(),
        "users_per_report": users_per_report.head(10).to_dict(),
        "peak_hours": peak_hours.to_dict()
    }

When I ran this for Ravi's company: 34 published reports. Only 19 had been viewed in the last 30 days. 8 reports had a single viewer — the person who built them. We identified ₹3.2 lakh/year in wasted capacity refreshing reports nobody used.
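The single-viewer flagging can be automated too. This helper is my own addition (its input is the `users_per_report` mapping that `analyze_report_usage` returns):

```python
def retirement_candidates(users_per_report, min_users=2):
    """Reports with fewer than `min_users` distinct viewers are candidates for retirement."""
    return sorted(name for name, count in users_per_report.items() if count < min_users)
```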

Script 7: Bulk Workspace Permissions — "Access Management at Scale"

Time saved: 30 minutes/week → 2 minutes

When employees join, move teams, or leave, someone has to update Power BI workspace access. This script handles it in bulk.

# script7_permissions.py
from pbi_auth import PowerBIClient
import pandas as pd

def audit_workspace_access(client, workspace_ids):
    """Audit who has access to each workspace"""
    all_users = []
    
    for ws_id in workspace_ids:
        users = client.get(f"groups/{ws_id}/users")
        ws_info = client.get(f"groups/{ws_id}")
        
        for user in users.get("value", []):
            all_users.append({
                "workspace": ws_info.get("name"),
                "workspace_id": ws_id,
                "user": user.get("emailAddress", user.get("displayName")),
                "role": user.get("groupUserAccessRight"),
                "principal_type": user.get("principalType")
            })
    
    df = pd.DataFrame(all_users)
    df.to_excel("PBI_Access_Audit.xlsx", index=False)
    
    # Flag potential issues
    admin_count = len(df[df["role"] == "Admin"])
    print(f"⚠️ Total Admin users across all workspaces: {admin_count}")
    print(f"   (Best practice: minimize admin access)")
    
    return df

def add_users_bulk(client, workspace_id, users_csv_path):
    """Add multiple users from a CSV file"""
    users_df = pd.read_csv(users_csv_path)
    
    for _, row in users_df.iterrows():
        response = client.post(
            f"groups/{workspace_id}/users",
            data={
                "emailAddress": row["email"],
                "groupUserAccessRight": row.get("role", "Viewer")
            }
        )
        status = "✓" if response.status_code == 200 else "✗"
        print(f"{status} {row['email']} → {row.get('role', 'Viewer')}")
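For reference, `add_users_bulk` expects the CSV to have an `email` column and an optional `role` column (role defaults to Viewer). The addresses here are placeholders:

```csv
email,role
asha@example.com,Viewer
deepak@example.com,Contributor
```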

Script 8: Weekly Health Report — "Everything in One Email"

Time saved: 45 minutes/week → 0 minutes

This is the script that ties everything together. It runs every Friday at 5 PM and sends a single comprehensive health report to the analytics team and management.

# script8_weekly_health.py
from pbi_auth import PowerBIClient
from datetime import datetime

def generate_weekly_health_report(client, workspace_ids):
    """Generate comprehensive weekly Power BI health report"""
    report = {
        "generated_at": datetime.now().isoformat(),
        "period": "Last 7 days",
        "workspaces": []
    }
    
    total_datasets = 0
    total_reports = 0
    total_failures = 0
    total_refreshes = 0
    
    for ws_id in workspace_ids:
        ws_info = client.get(f"groups/{ws_id}")
        ws_name = ws_info.get("name")
        
        datasets = client.get(f"groups/{ws_id}/datasets")
        reports = client.get(f"groups/{ws_id}/reports")
        
        ws_summary = {
            "name": ws_name,
            "datasets": len(datasets.get("value", [])),
            "reports": len(reports.get("value", [])),
            "refresh_failures": 0,
            "refresh_successes": 0
        }
        
        for ds in datasets.get("value", []):
            try:
                history = client.get(
                    f"groups/{ws_id}/datasets/{ds['id']}/refreshes?$top=7"
                )
                for refresh in history.get("value", []):
                    total_refreshes += 1
                    if refresh.get("status") == "Completed":
                        ws_summary["refresh_successes"] += 1
                    else:
                        ws_summary["refresh_failures"] += 1
                        total_failures += 1
            except Exception:
                pass
        
        total_datasets += ws_summary["datasets"]
        total_reports += ws_summary["reports"]
        report["workspaces"].append(ws_summary)
    
    # Calculate health score
    if total_refreshes > 0:
        success_rate = ((total_refreshes - total_failures) / total_refreshes) * 100
    else:
        success_rate = 100
    
    report["summary"] = {
        "total_workspaces": len(workspace_ids),
        "total_datasets": total_datasets,
        "total_reports": total_reports,
        "total_refreshes_7d": total_refreshes,
        "total_failures_7d": total_failures,
        "refresh_success_rate": round(success_rate, 1),
        "health_grade": (
            "A" if success_rate >= 98 else
            "B" if success_rate >= 95 else
            "C" if success_rate >= 90 else
            "D" if success_rate >= 80 else "F"
        )
    }
    
    return report

Ravi's management now gets a single Friday email: "Power BI Health Grade: A (99.2% refresh success rate). 42 datasets across 6 workspaces. 2 minor failures auto-recovered. Zero outstanding issues."

No more Monday Checklist. No more "is the data up to date?" messages. Just confidence.

[Image: Monday before vs. after]

The Results: 15 Hours → 3 Hours Per Week

After deploying all 8 scripts, here's what changed for Ravi's 4-person analytics team:

  • Script 1 (Refresh Monitor): 35 min/day → 0
  • Script 2 (Smart Re-Refresh): 20 min per failure → 0
  • Script 3 (Data Quality Validator): 25 min/day → 0
  • Script 4 (Workspace Inventory): 2 hours/month → 5 min
  • Script 5 (Report Export): 20 min/day → 0
  • Script 6 (Activity Log Analyzer): 4 hours/month → 10 min
  • Script 7 (Bulk Permissions): 30 min/week → 2 min
  • Script 8 (Weekly Health Report): 45 min/week → 0

12.4 hours saved per week.

That's 645 hours per year across the 4-person team. At an average fully loaded cost of ₹800/hour, that's ₹5.16 lakh/year ($6,200) in recovered productivity — from scripts that took 2 weeks to build and cost ₹0 to run.

But the real value isn't the time saved. It's what the team did with the recovered time.

Deepak — freed from The Checklist — spent his Monday mornings building the predictive maintenance dashboard that Ravi's operations team had been requesting for 8 months. It launched in March. The operations manager told Ravi it prevented 2 unplanned equipment shutdowns in the first quarter, saving an estimated ₹28 lakh.

The junior analyst who used to spend 20 minutes screenshotting reports every day? She built an automated anomaly detection system for the supply chain data. It caught a vendor pricing error worth ₹4.2 lakh in week 3.

Automation doesn't replace analysts. It frees them to do analyst work.


How to Deploy This in Your Organization

Step 1: Set Up Authentication (30 minutes, one-time)

Register an app in Microsoft Entra ID. Get the client ID and secret. Add the Service Principal to your workspaces. Enable API access in tenant settings. Use the pbi_auth.py module from Script 0.

Step 2: Start with Script 1 (Refresh Monitor)

This gives you the fastest win. Deploy it, schedule it with cron or Windows Task Scheduler, and point alerts to your team's Slack/Teams channel. Your team will trust the system within a week.
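On Linux, the cron entry for Script 1 might look like this (the paths are placeholders for wherever you deploy the script; adjust the interpreter path to your environment):

```
# Run the refresh monitor every day at 6:30 AM
30 6 * * *  /usr/bin/python3 /opt/pbi-scripts/script1_refresh_monitor.py >> /var/log/pbi_monitor.log 2>&1
```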

Step 3: Add Scripts Gradually

Don't deploy all 8 at once. Add one per week. Let the team adjust. Tune thresholds. Each script builds confidence for the next one.

Step 4: Run on a Schedule

Schedule each script to match its cadence: the refresh monitor and data-quality checks daily before business hours (6:30 AM in Ravi's setup), the permissions audit weekly, the inventory and activity analysis monthly, and the health report every Friday at 5 PM.

Step 5: Hosting Options

  • Simplest: Windows Task Scheduler on a dedicated machine
  • Better: Azure VM with cron jobs (~$30/month)
  • Best: Azure Functions (serverless, cost per execution)
  • Fabric-native: Fabric Notebooks (if you're in the Fabric ecosystem)

The 5 Lessons I Learned Building These Scripts

Lesson 1: Start with the most annoying task, not the most important

I asked each team member: "What's the ONE task you dread every week?" Every single one said refresh monitoring. That's where I started. The immediate relief built buy-in for everything else.

Lesson 2: Alert fatigue is real — tune before you scale

Script 1 initially alerted on every single non-"Completed" status, including "Unknown" and "InProgress." The team got 47 false alerts in the first week. I added status filtering and timing windows. False alerts dropped to 2–3 per week.
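The filtering I ended up with boils down to an allow-list of statuses worth alerting on. A simplified sketch (the status strings are the values the refresh history endpoint returns; treat the exact set as an assumption to verify against your tenant):

```python
# Statuses worth an alert; "Unknown" means in-progress in the refresh
# history API, so alerting on it produces false positives
ALERTABLE = {"Failed", "Disabled"}

def should_alert(status):
    """Alert only on terminal failure states; in-progress refreshes get rechecked later."""
    return status in ALERTABLE
```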

Lesson 3: Log everything

Every script writes to a log file. When something goes wrong (and it will), the log tells you exactly what happened, when, and why. Without logs, you're debugging blind.

Lesson 4: The API has rate limits — respect them

The Power BI REST API has rate limits. Making 200 calls in rapid succession will get you throttled. I added time.sleep() delays between batch operations and implemented exponential backoff for retries.
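The backoff wrapper is only a few lines. A generic sketch, not tied to any particular endpoint:

```python
import time

def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Run `call()`, retrying with exponential backoff (1s, 2s, 4s, ...) on failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries: surface the real error
            time.sleep(base_delay * (2 ** attempt))
```

Wrap any batch API call in it, e.g. `with_backoff(lambda: client.get(endpoint))`.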

Lesson 5: Security is non-negotiable

Never store credentials in your scripts. Use environment variables, Azure Key Vault, or a .env file that's in your .gitignore. The Service Principal should have the minimum permissions necessary. Audit access regularly.


What's Next: The Scripts I'm Building Now

These 8 scripts are the foundation. Here's what's coming:

Script 9: Semantic Link Integration — Using Python notebooks in Fabric to read Power BI semantic models directly, preserving all DAX logic. This is the 2026 game-changer: your data science team and your BI team share the same source of truth.

Script 10: Automated Data Lineage — Mapping which data sources feed which datasets, which datasets power which reports, and which reports are used by which stakeholders. End-to-end visibility.

Script 11: Cost Optimizer — Analyzing capacity consumption patterns and recommending right-sizing. Identifying overprovisioned workspaces and underused Premium features.

The goal is simple: let Python handle the plumbing so humans can handle the thinking.


Which script are you going to try first? And what manual Power BI task drives you crazy every week? Drop a comment below — I might build the script for it.

All scripts in this post are production-tested and copy-paste ready. If you need help adapting them to your environment, reach out.