5 Ways to Set Up Alert24: Real Scenarios from Solo Founders to Multi-Product Teams

Same Tool, Different Setups

Alert24 has three layers: checks (the things that monitor), services (the things your team owns), and applications (the products your customers use). How you wire these together depends on your team size, architecture, and who needs to know what when something breaks.

This post walks through five real-world setups, from a solo founder monitoring a single product to a multi-product company with separate teams and stakeholders. Each one shows exactly which pieces to configure and why.

How the Layers Work

Before diving into scenarios, here is how the three layers relate to each other:

Checks are the detection mechanism. An HTTP check pings a URL every 30 to 60 seconds and verifies the response. A DNS check confirms your domain resolves correctly. An SSL check watches your certificate expiry. Checks do one thing: detect that something is wrong.

Services represent the things your engineering team owns and operates. "Payment API," "Authentication Service," "Marketing Website." Each service can have multiple checks feeding into it. When a check fails, the linked service's status changes automatically. Services are where you define who gets paged -- the team that owns the service and can actually fix it.

Applications represent the products your customers experience. "Checkout App," "Customer Dashboard," "Mobile API." An application groups multiple services together. When one of its services degrades, the application's status reflects that. Applications are where you notify stakeholders -- product managers, support teams, executives -- who need awareness but are not the ones fixing the problem.

The key insight: the same event flows through all three layers, but different people care at different levels.
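
The relationship between the three layers can be sketched in a few lines of code. This is an illustrative model, not Alert24's actual data model -- the class names and status values are assumptions -- but it captures how a single failing check propagates upward:

```python
# Illustrative sketch of the three-layer model (hypothetical class names;
# not Alert24's internal data model). A failing check marks its service
# down; an application derives its status from its services.
from dataclasses import dataclass, field

@dataclass
class Check:
    name: str
    passing: bool = True

@dataclass
class Service:
    name: str
    checks: list = field(default_factory=list)

    @property
    def status(self) -> str:
        return "up" if all(c.passing for c in self.checks) else "down"

@dataclass
class Application:
    name: str
    services: list = field(default_factory=list)

    @property
    def status(self) -> str:
        down = [s for s in self.services if s.status == "down"]
        if not down:
            return "operational"
        return "degraded" if len(down) < len(self.services) else "down"

api_health = Check("API health")
api = Service("API", checks=[api_health])
web = Service("Web Application", checks=[Check("Homepage HTTP")])
product = Application("YourProduct", services=[api, web])

api_health.passing = False   # the check detects a failure...
print(api.status)            # -> down (pages the owning team)
print(product.status)        # -> degraded (notifies stakeholders)
```

One event, two statuses: the engineer who owns "API" sees a service that is down, while the product manager watching "YourProduct" sees a product that is degraded.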

Scenario 1: Solo Founder with One Product

Team: 1 person
Architecture: A web app and an API, both deployed on a single cloud provider
Goal: Know when things break, give customers a page to check

This is the simplest setup. You do not need the full layered model yet. You need checks, a service, and a status page.

Checks to create:

  • HTTP check on your web app (e.g., https://yourapp.com) -- verify status 200 and a keyword in the response body
  • HTTP check on your API health endpoint (e.g., https://api.yourapp.com/health)
  • SSL certificate check on your domain
  • Third-party dependency checks for your critical providers (e.g., Stripe's API status, your cloud provider)

Services to create:

  • One service: "YourApp" -- link all your checks to it

Alerting:

  • Set the service's escalation policy to notify you directly via SMS and email
  • No on-call schedule needed -- you are always on call

Status page:

  • Create a public status page showing your one service
  • Let customers subscribe for email updates
  • When a check fails, the service status updates automatically, and the status page reflects it

What you skip: Applications, on-call schedules, teams, app-level alert rules. You can add all of these later as you grow.

Monthly cost: Free tier (10 monitors, 1 status page, 1 team member)

Scenario 2: Small SaaS Team with Shared On-Call

Team: 5 engineers, 1 product manager
Architecture: A web app, a REST API, a background worker, all backed by a database and a few third-party integrations
Goal: Rotate on-call fairly, keep the product manager informed without paging them

This is where the layered model starts to earn its keep. You have enough people to rotate on-call, and you have someone (the product manager) who needs to know about outages but should not be woken up at 3 AM.

Checks to create:

  • HTTP checks on your web app, API, and any critical API endpoints (login, checkout, dashboard)
  • A check against your background worker's health endpoint or job queue status URL
  • SSL checks on all your domains
  • Dependency checks for Stripe, SendGrid, your cloud provider, and any other critical third parties

Services to create:

  • "Web Application" -- linked to your frontend checks
  • "API" -- linked to your API health and endpoint checks
  • "Background Workers" -- linked to your worker health check
  • "Payments" -- linked to your Stripe dependency check and any payment-specific endpoint checks

On-call schedule:

  • Create a weekly rotation across your 5 engineers
  • Set each service's escalation policy to: (1) notify the on-call engineer, (2) after 10 minutes with no acknowledgment, notify the engineering team
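
The two-step policy above is easy to reason about as data. A sketch (the step shape and field names are illustrative, not Alert24's configuration format):

```python
# Illustrative model of the escalation policy described above: who gets
# notified N minutes after an alert fires, if nobody has acknowledged it.
ESCALATION_POLICY = [
    {"after_minutes": 0,  "notify": "on-call engineer"},
    {"after_minutes": 10, "notify": "engineering team"},  # only if unacknowledged
]

def targets_to_notify(minutes_since_alert: int, acknowledged: bool) -> list[str]:
    if acknowledged:
        return []  # acknowledgment stops further escalation
    return [step["notify"] for step in ESCALATION_POLICY
            if step["after_minutes"] <= minutes_since_alert]

print(targets_to_notify(0, acknowledged=False))   # ['on-call engineer']
print(targets_to_notify(12, acknowledged=False))  # ['on-call engineer', 'engineering team']
print(targets_to_notify(12, acknowledged=True))   # []
```

The point of the second step is a safety net: if the on-call engineer sleeps through the page, the whole team hears about it ten minutes later.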

Application to create:

  • "YourProduct" -- groups all four services
  • Set up an app-level alert rule: when the application status degrades, send a low-priority email to the product manager
  • The product manager gets awareness without getting paged

Status pages:

  • Public status page showing all four services for customers
  • Private status page for internal stakeholders (product manager, customer support) with more detail

The flow in practice: Your API health check fails three times in a row. The "API" service status changes to "down." The on-call engineer gets an SMS. The "YourProduct" application detects that one of its services is down and fires a low-priority email to the product manager. The public status page updates automatically. Customers who subscribed get notified. All of this happens without anyone manually updating anything.

Monthly cost: 5 units at $9/unit = $45/month

Scenario 3: E-Commerce with Separate Service Teams

Team: 12 engineers split across 3 teams (Platform, Payments, Storefront), plus a VP of Engineering
Architecture: Microservices -- storefront, payment processing, inventory, shipping, email notifications
Goal: Page the right team for the right problem, give leadership a high-level view

This is where service-level vs. app-level alerting becomes critical. When the email notification service goes down, the Storefront team does not need to be woken up. But the VP of Engineering wants to know the checkout experience is degraded.

Services to create (each with their own checks):

Service | Owning Team | Checks
Storefront | Storefront | Web app HTTP, product page keyword check, CDN check
Payment Processing | Payments | Payment API health, Stripe dependency, PayPal dependency
Inventory API | Platform | Inventory endpoint check, database health
Shipping Integration | Platform | Shipping API check, carrier API dependencies
Email Notifications | Platform | Email endpoint check, SendGrid dependency

On-call schedules:

  • "Platform On-Call" -- rotates across the 4 Platform engineers
  • "Payments On-Call" -- rotates across the 3 Payments engineers
  • "Storefront On-Call" -- rotates across the 5 Storefront engineers

Escalation policies per service:

  • Each service's escalation policy targets its owning team's on-call schedule
  • Second escalation step: notify the full owning team
  • Final escalation: notify the VP of Engineering

Applications to create:

Application | Services Included | Stakeholders
Checkout Experience | Storefront, Payment Processing, Inventory API | VP of Eng, Head of Product
Order Fulfillment | Inventory API, Shipping Integration, Email Notifications | VP of Eng, Operations Manager

App-level alert rules:

  • "Checkout Experience" degraded → medium-priority email to VP of Engineering and Head of Product
  • "Order Fulfillment" degraded → low-priority email to Operations Manager

The flow in practice: The SendGrid dependency check detects that SendGrid is having issues. The "Email Notifications" service status changes to degraded. The Platform on-call engineer gets paged -- they own this service. Meanwhile, the "Order Fulfillment" application sees that one of its three services is degraded and fires a low-priority notification to the Operations Manager. The "Checkout Experience" application is unaffected because Email Notifications is not one of its services, so its stakeholders hear nothing. The VP of Engineering does not get paged because this is a degradation, not a full outage of a customer-facing flow.

What this prevents: The old way would either page everyone (alert fatigue) or page no one except the person who happens to notice (missed incidents). The layered model ensures the Platform team gets the page (they fix it), the Operations Manager gets awareness (orders might be delayed), and the Checkout stakeholders are not bothered (their flow still works).
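
The routing that produces this behavior is just two lookup tables: service to owning on-call schedule, and application to member services and stakeholders. A sketch mirroring the scenario above (the dict shapes are illustrative; stakeholder lists follow the app-level alert rules):

```python
# Illustrative routing for Scenario 3: given a degraded service, who gets
# paged (the owning team's on-call) and which app stakeholders get a heads-up.
SERVICE_OWNER = {
    "Storefront": "Storefront On-Call",
    "Payment Processing": "Payments On-Call",
    "Inventory API": "Platform On-Call",
    "Shipping Integration": "Platform On-Call",
    "Email Notifications": "Platform On-Call",
}

APPLICATIONS = {
    "Checkout Experience": {
        "services": {"Storefront", "Payment Processing", "Inventory API"},
        "stakeholders": ["VP of Engineering", "Head of Product"],
    },
    "Order Fulfillment": {
        "services": {"Inventory API", "Shipping Integration", "Email Notifications"},
        "stakeholders": ["Operations Manager"],
    },
}

def route(degraded_service: str) -> tuple[str, list[str]]:
    paged = SERVICE_OWNER[degraded_service]
    notified = [person for app in APPLICATIONS.values()
                if degraded_service in app["services"]
                for person in app["stakeholders"]]
    return paged, notified

print(route("Email Notifications"))  # ('Platform On-Call', ['Operations Manager'])
```

Running it for the SendGrid incident shows the asymmetry: Platform gets paged, the Operations Manager gets notified, and the Checkout stakeholders' names never appear in the output.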

Monthly cost: 12 units at $8/unit = $96/month

Scenario 4: Multi-Product Company with Shared Infrastructure

Team: 25 engineers across 5 teams, managing 3 separate products that share common infrastructure
Architecture: Shared authentication, shared database cluster, shared CDN -- each product has its own services on top
Goal: Isolate blast radius in alerting, give each product's stakeholders their own view

When you run multiple products on shared infrastructure, an outage in the shared layer affects everything -- but each product team and its stakeholders need to be notified differently.

Shared services (owned by Platform team):

  • "Authentication" -- SSO, OAuth, session management
  • "Database Cluster" -- shared PostgreSQL cluster
  • "CDN & Edge" -- Cloudflare, static assets
  • "Cloud Infrastructure" -- AWS dependency checks for the regions you use

Product-specific services:

Product A ("Analytics Dashboard"):

  • "Analytics API" -- owned by Analytics team
  • "Data Pipeline" -- owned by Analytics team
  • "Report Generator" -- owned by Analytics team

Product B ("Customer Portal"):

  • "Portal API" -- owned by Portal team
  • "Notification Engine" -- owned by Portal team

Product C ("Mobile API"):

  • "Mobile Gateway" -- owned by Mobile team
  • "Push Notification Service" -- owned by Mobile team

Applications:

Application | Shared Services | Product Services | Product Owner
Analytics Dashboard | Auth, Database, CDN | Analytics API, Data Pipeline, Report Generator | VP of Analytics
Customer Portal | Auth, Database, CDN | Portal API, Notification Engine | Head of Customer Success
Mobile App | Auth, Database, CDN | Mobile Gateway, Push Notifications | Mobile Product Manager

Why this works: When the Database Cluster service degrades, all three applications show as degraded. Each product owner gets their own notification: "Analytics Dashboard is degraded." They do not need to understand the root cause -- they just know their product is affected and can communicate accordingly to their users. Meanwhile, the Platform on-call engineer is already paged at the service level and working the fix.
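
The blast radius is a simple membership question: which applications contain the degraded service? A sketch of that fan-out, with the service sets mirroring the table above:

```python
# Illustrative blast-radius computation for Scenario 4: a shared service
# appears in every application's service set, a product service in just one.
SHARED = {"Authentication", "Database Cluster", "CDN & Edge"}

APP_SERVICES = {
    "Analytics Dashboard": SHARED | {"Analytics API", "Data Pipeline", "Report Generator"},
    "Customer Portal": SHARED | {"Portal API", "Notification Engine"},
    "Mobile App": SHARED | {"Mobile Gateway", "Push Notification Service"},
}

def affected_apps(degraded_service: str) -> list[str]:
    """Every application whose service set contains the degraded service."""
    return [app for app, services in APP_SERVICES.items()
            if degraded_service in services]

print(affected_apps("Database Cluster"))  # all three products
print(affected_apps("Portal API"))        # ['Customer Portal']
```

A shared-layer outage fans out to every product owner; a product-specific outage stays contained to one.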

Status pages:

  • One public status page per product, each on its own custom domain
  • One internal status page showing all shared infrastructure and all products (for the engineering org)
  • Each product's status page only shows that product's services -- customers of the Analytics Dashboard do not see Mobile App status

Monthly cost: 25 units at $8/unit = $200/month

Scenario 5: Agency or MSP Managing Client Infrastructure

Team: 8 engineers managing infrastructure for 10-15 client projects
Architecture: Each client has their own web app, API, and dependencies -- but your team manages all of them
Goal: Monitor everything from one account, alert the right engineer per client, give each client their own status page

This is a horizontal scaling scenario rather than a vertical one. You do not have deep service hierarchies -- you have many independent setups that need to be managed centrally.

Per client, create:

  • A service for each component (e.g., "ClientName - Web," "ClientName - API," "ClientName - Database")
  • HTTP checks, SSL checks, and dependency checks linked to each service
  • An application grouping that client's services (e.g., "ClientName Platform")

On-call schedules:

  • You might have 2-3 on-call schedules based on client groupings or engineer expertise
  • "Client Group A On-Call" handles clients in one timezone or tech stack
  • "Client Group B On-Call" handles the rest

Status pages:

  • One branded status page per client, on their custom domain, showing only their services
  • Each client's customers see a professional, branded status page without knowing you are behind it

App-level alerts:

  • When a client's application degrades, send a notification to that client's primary contact (an email to their CTO or ops lead)
  • This gives clients proactive awareness without them having to check the status page manually

The value here: You manage 40-60 services across all clients from a single Alert24 account. Each client gets their own isolated view via their status page. Your team gets paged based on which client is affected. No client sees another client's infrastructure. When you onboard a new client, you duplicate the pattern: create services, link checks, create an application, spin up a status page.
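
Because onboarding is the same pattern every time, it is worth writing down once. A sketch of the per-client recipe (the function and dict shape are illustrative; the naming follows the "ClientName - Component" convention above):

```python
# Illustrative onboarding recipe for Scenario 5: the resources to create
# when a new client signs on, generated from the client's name.
def client_setup(client: str, components=("Web", "API", "Database")) -> dict:
    services = [f"{client} - {c}" for c in components]
    return {
        "services": services,
        "checks": [f"HTTP/SSL/dependency checks for {s}" for s in services],
        "application": f"{client} Platform",
        "status_page": f"branded status page on {client}'s custom domain",
    }

setup = client_setup("Acme")
print(setup["application"])   # Acme Platform
print(setup["services"][0])   # Acme - Web
```

Encoding the pattern this way also makes it auditable: for any client, you can enumerate exactly which services, checks, and pages should exist.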

Monthly cost: 8 team members + 12 status pages = 20 units at $8/unit = $160/month

Choosing Your Setup

You do not need to start with the most complex scenario. Most teams should start with Scenario 1 or 2 and grow into the others as their team and architecture expand.

Signal | Move to
You have paying customers and more than 2 engineers | Scenario 2
Different teams own different services | Scenario 3
Non-engineers need to know about outages (PMs, execs, support) | Scenario 3 or 4
You run multiple products on shared infrastructure | Scenario 4
You manage infrastructure for clients | Scenario 5

The layered model -- checks feed services, services compose into applications -- is not something you need to think about on day one. But when your first "the wrong person got paged" or "the stakeholder did not find out until the standup" incident happens, you will be glad the structure is there to solve it.

Start simple. Add layers when the pain arrives. The tool grows with you.