
AI & Automated Decisions

Purpose of this page

This page explains where AIM uses automation and AI, what that means for you, and how to request human review or contest a decision. It complements Data & Sources, Purposes & Legal Bases, Retention, Recipients & Transfers, and Consent & Cookies.

Automation where it helps; humans where it matters

Plain-language definitions

  • Automated decision-making — results produced by systems with little or no human involvement (e.g., model routing, abuse/fraud signals).

  • Profiling — automated analysis of behavior to evaluate personal aspects of a user.

  • Solely automated decisions with legal or similarly significant effects — decisions that significantly affect you without meaningful human involvement (e.g., permanent account denial).

Where AIM uses automation

1) Model routing (core service)

AIM may route your request to different AI model providers to optimize safety, quality, latency, and cost. Signals include prompt characteristics (e.g., length, modality), system metrics (availability/latency), and workspace configuration where applicable. Sensitive attributes are not used.
Training on your data is on by default; you can opt out, and we propagate your choice to supported vendors.
In short: best model for the job, not your identity.
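The routing logic described above can be sketched in a few lines. This is purely an illustrative example, not AIM's actual implementation: the `Candidate` fields and `choose_model` function are hypothetical, and the sketch only shows the principle that routing depends on prompt characteristics and system metrics, never on user identity or sensitive attributes.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    max_tokens: int        # capacity relevant to prompt characteristics (length)
    p95_latency_ms: float  # live system metric
    available: bool        # availability check
    cost_per_1k: float     # relative cost signal

def choose_model(prompt_len: int, candidates: list[Candidate]) -> str:
    # Keep only models that are up and can handle the prompt length.
    viable = [c for c in candidates if c.available and c.max_tokens >= prompt_len]
    if not viable:
        raise RuntimeError("no available model can handle this prompt")
    # Prefer lower latency, then lower cost; no user attributes are consulted.
    best = min(viable, key=lambda c: (c.p95_latency_ms, c.cost_per_1k))
    return best.name
```

Note that every input to the decision is either a property of the request itself or an operational metric of the system.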

2) Safety, abuse, and fraud safeguards

To protect users and the platform, AIM applies automated rate-limits, spam/abuse detection, and security anomaly checks. These may trigger temporary safeguards (for example, a short-lived block on suspicious bursts). You can request human review at any time.
In short: temporary by default, review on request.

3) Integrations you trigger

When you invoke actions with connected services (e.g., Google integrations), automation executes exactly what you ask, nothing more, using the minimum data necessary. You can revoke access at any time.

What AIM does not do

  • No solely automated decisions with legal or similarly significant effects without meaningful human involvement.

  • No use of sensitive attributes (e.g., health, biometrics) in routing, safeguards, or risk evaluation.

  • No personalization of outputs or UI based on behavioral profiling. If this ever changes, we will update this page and provide appropriate controls.

Your controls & rights (AI context)

  • Human review — If an automated safeguard affects you, email support@aim-ai.tech to request human intervention.

  • Contest a decision — Email support@aim-ai.tech with the subject “Contest Automated Decision” and include context (timestamps, request ID if available).

  • High-level explanation — You can ask for a concise explanation of the main factors behind a routing or safeguard decision.

  • Training opt-out — Toggle in settings; AIM honors and (where supported) propagates your choice to model vendors.

  • Consent choices — Choose tracking or the tracking-free subscription at entry; withdraw consent any time via the Privacy link.

Signals we rely on (clear boundaries)

  • You provide — prompts, files, and outputs needed to generate results.

  • System generates — minimized telemetry (latency/error codes), safety indicators, approximate IP for performance.

  • Configuration — certain workspace configurations may influence available models or connectors.

  • We avoid — using sensitive categories for automated decisions.

Accountability & safeguards

  • Human-in-the-loop for impactful or escalated cases.

  • Evaluation & monitoring of automated components (offline tests, bias checks; live guardrails for safety).

  • Security — encryption in transit and at rest; least-privilege access; incident response.

  • Documentation — internal logs of safeguards, overrides, and escalations, with retention aligned to our Retention page.

We align with GDPR Article 22 (automated decision-making and profiling) and applicable US state privacy laws on access, deletion, correction, and appeal rights.

Conclusion

AIM uses automation and AI where it helps, primarily model routing driven by prompt characteristics, system metrics, and workspace configuration, and keeps humans where it matters. Safety, abuse, and fraud safeguards are temporary by default, and you can request human review or contest a decision at support@aim-ai.tech. AIM makes no solely automated decisions with legal or similarly significant effects, uses no sensitive attributes, and does not personalize outputs or UI based on behavioral profiling. Your training opt-out is honored and, where supported, propagated to vendors; your consent choices (tracking or the tracking-free subscription) can be changed at any time via the Privacy link. These controls, together with encryption in transit and at rest, least-privilege access, and documented retention (see Retention), keep the system accountable.

Human review / contest requests: support@aim-ai.tech
Effective date: {YYYY-MM-DD} • Last updated: {YYYY-MM-DD}


