Responsible AI framework
Our commitment to safe, ethical and trustworthy use of Artificial Intelligence
Last updated: 25 March 2026.
Introduction
Sitback uses Artificial Intelligence to enhance the quality, efficiency and impact of the digital experiences we deliver for clients.
Our Responsible AI Framework ensures that all AI use—internal or client-facing—is:
- safe
- ethical
- transparent
- human-led
- secure
- aligned to privacy obligations
This public document provides an overview of Sitback’s AI approach and governance model.
More detailed policy documents exist internally and can be made available on request for procurement, legal or compliance review.
AI governance model
Sitback’s AI operations are governed through four aligned components:
Responsible AI use policy
Sets standards for everyday use of AI tools by Sitback staff, including productivity tools like ChatGPT Teams, Copilot and Atlassian Intelligence.
AI development policy
Defines how AI may be used in software engineering, including:
- secure handling of code
- safe log analysis
- redaction rules
- MCP tooling safeguards
- human oversight for all code-level outputs
Safe prompting standards
A universal set of rules that apply to all AI interactions:
- minimal context
- redaction of sensitive data
- no PII or credentials
- ethical awareness
- human validation
- transparency and accountability
Customer-facing AI products & services policy
Covers AI-driven features delivered to clients, including:
- GEO/SEO recommendations
- AI content tools
- automated audits
- optimisation dashboards and insights
- client portal assistants
Ensures Class 4 tools are ethical, explainable, safe and aligned to client governance.
How Sitback uses AI
AI is applied across three primary areas:
Internal efficiency & quality
AI enhances research, analysis, documentation, productivity and communication.
Delivery excellence
AI strengthens:
- technical audits
- content quality analysis
- SEO insights
- accessibility checks
- optimisation recommendations
Client-facing AI capabilities
AI powers select Sitback products including:
- content improvement suggestions
- SEO/GEO recommendations
- generative summaries and insights
- structured optimisation recommendations
- portal-based assistants
All AI-generated outputs are advisory and subject to human review.
Our responsible AI principles
Sitback is committed to six core principles:
- Human-in-the-Loop: Humans verify all impactful AI outputs.
- Ethical & inclusive: AI must support fairness, accessibility and inclusivity.
- Privacy & data protection: No client data is used for model training. Prohibited: PII, credentials, un-redacted production data.
- Transparency: We explain where and how AI is used, and what its limitations are.
- Security by design: We restrict AI use to approved tools with strong safeguards.
- Continuous improvement: Our practices evolve with legislation, risk standards and emerging best practice.
Data handling & safety controls
Sitback applies strict safeguards:
- AI may process:
  - public content
  - non-sensitive CMS content
  - anonymised or synthetic examples
  - client-approved materials for analysis
- AI may not process:
  - personal data (PII)
  - client financial data
  - credentials or secrets
  - un-redacted logs or production payloads
  - restricted or confidential third-party information
- AI outputs are governed by guardrails that ensure:
  - relevance
  - safety
  - contextual accuracy
  - alignment with client frameworks and RAG scoring
  - compliance with accessibility and ethical standards
Tool classification
Sitback categorises AI tools into four safety classes, with Class 2 split into two sub-categories:
Class 1 — AI for Development
Heavily safeguarded tools used by engineers (e.g., Cursor, Claude Code).
Class 2a — Centrally managed AI tools
Productivity and analysis tools under Sitback governance (e.g., ChatGPT Teams, Copilot).
Class 2b — Individually licensed AI tools
Restricted-use tools for generic exploration only (e.g., Perplexity Pro).
Class 3 — Experimental tools
Evaluated in sandbox environments only.
Class 4 — Customer-facing AI
AI-enabled features delivered to clients with specialised guardrails.
This classification system ensures consistency, clarity and control.
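To illustrate how such a classification can be applied in practice, the sketch below models a simple approval lookup. The tool names follow the examples above, but the registry and approval logic are hypothetical, not Sitback's actual implementation:

```python
# Hypothetical sketch of a tool-classification lookup. Class labels follow
# the framework above; the approval rules are illustrative only.
TOOL_CLASSES = {
    "Cursor": "1",          # AI for Development
    "Claude Code": "1",
    "ChatGPT Teams": "2a",  # centrally managed
    "Copilot": "2a",
    "Perplexity Pro": "2b", # individually licensed, generic exploration only
}

SANDBOX_ONLY = {"3"}        # experimental tools stay in sandbox environments

def is_approved(tool: str, environment: str) -> bool:
    """Return True if the tool may be used in the given environment."""
    tool_class = TOOL_CLASSES.get(tool)
    if tool_class is None:
        return False        # unclassified tools are never approved
    if tool_class in SANDBOX_ONLY:
        return environment == "sandbox"
    return True

print(is_approved("Cursor", "production"))       # classified tool → approved
print(is_approved("UnknownTool", "production"))  # unclassified → blocked
```

The key property this models is "deny by default": a tool that has not been classified is not approved for any use.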
Shared responsibility model
Sitback is responsible for:
- secure and ethical design
- safety guardrails and explainability
- validation of outputs in managed services
- transparent communication
- alignment with client frameworks where provided
Clients are responsible for:
- providing relevant governance (e.g., RAG, risk rules, constraints)
- validating outputs within their business context
- using AI features safely and appropriately
- internal approvals before applying AI recommendations
Shared responsibilities include:
- interpretation of AI insights
- monitoring for unexpected results
- escalating issues for review
Security & compliance alignment
Our Responsible AI Framework aligns with:
- ISO 27001
- Australian Privacy Principles
- NSW Government AI Assessment Framework (AIAF) and Victorian Government AI assurance expectations
- Vendor risk assessment and due diligence
- Accessibility and ethical design standards
We regularly review and update our approach as the regulatory landscape evolves.
Availability of detailed policies
Sitback maintains internal detailed policies covering:
- AI development
- Responsible AI use
- Safe prompting
- Customer-facing AI
- AI asset register
- Technical controls and safeguards
These can be provided under NDA to:
- procurement teams
- risk and compliance groups
- government agencies
- enterprise governance bodies
Why responsible AI frameworks are important
Sitback is one of the few Australian digital agencies with a mature, structured and transparent AI governance framework, ensuring:
- safer AI
- ethically aligned outputs
- privacy-by-design
- human oversight
- secure processes
- trust and accountability
Clients can adopt AI confidently, knowing Sitback takes responsible AI seriously.
If you would like help developing your own responsible AI framework, we’d be happy to assist.
Contact us
If you have any questions about this Responsible AI Framework, you can contact us:
- By email: [email protected]
- By visiting this page on our website: https://www.sitback.com.au/contact/
- By telephone: +61 (0)2 9247 2223