Business Analysts (BAs) are currently caught between two powerful, opposing forces in the modern workplace. On one side, business sponsors and senior leaders demand “faster, smarter delivery” fuelled by generative AI. On the other, Information Security (InfoSec) and risk teams are locking down or removing AI features from essential collaborative platforms.
This tension results in frustration, “Shadow AI” usage, and a widening gap between expectation and reality. This article explores why this dilemma exists and, crucially, how BAs and delivery teams can constructively partner with InfoSec to safely unlock AI’s potential.
The AI Transformation of the Business Analyst Role
In today’s environment, the BA role has evolved. We are expected to be AI orchestrators, leveraging generative AI (embedded in tools such as Miro, Confluence, ClickUp, GitHub, and Jira) to automate and accelerate critical tasks:
- Summarizing workshops and large meetings.
- Clustering and refining requirements.
- Drafting user stories and acceptance criteria.
- Synthesizing large volumes of qualitative feedback.
For delivery teams, these features are no longer optional; they are quickly becoming table stakes for effective and competitive delivery. Yet the common organisational response is often the Great Feature Freeze: disabling AI assistants, blocking external tools, and stripping the efficiency gains out of embedded assistants.
Why Information Security Teams Push Back
InfoSec’s resistance is not arbitrary; it is grounded in real, high-stakes risks. Their common fears include:
- Data Leakage: Sensitive business or customer data being used to train external AI models.
- Regulatory Risk: Breaches of GDPR, PCI-DSS, or internal data handling policies.
- Lack of Transparency: Unclear where data is stored, processed, or retained by third-party AI providers.
- Shadow AI Usage: Staff using unapproved personal AI accounts, bypassing enterprise controls and monitoring.
From their viewpoint, blocking AI feels like the safest, most immediate mitigation. However, this approach carries a hidden, often greater, risk.
The Paradox: Blocking AI Doesn’t Eliminate Risk; In Fact, It Often Increases It
When approved tools fail to meet delivery pressures, teams inevitably seek workarounds. This usually means:
- Copying sensitive content into personal, unmanaged AI accounts.
- Relying on slower, manual workarounds that introduce errors.
- Creating “shadow AI” usage that is invisible to security teams.
Ironically, a blanket ban often increases organizational risk by pushing AI interactions underground, where they cannot be governed or audited.
Reframing the Conversation: From “Can We Use AI?” to “How Do We Use AI Safely?”
Business Analysts are uniquely positioned to be the key facilitators in this debate. We can translate technical risk into business impact and bridge the gap between governance and delivery.
The focus must shift from demanding unrestricted access to building a framework for controlled enablement.
Practical Steps for Responsible AI Enablement
Here are five practical ways BAs can engage InfoSec and leadership to move toward responsible AI adoption:
1. Start With Low-Risk, High-Value Use Cases
Propose scenarios that offer immediate productivity gains without touching the most sensitive data.
- Summarizing non-sensitive workshop transcripts.
- Drafting templates or generic user stories.
- Refining requirements that have already gone through approval.
This demonstrates maturity and allows InfoSec to start small.
2. Define Clear Data Guardrails
Work with data governance and InfoSec teams to define explicit rules:
- What types of data must never be entered into AI prompts?
- What is considered low sensitivity and safe to use?
- What redaction and anonymisation steps are required before content can be used?
Establishing these boundaries moves the discussion from blanket fear to precise, manageable governance.
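To make guardrails concrete rather than aspirational, some teams put a simple automated check in front of AI usage. The sketch below is illustrative only: the pattern list, category names, and placeholder format are assumptions for demonstration, and any real rule set must come from your data governance and InfoSec teams.

```python
import re

# Illustrative sketch of a pre-prompt guardrail: flag and redact obvious
# high-sensitivity patterns before text is pasted into an AI tool.
# The patterns below are examples only, not a complete or approved policy.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "16-digit card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def redact_for_prompt(text: str) -> tuple[str, list[str]]:
    """Replace blocked patterns with placeholders and report what was found."""
    findings = []
    for label, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

redacted, findings = redact_for_prompt(
    "Follow up with jane.doe@example.com about the card ending 4111 1111 1111 1111."
)
print(findings)  # categories detected in the draft prompt
print(redacted)  # safe-to-paste version with placeholders
```

Even a rough check like this shifts the conversation with InfoSec from “trust us” to “here is the control we run before anything leaves the team.”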
3. Advocate for Enterprise-Grade AI Solutions
There is a significant security difference between a personal, public AI account and an enterprise-managed solution (like an organization-managed AI tenant or Enterprise Copilot offerings).
Advocate for tools that provide:
- Corporate identity and access controls.
- Clear data retention guarantees.
- Centralized audit logs and compliance reporting.
These features help InfoSec maintain visibility and control.
4. Propose Managed Access and Policies
Instead of an all-or-nothing approach, propose specific guardrails:
- Role-based access (e.g., AI enabled only for BAs, Product, and Delivery roles).
- Clear usage policies and mandatory training for staff.
- Periodic reviews of AI interaction patterns.
InfoSec teams are far more receptive to managed risk than unknown risk. Strong governance enables innovation rather than killing it.
5. Quantify the Delivery Impact
BAs must frame the lack of AI enablement in clear business terms. Quantify the cost of the slowdown:
- Requirements Cycle Time: Increased time for documentation and synthesis.
- Rework and Errors: Higher error rates from manual workarounds.
- Stakeholder Alignment: Slower synthesis leads to delayed alignment.
- Competitive Advantage: Loss of speed compared to competitors with enabled tools.
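A back-of-envelope model is often enough to start this conversation. The figures in the sketch below (team size, hours saved, hourly rate, working weeks) are illustrative assumptions to be replaced with your own team’s data; nothing here comes from a real study.

```python
# Back-of-envelope estimate of the annual cost of withheld AI enablement.
# Every input below is an illustrative assumption; substitute your own data.

ANALYSTS = 6                  # BAs on the delivery team (assumed)
HOURS_SAVED_PER_WEEK = 4.0    # est. hours per BA per week on summarising and drafting (assumed)
LOADED_HOURLY_RATE = 70.0     # fully loaded cost per BA hour, in your currency (assumed)
WORKING_WEEKS = 46            # working weeks per year (assumed)

annual_hours_lost = ANALYSTS * HOURS_SAVED_PER_WEEK * WORKING_WEEKS
annual_cost_of_inaction = annual_hours_lost * LOADED_HOURLY_RATE

print(f"Estimated hours lost per year: {annual_hours_lost:,.0f}")        # 1,104
print(f"Estimated annual cost of inaction: {annual_cost_of_inaction:,.0f}")  # 77,280
```

Even with conservative inputs, a figure like this turns “AI would be nice” into a measurable line item that leadership and InfoSec can weigh against the cost of governance.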
When leadership understands the measurable cost of inaction, the conversation shifts quickly from risk avoidance to responsible risk management.
The BA’s Essential Role Going Forward
The future of business analysis is inextricably linked with AI. The BAs who succeed will be those who act as responsible stewards of data and strategic partners to their InfoSec teams.
This is not about fighting InfoSec. It is about partnering with them to develop the essential AI governance frameworks needed to modernise safely. By turning friction into dialogue and fear into frameworks, BAs can help their organizations unlock the productivity gains AI promises—smarter, safer, and with confidence.
🔗 Read more insights at analyst-ally.com
🔗 Join the conversation by following our LinkedIn or YouTube Channels