4.30qq Artificial Intelligence (AI) Systems Use and Governance Procedure

Procedure Owner

Organizational Effectiveness – Performance Analytics and Information Technology – Data and GIS

Legal References

City Use of AI Regulation (4.30q, Revised 2025)
Technology, Print and Digitization Procurement Regulation (B 8.04e, Revised 2025)
Information Security Regulation (4.30p)
Data Protection and Classification Standard
Acct–05.101 Technology, Print, Digitalization Procedure

Purpose

This Procedure ensures that Artificial Intelligence (AI) systems are used responsibly, safely, and in compliance with City of Boise policy once they are available for use under the Technology, Print and Digitization Procurement Regulation and Technology Procurement Procedure. It supplements the City Use of AI Regulation by describing how departments, IT, and Organizational Effectiveness (OE) propose, assess, implement, review, and decommission AI use cases on systems that are already available to the City (including AI capabilities embedded in existing systems).

Scope

This Procedure applies to all AI use cases, including both Generative AI and other AI and Machine Learning (ML) capabilities, once an AI-capable system or feature has been approved and procured under the Technology Procurement Procedure, regardless of whether the AI capability is:

  • stand-alone software,
  • embedded in existing technology, or
  • delivered by a vendor as part of a service.

This Procedure does not duplicate procurement steps. When procurement, renewal, or contract changes are required, departments follow the Technology, Print and Digitization Procurement Regulation and then apply this Procedure for ongoing use and governance of specific AI use cases.

Definitions

Terms used in this Procedure are defined in the City Use of AI Regulation and, where applicable, the Information Security Regulation and Data Protection and Classification Standard. This Procedure defines only workflow-specific terms used in intake, review, and governance steps:

  • Risk Categories: The Low, Medium, and High-risk classifications defined in the AI Regulation and related standards.
  • Human-in-the-loop: Requirement that AI-supported decisions must be overseen and validated by a human employee.

Responsibilities

Departments / AI Liaisons

  • Identify and propose potential AI use cases (Generative AI and ML) on systems that have already been approved and procured.
  • Work with their Business Relationship Manager (BRM) to initiate intake for new or significantly changed AI use cases.
  • Provide the information described in Section 1.3 (AI Use Case Intake) when starting or significantly changing an AI use case.
  • Discuss AI tool access and licensing needs with their manager and follow standard software request channels (for example, manager-initiated BRM requests for groups or Service Hub tickets for individual users).
  • Comply with the City Use of AI Regulation and AI Handbook, including training, responsible use, and data handling rules.
  • Use AI systems in line with approved use cases and escalate concerns or issues to their BRM, IT Tech Owners, or OE Performance Analytics team as needed.

Information Technology (IT)

  • Business Relationship Managers (BRMs):
    • Serve as the primary intake point for new or significantly changed AI use cases.
    • Confirm the problem, objectives, users, data involved, and potential impacts with departments.
    • Identify whether the proposed use involves Generative AI, ML, or other AI capability, and whether it appears Low, Medium, or High-risk under the AI Regulation.
    • Pull in the right partners, including IT Technology Owners, OE Performance Analytics, Enterprise Architecture, Cybersecurity, Legal, and the City Clerk as needed.
    • Coordinate with IT Technology Owners to align AI license requests with existing software licensing processes, including when to use centrally funded enterprise licenses and when department-funded licenses are appropriate.
  • Technology Owners (including GenAI platform owners):
    • Own and manage AI-capable platforms and solutions (including Generative AI solutions such as productivity tools, chat assistants, or other enterprise AI services).
    • Ensure configuration and technical setup align with approved AI use cases and City standards.
    • Advise BRMs and departments on what is technically feasible, available configurations/guardrails, and platform-specific risk mitigations.
    • Manage allocation and lifecycle of centrally funded AI licenses for their platforms in line with City software licensing practices and advise departments on licensing options for their users.
  • Cybersecurity Analysts:
    • Advise on security and privacy risks for AI use, including Generative AI features.
    • Approve or deny use of Restricted or Private data in AI systems in alignment with the Information Security Regulation and Data Protection and Classification Standard.
  • Architecture Review Council (ARC):
    • For highly complex or High-risk AI use cases, review proposed approaches for enterprise fit and risk alignment and provide advisory recommendations.
  • CIO:
    • Serve as an escalation point for High-risk AI uses.
    • Consider recommendations from IT, OE Performance Analytics, Legal, and the City Clerk and make final decisions where escalation is warranted.
    • May escalate extraordinary cases to City Council or executive leadership when risk is significant.

Organizational Effectiveness (OE)

  • Performance Analytics Team:
    • Partner with departments, BRMs, and IT Tech Owners to assess and advise on Medium and High-risk AI use cases, including both Generative AI and ML solutions.
    • Help departments and BRMs understand:
      • data needs and data quality,
      • potential fairness and equity considerations,
      • likely impacts on services and employees.
    • Work with users to explore options, design or refine AI-enabled workflows, and recommend risk mitigations and guardrails (for example, sampling, review steps, or simpler alternatives).
    • Provide a risk advisory (e.g., Low/Medium/High with key considerations) to support decision-making by department leadership, IT, and CIO where appropriate.
    • Support, but do not replace, departmental judgment and accountability for whether and how to use AI.
  • OE (Training, Guidance, and Community):
    • Create or acquire training and supporting materials about responsible AI use for City staff, including role-appropriate training for managers, everyday users, and technical staff.
    • Maintain the AI Handbook to provide practical guidance, examples, and FAQs for AI use across the City.
    • Facilitate communities of practice and learning opportunities so departments can learn from one another’s AI use.
OE and IT play advisory and support roles in managing risk. They advise and recommend; departments and City leadership remain the decision-makers. OE and IT generally do not prohibit AI use on their own, but may recommend changes, added safeguards, or not proceeding when risk is unacceptable.

Data and AI Working Group

  • Serves as the City’s cross-departmental advisory group for data and AI.
  • For AI topics, supports responsible AI adoption across the City.
  • Reviews selected Medium and High-risk AI use cases when requested by BRMs or OE Performance Analytics, typically through virtual or asynchronous review.
  • Provides a citywide perspective on resident impacts, employee impacts, equity, and community expectations.
  • Recommends updates to data and AI training, guidance, standards, and regulations; final ownership of policies and procedures remains with the relevant policy owners.
  • Identifies cross-department opportunities and issues related to data and AI and recommends priorities to the CIO, OE Director, and executive leadership.
  • Standing membership includes representatives from: Information Technology (for example, Technology Strategy & Security, Data & GIS); Organizational Effectiveness (Performance Analytics and training/organizational change); Legal; City Clerk; Human Resources; Community Engagement; and other departments as needed (for example, Public Safety, Development Services, or Finance depending on the use case).
  • For any given AI use case, only those members who are relevant to the topic need to participate.

Legal

  • Advises on legal obligations, liability, and compliance related to proposed AI use cases, including vendor terms, fair treatment, and due process obligations.

City Clerk

  • Advises on records management, retention, Public Records Requests, and how AI inputs and outputs should be retained or made accessible in connection with AI use.

Procedures

1. Initiating an AI Use Case

1.1 When a department wants to start a new AI use case or significantly change an existing one, the department’s AI Liaison or manager contacts their BRM (or uses the designated intake channel).

1.2 The BRM confirms whether the request involves AI, and if so, whether it is primarily:

  • Generative AI (e.g., content generation, copilots, chat assistants),
  • ML or other algorithmic models (e.g., predictions, scoring, classification),
  • or another AI capability.


1.3 For AI use cases, the BRM conducts a simple AI Use Case Intake, capturing at least:

  • the business problem and desired outcome,
  • the process or decision the AI will support,
  • who will use the AI and who will be affected,
  • what data will be used (including whether any is Restricted or Private),
  • any early concerns about fairness, service impact, or reputational risk.


1.4 Based on this intake and the AI Regulation’s risk categories, the BRM makes an initial judgment, with OE and IT as needed, about whether the use appears Low, Medium, or High-risk.

Low-risk uses of approved tools may proceed directly to implementation with basic guidance and training.

Medium and High-risk uses move to a more structured risk and solution assessment (Section 2).

2. Risk and Solution Assessment (Medium and High-Risk)

2.1 For Medium and High-risk AI use cases, the BRM convenes a small group or uses the Data and AI Working Group as the review forum. That group may include:

  • the department’s AI Liaison and business owner,
  • OE Performance Analytics,
  • the relevant IT Technology Owner,
  • Cybersecurity,
  • Legal and City Clerk as needed.


2.2 The OE Performance Analytics team works with the group to perform the risk and solution assessment described in the Performance Analytics Team responsibilities, including clarifying the problem, understanding data and populations, identifying key risk considerations, and exploring configuration options and alternatives.

2.3 The group uses this conversation to agree on:

  • whether the use case should proceed, proceed with safeguards, or be reconsidered,
  • if proceeding, what basic safeguards will be in place (e.g., specific prompts, review steps, documentation, data limits).


2.4 OE Performance Analytics and IT provide a risk advisory (e.g., “Medium risk with these conditions…”) to the department leadership and, if warranted, to the CIO and ARC. This advisory informs decision-making but does not replace managerial or executive judgment.

3. Readiness and Approvals of AI Use Cases

3.1 Before moving from planning to operational use, the BRM and department confirm that:

  • the AI Use Case Intake information is complete and documented,
  • any agreed safeguards from the risk and solution assessment are in place,
  • Cybersecurity has weighed in where Restricted or Private data is in scope,
  • records and Public Records Request needs have been considered with the City Clerk,
  • staff who will use the AI have or will receive appropriate training.

3.2 For High-risk AI use cases, the BRM brings the use case to the Data and AI Working Group for review. The Architecture Review Council (ARC) is consulted when the proposed use would require significant changes to enterprise architecture, shared platforms, or integration patterns.

3.3 Where the AI Regulation or Technology Procurement Regulation requires it, the CIO (or designee) reviews recommendations and makes a final decision on whether and how to proceed.

4. Implementation of New AI Use Cases

4.1 IT Technology Owners configure and enable the AI capability (for example, setting up a Generative AI tool or enabling an ML feature in an existing system) in line with the agreed use and safeguards.

4.2 Departments ensure:

  • the readiness items in Section 3.1 are complete (including training, records considerations, and any required Cybersecurity review),
  • human-in-the-loop responsibilities are clear (who reviews or can override AI outputs),
  • any required disclosures that AI is being used in “significant” contexts (as defined in the AI Regulation) are in place.


4.3 OE supports departments with training materials, guidance, and change support, where needed, but does not own day-to-day operation of AI tools.

4.4 Access and Licensing for AI Tools

  • Staff discuss AI tool needs with their manager and consider whether existing free or approved tools meet the need.
  • For a small number of users, the manager or staff submit a standard software request (for example, via Service Hub) for the relevant AI tool, referencing the AI Use Case Intake information where appropriate.
  • For larger groups or department-wide adoption, managers work with their BRM to plan licensing, training, and funding; the BRM coordinates with IT Technology Owners to use centrally funded enterprise licenses where available and to plan any department-funded licenses.
  • AI licenses are requested and managed through the City’s standard software licensing processes.
  • Staff may use personal or public AI accounts only as permitted under the City Use of AI Regulation; when a City-approved tool is reasonably available for the use case, staff are expected to use the City-approved tool.
  • Managers notify IT Technology Owners or follow established processes when a user no longer needs an AI license or changes roles, so licenses can be reallocated.

5. Ongoing Use and Periodic Review of AI Systems

5.1 Departments are responsible for everyday use of AI systems, including:

  • using AI tools in line with approved use cases and safeguards,
  • raising concerns (e.g., surprising behavior, bias concerns, or service issues) to their BRM or supervisor,
  • participating in periodic check-ins about how AI is working in their area when requested.


5.2 BRMs, IT Tech Owners, and OE Performance Analytics may check in with departments on selected AI uses to:

  • hear how tools are working in practice,
  • learn about issues or improvements,
  • update guidance or safeguards as needed.

Neither IT nor OE is expected to continuously monitor every AI use. Their role is to advise, support, and respond when issues or questions are raised.

5.3 Information from these conversations may be used to inform decisions about renewing, modifying, or decommissioning AI-enabled systems in alignment with the Technology Procurement Procedure.

6. AI Issues and Incident Response

6.1 If a department believes an AI use has led to a significant problem (for example, harmful outputs, suspected bias, privacy concerns, or major errors in resident-facing work), they notify their BRM or IT Help Desk as soon as practical.

6.2 IT and OE Performance Analytics work with the department to:

  • understand what happened,
  • determine whether the issue appears to be a one-off error, user mistake, or systemic pattern,
  • recommend immediate steps (e.g., change in process, temporary pause on a specific use, additional training).

6.3 Cybersecurity, Legal, and the City Clerk become involved if the issue involves data exposure, legal obligations, or records management concerns.

6.4 For serious or repeated issues, IT and OE may recommend escalation to the CIO and, if needed, ARC or executive leadership. Decisions about suspending or discontinuing use are made consistent with the AI Regulation and Technology Procurement Regulation.

7. Decommissioning AI Systems or Use Cases

7.1 When an AI-enabled system or AI use case is being discontinued:

  • IT Technology Owners manage technical decommissioning and data handling.
  • The department ensures any necessary transition in business process or service delivery.
  • The City Clerk provides guidance on records management, including retention and access for any inputs/outputs that must be maintained.


7.2 OE may support with communication and change management when decommissioning significantly affects staff roles or resident services.

Summary

This Procedure operationalizes the City Use of AI Regulation. All AI use must comply with the Regulation’s requirements for responsible use and human oversight, approved tools and account separation, data protection, records retention and Public Records Requests, and required disclosures. Where City guidance is needed for day-to-day use (examples, do/don’t, prompt tips, common scenarios), staff follow the AI Handbook and any department-specific guidance that is consistent with the Regulation.

Related Information

  • City Use of AI Regulation (4.30q, Revised 2025)
  • Technology, Print and Digitization Procurement Regulation (B 8.04e, Revised 2025)
  • Information Security Regulation (4.30p)
  • Data Protection and Classification Standard
  • Acct–05.101 Technology, Print, Digitalization Procedure

Approval and Revision History

This Procedure shall be reviewed annually and updated as necessary to reflect changes in City policies, regulations, and standards.

Version | Approval Date | Approver         | Changes
1.0     | 1/26/26       | OE Director, CIO | Original release; reviewed by Policy Committee
