Use of generative AI in Queensland Government

Document:
Use of generative AI for gov information sheet (PDF, 155.5 KB)
Document type:
Guideline
Version:
v1.0.0
Status:
Current
Owner:
Data and Information Services, QGCDG
Effective:
August 2023–current
Security classification:
OFFICIAL-Public

Purpose

A Queensland Government Enterprise Architecture (QGEA) guideline provides information for Queensland Government agencies on the recommended practices for a given topic area. Guidelines are for information only and are intended to help agencies understand the appropriate approach to addressing a particular issue or doing a particular task. Agencies are not required to comply with QGEA guidelines.

This guideline describes the issues that need to be considered when using generative Artificial Intelligence (AI) products or services. While this document provides general guidance on these issues in the Queensland Government context, employees should refer to the specific policies and practices provided by their agencies wherever available.

This guideline is designed to:

  • clarify specific concepts of generative AI
  • make suggestions on key factors to consider when using generative AI tools
  • provide examples of how to evaluate the use of generative AI for government business.

Audience

This document is primarily intended for Queensland Government employees.

Scope

This guideline sets out key considerations for the use of generative AI products and services. The use of generative AI products and services for Queensland Government is governed by the same responsibilities, obligations, and policies for the use of other digital products or services.

This document is provided as guidance only and does not seek to create new regulation governing the use of generative AI products or services for the Queensland Government or to provide legal, ethical or implementation advice on the use of any specific product or service.

Key takeaways

  1. Official government information (including information classified Sensitive or Protected) and personal information relating to government employees or others should not be shared with, input into, or uploaded to generative AI products and services that have not been approved for use by your department.
  2. If in doubt, seek guidance from relevant officers in your agency before using generative AI. This may include Information and Communication Technology (ICT), Digital, Legal and/or Privacy / Right to Information (RTI) teams. Some descriptive use cases to assist in determining the appropriate application of these products are provided as general guidance in this document.
  3. The Queensland Government will provide options to access generative AI capabilities in a secure and managed environment, consistent with existing policies. Where available, employees should use generative AI capabilities approved by their agency.
  4. As a Queensland Government employee, you are responsible for the data and information you share with, input into, or upload to generative AI products and services. You are also responsible for any AI-generated content that you use, share, input, or upload in the performance of your duties.
  5. All existing QGEA policies extend to the use of generative AI products and services.

Use of generative AI

The examples in this document are indicative only and are used to illustrate how AI-generated content could be used in a government context. Agencies should consider their own business requirements and risk profiles when using generative AI.

Background

The term Artificial Intelligence describes a family of algorithmic technologies, such as machine learning, neural networks, computer vision, and robotics, that perform human-like tasks such as reasoning, planning, natural language processing, and more. AI systems can improve their performance over time according to a set of human-defined objectives and can operate with a certain level of autonomy.

Generative AI

While traditional AI systems primarily extract information from data through recognition or classification tasks, generative AI generates new content such as images, text, software code, or even music using algorithms and machine learning techniques. AI-generated content exhibits characteristics similar to those found in human-generated content, produced by models that learn the underlying patterns and structures in their training datasets. Generative AI chatbots produce plausible, human-like responses to prompts from the user. Two prominent examples of generative AI are Large Language Models (LLMs) and image generators.

Large Language Models

Large Language Models (LLMs) leverage deep learning techniques to understand and generate human-like text. LLMs learn the patterns, semantics, and context of language from large collections of text. They use neural networks with billions of parameters to perform tasks such as text generation, language translation, and document summarisation. Trained LLMs can generate coherent and contextually relevant responses that capture the linguistic nuances of their training data.
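
For illustration only, the sketch below generates text with a small, openly available model via the Hugging Face transformers library. GPT-2 is far smaller than the LLMs named later in this guideline; it is an assumed stand-in used purely to show the text-generation mechanism, not a recommended tool.

  # A minimal, locally runnable sketch of LLM-style text generation using
  # the Hugging Face transformers pipeline API. GPT-2 is an assumed,
  # freely available stand-in, far smaller than production LLMs.
  from transformers import pipeline

  generator = pipeline("text-generation", model="gpt2")
  result = generator(
      "Generative AI can assist government agencies by",  # illustrative prompt
      max_new_tokens=40,  # cap the length of the generated continuation
  )
  print(result[0]["generated_text"])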

LLMs can respond to complex queries and provide detailed, informative, and reasoned answers to questions prompted by the user. They can also understand the structure, syntax, and semantics of software programming languages. They can generate code snippets, provide code completion suggestions, assist in error detection, and offer insights for code improvement.

LLMs offer a new way for humans to interact with machines, through sophisticated natural language-based interactions and with multiple potential applications, including customer service, content generation, information retrieval, and decision-support. They can enhance software developer productivity by reducing coding errors and promoting code consistency. They offer intelligent code suggestions based on context, helping developers write code more efficiently and accurately.
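
As a hedged sketch of what such an interaction looks like in practice, the snippet below sends a prompt to a hosted LLM using the openai Python package's 2023-era ChatCompletion interface. The model name, prompt, and placeholder key are assumptions, and any real use should go through generative AI capabilities approved by your agency.

  # A sketch of a prompt/response exchange with a hosted LLM, using the
  # openai Python package's ChatCompletion interface (pre-v1 versions).
  # Model name and prompt are illustrative assumptions.
  import openai

  openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real credentials

  response = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",  # assumed model; substitute an agency-approved tool
      messages=[
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Suggest a Python function to validate email addresses."},
      ],
  )

  # The generated text (prose or a code snippet) is the assistant's message.
  print(response["choices"][0]["message"]["content"])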

Examples of Large Language Models include:

  • OpenAI ChatGPT (powered by GPT-3.5 and GPT-4)
  • Google Bard (powered by PaLM 2)
  • Bloom
  • Meta’s LLaMA model.

Examples of Large Language Models for code include:

  • GitHub Copilot (powered by OpenAI Codex model)
  • Amazon CodeWhisperer.

Image generators

Image generators, also known as image synthesis or image generation models, are a subset of generative AI specifically designed to generate realistic images from prompts provided by the user. Image generators employ machine learning techniques to generate new images that exhibit visual characteristics based on motifs in their training datasets. Image generators learn by capturing patterns, styles, and semantic information from large sets of labelled or unlabelled images. They can generate images that possess novel visual attributes, textures, and structures.

Image generators have applications in a variety of fields, such as computer vision, graphics, design, and entertainment. Their use includes synthetic image generation for data augmentation, simulation of realistic environments, generation of visual narratives, assistance in artistic endeavours, and assisting with image completion or restoration.
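
As an illustrative sketch, the snippet below generates an image locally with the open-source diffusers library and a publicly hosted Stable Diffusion checkpoint. The model identifier and prompt are assumptions, a CUDA-capable GPU is assumed to be available, and the licences of the model and its outputs should be verified before any government use.

  # A sketch of local image generation with the diffusers library and an
  # assumed Stable Diffusion checkpoint; verify licensing before use.
  import torch
  from diffusers import StableDiffusionPipeline

  pipe = StableDiffusionPipeline.from_pretrained(
      "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
      torch_dtype=torch.float16,
  )
  pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

  # The prompt is illustrative; generated output must still be reviewed
  # for accuracy and contextual appropriateness before use.
  image = pipe("flat-style infographic illustration of a regional town").images[0]
  image.save("illustration.png")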

Examples of image generators include:

  • MidJourney
  • Stable Diffusion
  • DALL-E.

Opportunities

Generative AI tools provide Queensland Government with opportunities to enhance productivity and offer new services across many aspects of government business. For example, LLMs can automate repetitive tasks and streamline complex workflows, providing opportunities to review and change current work practices. LLMs can assist with the generation of reports, memos, and briefing notes. They can also assist in data analysis and visualisation, extracting valuable insights from large datasets in a fraction of the time it would take for manual analysis.

LLMs can aid in document retrieval and knowledge management, rapidly organise and retrieve information from diverse sources, and facilitate faster decision-making and collaboration. Further, LLM-powered services could handle common enquiries in customer service and community engagement, provide immediate responses, and enable human agents to handle more specialised or complex queries. This capability could enable agencies to identify opportunities for efficiency gains and increased operational effectiveness.

The use of image generators in a government context has the potential to yield significant efficiency and productivity gains. For instance, image generators enable the rapid creation of visual content, such as infographics, reports, and promotional materials, reducing the need for extensive manual design work. This can allow communications and engagement teams within government agencies to produce visually appealing and engaging materials more efficiently.

Additionally, image generators could produce visual content for training and educational purposes, such as emergency response simulations or built environment planning visualisations. Overall, the use of image generators in a government context could enhance efficiency, accelerate content creation, and optimise training processes.

Challenges of generative AI

Despite the potential for achieving productivity gains by incorporating machine assistance in the workplace, any application to government decision making, such as regulatory decisions or service delivery, should give due consideration to the potential for adverse outcomes that could impact individuals and businesses. This includes assessing how AI models were developed, putting frameworks in place to evaluate and mitigate risk, and establishing associated training and skills development programs.

This guideline was developed to provide advice consistent with Australia’s AI Ethics Framework and how it applies to generative AI.

Bias amplification

Generative AI models learn from their training data. If training datasets contain biased or inaccurate representations of the population, the generated content could inherit these biases and perpetuate or amplify them in ways that undermine the principles of fairness and accuracy. Without transparency of methods and data used for training, it is difficult to identify systemic bias in generated content.

Accuracy

While AI-generated content can look convincing, there is a risk that it contains false or misleading information. The term “hallucination” describes AI-generated content that appears plausible but is not true. These hallucinations can stem from the limitations of the training data, from a lack of common-sense understanding of the topic, or from inappropriate application of the tool.

Explainable AI

There is no simple way to explain how content is produced by generative AI models. LLMs can generate coherent and contextually relevant text but may not be able to provide clear reasoning for their output. This is a challenge when trying to ensure the model produces fair and unbiased analysis and presents relevant, contextual content. Furthermore, LLMs cannot guarantee the reproducibility of their responses, which can limit their usefulness in some contexts.

Legal and intellectual property risks

Legal and ethical questions about the ownership of data used to train generative AI models and their generated content remain open. These include questions about what constitutes derivative work under existing intellectual property laws for data used to train generative AI models. In one example, a global visual media company known for providing stock images has commenced legal action against the creator of an image generator for allegedly using copyrighted materials to train its generative AI model.

User prompts can also raise legal questions if copyrighted materials are used as input for generative AI tools.

Privacy and data protection

Privacy policies and data protection statements from the providers of generative AI tools may not be consistent with existing legislation and policies applicable to the Queensland Government. For example, OpenAI, the creator of ChatGPT, keeps a record of user prompts and response history on its servers indefinitely. Such practices raise issues of sovereignty, privacy, confidentiality, and the maintenance of public records. Other legislative compliance issues may arise when out-of-date datasets are used to train generative AI models. For example, the freely available version of ChatGPT was trained on datasets that are only current to September 2021.

Security and safety

Generative AI tools can quickly produce large volumes of content with the potential to harm, deceive, damage, or lead to other illegal or unethical behaviour. While generative AI solution and service providers aim to align these models with existing social and ethical norms and to filter out illegal outputs, no AI model is completely immune from this risk.

Considerations

Queensland Government employees are responsible for the content they generate including content created using digital tools. Employees should exercise similar judgement when considering the use of generative AI products and services while performing their duties.

Queensland Government employees need to be aware of their obligations under existing legislation, policies, and the Queensland Public Service Code of Conduct when using generative AI products and services. These obligations include, but are not limited to, the following.

Code of Conduct

The Queensland Public Service Code of Conduct describes how employees should conduct themselves while delivering services to the Queensland community and undertaking official duties.

The Code of Conduct is underpinned by four ethical principles affirmed by the Public Sector Ethics Act 1994:

  • Integrity and Impartiality
  • Promoting the public good
  • Commitment to the system of Government
  • Accountability and Transparency.

While Queensland Government employees should be conscious of the entire Code of Conduct when using generative AI tools, some specific sections of the Code relevant to the use of generative AI include:

Commitment to the highest ethical standards when fulfilling our responsibilities

As employees we are required to ensure that our conduct meets the highest ethical standards when we are fulfilling our responsibilities.

We will:

  • ensure any advice that we provide is objective, independent, apolitical and impartial
  • ensure our decision making is ethical
  • engage with the community in a manner that is consultative, respectful and fair, and
  • meet our obligations to report suspected wrongdoing, including conduct not consistent with this Code.

Ensure diligence in public administration

We have an obligation to meet high standards in public administration and perform our duties to the best of our abilities.

We will:

  • apply due care in our work, and provide accurate and impartial advice to all clients whether members of the public, public service agencies, or any level of government
  • treat all people equitably and consistently, and demonstrate the principles of procedural fairness and natural justice when making decisions
  • exercise our lawful powers and authority with care and for the purpose for which these were granted
  • comply with all reasonable and lawful instructions, regardless of personal opinion.

Ensure appropriate use and disclosure of official information

The public has a right to know the information created and used by government on their behalf. This right is balanced by necessary protections for certain information, including personal information.

Information privacy legislation protects against the misuse of personal information. We have an obligation to ensure the lawful collection and handling of personal information and that any personal information is accurate, complete and up to date.

In addition, we will:

  • treat official information with care and use it only for the purpose for which it was collected or authorised
  • store official information securely, and limit access to those persons requiring it for legitimate purposes
  • not use confidential or privileged information to further personal interests
  • continue to respect the confidentiality of official information when we leave public service employment.

QGEA policies

Agencies and their employees should protect the privacy, confidentiality, integrity, and availability of any information they handle in accordance with applicable laws, regulations, and policies. They should comply with mandatory responsibilities and manage the risks associated with any information collection, processing, storage, and transmission.

The Information security policy (IS18:2018) requires Queensland Government agencies to identify and manage risks to information, applications, and technologies throughout their life cycle, using an Information Security Management System (ISMS). An agency should identify privacy requirements under the appropriate legislation, classify its information and information assets according to business impact, and implement appropriate controls according to that classification.

Information that has been assessed as having a high business impact level in relation to confidentiality (C), integrity (I) or availability (A) may only be stored or processed offshore where the agency:

  • has undertaken a risk assessment of the C, I and A business impacts
  • has an accountable officer or delegate who has documented their acceptance of the offshored information risk assessment.

The following policies, frameworks and guidelines apply to all agency information and should be considered when assessing the use of generative AI technologies with government information.

Legislation

In addition to specific agency legislative and regulatory obligations, employees should refer to their agencies’ internal governance and practices which adhere to the following legislation:

General examples

The following examples are for illustrative purposes only. They highlight issues that may arise from using generative AI products and services. Some of these issues do not apply to generative AI products and services approved for use within Queensland Government. Further advice will be provided when available.

Summarising research

As a Policy Officer within a central agency, you have been asked to conduct a research activity on a specific topic relevant to a policy position your team is working on. This task requires you to read and summarise several comprehensive and detailed reports on the topic from multiple, publicly available, national and international sources. You decide to accelerate your research by asking ChatGPT to summarise all the reports into key items.

Is this an appropriate use of a generative AI product?

Yes. The research reports relate directly to a topic of which you have intimate knowledge, and you can vet the output summaries for accuracy prior to using them. You are also asking the tool to summarise research reports that are already publicly available and licensed for your use. You recognise that the generated summaries are not a substitute for reading each report in detail; however, each summary provides an overview that will allow you to easily review the relevant sections in detail.
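
As a hedged illustration of this scenario, the sketch below wraps the summarisation step in a small Python function using the openai package's pre-v1 interface. The function name, model, and prompt wording are assumptions; apply it only to public, appropriately licensed material, through tools approved by your agency.

  # A sketch of summarising a publicly available, licensed report with a
  # hosted LLM. Function name, model, and prompt are assumptions.
  import openai

  def summarise_public_report(report_text: str) -> str:
      """Return a dot-point summary of a public, licensed report."""
      response = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",  # assumed model
          messages=[{
              "role": "user",
              "content": "Summarise the following publicly available report "
                         "into key dot points:\n\n" + report_text,
          }],
      )
      return response["choices"][0]["message"]["content"]

  # Each generated summary must still be vetted against the source report.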

Policy analysis #1

As a Policy Officer in a large government agency, you’ve been tasked with reviewing publicly available policy positions in other states and territories to provide detailed analysis to support the development of a similar policy within Queensland. As the specific policy topic is broad and complex, you decide to use ChatGPT to assist your review. You prompt ChatGPT to provide summaries of the current policies in force within other jurisdictions. You also prompt ChatGPT to analyse each policy position and determine the strengths and weaknesses of each approach.

Is this an appropriate use of a Generative AI product?

Yes, but with some key considerations. The commercially available ChatGPT product has only been trained on data up to September 2021, so some generated content may be out of date. Additionally, ChatGPT has limitations when conducting complex reasoning tasks, such as the critical analysis of complex topics, and is susceptible to hallucination. These limitations may include a lack of common sense, a lack of contextual understanding, and a limited ability to handle ambiguity. Your ethical obligations require you to be able to explain the reasoning that underpins any advice you provide based on ChatGPT’s analysis, or to provide your own critical analysis to justify your advice.

Use of generated images

As a communications officer in a large government agency, you have been asked to produce illustrations to assist in communicating a new policy for government. To progress the development process, you are looking at using a publicly available image generator to assist in the creation of images.

Is this an appropriate use of a generative AI product?

Yes; however, it is your responsibility to ensure that any generated content is correct, accurate, and contextually appropriate for your piece of work. Additionally, ensure that content provided by the image generator carries an appropriate commercial licence. Image generators should have appropriate licensing arrangements for the images they generate, as using material that breaches intellectual property laws or rights could cause reputational damage to the government.

Document drafting

As a Project Officer within a service delivery agency, you’ve been tasked with preparing a Risk and Compliance Report for an in-flight initiative. As your time is limited, you decide to use ChatGPT to help you draft the report. You ask ChatGPT for the key items usually contained within a risk and compliance report in a large and complex government agency. On receiving a generated structure for your report, you then prompt ChatGPT with specific details about the project, including financial information, schedules, and assurance activities.

Is this an appropriate use of a Generative AI product?

No. Using a generative AI product or service to brainstorm the structure and content of a risk and compliance report may provide a structure that is inconsistent with the QGEA portfolio, program and project management policy. Additionally, providing official information such as financial and risk information to the service may violate your information handling obligations.

Policy analysis #2

As a Policy Officer in a large government agency, you’ve been asked to review a policy discussion paper prepared by another agency which is approaching finalisation. The covering email with the discussion paper makes it clear the final policy position will form part of a submission to Cabinet. As the discussion paper is long and complex, you decide to use ChatGPT to help you understand it and formulate an appropriate response. You place several large sections of the discussion paper into ChatGPT and ask it to summarise the key items.

Is this an appropriate use of a generative AI product?

No. Placing content that will form part of a policy position to be taken to Cabinet into ChatGPT will breach existing information security and access and use policies relating to the use of confidential information. A record of your prompt and its response will be stored by OpenAI, the creator of ChatGPT, which may compromise requirements surrounding ‘Cabinet-in-Confidence’ information and risks transmitting this information outside of Australia. Sharing draft policy positions with ChatGPT can also create reputational risk for the government.

Decision making assistance

As a customer service officer within a service delivery agency, one of your customers has presented you with several unique and complex challenges that require significant analysis or complex reasoning to determine the next course of action. You decide to use ChatGPT as a sounding board to analyse, research, and understand what decision you might take to support your customer. In doing this, you share, input, or upload detailed specifics of the issue, including personal information, to ChatGPT.

Is this an appropriate use of a generative AI product?

No. The discussions and prompts you share, input, or upload to ChatGPT will contain confidential information, and any personal information you input or upload to ChatGPT may place you in breach of privacy law. Similarly, using content from generative AI tools carries risks of bias, inaccuracy, and lack of transparency. Any decision that relies on content from generative AI should take these risks into account, as well as the ethical responsibilities of Queensland Government employees under the Code of Conduct.

Applicability

This guideline applies to all Queensland Government agencies (as defined by the Public Sector Act 2022). Accountable officers (not already in scope of the Public Sector Act 2022) and statutory bodies under the Financial and Performance Management Standard 2019 must have regard to this guideline.