Requirements Gathering
Learning Objectives
- You know what requirements gathering is and can name ways in which large language models can be used to support it.
Requirements gathering is the process of exploring and collecting the requirements of a system from the customer, stakeholders, and end users. In practice, gathering requirements both requires and gradually builds an understanding of the business domain, its processes, and its common problems and tasks. This understanding is formed through interaction, using methods such as interviews, observation, workshops, brainstorming, and prototyping.
Some of these activities can be supported through the use of large language models.
Forming an initial understanding
A key part of requirements gathering is forming an understanding of the domain, which happens over time. Depending on the domain, large language models can be used to form an initial understanding, in a similar way as one could search for information on the web. We can, for example, ask a large language model to explain key concepts in the domain to us.
In a few sentences, what is the transformer architecture?
The transformer architecture is a deep learning model introduced in the paper "Attention Is All You Need" by Vaswani et al. in 2017. It primarily uses self-attention mechanisms to weigh input data differently, allowing the model to focus on important parts of the input when producing an output. Unlike recurrent or convolutional layers, transformers process input data in parallel, making them highly efficient. The architecture has since become foundational for many state-of-the-art natural language processing models, including BERT, GPT, and many others.
In a few sentences, what is self-attention?
Self-attention is a mechanism within the transformer architecture that allows a model to weigh and consider different parts of an input sequence when producing an output for a particular element in that sequence. It computes a weighted sum of all input elements based on their relevance to a given query, enabling the model to focus on different parts of the input dynamically. By doing so, self-attention captures both short-term and long-term dependencies in the data without the need for recurrent layers. The weights are often visualized as "attention maps," highlighting which parts of the input are deemed important for various tasks.
As always, there is the risk of hallucination, where the model generates text that is not true. It is therefore important to verify the information that is generated by the model.
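This kind of concept exploration can also be done programmatically. The sketch below builds the same style of prompt shown above; the function name `explain_concept` and the prompt wording are illustrative assumptions, and the commented-out call assumes the OpenAI Python SDK and an API key.

```python
# A minimal sketch of programmatic concept exploration. The function name
# explain_concept and the prompt wording are illustrative, not from any
# specific API.

def explain_concept(concept: str, sentences: int = 3) -> str:
    """Build a short explanation prompt for a domain concept."""
    return f"In {sentences} or fewer sentences, what is {concept}?"

prompt = explain_concept("self-attention")
# With the OpenAI SDK, the prompt could then be sent to a model, e.g.:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(response.choices[0].message.content)
print(prompt)
```

Wrapping the prompt in a function makes it easy to ask the same question about each key concept in the domain and to keep the wording consistent across queries.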
Structuring interviews
Large language models can be used to help structure interviews. In the example below, we ask the large language model to help form a structure for an interview about an AI-driven programming assistant.
I am gathering requirements for an AI-driven programming assistant for a team of software engineers. When looking into the requirements that they need, how should I structure the interview?
Certainly! Structuring your interview for gathering requirements about an AI-driven programming assistant will help in eliciting meaningful information from the team of software engineers. Here's a suggested structure for your interview:
Introduction:
- Briefly explain the purpose of the interview.
- Outline the general topics you'll be covering.
- Reassure that there are no right or wrong answers, and you're looking for their genuine thoughts and needs.
Understanding Their Current Workflow:
- How do they currently approach software development?
- What tools, languages, and frameworks do they use?
- What parts of their workflow are manual, repetitive, or error-prone?
...
Closing:
- Summarize the main points discussed.
- Thank them for their time and insights.
- Explain the next steps in the process and how their feedback will be used.
Throughout the interview, try to foster an open conversation. Allow the software engineers to provide examples from their experience, and encourage them to think creatively about potential solutions or features. The more specific and candid their feedback, the better equipped you'll be to design an effective AI-driven programming assistant.
We could also ask the large language model to suggest the questions that we should ask in the interview, or even to provide initial answers to some of them.
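If interviews are planned for several systems or audiences, the structuring prompt can be parameterized. The sketch below is an illustrative assumption: the function name `interview_prompt` and its parameters are not part of any official API, and the wording simply mirrors the prompt used above.

```python
# Illustrative sketch: a reusable prompt template for structuring a
# requirements interview. The parameters describe what is being built
# and who will be interviewed.

def interview_prompt(system: str, audience: str) -> str:
    """Build a requirements-interview structuring prompt."""
    return (
        f"I am gathering requirements for {system} for {audience}. "
        "When looking into the requirements that they need, "
        "how should I structure the interview?"
    )

print(interview_prompt("an AI-driven programming assistant",
                       "a team of software engineers"))
```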
Designing workshops
Similarly, large language models can be used to help design workshops. In the example below, we ask the large language model to help us design a 60-minute workshop for gathering requirements for an AI-driven programming assistant.
I am designing a 60-minute workshop for gathering requirements for an AI-driven programming assistant for a team of software engineers. Could you provide me a template starting point for the workshop?
Certainly! Here's a template to help you design your 60-minute workshop on gathering requirements for an AI-driven programming assistant:
Workshop: Requirements Gathering for AI-Driven Programming Assistant
Objective: To understand the needs, preferences, and challenges faced by software engineers and identify how an AI-driven programming assistant can support them.
1. Introduction (10 minutes)
- Briefing on the Objective (2 minutes): State the purpose of the workshop.
- Background on AI Assistants (3 minutes): Provide a quick overview of what AI-driven programming assistants are and how they've evolved.
- Setting Expectations (2 minutes): Explain the importance of each engineer's input and feedback.
- Q&A Session (3 minutes): Let participants ask clarifying questions.
2. Group Activity: Dream AI Assistant (10 minutes)
- Instructions (2 minutes): Ask participants to take a few minutes to imagine their ideal AI assistant without any constraints. What would it do? How would it help them in their daily tasks?
- Sharing (8 minutes): Have a few participants share their dream AI assistant's functionalities.
3. Pain Points Identification (10 minutes)
- Instructions (2 minutes): Ask participants to list their current challenges or pain points in their daily programming tasks that they'd like an AI to help with.
- Group Discussion (8 minutes): Facilitate a discussion around common pain points and jot them down.
4. Feature Prioritization (10 minutes)
- Listing Potential Features (3 minutes): Based on the previous discussion, list down the potential features of the AI assistant.
- Dot Voting (5 minutes): Give each participant a certain number of dots (or stickers) to vote on the features they feel are most important.
...
Post-Workshop:
- Send out a summary of the workshop to all participants.
- Consider distributing a feedback form to understand the effectiveness of the workshop and gather further insights.
- Consolidate the gathered requirements and prioritize them based on feasibility, impact, and the project's scope.
This is a starting point and can be tailored further to match your team's dynamics, the complexity of the AI project, and other specific needs.
Try asking the LLM to generate a plan for a 30-minute event of your choice. Vary the prompt a bit and see how the generated plan changes. Try to spot regularities in the plans: are there common elements that are always present, such as an introduction and a recap? Similarly, look for issues in the plans. Do the plans, for example, include content for more than 30 minutes, or have unrealistic expectations of what can be done in the given timeframe?
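One way to try the exercise systematically is to generate a small set of prompt variants and compare the resulting plans. The sketch below is only an assumption about how one might do this; the template wordings are made up for illustration, and the prompts would still be sent to a model by hand or via an API.

```python
# Illustrative sketch: generate small variations of an event-planning prompt
# so the resulting plans can be compared for regularities and issues.

def planning_prompts(event: str, minutes: int = 30) -> list[str]:
    """Return a few differently worded planning prompts for the same event."""
    templates = [
        "I am designing a {m}-minute {e}. Could you provide me a template "
        "starting point for it?",
        "Plan a {m}-minute {e}. Include timed agenda items.",
        "What would a realistic agenda for a {m}-minute {e} look like?",
    ]
    return [t.format(m=minutes, e=event) for t in templates]

for prompt in planning_prompts("retrospective meeting"):
    print(prompt)
```

Sending each variant to the model and placing the outputs side by side makes common elements (and overstuffed schedules) easier to spot.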
Brainstorming
Brainstorming is a technique where multiple solutions to a problem are proposed, which can then be evaluated and refined. Prior research has examined the use of large language models for brainstorming in the context of the alternate uses task, which prompts for as many uses as possible for a specific object.
See Putting GPT-3’s Creativity to the (Alternative Uses) Test and Brainstorm, then Select: a Generative Language Model Improves Its Creativity Score
Large language models can also be used to come up with requirements. As an example, we can ask the large language model to brainstorm requirements for an AI-driven programming assistant.
I am gathering requirements for an AI-driven programming assistant for a team of software engineers. Could you brainstorm a set of requirements for me?
Absolutely! An AI-driven programming assistant can offer a wide range of capabilities. Here's a brainstormed set of requirements for such a tool:
Intelligent Code Autocompletion:
- Predictive typing for method names, variables, and other code components.
- Context-aware suggestions based on the specific code module or library in use.
Code Review & Analysis:
- Detect and suggest improvements for code inefficiencies.
- Identify security vulnerabilities in the code.
- Check for code style and consistency with the team's coding standards.
- Predictive bug detection using pattern recognition.
Documentation Assistance:
- Generate or suggest comments for code blocks.
- Identify areas of code lacking proper documentation.
- Auto-generate API documentation based on code and comments.
...
These are just a starting point. Depending on the specific needs of your team and the context in which you're working, you may prioritize, modify, or extend this list.
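When brainstorming with a model, one common approach is to sample several responses (for example, with a higher temperature) and merge the ideas. The sketch below shows only the merging step with canned example data; collecting the actual model responses is left out, and the helper name `dedupe` is an illustrative assumption.

```python
# Sketch: merge brainstormed requirement ideas from several model responses,
# removing near-trivial duplicates. The sample data stands in for real
# model output, which would come from repeated API calls.

def dedupe(ideas: list[str]) -> list[str]:
    """Remove duplicate ideas, ignoring case and surrounding whitespace."""
    seen, unique = set(), []
    for idea in ideas:
        key = idea.strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(idea.strip())
    return unique

# Canned ideas in place of real model output:
samples = [
    "Context-aware code completion",
    "context-aware code completion",
    "Automatic documentation generation",
]
print(dedupe(samples))
```

Exact-match deduplication is crude; in practice, similar ideas phrased differently would still need to be grouped by hand during analysis.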
Although large language models can help with requirements gathering, they are not a replacement for the process itself. The elicited requirements still need to be written down for further analysis and refinement, and they need to be validated with the stakeholders and end users.