An AI-powered summarization feature designed to help staff using government records request software quickly understand incoming requests, improving communication and efficiency.
This project explored the integration of AI technology to improve record request processing for government agencies. Through a mixed-methods research approach combining in-depth user interviews with large-scale feedback collection, we validated that AI-generated request summaries could significantly enhance workflow efficiency for agency staff. The research confirmed strong user interest, with 100 of 150 surveyed users (67%) rating the proposed AI summaries 4 or higher on a 5-point scale, and surfaced critical implementation considerations, enabling informed decisions about feature placement, permissions, and display options as we move toward implementation.
Lead UX Researcher
3 months (March 2024 - May 2024)
Miro, Pendo
Research findings, AI prompts
Government agencies process numerous record requests daily, requiring staff to manually review and extract key information from often lengthy and unstructured request descriptions. This manual process is time-consuming, prone to inconsistency, and creates bottlenecks in request processing workflows.
Staff must sift through verbose request descriptions to identify critical details, often spending excessive time on initial request assessment
The lack of quick access to essential request components slows down initial routing and assignment decisions, extending overall fulfillment timelines
Requestors frequently omit crucial details (such as relevant date ranges or specific departments), forcing staff to rely on institutional memory or follow up for clarification; an AI summary could surface these gaps quickly
To evaluate the potential value of AI-generated summaries for record request processing, I conducted a two-phase research study combining qualitative interviews with quantitative validation to understand user needs and reactions to the proposed feature.
Users consistently prioritize specific information types when processing requests: date/time parameters, whether the request can be fulfilled, the appropriate department or assignee, broad "any and all" language, location details, and report numbers (captured in the sketch after this list).
104 of 150 respondents (69%) indicated they would use the AI summary feature, and 100 (67%) rated it 4 or higher on a 5-point scale.
Users found the summary easy to read, appreciated its ability to reduce workload, valued it as a starting point, and noted it helped clarify next steps and internal processing.
Some users expressed hesitation about relying solely on AI summaries, with specific concerns about potential liability if important information were omitted or misrepresented.
Users expressed interest in editing capabilities and indicated they would still review the original request text, suggesting the summary should complement rather than replace existing workflows.
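To make the first finding concrete, here is a minimal sketch of a structured summary covering the information types users prioritized. Every field name here is a hypothetical illustration for this case study, not the production schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RequestSummary:
    """Hypothetical shape for an AI-generated request summary, covering
    the information types interviewees said they look for first."""
    date_range: Optional[str] = None                     # e.g. "Jan 1 - Mar 31, 2023"
    departments: list = field(default_factory=list)      # likely routing targets
    report_numbers: list = field(default_factory=list)   # any referenced report IDs
    locations: list = field(default_factory=list)        # places named in the request
    uses_any_and_all: bool = False                       # flags broad "any and all" language
    missing_details: list = field(default_factory=list)  # gaps staff would need clarified
    summary_text: str = ""                               # short plain-language overview
```

A field like missing_details reflects the finding that requestors often omit crucial context: the summary gives staff an explicit place to see what needs follow-up rather than hiding the gap.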
This early-stage research provided crucial validation before significant development resources were committed. The strong positive response from users gave the product team confidence to move forward with implementation while highlighting specific design considerations around placement, permissions, and editing capabilities. By identifying both enthusiasm and concerns, the research enabled informed decision-making about feature scope and safeguards, ensuring the final implementation will maximize value while minimizing risk.
This research project offered valuable insights not only about the specific AI feature but also about our product development process and user needs. The early-stage research approach allowed us to validate hypotheses, identify potential issues, and make informed decisions before significant resources were committed.
Balancing AI accuracy with user expectations required extensive experimentation with prompting approaches to generate consistently useful summaries (one candidate approach is sketched after this list)
Determining the right combination of qualitative and quantitative methods to capture both depth of understanding and statistical confidence was a critical early decision
Addressing concerns about AI reliability and potential liability required transparent communication about the system's capabilities and limitations
Finding the right balance between promoting the AI summary as a workflow enhancement while acknowledging it should complement rather than replace human judgment
Ensuring product teams shared a common vision for how AI could responsibly enhance the user experience without introducing new risks
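As one illustration of the prompting experimentation mentioned above, a template along these lines could be iterated on. The wording is a hedged sketch rather than the prompt we shipped, and build_prompt is a hypothetical helper.

```python
# A minimal sketch of one prompting approach; the wording is illustrative only.
SUMMARY_PROMPT = """You are assisting government records staff.
Summarize the request below in 3-4 sentences, then list:
- The date range requested (or "not specified")
- The department(s) most likely responsible
- Any report numbers or locations mentioned
- Whether the request uses broad "any and all" language
- Crucial details the requester omitted that staff would need

Do not guess at facts that are not present in the request text.

Request:
{request_text}
"""

def build_prompt(request_text: str) -> str:
    """Fill the template with a raw request description."""
    return SUMMARY_PROMPT.format(request_text=request_text)
```

The explicit "do not guess" instruction speaks directly to the liability concerns raised in interviews: a summary that admits a gap is safer than one that papers over it.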
The strong positive response validated our hypothesis that AI summaries could significantly enhance workflow efficiency, and the feedback identified specific value drivers
User feedback revealed important considerations about placement, permissions, and editing capabilities that will directly inform our design approach
Users' concerns about liability highlighted the importance of transparent AI implementation with appropriate safeguards and human oversight
Early-stage research proved invaluable in reducing development risk by identifying both opportunities and concerns before committing to implementation
Understanding that users would still review original requests helped clarify that the feature should enhance rather than replace existing processes
Develop and test multiple approaches to summary placement and visualization within the request workflow
Create a comprehensive model for viewing and editing permissions that addresses user concerns while maintaining workflow efficiency
Establish clear measures to evaluate the feature's impact on request processing efficiency and user satisfaction
Create a system for ongoing evaluation of summary quality and relevance based on user feedback and usage patterns (a simple starting point is sketched below)
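Assuming hypothetical per-request feedback events are logged (the event shape and function names below are illustrative, not an existing system), ongoing evaluation could begin with something as simple as tracking adoption and perceived quality:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SummaryFeedback:
    """One hypothetical feedback event logged per request."""
    request_id: str
    staff_opened_summary: bool  # did staff view the AI summary?
    rating: Optional[int]       # optional 1-5 usefulness rating
    edited: bool                # did staff edit the summary text?

def adoption_rate(events: List[SummaryFeedback]) -> float:
    """Share of requests where staff opened the summary."""
    if not events:
        return 0.0
    return sum(e.staff_opened_summary for e in events) / len(events)

def average_rating(events: List[SummaryFeedback]) -> Optional[float]:
    """Mean usefulness rating among staff who rated a summary."""
    rated = [e.rating for e in events if e.rating is not None]
    return sum(rated) / len(rated) if rated else None
```

Tracking the edited flag alongside ratings would also show whether staff treat the summary as a starting point, matching what users told us during validation.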