
Context

At OpenClassrooms, I implemented a Continuous Research process to regularly collect and analyze user feedback. Thanks to this process, we collect nearly 1,000 pieces of feedback per month. Analyzing such a large volume of feedback is time-consuming and can crowd out the designers' other work.

This is why we worked on a prompt designed to speed up the analysis of this feedback.

Starting with a first prompt

We began with a very simple prompt aimed at grouping feedback into broad categories to identify “feedback families.”

[Screenshot: the initial prompt grouping feedback into broad categories]
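To make this concrete, here is a minimal sketch of what this first step could look like in code, assuming an OpenAI-style chat API. The model name and the prompt wording are illustrative reconstructions, not the exact prompt from the screenshot above.

```python
# Minimal sketch of the first categorization step, assuming an
# OpenAI-style chat API. Model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FIRST_PROMPT = """You will receive a list of user feedback entries.
Group them into broad categories ("feedback families") and return,
for each category, a short name and the entries it contains."""

def categorize(feedback_entries: list[str]) -> str:
    """Ask the model to group raw feedback into broad categories."""
    numbered = "\n".join(f"{i + 1}. {f}" for i, f in enumerate(feedback_entries))
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": FIRST_PROMPT},
            {"role": "user", "content": numbered},
        ],
    )
    return response.choices[0].message.content
```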

We then improved the prompt to gauge the level of user satisfaction within each category, based on the positive or negative tone of the feedback:

[Screenshot: the improved prompt adding satisfaction analysis per category]
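In the sketch from the previous step, this second iteration only changes the system prompt; the wording below is again an illustration, not the actual prompt:

```python
# Second iteration (illustrative wording): same API call as before,
# but the system prompt now asks for a satisfaction signal per category.
IMPROVED_PROMPT = """You will receive a list of user feedback entries.
1. Group them into broad categories ("feedback families").
2. For each entry, classify its tone as positive or negative.
3. Return, per category: its name, its entries, and the share of
   positive vs. negative entries as an overall satisfaction signal."""
```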

This gave us an initial overview of the main topics of user satisfaction or frustration through their feedback.

Iterating on the prompt

However, these initial results did not fully meet our expectations. The AI struggled to grasp the specifics of our industry and often merged categories that were important to keep distinct from a business perspective (e.g., coaching and mentoring).

We also realized that some feedback ended up in the wrong category: the AI had misunderstood the true meaning of the feedback.

That’s why we went back to our pre-existing categories, the ones we had identified by manually tagging the feedback. In collaboration with Content Designers, we developed a more precise taxonomy, with a definition for each category, to improve the AI prompts and ensure consistent analysis:
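As a sketch of the idea, a fixed taxonomy with definitions can be injected directly into the prompt so the model classifies into our categories instead of inventing its own. "Coaching" and "Mentoring" come from the example above; the definitions and the third category are placeholders, not OpenClassrooms' actual taxonomy:

```python
# Sketch: constrain the model to a predefined taxonomy with definitions.
# Category definitions below are placeholders for illustration only.
TAXONOMY = {
    "Coaching": "Feedback about career-coaching sessions and coaches.",
    "Mentoring": "Feedback about mentor sessions tied to course projects.",
    "Platform": "Feedback about the learning platform itself (UI, bugs).",
}

TAXONOMY_PROMPT = (
    "Classify each feedback entry into exactly one of the categories "
    "below. Use only these categories and follow their definitions; "
    "do not merge categories or invent new ones.\n\n"
    + "\n".join(f"- {name}: {definition}" for name, definition in TAXONOMY.items())
    + "\n\nFor each entry, also state whether its tone is positive or negative."
)
```

Pinning the category list in the prompt is what keeps the analysis consistent from one batch of feedback to the next: the model no longer re-derives categories each run, so results remain comparable over time.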