Feedback & Rating Module
Post-Chat Satisfaction Surveys

The Feedback module captures visitor satisfaction ratings at the end of a workflow or after a live chat session. Star ratings, thumbs up/down, and optional text feedback — all feeding directly into your analytics dashboard so you can measure and improve service quality over time.

  • No credit card required
  • 14-day free trial
  • UK hosted
  • Visual builder

What Is the Feedback & Rating Module?

The Feedback & Rating module is the dedicated satisfaction-measurement step in the IMSupporting workflow builder. Place it at the natural end point of any conversation path — after an AI resolution, a live agent session, or an automated answer — and it presents visitors with a quick, low-friction prompt to rate their experience. Ratings are captured instantly and streamed to your analytics dashboard, giving you a continuous pulse on customer satisfaction without leaving the chat interface.

Unlike external survey tools that redirect visitors to a separate page (and lose most respondents in the process), the Feedback module lives inside the chat widget itself. Visitors tap a star rating or a thumbs up/down icon, optionally leave a text comment, and the workflow moves on to a thank-you message. Response rates are dramatically higher because the action happens in context, at the moment the experience is still fresh. Every rating is timestamped, linked to the session transcript, and — when applicable — attributed to the specific agent or AI model that handled the conversation.
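To make the capture step concrete, here is a minimal sketch of what an in-chat feedback record might look like. The class and field names are illustrative assumptions, not IMSupporting's actual data model; they simply mirror the properties described above: a timestamp, a link to the session transcript, and agent or AI attribution.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative record shape -- field names are assumptions,
# not IMSupporting's documented API.
@dataclass
class FeedbackRecord:
    session_id: str          # links the rating to the chat transcript
    rating: int              # e.g. 1-5 for a star scale
    comment: Optional[str]   # optional free-text feedback
    handled_by: str          # "agent:<id>" or "ai:<model>" attribution
    submitted_at: datetime   # timestamped at the moment of capture

def capture_feedback(session_id: str, rating: int,
                     comment: Optional[str] = None,
                     handled_by: str = "ai:default") -> FeedbackRecord:
    """Validate and timestamp a rating as it arrives from the widget."""
    if not 1 <= rating <= 5:
        raise ValueError("star rating must be between 1 and 5")
    return FeedbackRecord(session_id, rating, comment, handled_by,
                          datetime.now(timezone.utc))
```

Because the record carries both the session link and the attribution string, a single capture call is enough to power transcript drill-down and per-agent reporting later.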

CSAT Measurement

Track customer satisfaction scores over days, weeks, and months. Spot trends early and act before small dips become systemic problems.

Agent Performance

Attribute ratings to individual agents so you can recognise top performers, identify coaching opportunities, and balance workloads more effectively.

Workflow Optimisation

Compare satisfaction across different workflow branches. Discover which conversation paths delight visitors and which ones need refinement.

Customer Voice

Optional free-text comments surface insights that star ratings alone can't capture — feature requests, praise, and frustration points in visitors' own words.

How the Feedback Module Fits Into a Workflow

The Feedback module is designed as a terminal step — it sits at the very end of a conversation branch, right before the closing thank-you message. Its purpose is singular: capture a satisfaction signal while the experience is still top-of-mind. Because it requires only a single tap, it adds virtually no friction and doesn't interrupt the natural rhythm of the conversation.

In practice, the most effective workflows collect feedback after every resolution path. If a visitor's question is answered by AI, the conversation passes through the Feedback module before closing. If the visitor was escalated to a live agent, the Feedback module appears once the agent marks the conversation as resolved. This dual-path approach ensures you gather ratings for both automated and human interactions, giving you a complete picture of your support quality.

Example: Post-Resolution Feedback Collection

A SaaS company collects feedback after both AI-resolved and agent-resolved support conversations.

START
  → Message: "Hi! How can we help today?"
  → AI Chat: attempt AI resolution
  → Human Chat: escalate if needed
  → Feedback: single-tap star rating
  → Message: "Thanks! Your feedback helps us improve."

Step-by-Step Breakdown

  1. Welcome Message: A friendly greeting opens the conversation and tells the visitor that help is available immediately.
  2. AI Chat: The AI agent attempts to resolve the visitor's query using your knowledge base. Many common questions are answered here without any human involvement.
  3. Human Chat (if needed): If the AI can't fully resolve the issue, the workflow escalates to a live agent who picks up the conversation with full context.
  4. Feedback: Once the conversation is resolved — whether by AI or agent — the Feedback module presents a star rating prompt. The visitor taps their rating and optionally leaves a comment.
  5. Thank-You Message: A closing message thanks the visitor for their time and confirms the feedback has been received.
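The flow above can be sketched as data. This is only an illustration of the branching logic, assuming a simple "did the AI resolve it?" flag; the real workflow is configured in the visual builder, and the step names here are hypothetical.

```python
# Hypothetical representation of the example workflow as an ordered list
# of steps. Step types and the ai_resolves flag are illustrative only.
WORKFLOW = [
    {"type": "message", "text": "Hi! How can we help today?"},
    {"type": "ai_chat"},                      # attempt AI resolution
    {"type": "human_chat"},                   # runs only if AI could not resolve
    {"type": "feedback", "format": "stars"},  # single-tap rating prompt
    {"type": "message", "text": "Thanks! Your feedback helps us improve."},
]

def run(workflow: list, ai_resolves: bool) -> list:
    """Walk the workflow, skipping escalation when the AI resolves the query."""
    visited = []
    for step in workflow:
        if step["type"] == "human_chat" and ai_resolves:
            continue  # no escalation needed; go straight to feedback
        visited.append(step["type"])
    return visited
```

Note that both paths end at the same two steps, feedback then thank-you, which is exactly the dual-path coverage the section describes.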

Best Practices & Tips

Collecting useful feedback is as much about placement and simplicity as it is about asking the right question. These techniques will help you maximise response rates and gather actionable data.

Place at Every End Point

Don't limit feedback to a single branch. Add a Feedback module at the end of every conversation path — AI resolution, agent resolution, and self-service. This gives you consistent data across all support channels and ensures no interaction goes unmeasured.

Keep It Simple — Stars, Not Surveys

A five-star rating takes one tap and one second. A ten-question survey takes five minutes and gets ignored. Choose the simplest format that gives you usable data — star ratings or thumbs up/down work best for in-chat feedback.

Add an Optional Comment Field

After the rating, offer a single optional text box — "Anything else you'd like to share?" Visitors who feel strongly (positive or negative) will use it. Those who don't can skip it without friction. This captures the "why" behind the score.

Review Feedback Regularly

Schedule a weekly review of feedback data in your analytics dashboard. Look for trends, recurring themes in comments, and shifts in scores after workflow changes. Feedback only creates value when it's acted on consistently.

Frequently Asked Questions

What rating formats are available?

IMSupporting supports three built-in rating formats: a five-star scale, a thumbs up/down binary choice, and a 1–10 numeric scale. You can choose the format that best suits your use case when configuring the Feedback module in the visual builder. Star ratings are the most popular choice because they're universally understood and require minimal effort from the visitor.
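If you run different formats on different workflows, the three scales can be normalised to a common 0–1 satisfaction score for side-by-side comparison. The mapping below is a sketch under that assumption; it is not IMSupporting's documented conversion.

```python
# Sketch: normalise the three built-in rating formats to a 0-1 score.
# The linear mappings chosen here are assumptions for illustration.
def normalise(fmt: str, value) -> float:
    if fmt == "stars":    # five-star scale, 1-5
        return (value - 1) / 4
    if fmt == "thumbs":   # binary: True = up, False = down
        return 1.0 if value else 0.0
    if fmt == "numeric":  # numeric scale, 1-10
        return (value - 1) / 9
    raise ValueError(f"unknown rating format: {fmt}")
```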

Can I see feedback in the dashboard?

Yes. All feedback data flows directly into your IMSupporting analytics dashboard. You can view average ratings over time, filter by date range, workflow, or agent, and drill down into individual responses to read text comments alongside the full chat transcript. The dashboard also surfaces your overall CSAT score as a headline metric.
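For reference, the conventional CSAT headline number is the percentage of "satisfied" responses, typically ratings of 4 or 5 on a five-point scale. Whether the dashboard uses exactly this cut-off is an assumption; the sketch below shows the standard formula.

```python
# Standard CSAT formula: share of satisfied responses as a percentage.
# The 4-of-5 threshold is the industry convention, assumed here.
def csat(ratings: list, satisfied_threshold: int = 4) -> float:
    """Return CSAT as a percentage of ratings at or above the threshold."""
    if not ratings:
        return 0.0
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return 100 * satisfied / len(ratings)
```

For example, ratings of [5, 4, 3, 5] contain three satisfied responses out of four, giving a CSAT of 75%.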

Is feedback tied to specific agents?

When a live agent handled the conversation, the feedback rating is automatically attributed to that agent. For AI-only resolutions, the rating is attributed to the workflow and AI model. This distinction lets you assess human and automated performance separately, and helps team leads identify coaching opportunities or recognise outstanding agents.

Can I make feedback optional?

Absolutely. In the module settings you can toggle whether the feedback step is required or skippable. When set to optional, a "Skip" link appears below the rating prompt so visitors can proceed without leaving a score. Most teams keep it optional to avoid frustrating visitors, and still achieve healthy response rates because the in-chat format is so convenient.

Start Measuring Customer Satisfaction Today

Add the Feedback module to your workflow in minutes. Capture ratings, read comments, and track CSAT scores — no code required.