Getting Feedback: Responding to Customer Feedback

In a previous article, I covered how to get feedback from your customers on your knowledge base and help system content. Getting help center feedback is important so you know what’s working and how customers feel about the content. It can also be a way to start measuring the quality of post-sales content in help centers or knowledge bases.

In a product-led growth environment, this feedback is critical to reducing product abandonment rates. In any product environment, it’s a significant part of reducing product churn.

Responding to customer feedback is a two-part process: governance and response. I originally thought of breaking these into two separate articles, but I think they are best dealt with together. They feel too related to pull very far apart.

Governance in help center feedback

If you set up a method for customers to give you feedback, the last thing you want is for the writers to respond on their own, engaging with customers as soon as the feedback comes in.

Why is this a bad idea?

Because you need a plan and a process. Writers each responding however they think best leads to:

  • Writers being very helpful: One of the many things I love about the tech industry is that, as a community, we’re helpful people. We are always there to lend a hand. Sometimes too helpful. You don’t want a writer taking customer feedback as a personal challenge, making it a life goal to help or inform this one user no matter what. That writer has deadlines and probably shouldn’t spend several days tracking down the answer to one comment.
  • Inconsistent responses: Writers each sending out whatever information they think is needed. For one writer, that could be a defense of the content, perhaps citing the workflow limitations of the current processes. (I know, but we don’t have access to an AWS environment right now. We depend on the programmers to give us this information. They say maybe we’ll have access to the QA site next year. I hope they train us.) Another writer might send a simple thank-you-for-the-input message.
  • Multiple responses: If no one knows who should respond, many people respond, flooding the commenter’s inbox. Maybe some responses offer a solution, but the solutions are inconsistent with each other. This is related to the helpfulness above, and it doesn’t look good from the customer’s perspective.
  • No one responding at all: Everyone assumes someone else should respond. So, in an odd application of the bystander effect, no one responds. No one even puts the feedback into any system because, again, everyone thinks someone else did it.

There are more ways this goes wrong, but these are the ones I expect, and have seen, in any environment.

Set up the plan for help center feedback

When you set up the help center feedback mechanisms, set up the plan. It doesn’t need to be pages of detailed steps with roles and flow charts (yet). However, it does need to be clear about what the goals are. It should also explain why.

Include at least the following for the feedback sent to you:

  • Where the knowledge base feedback comes in: How is the feedback being sent to us? Is it in Gainsight? A JavaScript form that sends to Slack? (A minimal sketch of that plumbing follows this list.) If we’re using multiple feedback methods for customers to reach out, how does each one come in? Do we know where to look for each? Who in the company gets to see the feedback? Do other teams in the company need to be part of this?
  • Criteria for responding to customer help center feedback: Do we respond to feedback? Why, when, and how? If we respond, what’s the longest we should wait? Do other teams in the company need to be part of this? Do we pass the feedback to the Engagement Managers (if we have them)? What else should we do with feedback?
  • Feedback like Google Analytics: For inferred feedback, like analytics, what do we care most about? What questions are we asking of the analytics? What metrics should we be tracking now? What can we grow into?
  • Our agreed schedule to evaluate the knowledge base feedback: What’s our cadence for reviewing feedback? Do we want a team to meet several times a week to review it? Once a week, with people monitoring in between? Different cadences for different kinds of feedback? Do other teams in the company need to be part of this?

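To make the first bullet concrete: if one of your channels is a form that sends to Slack, the plumbing can be very small. Here’s a minimal sketch, not a production implementation. The DocFeedback shape and the SLACK_WEBHOOK_URL environment variable are assumptions for illustration; the payload itself is Slack’s standard incoming-webhook format (a JSON body with a text field).

```ts
// Minimal sketch: forward a docs-feedback form submission to Slack.
// SLACK_WEBHOOK_URL is a placeholder for an incoming-webhook URL you
// would create for your own workspace.

interface DocFeedback {
  pageUrl: string;                    // which article the reader was on
  rating: "helpful" | "not-helpful";  // the thumbs up/down
  comment?: string;                   // free-text comment, if collected
}

export async function forwardToSlack(feedback: DocFeedback): Promise<void> {
  const webhookUrl = process.env.SLACK_WEBHOOK_URL;
  if (!webhookUrl) throw new Error("SLACK_WEBHOOK_URL is not set");

  // Incoming webhooks accept a simple JSON payload with a "text" field.
  const text = [
    `New docs feedback (${feedback.rating})`,
    `Page: ${feedback.pageUrl}`,
    feedback.comment ? `Comment: ${feedback.comment}` : "(no comment)",
  ].join("\n");

  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  if (!res.ok) throw new Error(`Slack webhook failed: ${res.status}`);
}
```

The specific code matters less than the principle: every channel you promise in the plan needs a destination that someone actually watches.
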
Customers who have taken the time to submit documentation feedback appreciate it when a writer responds, even if the feedback was negative. Any feedback mechanism that you install must result in action. Customers need to know that the company will acknowledge and respond to any concerns or suggestions that they raise through the form. It’s also important to respond promptly to feedback you receive. If a customer doesn’t get a response within the first 72 hours, trends show that it’s likely they will never receive it.

Gales, Christopher; Splunk Documentation Team. The Product is Docs: Writing technical documentation in a product development group (p. 42). Kindle Edition.

Set up the process for help center feedback

Now that you have the business requirements for feedback set, it’s time for the process. Decide the specifics of meeting the business needs identified above.

This is where a flow chart can be very helpful. I like swim lane flow charts for this because they identify the roles as well as the flow. Decide the specifics of how you’re handling at least these:

  • Where the feedback comes in: Where specifically is this feedback going? Is it posted to a Slack channel? Which one? Does it also create a Jira ticket? (The first sketch after this list shows one way to automate that.) What are the tags associated with the tickets? Who specifically gets tagged, if anyone, in the Jira ticket?
  • Criteria for responding to knowledge base feedback: What are the criteria for feedback we respond to? Does everyone get an automatic thank-you response, with some followed up at a later date? What are those criteria? What content templates do we need for the responses? Who owns those templates, and where do the templates live? What can we automate? How do we deal with urgent feedback?
  • Feedback like Google Analytics: Who gathers this data? Who specifically compiles the data into a report? Where does that report go then? Do we have a specific place on Confluence or SharePoint where these reports get saved? Can we use a tool to compile the data automatically? (The second sketch after this list shows one option.) Who reports the results, and to whom?
  • Our agreed schedule to evaluate the help center feedback: What’s our exact cadence for reviewing feedback? Do we have a standing meeting scheduled for reviewing, or are we doing this another specific way? Are we reviewing feedback in a standup as part of a defined sprint? What specific actions are we taking on the feedback? Who does those actions, and in what timeframe? Do we look at every piece of feedback in the meeting, or do we need to develop a triage system?
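
If the answer to the Jira question above is yes, the ticket creation is worth automating so no feedback depends on a human remembering to file it. Here’s a minimal sketch against Jira Cloud’s v3 REST API; the DOCS project key, the docs-feedback label, and the JIRA_* environment variables are all placeholders for your own values.

```ts
// Minimal sketch: open a Jira ticket for each piece of feedback so nothing
// gets lost. Authenticates with the basic email + API-token scheme that
// Jira Cloud supports.

export async function createFeedbackTicket(
  pageUrl: string,
  comment: string
): Promise<string> {
  const auth = Buffer.from(
    `${process.env.JIRA_EMAIL}:${process.env.JIRA_API_TOKEN}`
  ).toString("base64");

  const res = await fetch(`${process.env.JIRA_BASE_URL}/rest/api/3/issue`, {
    method: "POST",
    headers: {
      Authorization: `Basic ${auth}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      fields: {
        project: { key: "DOCS" },    // placeholder project key
        issuetype: { name: "Task" },
        summary: `Docs feedback: ${pageUrl}`,
        labels: ["docs-feedback"],   // the tag your triage filters on
        // API v3 expects the description in Atlassian Document Format.
        description: {
          type: "doc",
          version: 1,
          content: [
            { type: "paragraph", content: [{ type: "text", text: comment }] },
          ],
        },
      },
    }),
  });
  if (!res.ok) throw new Error(`Jira create failed: ${res.status}`);
  const body = (await res.json()) as { key: string };
  return body.key; // e.g. "DOCS-123", handy for linking in the Slack post
}
```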

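For the analytics bullet, the compiling can also run on a schedule instead of depending on someone’s memory. This sketch pulls last week’s page views per help-center article from GA4 using the official @google-analytics/data Node client; the GA4_PROPERTY_ID variable is a placeholder, and authentication is assumed to come from Google’s standard application-default credentials.

```ts
// Minimal sketch: top help-center pages by views over the last 7 days,
// ready to drop into the weekly report.
import { BetaAnalyticsDataClient } from "@google-analytics/data";

const client = new BetaAnalyticsDataClient();

export async function weeklyPageViews(): Promise<Array<[string, number]>> {
  const [response] = await client.runReport({
    property: `properties/${process.env.GA4_PROPERTY_ID}`,
    dateRanges: [{ startDate: "7daysAgo", endDate: "today" }],
    dimensions: [{ name: "pagePath" }],
    metrics: [{ name: "screenPageViews" }],
    limit: 25, // the top 25 articles is plenty for a weekly review
  });

  return (response.rows ?? []).map((row) => [
    row.dimensionValues?.[0]?.value ?? "",
    Number(row.metricValues?.[0]?.value ?? 0),
  ]);
}
```

Whichever tool you use, make the output land in the agreed Confluence or SharePoint location automatically; a report that depends on someone remembering to run it will eventually stop being run.
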
Make sure these tasks get put into job descriptions, and that work time is allocated for them, regardless of who ends up doing what. This initiative will fail if you add it to already overburdened people. When the work is in job descriptions and has time allocated to it, feedback is managed better. You just get better results.

Review the plan and the process at least every 6 months

I see many organizations forget this part of the governance process. These are living documents and living processes. You must review them at least every 6 months to make sure they still serve your organization.

As you know more, you can do better.

The goals you define when you start may not be the goals that serve the company a year later. Setting the expectation that the process will be reviewed, and perhaps updated, helps people be more willing to work it. They’ll contribute ideas for improvement as they have them.

Continue to gather knowledge base feedback and help center feedback over time to get the power of a larger dataset. A larger dataset helps you see important trends. These trends can help you discover better ways to support your customers in any product environment.
