With the introduction of ChatGPT late last year, more and more people are thinking about how it will shape the future of work. Though it's certainly still early, tools like ChatGPT appear to be much more sophisticated than the AI tools of the past.
For those of us working in the support world, there seem to be equal parts "look how great ChatGPT is" and "it's going to take your job!" sentiment floating around out there. On one hand, it can help agents be more productive and find new ways to help customers faster. But on the other hand, some fear that this productivity boost may mean that human agents will have less to do.
Our leadership team has been thinking about the role of AI in support for years, and we don't share that concern here at Flight. Neither did the panelists in cognitive science and AI ethics we heard from at our New Horizons event last month. In fact, one of our key takeaways was that ChatGPT can actually serve to highlight and lift the importance of support people.
And really, the overarching message is that ChatGPT, like anything, is a tool, and like its predecessors, it can either be wielded poorly or well. It can help us or hurt us, depending on how we engage it, and it’s best in moderate doses.
That’s why we affectionately dubbed ChatGPT our “frenemy,” and it’s part of what informs our position that to work responsibly and effectively with ChatGPT, it’s all about when, where, and how we decide to interact with it, the boundaries we set with it, and the way we support our teams as we learn our way around it.
Our panel shared their thoughts on all of this and more, and we compiled the biggest takeaways from the discussion below.
ChatGPT is good for some things, but not everything
Though it is a powerful tool, ChatGPT isn't a cure-all. It shines at tasks that center on language and content creation. For example, it's great at checking the grammar of a response. Or, if you have bullet points for an answer to a question, it can easily expand them into paragraphs of text. It's also good at joining multiple pieces of text into one coherent piece, so if you need to use more than one macro, or canned response, it can stitch those together smoothly. Or, if you're just having one of those days where you or your messages are feeling a little "blah" – ChatGPT can help add a sprinkle of style.
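To make that concrete, here's a minimal sketch of what the bullet-points-to-reply idea could look like if your team reached for OpenAI's API instead of the chat window. The model name, prompt wording, and example notes are all assumptions for illustration, and any draft it produces still needs a human review before it goes out.

```python
# Hypothetical sketch: expanding an agent's bullet points into a full reply
# using OpenAI's Python SDK (not the ChatGPT web app). The model name and
# prompts are illustrative assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

bullet_points = """
- refund approved, should appear in 5-7 business days
- apologize for the delay
- remind them they can reply with any questions
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system", "content": "You write warm, concise customer support replies."},
        {"role": "user", "content": f"Turn these notes into a short reply to the customer:\n{bullet_points}"},
    ],
)

draft = response.choices[0].message.content
print(draft)  # the agent reviews and edits this draft; it is never auto-sent
```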
It's also good at analyzing large amounts of text. For example, you could upload a number of different conversations, and it could do a sentiment analysis to give you some high-level takeaways.
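If those conversations live in a helpdesk export rather than your clipboard, the same idea works in a simple loop. This is only a rough sketch, assuming you have a list of already-anonymized transcripts; the model, prompt, and labels are made up for illustration, and the output should be spot-checked against the source conversations.

```python
# Rough sketch: a quick sentiment pass over a handful of anonymized
# conversation transcripts. Model, prompt, and labels are assumptions.
from openai import OpenAI

client = OpenAI()

conversations = [
    "Customer: My order never arrived. Agent: I'm so sorry, let me look into it.",
    "Customer: The new update is great, thank you! Agent: Glad to hear it!",
]

for transcript in conversations:
    result = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Label the customer's sentiment as positive, neutral, or negative, with a one-line reason."},
            {"role": "user", "content": transcript},
        ],
    )
    print(result.choices[0].message.content)
```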
On the other hand, don't count on ChatGPT to do math or complex reasoning, or to give you insight on current events (here's looking at you, Bing!). It probably doesn't know much about the specifics of your product either, and perhaps it shouldn't always know those… but more on that later.
Exercise caution
ChatGPT can generate responses to a wide range of topics and questions, and it often provides helpful and informative answers. However, it's important to keep in mind that ChatGPT isn't perfect: it can give inaccurate responses, especially on unfamiliar topics, and it will do so quite confidently!
Because of this, it's good practice to make sure you check any answer it gives you. It’s also a good general rule of thumb to use ChatGPT mostly for topics that you’re already somewhat educated about, so that you have a baseline of knowledge on the topic and can spot inaccuracies and flesh out any weak points.
Another thing to keep in mind is that we still don't know exactly what ChatGPT does with the data we enter in prompts. For example, if you want to ask a question about your product, you may have to enter proprietary product information. But it's possible that ChatGPT could store that information and then share it with others later. The same is true of customer information. So, if you're ever on the fence about whether or not it's a good idea to include a certain bit of information in a prompt, it's probably best to err on the side of caution.
Establish guidelines
Continuing on the theme of smart use, one of the best things you can do is codify what’s acceptable for your team by establishing some written guidelines on when and how to engage with ChatGPT in a responsible way.
What those guidelines are will vary from team to team, but a few that came up in our conversation were:
- Be transparent about use - if you wrote something using ChatGPT, or used it to create a report, be upfront about it.
- Don't prompt with sensitive information - we're not totally sure how entered information is being used at this point, so it's important to keep those details out for now.
- Be accountable for checking errors - ChatGPT may have delivered the answer, but you're responsible for the accuracy of anything you present.
Along with consulting your team when creating guidelines, you should also talk with your security or compliance teams to see if they have any additional insights or concerns.
It won't replace us
While ChatGPT is a great sidekick in customer service, it can't replace people. It lacks empathy and emotional intelligence. So, in situations where customers are distressed, support agents can offer emotional support and understanding, which builds rapport and helps resolve the issue.
Another area where ChatGPT doesn't fare as well as humans is with complex or nuanced issues. Support folks sense the edges of things and the unspoken questions and needs — they can prompt customers to dig deeper and get to the real core of issues. They can also use their past experience, knowledge, and critical thinking skills to navigate these more complex issues with deep expertise, dexterity, and compassion.
Finally, ChatGPT may sometimes feel impersonal, and some customers may simply prefer the personal touch that comes with speaking to another human. Support agents are the only ones who can fill that role and establish a truly personal connection with customers.
Overall, while ChatGPT can be an effective tool that helps with some mostly language-based tasks and supplements the work of support agents, it won't replace us anytime soon. Rather, by taking over some of the parts of the work that make us feel like machines, it lets us focus on what we do best — being human and helping others in a personal way. And by falling short on the parts that only people can do, ChatGPT can highlight the importance of support people and help to elevate the function.
A bright future
The introduction of ChatGPT has sparked a lot of discussion about AI and its impact on the future of work. While it is a powerful tool, it is, after all, not magic, nor a one-stop-shop. It’s merely a means for us to be better at what we do — helping customers and building great experiences.