American activist Bryant H. McGill once said, “One of the most sincere forms of respect is actually listening to what another has to say.” For API providers, listening to the average user, accepting feedback, ingesting these experiences, and iterating on this information is a powerful exercise.
That being said, what questions should developers ask their users in the first place? Knowing what to ask, when to ask it, and how to ask it is just as important as ingesting the eventual response — perhaps even more important, considering how it sets the developer–user dynamic and tone of discourse for the future.
In this piece, we’re concerned with answering a singular question — what questions should API developers ask their users and how should they ask them?
Why Feedback is Important
We’re not paying lip service, here — consumer feedback is possibly the most important element of any development ecosystem. Building an API is essentially a design puzzle, matching the specific needs the user has communicated with the specific talents, skills, and limitations of the development environment.
When developers don’t match the needs of their customers to their development strategy and long-term goals, a rift is created between the user and the developer. This rift is the prime cause of most hardships when it comes to customer acquisition, retention, and user experience.
So how does a developer prevent this rift? That’s where communication comes into play. Developer relations takes many forms, ranging from basic social interactions on Twitter and official forums all the way to email campaigns and direct conversations. The utilization of effective and complete API metrics is also a key factor leading to the success of an API.
Now that we understand the importance of this feedback, the question arises — what questions should be asked?
What Do You Expect From This API?
As we noted in our article on drafting readable platform policy, expectations drive all business and social interactions. The matching of user and provider expectations will drive the overall user experience of the API and relevant services, and can go a long way towards informing both your public image for other potential users and the internal experience of utilizing the service.
Let’s frame this in common-sense terms. It’s your birthday! As part of your celebrations, you’ve decided to go out for dinner at a restaurant that just opened down the street. Looking at the prices, you decide it is rather expensive, but worth the extra cost.
When you go to this restaurant, your expectations of service, food quality, and interactions with the wait staff define your perception of that restaurant and that meal, for better or for worse. If you have low expectations of the restaurant but instead receive a five-star meal with wonderful service, the fault lay in your expectations. Conversely, if you expect the best and receive low-quality food and slow service, the fault lies with the expectations the provider set.
In this example, you expect quality for the high cost. The same is true with API users — discovering and utilizing your service over others requires an investment of time and effort, and failing to provide value to that resource investment can lead to a negative perception of the entire experience.
By asking directly what your users expect, you can not only gear your performance and service to a high expectation, you can help to level out their long-term expectations of development and implementation.
A great example of this is the development cycle adopted by Mojang, developers of the hit PC game Minecraft. During their Early Alpha development cycle, they made it clear that, though they had big plans, implementation would be slow. When they began selling Minecraft and entered “beta”, they noted that the user experience may be fraught with bugs and issues, but that users should expect periodic updates to fix these issues.
Mojang asked the users what they wanted. They asked them what they expected the game to look like, what items should be included, and what mechanics they would like to see. They then communicated the realities of the development platform, expressing what was feasible and what wasn’t. They tempered expectations while gathering these expectations as a platform from which to guide future development.
Everything from inventory slots to combat mechanics has been tweaked and adjusted based on user feedback. Every build of the game is released in a beta channel for user testing, and the Minecraft forums are often flooded with user experience data points that the dev team draws from during its second testing phase.
While this example isn’t necessarily in the API space, it does demonstrate specifically how powerful an open channel of communication is. Mojang is known amongst its community of followers as a company that cares, a company that communicates, and one that can be depended on to implement things when it says they will implement them, with few exceptions. Most importantly, users are aware of the realities of their expectations, and whether or not they can be implemented at all.
By making beta builds open, users can test the code — a benefit previously discussed in our piece on GitHub. Likewise, the open channel of communication allows for common security vulnerabilities and “happenstance” discoveries to be communicated and quickly fixed, preventing zero-day exploits and vulnerabilities.
API providers need to follow suit. Ask your users what they expect the functionality to look like. Ask users what they want the API to do, and how they want it to do these things. By understanding what your userbase expects, you can guide development in such a way as to minimize backlash and maximize satisfaction with the end product.
What Is Your Greatest Frustration with the API?
Often, issues with an API aren’t communicated directly — not out of a lack of channels for communication or out of fear — but out of simple embarrassment or perceived “bother” for the developer. Users can think “well, this is a beta API, so I won’t bother them with a request; hopefully it is resolved in a later revision”. Still other users can say “maybe this isn’t an API issue, but my own issue… I’m not a very good coder, after all”.
Much of this thinking can be harmful to the API ecosystem. By assuming the fault lies with the user, and not the provider, legitimate issues often go unchecked or unmanaged, only to be found out at a much later date as part of a bug audit or a feature-breaking update.
The best way to work around these issues is to engage the user in a conversation about what they perceive as “frustrating”. Instead of asking a leading question, such as “where does your usage often fail” or “what do you feel you can’t do”, ask about their frustrations. This will lead to some greater insight about the functionality of your API, and can potentially highlight issues that may otherwise be unaddressed.
Asking about common frustrations helps to inform where your user experience fails. When a user runs into something frustrating, sometimes it’s a result of confusing navigation, poor documentation, or faulty functionality notations. Highlighting these failures and addressing them improves the user experience, and thereby improves the quality of your API.
Secondly, allowing your users to vent frustrations helps guide development by showing weaknesses in functionality. When frustrations arise that aren’t related to documentation or other similar issues, they’re largely because of poor functionality. A user might find a call frustrating when it doesn’t perform as expected, or returns incomplete data.
While this is often corrected on the user side or processed through error-correction, finding these issues early on and correcting them helps put development on the right track. Identifying common issues and rectifying them can turn a middling API into a truly useful and functional service for the API’s users.
Finally, and most importantly, providers need to create a communication channel with developer users. Whether this means official API forums, a dev Twitter handle, a public email address, or even just a custom Google form, ensuring there’s a path to vent and discuss is just as important as accepting the feedback itself.
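However the channel is implemented, the intake side can stay deliberately simple — the goal is to lower the barrier to venting, not to gatekeep. Below is a minimal sketch of a feedback intake store; the field names, channels, and validation rules are hypothetical, not a prescription:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackStore:
    """Collects raw user feedback for later review by the dev team."""
    entries: list = field(default_factory=list)

    def submit(self, user_id: str, channel: str, message: str) -> bool:
        # Reject only empty submissions -- accept everything else,
        # since the point is to make venting frustrations effortless.
        if not message.strip():
            return False
        self.entries.append({
            "user": user_id,
            "channel": channel,  # e.g. "forum", "email", "form"
            "message": message.strip(),
            "received_at": datetime.now(timezone.utc).isoformat(),
        })
        return True

store = FeedbackStore()
store.submit("u42", "form", "The search endpoint times out on large queries.")
```

The only design choice that matters here is the near-total lack of validation: a user who suspects “maybe this is my own issue” should never be turned away by a strict form.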
Why Did You Choose Our API?
A “potential user” is of limited metric use, as they’re a complete unknown. Potential users are wildcards, and attracting them to your API in the first place may be a complex discovery process. “Current users”, however, represent high value and important metrics because they all share common interests that drew them to your API. Metrics are an incredibly important and powerful tool for API developers, and failing to tap into these types of metrics could doom an API to obscurity and low user integration.
Asking why users chose your particular API over the bevy of other choices on offer not only informs you about the specific user’s wants and behaviors, but also helps define your unique value proposition and aids your marketing efforts in targeting others in the same demographic.
The API economy has evolved, and there are now many types of consumers using APIs. You likely have a wealth of demographic data on your specific users: age groups, hardware and browser profiles, geographic location, and so on. Pairing this profile with what specifically enticed different consumers to your API can be powerful knowledge for segmenting lean marketing campaigns.
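As a rough sketch of that pairing, assuming survey responses have already been joined to profile records (all field names here are hypothetical), segmenting by acquisition reason is a one-pass grouping:

```python
from collections import defaultdict

def segment_by_reason(users):
    """Group user profiles by the reason they gave for choosing the API,
    so each segment can receive a targeted campaign."""
    segments = defaultdict(list)
    for user in users:
        segments[user["chose_because"]].append(user)
    return dict(segments)

# Hypothetical joined survey/profile records.
users = [
    {"id": 1, "region": "EU", "chose_because": "pricing"},
    {"id": 2, "region": "US", "chose_because": "documentation"},
    {"id": 3, "region": "EU", "chose_because": "pricing"},
]
segments = segment_by_reason(users)
# Two users were drawn by pricing, one by documentation.
```

Each segment can then be cross-tabulated against the demographic fields it carries, which is what makes a campaign “lean”: the message is matched to what actually drew that group in.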
If you know why people flock to your API, future development can be geared toward specialization, emphasizing the qualities and functionalities considered “unique” and “attractive” to these users, while mitigating the negative aspects that might otherwise have turned them away.
Keep in mind, however, that this sort of data can be overreacted to, leading to “mob rule”. While adjusting to the wants and desires of the majority user group is important, it is equally important that developers stave off feature creep and bloat. This balance is often framed as the 80/20 rule in agile software development.
If You Could Change Our API, How Would You?
Sometimes the best way to get useful, actionable information is to just flat out ask for it. Asking your users what they’d change about your API is akin to that age-old “if you were President/King” question, and opens up an avenue of direct change that would otherwise be obscured.
The key to implementing this question effectively is to ask for specific, actionable responses. A response like “better integration with media services” is not an actionable response, as it doesn’t list the services which the user would like to better integrate with, nor how those services tie into the API functionality.
A better response would be something like “increased tools to integrate media extensions with the API’s data handling suite”. When responses are given, extra details should be requested as a matter of course.
Keep in mind that it must be clearly communicated to users that not all changes are possible. Changes to core functionality outside of planned expansion, changes to how services interact that would break existing features, and so forth should all be noted as caveats.
Methods to Use for Accumulating Feedback
This is all well and good, but all the questions in the world won’t matter without an effective means by which the questions can be asked and the answers gathered. API providers can quickly find themselves in a sort of catch-22 — getting this feedback is important, but the source of this feedback can determine its reliability, and the method of procurement could even drive consumers away.
Likewise, understanding when to ask these questions is just as important as figuring out how to ask them. There is no magic-bullet answer here, but if we look at two theoretical applications of these concepts, we can see some things to avoid, as well as some methods that excel.
Example One – Ineffective Questioning
An API provider by the name of KAL Laboratories, or KAL for short, is performing a survey of their userbase to improve profitability and identify new markets. The lead programmer, Shawn, decides to start the questionnaire with a bevy of technical questions.
These first few questions hinge on the languages used by the API, and contain questions such as “do you like the Twitter API integration that we did with the data handling package?” and “what do you think about the use of Go?”.
The questionnaire gets passed on to the public resources specialist, Sandra, who adds her own questions. Things like “did you know about our work with non-profit charities?” and “what are your favorite websites?” abound.
Finally, the questionnaire is lightly edited, put into a stock form, and is blast-emailed to all the users who have registered their email as part of the API registration process. The questionnaire gets very few answers, and the userbase declines.
What Went Wrong?
Right off the bat, there are some huge issues with the methodology by which this questionnaire was constructed. First and foremost, the scope is extremely broad — identifying new markets and improving profitability is a huge topic, and one that respondents could easily feel put off by.
Secondly, the questions aren’t very useful. Because the tone of the questions varies wildly depending on who is asking them, swinging from highly technical to “fluff”, the questionnaire would be hard to take seriously at best, and annoying at worst. Even if the questions were well-formed, providing them in a stock form without branding or explanation makes it easier to disregard the survey.
Finally, the questionnaire was distributed in an annoying way. Respondents likely did not know their emails would be used for analytics like this, and as such, when they are inundated with what is essentially spam mail, their opinions of both the API provider and its product will decline rapidly.
Example Two – Effective Questioning
Seeing the poor response to their first questionnaire, KAL decides to re-evaluate their approach. First of all, they look at their motive and questions. Their original stated goal was to investigate how to “improve profitability” and “identify new markets”. While these are understandable goals, they are not properly framed. Improving profitability comes with understanding why profits are artificially depressed, and identifying new markets comes with understanding what your API does well (and equally what it does poorly).
With this in mind, a team meets to discuss a new questionnaire. Instead of relying on only two people to construct the survey, questions are submitted to the PR team for vetting, ten are selected, and the survey is restructured to ease feedback with easy questions up front. The entire experience is reviewed and refined internally.
A limited trial begins. A set of “power users” are asked if they would like to take a questionnaire during a routine support conversation. They agree to do so, and take the survey. They give their answers not only to the questions on the survey, but to pointed questions about the quality of the survey itself.
With this information, the team again reviews the questions and vets them before issuing a general email call. This time, instead of cold-calling their users, they simply notify the userbase that they will begin issuing periodic surveys to improve functionality — and that users may simply opt out of receiving them.
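That opt-out step is worth building into the distribution logic from the start, not bolting on later. A minimal sketch, assuming opt-outs are tracked as a set of email addresses (the names here are hypothetical):

```python
def survey_recipients(registered_emails, opted_out):
    """Return only users who have not opted out of surveys.

    Notifying the userbase first and filtering here is what keeps the
    eventual survey email from reading as spam.
    """
    return [email for email in registered_emails if email not in opted_out]

registered = ["a@example.com", "b@example.com", "c@example.com"]
opted_out = {"b@example.com"}
recipients = survey_recipients(registered, opted_out)
```

Filtering against the opt-out list at send time (rather than once, when the list is first built) means a user who opts out between surveys is respected immediately.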
After a short period, the first survey is sent out, metrics are analyzed, and the activity is deemed a resounding success.
What Went Right
The most important thing here is the fact that everything was done via a process. Before, everything from the construction of the survey to its application was haphazard. In this variation, the team not only pinpointed what specifically they wanted to know, but discussed how to ask.
This vetting is key to the process. While an engineer might legitimately want to know what a user thinks of their complex code and resultant front end, the question should not be “What is your opinion of our front end live page?”, but rather, “Is our portal easy to use?”.
Next, the team reached out to a select number of trustworthy users to test these questions. Being able to test questions in a microcosm gives you the benefit of extrapolating general responses to your survey without actually incurring the cost of performing the survey.
Finally, once these questions were tested, the team reached out to users to inform them of their intents — and to give them an opt-out option. Forcing questions upon your userbase is ineffective and counterproductive. Rather, tell your users why your survey exists, and make it completely optional. This gives the user a sense of control, and increases both the value of their responses and the frequency with which the average user will respond.
Think Like a User
At the end of the day, compiling these sorts of questions is incredibly useful for an API provider. It helps validate your product’s direction, as well as the viability, necessity, and desire for additional services with new monetization possibilities.
The takeaway from all of this is simple — think like a user. Imagine logging into your email one day to find a huge survey foisted upon you by a company that you at one time thought of as non-intrusive and privacy-respecting. Imagine this survey is filled with grammatical errors, nonsensical questions, and confusing terminology. How would you respond?
Keep this in mind while forming your questions and conducting your surveys — it could mean the difference between informative analytics and a ruined reputation. If done right, though, a survey process will demonstrate that you truly share your developer users’ journey and path to success.