Growth Dives

How to get answers quickly and avoid features that flop

UX research from Slack, ChatGPT & Grammarly

One of the most common things I hear from founders is:

We keep launching features that have no impact. They just flop and we don’t know why.

This is what’s known as a feature factory, something Marty Cagan covers in his book Inspired.

It’s where teams operate in a constant state of busyness, launching feature after feature without making a positive impact on core metrics.

One way to stop this is to test assumptions before building something: uncover what we’re silently assuming and de-risk the idea.

Enter: one-question surveys.

I first learned about these from Teresa Torres, Product Discovery Coach and author of Continuous Discovery Habits. I went on her Assumption Testing course in 2023 and loved it.

On the course I learned that one-question surveys are used to test risky assumptions or hypotheses.

A great one-question survey does three things:

  • Asks a simple question (if someone needs to read it twice, that’s bad)
  • Asks about actual behaviour (no ‘could you’ or ‘would you’ statements)
  • Is embedded in the user experience (asking somewhere too far removed from the behaviour can skew your results)
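To make those three rules concrete, here’s a rough sketch of how a one-question survey could be defined. It’s purely illustrative (the field names and the example question are mine, not from any of the products below):

```typescript
// A hypothetical shape for a one-question, in-product survey.
// Field names and the example question are illustrative only.
interface OneQuestionSurvey {
  id: string;
  question: string;        // one short sentence, readable on the first pass
  options: string[];       // mutually exclusive and covering all bases
  placement: "in-product"; // shown where the behaviour happens, not sent by email
}

const exampleSurvey: OneQuestionSurvey = {
  id: "file-sharing-habits",
  question: "How do you usually share files with your team?",
  options: ["Email", "Chat", "Shared drive", "Prefer not to say"],
  placement: "in-product",
};
```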

The simplicity and directness of this tactic lead to higher response rates than email surveys. This is due to the ‘Spark Effect’ 🧠 where users are more likely to take action when the effort is small.

As a result, you can get thousands of responses within a short time (depending on your sample size).

These one-question surveys are so subtle that they’re often hard to spot.

However, with my eagle eyes I have spotted three interesting and different examples from leading tech companies: Slack, Grammarly and ChatGPT.

🦅 🦅 🦅

We’ll run through each to look at how teams research a range of assumptions quickly to help inform new features, personalise experiences and increase product adoption.

Let’s go 🔎

First up: Slack’s work survey

Last week, I was in Slack when I saw something new bottom-right:

Where do you usually work with your team?
Get customised tips based on your working style

Notice how the question is concise and written in a clear way (you’re able to understand it on the first read).

There’s a choice of four responses below the question:

Remote, In person, A bit of both, Prefer not to say

These cover all bases and are mutually exclusive (i.e. they don’t overlap), meaning there’s no confusion about which to choose.

This execution passes the vibe check:

✅ Simple question

✅ Asks about actual behaviour

✅ Is embedded in the customer experience

I also love the language here, so casual with ‘a bit of both’.

Once I click 'a bit of both' the module changes, turning into a CTA with:

Work faster in slack
See tips for hybrid work

Next, I click the CTA after which an info panel appears, showing me tools for hybrid work.
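We can’t see how Slack wires this up behind the scenes, but the answer-to-tips flow could look something like this rough sketch (the type names and tips are my guesses, not Slack’s actual code):

```typescript
// Hypothetical mapping from a survey answer to a personalised set of tips.
// A guess at the pattern, not Slack's implementation.
type WorkStyle = "remote" | "in person" | "a bit of both" | "prefer not to say";

const tipsByWorkStyle: Record<WorkStyle, string[]> = {
  "remote": ["Start a huddle instead of booking a call", "Share async updates in a canvas"],
  "in person": ["Keep meeting notes searchable in channels"],
  "a bit of both": ["Use huddles for quick syncs with remote teammates", "Pin a canvas for your team's working agreement"],
  "prefer not to say": ["Browse Slack's general getting-started tips"],
};

function tipsFor(answer: WorkStyle): string[] {
  // The answer can be logged for aggregate research and used to personalise the UI.
  return tipsByWorkStyle[answer];
}
```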

Here the survey manages to do three things:

  1. Finds out the % of people who work remotely, in person or hybrid (perhaps to inform things in discovery or development)
  2. Personalises the user experience (for current users)
  3. Increases adoption (or at least awareness) of Slack’s feature set

If the features were premium, this could also impact conversion to paid. Here, though, they’re all free: huddles, canvas and integrations (up to 10 on the free plan).

My bet here is that they’re focusing on feature adoption amongst current users over new features.

I wonder if the assumption here is:

  • Problem: we have low adoption of certain features
  • Assumption(s): we think people need different features depending on their work (hybrid, remote, in person). We think people don’t know about the features. We think people don’t see how the features can help them specifically.
  • Solution: We think that if we personalise the features for users depending on their work, they will understand how Slack can help them and will be more likely to use our features.

Did it work on me?

No, not really. I huddle regularly, but I don't use Canvas. I know it’s there, but haven’t decided to move from Notion and Docs just yet…

Curious to hear if you think there's anything else they might be researching here (reply and let me know).

Next: Grammarly.

Grammarly’s feature feedback

I was researching with ChatGPT the other day when — as usual — the Grammarly icon got in the way.

It was the auto-citations tool, something that isn’t particularly necessary in ChatGPT now that citations are more prevalent in the UI (compared to 2024).

In trying to turn off Grammarly, I stumbled upon a one-question survey about feature feedback.

It was simple, clear and used emojis to help me decide.

It even had a free text entry field as a fast-follow for more information.

Notice how the copy for the question is large and bold, making it easy to read.

Each of the three options is combined with small copy: I dislike it, It’s OK, I like it.

This ensures valid results — that someone doesn’t mistake an emoji for another emotion.
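As a sketch of why those labels matter, here’s one way the options could be modelled so each emoji maps to an unambiguous value. The exact emojis and field names here are placeholders, not Grammarly’s code:

```typescript
// Hypothetical model for an emoji rating where each emoji carries an explicit label,
// so a face can't be misread as a different emotion.
interface FeedbackOption {
  emoji: string;
  label: string;      // the small copy shown under the emoji
  value: -1 | 0 | 1;  // the value actually stored with the response
}

const options: FeedbackOption[] = [
  { emoji: "🙁", label: "I dislike it", value: -1 },
  { emoji: "😐", label: "It's OK", value: 0 },
  { emoji: "🙂", label: "I like it", value: 1 },
];

interface FeatureFeedback {
  feature: string;
  rating: FeedbackOption["value"];
  comment?: string;   // the free text entry is an optional fast-follow
}
```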

In terms of execution, it passes 2/3 of the rules:

✅ Simple question

❌ Asks about real user behaviour

✅ Is embedded in the customer experience

As a result, this is more of a satisfaction survey than an assumption test. But what’s interesting is that the free text entry field turns this question into an assumption generator.

The team will be able to sift through responses to find new opportunities to improve the product experience.

This format is common for Grammarly, as a similar survey is accessible from the normal module.

So far, we’ve seen one-question surveys for customer research, personalisation and opportunity mining. Next up: one for training AI chatbots.

ChatGPT’s ‘Is this conversation helpful?’ question

Once I’d managed to close Grammarly, I continued with my original task: research with ChatGPT.

While doing that, I saw another one-question survey:

Is this conversation helpful so far? 👍 👎

In terms of trigger, this question seems to appear:

  • At the end of the first answer in a chat (not each time)
  • At the end of a conversation (but not always)

Once clicked, the already-hard-to-see question turns an even lighter shade of grey, and thanks me for my feedback.
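The trigger logic is invisible to us, but based purely on when the question showed up for me, a rough sketch might look like this (pure guesswork, not OpenAI’s actual logic):

```typescript
// A guess at when a conversation-level survey might be shown,
// based only on observed behaviour; not OpenAI's actual logic.
interface ConversationState {
  answersSoFar: number;
  looksFinished: boolean;       // e.g. no new user message for a while
  alreadyAskedThisChat: boolean;
  samplingRate: number;         // only ask a fraction of the time ("but not always")
}

function shouldAskIsThisHelpful(state: ConversationState): boolean {
  if (state.alreadyAskedThisChat) return false;
  const afterFirstAnswer = state.answersSoFar === 1;
  const atEndOfConversation = state.looksFinished;
  return (afterFirstAnswer || atEndOfConversation) && Math.random() < state.samplingRate;
}
```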

When I asked ChatGPT, apparently these responses are actually used to improve answers in real-time.

Notice how there’s also a persistent little thumbs up and down after each response too 👍 👎

When I tap on the thumbs down at the end of the response, there’s a fast follow question asking for more detail.

If I click ‘more’, I get a popup.

If I click one of the response options on the original ‘tell us more’ prompt, I get a tailored response.

When I click the thumbs up, I get no follow-up; the thumb just darkens to confirm I’ve clicked it.

For some reason this feels…cute?

The icons are so small they’re like little baby thumbs 👶 🍼

These micro-feedback surveys are another example of the Spark Effect in action 🧠 i.e. small, interactive details that make the experience feel fun and effortless.

What’s most interesting to me is what the team do with these answers. From the UI, we can see that the top reasons someone reports a bad response are:

  • The style being off-putting
  • The answers being incorrect
  • ChatGPT not following instructions
  • People not liking ‘memory’ (i.e. ChatGPT learning about you)
  • ChatGPT not doing what you want it to

This survey will be gathering data on how common these are, allowing the team to adjust the product to solve the biggest problems. For instance, this feedback has likely informed some UI changes in January 2025:

  • Assumption: people think it’s not following instructions
  • Change 👉 showing more of the ‘thinking’ UI
  • Assumption: people doubt the accuracy of answers
  • Change 👉 adding more citations

It’s a great example of a feedback loop in action, all fuelled by micro surveys.
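If you’re curious what ‘gathering data on how common these are’ might look like in practice, here’s a tiny illustrative sketch of tallying thumbs-down reasons so the biggest problem rises to the top (nothing here is from OpenAI, it’s just the general pattern):

```typescript
// Illustrative tally of thumbs-down reasons, so the most common problem
// can be prioritised. The reason labels loosely mirror the options in the UI.
type Reason =
  | "style"
  | "incorrect answer"
  | "didn't follow instructions"
  | "don't like memory"
  | "didn't do what I asked";

function countReasons(feedback: Reason[]): [Reason, number][] {
  const counts = new Map<Reason, number>();
  for (const reason of feedback) {
    counts.set(reason, (counts.get(reason) ?? 0) + 1);
  }
  // Sort so the most frequent reason comes first.
  return Array.from(counts.entries()).sort((a, b) => b[1] - a[1]);
}

// e.g. countReasons(["incorrect answer", "style", "incorrect answer"])
// 👉 [["incorrect answer", 2], ["style", 1]]
```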

In conclusion: surveys solve all problems (not really).

We’ve seen how one-question surveys can be used to:

  • Test whether assumptions are true
  • Test the relative importance of different assumptions
  • Find out more about your customers
  • Personalise the experience
  • Train AI chatbots
  • Assess how current features are doing
  • Generate a bank of answers within which you can mine for new opportunities

However, it's easy to look at some great examples and think that this sort of activity will solve all problems. The hardest part is working out:

  1. What your assumptions are in the first place
  2. Who you need to ask
  3. How to phrase it to actually test your assumption (and not get dud data)

That's the tricky part.

If you’re thinking of running these, spend extra time on the question and answers.

Think: short, clear, easy.

And definitely test them with someone with fresh eyes who doesn’t work in product.

Would love to see some more examples of one-question surveys.

Seen any good ones out in the wild recently?


Done! Thank you SO much for reading (all the way to the bottom, wow look at you go).

Any feedback? If you're feeling bored, delighted, angry, confused - any emotion - would love to hear.

See you next week!

Rosie 🕺


Growth Dives

Each week I reverse engineer the products of leading tech companies. Get one annotated teardown every Friday.

Share this page