
I Tried an AI UX Research Tool. Here’s what happened…

I was inspired by a coworker, Hez, who gave a talk about AI. “Just try some of them out, I’ll send you a list,” he kindly offered before making good on that promise of emergent everyday AI tools. The resultant list included the following:


  1. https://tome.app/ A deck builder

  2. https://openai.com/blog/chatgpt ChatGPT

  3. https://klever.kraftful.com/ An interview text insight summarizer

  4. https://www.usegalileo.ai/ A “UI AI Builder” that’s actually just a sign-up list.


In his hip patched jacket and overpriced sneakers, Hez leaned over my desk, watching me navigate to one of the earlier iterations of Kraftful, a tool that promised to accelerate research insights.


I uploaded the following non-sensitive transcript from an internal side project, after carefully anonymizing it. As an org, we have been iterating on our own understanding of discovery and framing, and I have been through enough 1:1 interviews and resultant synthesis to already have a good grasp of the emerging themes:


The sample transcript


After pasting the above transcript into the system, I shifted nervously from foot to foot. Is this the future of research? Will this threaten the employability of researchers everywhere? Will it amplify our powers? It felt like the beginning of an era, and perhaps the end of my paycheck.


As soon as the feeling abated, the results were ready:




To say that it was right about roughly half of the 13 resulting insights is generous. The others didn’t seem to track with anything.


Generally, it seems like a simple trick: flip a statement into a solution, problem, or suggestion, then add a dash of universal, horoscope-like generalities that sound right, such as wanting to “learn more,” as filler. These are things I’ve seen first-year UX designers do when they haven’t yet uncovered information in the right research format: just change the few insights they have into what they want to know. Can’t find a problem? Easy, take a gain and flip it. User said they like winning at tennis? What’s the problem? Easy: the problem must be… losing at tennis?
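That mechanical flip is simple enough to sketch in a few lines. This is a deliberately naive toy, not what the tool actually does under the hood; the function name and the tiny antonym table are hypothetical, for illustration only:

```python
# Toy sketch of the naive "flip" trick: mechanically turn a stated gain
# into a supposed pain by swapping in an antonym. Hypothetical names;
# trivially small antonym table, for illustration only.

ANTONYMS = {"winning": "losing", "like": "dislike", "fun": "boring"}

def flip_gain_to_pain(statement: str) -> str:
    """Invert a gain statement word by word -- no context, no nuance."""
    return " ".join(ANTONYMS.get(word, word) for word in statement.split())

print(flip_gain_to_pain("winning at tennis"))  # -> "losing at tennis"
```

Note what the flip can never surface: the bigger pain of not being able to find a tennis partner at all, because that insight is not derivable from the original statement by any amount of word-swapping.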


Except that’s not how research works. A gain isn’t always the equal and opposite of a pain. Humans are nuanced. Furthermore, the magnitude of these feelings and the context in which they arise are paramount. People like winning at tennis, and losing isn’t fun, but it also isn’t as big of a pain as not being able to find a partner to play with in the first place. The only way to find that out (the pain of not being able to find a tennis partner) is by talking to more warm-blooded, sweat-banded humans.


Speaking of magnitude, removing a pain is also, generally speaking, a stronger motivator than a potential gain. If I have a headache I don’t care as much about earning $5 — I just want my headache to go away. That kind of nuance is lost here.


Thinking deeper: on the one hand, an argument I’ve heard for the benefits of this kind of tool is that it seductively offers “a different perspective.” On the other hand, we tend to underestimate how much those different perspectives seep into our subconscious thinking. Consider advertising, which is incredibly effective: we hear something once or twice, and when we walk by it in the store it begins to feel familiar, like something of our own background or choosing. That’s the mere-exposure effect. By reading insights that aren’t real, am I tainting my brain’s ability to sift through disparate information and work its magic to find the real nuggets of wisdom therein? Perhaps it can be a corruption as much as an assistant.

Overall, at this stage, the tools seem incredibly dangerous for novice researchers who haven’t matured enough to understand the subtlety and complexity of sound research technique.


Later on, I used ChatGPT as another foray into AI. This was at the suggestion of another coworker. Bless my coworkers and students for keeping me current. Our goal was to come up with a list of application names for a class graduation tracking app. The list would be shared with internal stakeholders and run by people in our user group for association tests and feedback. We had come up with a paltry 15 names after a half hour of brainstorming as a group. We entered some parameters into the chat: “A list of 20 names for an application that tracks graduation and certificates.” We tried a few more variations on that theme. It performed beautifully, mixing in synonyms, antonyms, and even related words. Even comic characters made their debut in some of the names. We were able to generate about 15 more that we liked by pulling out the best ones from those rounds. The name that ended up testing the best was ultimately one we (humans) had generated, not the AI; however, we still found the assist useful to our process.

Overall, it looks like AI is promising for generative work, but I am going to strongly advise against using it for evaluative research insights until it significantly matures.


I recall going to a David Bowie costume exhibit at the Brooklyn Museum and seeing his “cut-ups” lyric generator. He had a system for taking dissociated words and combining them for songwriting inspiration. Maybe he was onto something similar to the value of generative AI. Perhaps there is also an association between people who like to wear fancy coats and people who like to explore the cutting edge of technology.


“A winter’s day, a bitter snowflake on my face
My summer girl takes little backward steps away
Jack Frost took her hand and left me, Jack Frost ain’t so cool

Sell me a coat with buttons of silver
Sell me a coat that’s red or gold
Sell me a coat with little patch pockets
Sell me a coat ’cause I feel cold”


– Sell me a coat, by David Bowie


If product management, development, and design strategy are the kind of thing you are looking to get started for your business, here at Tanzu Labs we proudly partner with companies in the public and private sectors to enable them to find a winning product-market fit, build and modernize apps, and more. Drop by for a free consultation at our office hours.


TLDR: It flipped problems and solutions, didn’t track magnitude, and made up incorrect insights from whole cloth. It might be helpful for getting unstuck or generating a different perspective, but at this point it seems like it could mis-steer the product direction as a copilot. A better use of AI we also tried? Assisting with product-name brainstorming. AI is currently best suited for generative, not evaluative, research.


