Enabling the next evolution in self-service analytics with generative AI that goes well beyond chat

CONTEXT

Rasgo is on a mission to democratize data by empowering B2B business users to find answers to key business questions on their own through an AI-powered experience that puts LLMs to work on enterprise data warehouses.

The app harnesses the power of generative AI to enable marketing, FP&A, and C-suite team members to deep-dive into their datasets and unearth key insights through an intuitive, iterative conversational (chat-based) BI experience, and a range of supporting features. Not only does this bypass the need to bog down already swamped data teams with ad hoc requests, it also allows for richer, more contextual analysis from SMEs now empowered to make data-driven decisions faster.

This new, intuitive platform not only makes access to information easier but also helps businesses make faster, smarter decisions by providing instant access to context-rich answers. By streamlining these processes, we've helped clients reduce the time and resources spent on data retrieval, which ultimately boosts their efficiency and productivity.

SUMMARY

As the sole UX designer at Rasgo, I had the exciting opportunity to shape a product from the ground up and steer the direction of the entire product design cycle. I was initially hired as a Senior UX Designer, but as we pivoted from an ML feature store to a self-service data querying tool, and then to a chat-based interface with generative AI, in the span of 6 months, I quickly took on the role of a 0-1 founding designer. I worked closely with teammates across all departments, ranging from the CEO to Engineering, Customer Success, and Sales, in order to create intuitive and delightful experiences for our Fortune 2000 users.

My work touched all parts of the platform and impacted how everyday users interacted with their large, enterprise data warehouses. During my tenure, I worked hands-on in every stage of the design process—from digging into user research and sketching out wireframes to crafting prototypes and conducting usability tests. I led end-to-end designs for our desktop app, revamped the website with marketing, developed pitches and briefs, co-developed a design system with engineering, and established a design practice and language for our team. Most notably, I spearheaded the design of our MVP AI SaaS product, and created core features and areas of the product like Chat, Table management, Metadata management, AI readiness, Admin management, and more.

Designs aside, I helped drive revenue growth by influencing product, marketing, and customer support functions as a key member of our small team.

COMPANY

Rasgo (pending acquisition)

INDUSTRY

Data Science
Business Intelligence
Artificial Intelligence

ROLE

Senior UX Designer

DURATION

Feb 2022 - Aug 2024

TEAM

2 Front End Engineers
3 Back End Engineers
1 CTO / Head of Product

PROJECT TYPE

Web App

Key Projects

I worked on many more projects during my time at Rasgo; if you're interested in chatting more about them, reach out to me.

AI Insights

BUSINESS NEED

In talking to clients, we found that the majority of business users were at a loss as to how to come up with the right questions to ask our AI. Most faced the “blank canvas” problem, struggling to know where to start when querying data. Others, who weren't on the data side of their teams, simply didn’t possess the skills to come up with complex queries from scratch, or to determine how to dissect information in a meaningful way to elicit actionable results from our AI.


Data from session recordings, along with qualitative surveys and interviews we conducted over the span of two weeks, all suggested that users wanted information served to them in a digestible, scannable format that allowed them to cherry-pick what to look at and then decide what to explore further.

GOAL

Design a solution that proactively combs through active and relevant datasets to unearth data and context that business users can use as a springboard for deeper exploration. The main task here was to remove the onus from the user to come up with a good prompt and contextualize raw data returned from SQL queries in a way that made sense and was relevant to an average business user.

MY ROLE

I led research, design, and testing of a new ‘Insights’ feature. This AI-driven feature surfaced relevant information to business users via data-rich cards in a scrollable feed. Acting as an RSS feed / highlights reel of sorts, Insights provided tailored metrics, analysis, charts, contextual information, data points, and follow-up questions, and served as a starting point for more in-depth data analysis.

Culled from datasets relevant to a business user’s job and interests (captured via a preferences section), Insights were expandable, shareable pieces of content that refreshed every time the user accessed the app, ensuring novelty and discoverability. They could also be sorted into themed boards or collections that, in turn, offered a summarized, ‘big-picture’ view of a business question.

Customizable, Conversational BI Interface

BUSINESS NEED

Though users appreciated the proactive reporting delivered by the Insight cards, many felt constrained by the one-way approach. Business users wanted a more freeform way to interact with our AI and iterate through follow-up analysis, the way they would with a colleague from their data team.

GOAL

Create an intuitive and visually rich conversational BI interface that allows users to follow up on information from an Insight, or perform ad-hoc data analysis from scratch using natural language.

MY ROLE

I designed a simple chat interface with an additional ‘edit’ mode available exclusively for Admins. The Business User version of the interface allowed users to converse with our AI in an iterative, intuitive experience. Suggestions (hardcoded chat-starter prompts) on the empty state outlined how to better leverage our AI for data analysis, while custom widgets in the chat displayed product-specific artifacts like documents, datasets, and data previews.


Users could access this new chat interface via a dedicated module in the menu, or from an Insights page, allowing for two separate workflows - one supporting data analysis from scratch, and one supporting follow-up analysis contextualized to a parent Insight. The Admin version of the page included edit access to allow Admins to manage what information and prompts showed up on the empty chat / splash screen.

On the product side, we wanted to enhance response accuracy by enabling users to specify which datasets, tables, or artifacts were relevant to their queries. To achieve this, I incorporated a selection menu into the prompt bar within the chat module. This feature allowed users to reference specific artifacts, such as datasets, SQL queries, and templates, in their prompts. These references were then provided to our AI, offering it additional context and direction.

Prompt Templates

BUSINESS NEED

With the introduction of Chat, data-savvy business users were finally empowered to conduct their own detailed analysis, supercharged with AI, while data-averse users could use the same feature to ask broad-strokes questions and get actionable insights without much effort.

Still, something was amiss.

One of the biggest pain points we unearthed from reviewing usage data and interviews was that the Chat-based approach bred redundancy. Users often typed in the same prompt, with a couple of key variables interchanged, multiple times a day. We saw a lot of prompts like ‘How many units of [[product type]] were sold in the past [[timeframe]]’, ‘Use data from [[table]] to calculate a Deals Won and Deals Lost report for [[timeframe]]’, and ‘How many outstanding invoices do we have from [[company name]]’.

We realised that over 70% of chat messages were routine, repetitive prompts asking our AI to help with mundane, everyday tasks like report generation, pulling metrics, writing emails, generating OKRs, and running routine data analysis based on a predefined format or formula. Through qualitative interviews with power users, we identified the need for a way to automate these everyday workflows and remove the cumbersome task of manually typing them into the chat.

GOAL

Create an intuitive feature set to help users automate routine tasks (within the boundaries of what AI can automate) using natural language, removing a key redundancy baked into a Chat-based approach. Our aim was to help users free up time so that they could spend more of their workday doing knowledge work to add value to their function and the company, and less on low-level non-strategic tasks.

MY ROLE

I considered multiple approaches to this solution, and ultimately settled on a suite of advanced features under the banner of ‘Templates’ to help users create workflows and prompts that codified and automated routine tasks.

Unlike the existing hardcoded conversation-starter prompts that lived in the limited space on the default Chat screen, the Template module offered a wider range of possibilities. It supported pre-canned SQL queries, single-step and multi-step prompts, parameterized prompts (i.e. prompts with variables), and format-specific prompts (i.e. prompts that resulted in a chart, report, or PowerPoint being generated). Users could create public or private templates that contained a title, description, prompt, and optional fields like a dataset selector or SQL query, and that were subsequently shareable and reusable across their organisation. Department power users and enablement teams could now easily create Templates for common workflows, reports, and metrics that supporting teams and business users could use to fast-track their daily tasks.
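
At their core, the parameterized prompts amount to simple placeholder substitution before the prompt is sent to the AI. A minimal sketch, using the [[variable]] syntax from the prompt examples above (the function itself is hypothetical, not Rasgo's implementation):

```python
import re

def fill_template(template: str, params: dict) -> str:
    """Replace [[variable]] placeholders with user-supplied values."""
    def substitute(match):
        key = match.group(1).strip()
        if key not in params:
            raise KeyError(f"Missing value for template variable: {key}")
        return str(params[key])
    return re.sub(r"\[\[(.+?)\]\]", substitute, template)

prompt = fill_template(
    "How many units of [[product type]] were sold in the past [[timeframe]]?",
    {"product type": "widgets", "timeframe": "30 days"},
)
# → "How many units of widgets were sold in the past 30 days?"
```

Raising on a missing variable mirrors the UX goal: the template form can prompt the user for every required field before the query ever runs.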

Other features included:

Advanced workflow prompts: This feature enabled users to automate complex analyses and tasks with multi-step prompts that used multi-step RAG for more sophisticated, nuanced reasoning. These augmented natural-language prompts directed our AI or specific AI agents to conduct intricate, stepped retrieval and reasoning tasks, where each subsequent step hinged on the output of the preceding one.

Template builder: A creation tool to enable advanced prompt engineering. The template builder allowed users to create, edit, test, iterate on, and save prompts for personal or organisation-wide use. Users could use the template builder sandbox to test and run templates and tweak prompts and steps prior to saving or sharing them with their team. The builder also allowed users to employ specific agent functions like multi-format artifact generation, image ingestion, web browsing, or search capabilities to further direct what the output of a template could be.

Template marketplace: An organisation-specific template library, including suggested prompts from the Rasgo team, and other prompts that users in the organisation had saved and shared. The goal of this transparent marketplace was to drive AI adoption across companies by enabling data-averse users to enlist the power of AI regardless of their individual skillset. Other goals we hoped this would accomplish were to increase org-wide AI fluency and transparency, help users crowdsource and surface useful, efficient prompts that their team members were already using to streamline their own workflows, and create an org-wide dictionary of shared GenAI tools that contributed to the company’s knowledge base in a format that users were free to explore and employ at leisure.

Template scheduling: Schedule templates to run at specific times and at a predefined cadence. This feature, developed for sales, FP&A, and data teams, gave users a quick, hands-off way to gather updated information on trends, metrics, and reports that are usually generated and reviewed on a set timeframe (a daily sales report, a weekly deals report, etc.). Users could create a prompt template that pulled data, request that it be formatted in a particular way (i.e. a table, chart, or CSV), optionally set it to deliver to a Slack thread or email, and then determine the cadence at which they wanted it to run.
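
The multi-step workflow prompts above follow a simple chaining pattern: each step's prompt can reference the preceding step's output, and the runner feeds results forward. A sketch of that idea (all names here are illustrative, and `ask` stands in for the actual model call):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    prompt: str  # may reference the prior step's output via [[previous]]

def run_workflow(steps: list, ask: Callable[[str], str]) -> dict:
    """Run steps in order, feeding each step's output into the next prompt."""
    previous, results = "", {}
    for step in steps:
        prompt = step.prompt.replace("[[previous]]", previous)
        previous = ask(prompt)          # call out to the AI / agent
        results[step.name] = previous
    return results

# Stubbed model call, purely for illustration
demo = run_workflow(
    [Step("pull", "List last week's closed deals"),
     Step("summarise", "Summarise the following for an exec audience: [[previous]]")],
    ask=lambda p: f"<answer to '{p}'>",
)
```

Scheduling then reduces to running a saved workflow like this on a cadence and routing `results` to a delivery channel such as Slack or email.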

AI Agents

BUSINESS NEED

With AI, a one-size-fits-all approach, though effective, isn’t always ideal. We realised pretty quickly that funneling all user questions in an organisation to a single AI resulted in muddled responses that were sometimes inaccurate, or that referenced datasets and terms irrelevant to the specific business user. We needed a way to help our AI provide more accurate and relevant responses that better reflected the needs and nuances of the users it served.

GOAL

Implement a new, diversified AI with specialised ‘Agents’ - multiple, parallel instances of our core AI that can be optimized for a specific business function. These new Agents would have access to varying LLMs and models, and could be fine-tuned with specific data, base prompts, knowledge, or skills relevant to a single function, role, or use case.

To support this, we needed to enable Admins to create and manage the underlying configurations, and to allow users to properly identify and select the right Agents for their intended task.
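
Conceptually, each Agent was a configuration layered on top of a registered model: a base prompt, a function, and the data it was allowed to see. A minimal sketch of that shape (field names and values are illustrative, not Rasgo's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    model: str                 # which registered LLM this agent runs on
    base_prompt: str           # function-specific system prompt / fine-tuning context
    datasets: list = field(default_factory=list)  # datasets the agent may query

registry: dict = {}

def register_agent(agent: Agent) -> None:
    """Add an agent to the org's registry so users can select it in chat."""
    registry[agent.name] = agent

register_agent(Agent(
    name="FP&A Analyst",
    model="gpt-4",
    base_prompt="You answer finance questions using the FP&A warehouse.",
    datasets=["finance.actuals", "finance.forecasts"],
))
```

Scoping each Agent to a function's datasets and jargon is what addresses the "muddled responses" problem described above.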

MY ROLE

I led design of the new ‘Agents’ feature across multiple touchpoints in the product.

I retrofitted our existing settings and configuration pages to include a conditional workflow that allowed Admins to register new LLMs and models, and subsequently create and manage Agents tied to these models. For users, I created new Agent selection modules across the app that allowed them to select Agents for their chats, while educating them on how to properly choose the right one for the intended task.

Database AI “Readiness”

BUSINESS NEED

An ongoing issue we faced during our AI pivot was quality and accuracy. Users were excited by what the product could do, and the ease with which it could answer high-level data analysis questions, but wanted more. The list of issues included the AI not knowing what specific internal business jargon meant or how certain metrics were calculated, failing to grasp underlying business logic or nuance, and being unable to distinguish between different units (of measurement, currency, etc.) or understand timeframes.

Our previous approaches to the problem focused on providing users with tools to better direct our AI or increase the specificity of their prompts. This time, we turned our focus to the root cause - the quality of the data itself. Since an AI’s response to a question is only as good as the underlying data, organisations with high ambiguity in their documentation (or little to no documentation to begin with), missing metadata, and improperly labelled fields were prone to responses with incorrect data and skewed analytics, which subsequently resulted in flawed insights and decisions.

GOAL

Empower organisations to assess the AI-readiness or ‘health’ of their EDW, and provide them with tools to do so. Our approach to this covered a suite of features geared towards increasing the percentage of meaningful AI interactions among users.

MY ROLE

I spearheaded research, design, and testing for the entire suite of features that were deployed. Some of what we implemented included:


• Notes - a Rasgo-specific metadata layer with a predefined set of questions we wanted users to answer to help our AI better understand a dataset

• Glossary - a space where Admin and users could codify business logic and jargon to clarify context

• Automated column descriptions - AI-generated descriptions to cover lapses in documentation or poorly documented datasets, and reduce inconsistencies in data formats, units, and definitions

• Dataset Visibility - toggles to make datasets accessible or inaccessible to our AI at the organisation and user-level, to ensure that only relevant and clean tables were being considered

• AI Readiness Scoring - a proprietary scoring system that ranked each dataset on its “AI-readiness”, i.e. how prepared the dataset was, by our measure, to interact with our AI

• AI Manager - a management page / dashboard that provided Admins with a bird's-eye view of their entire data warehouse’s AI health, with a triaged list of datasets that needed review and further work
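
To illustrate the scoring idea only (the actual system and its weights were proprietary), a naive readiness score might simply check metadata completeness across the layers listed above:

```python
def readiness_score(dataset: dict) -> float:
    """Score a dataset 0-100 on documentation completeness (hypothetical checks)."""
    columns = dataset["columns"]
    described = sum(1 for c in columns if c.get("description"))
    checks = [
        bool(dataset.get("description")),      # table-level description exists
        bool(dataset.get("notes")),            # Rasgo Notes answered
        described == len(columns),             # every column documented
        bool(dataset.get("glossary_terms")),   # jargon mapped in the Glossary
    ]
    return 100.0 * sum(checks) / len(checks)

sales_orders = {
    "description": "Sales orders",
    "columns": [
        {"name": "qty", "description": "units sold"},
        {"name": "sku"},  # undocumented column drags the score down
    ],
}
# → 25.0 (only the table-level description check passes)
```

A score like this is what lets the AI Manager dashboard triage which datasets need review before they are exposed to the AI.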