UX Researcher · Cleveland, OH

Research that moves
people and products
forward.

Nearly 11 years of qualitative and mixed-methods research across financial services, healthcare, retail, and enterprise technology.

Research that earns its keep.

My entire career has been built in a paid client consulting model, which means every research engagement has to be good enough to justify the investment. That environment has made me a fundamentally rigorous researcher, because there is no internal safety net when clients are paying for credibility.

I specialize in qualitative and mixed-methods research across the full arc of the product lifecycle, from early generative discovery through evaluative usability testing and longitudinal tracking. I am equally comfortable designing a complex research program and delivering a rapid usability evaluation under a compressed timeline.

Most recently I led the discovery research for a full-scale redesign of a 30-year-old internal client management platform, enabling the successful migration of 700+ clients and mitigating significant retention risk. That work sits alongside a three-year longitudinal research program I co-led for a major insurance provider, a mixed-methods direct mail study for a national television company, and a broader body of research across banking, healthcare, fashion retail, and enterprise technology.

I am currently completing a Master of Arts in Human-Computer Interaction at SUNY Oswego, building on a career-long commitment to research craft and a genuine belief that understanding people is the most important thing a product team can do.

Current Role
Senior Researcher / UX Designer
R.R. Donnelley (RRD)
Education
MA, Human-Computer Interaction
SUNY Oswego (Expected 2027)

BBA, Entrepreneurship
Kent State University
Certifications
Nielsen Norman Group UX Certified
CITI Program: Research Ethics

Three case studies.

Each project reflects a different dimension of research practice. Click any study to read the full version.

Case Study 01

Direct Mail Optimization for a National Television Provider

UX Research and Prototyping to Improve Customer Engagement and Reduce Costs

Moderated In-Depth Interviews · Eye Tracking Analysis · Rapid Prototyping

A national television provider had a direct mail piece that consistently outperformed every other piece in their mailing schedule but had no insight into why. I ran a single mixed-methods study combining 25 in-person in-depth interviews with Tobii eye tracking to create a dual evidence base that informed a full redesign — reducing costs by 50% without sacrificing engagement.

Four key performance drivers identified: simplicity, visual hierarchy, trust cues, information load
Z-pattern gaze behavior confirmed as primary engagement driver via eye tracking
One-page redesign reduced estimated print and postage costs by 50%
Read full case study

Case Study 02

Discovery and Validation Research for a Legacy Platform Redesign

Research-Led Modernization of a B2B Print Ordering System

Stakeholder Interviews · Observational Research · Iterative Usability Testing · UX Design

A 30-year-old B2B ordering platform was at risk of losing major accounts. I led a three-phase research program across discovery, sprint validation, and pre-launch testing. The most critical insight came from observation: users had normalized problems they could no longer articulate.

Observational sessions surfaced pain points interviews missed entirely
All at-risk accounts retained following launch
Task completion time reduced by more than 40%
Read full case study

Case Study 03

Longitudinal Rapid Iterative Usability Testing for a Telematics Platform

A Three-Year Dual-Audience Research Program Across Multiple Release Cycles

Evaluative Usability Testing · Longitudinal Research · Dual-Audience Testing · Iterative Validation

A national insurance provider needed a sustained usability research program for their telematics platform. I co-led 14 research sprints over three years, testing agents and consumers in parallel with documented design rationale and validation outcomes at every stage.

Dual-audience structure surfaced divergences between agent and consumer mental models
Documented institutional research memory across the full program
Critical issues declined measurably sprint-over-sprint
Read full case study

A full research toolkit.

Methods are selected based on what the question actually requires.

Qualitative Research
  • In-depth interviews (IDI)
  • Moderated usability testing
  • Observational research
  • Focus groups
  • Diary studies
  • Concept testing
  • Stakeholder interviews
Quantitative & Mixed Methods
  • Survey design & analysis
  • Unmoderated usability testing
  • Eye tracking
  • A/B testing
  • Card sorting & tree testing
  • Longitudinal tracking
  • Dual-audience testing
Tools & Practice
  • Figma · Axure RP
  • Dscout · SurveyMonkey
  • Dovetail · Mural
  • Tobii Eye Tracking
  • Research operations
  • Iterative sprint validation
  • Research program design

Industries that demand rigorous research.

Deep experience across regulated and complex industries, with broad research exposure spanning many more.

Banking · Insurance · Healthcare Providers · Healthcare Payers · Fashion & Apparel · Ecommerce · Grocery & Food · Home Improvement · Specialty & Jewelry Retail · Consumer Packaged Goods · Enterprise Technology · B2B SaaS · Telecommunications · Direct Mail & Print Marketing · Automotive & Motorsports · Travel & Hospitality

Let’s work together.

Open to senior UX researcher and research consulting roles. Remote preferred.

Case Study 01

Direct Mail Optimization for a National Television Provider

UX Research and Prototyping to Improve Customer Engagement and Reduce Costs

My Role
Lead UX Researcher
Methods
Moderated IDI · Eye Tracking · Rapid Prototyping
Participants
25 in-person sessions
Deliverable
Redesigned prototype · A/B test ready

The brief

A major national television service provider came to our UX research team with an unusual problem: they had a direct mail piece that consistently outperformed everything else in their mailing schedule — a so-called champion letter — but had no insight into what made it work or how to replicate that success across their other communications.

The ask was twofold: understand what made the champion letter effective at a specific, attributable level, and use those insights to inform a redesign of their broader direct mail program that was cheaper to produce without sacrificing that effectiveness.

As the lead researcher and designer, I owned the full research lifecycle — study design, recruitment, moderation, analysis, and synthesis — and transitioned directly into a design intervention phase, prototyping a redesigned version of the champion letter informed by the findings.


What we needed to understand

  • Which elements of the existing mailers are capturing customer attention and driving response behavior?
  • Why does the champion letter outperform the others — what specific attributes differentiate it at the level of user behavior and perception?
  • How can the most effective mailer be redesigned to reduce print and postage costs while maintaining or improving engagement?

Mixed methods: why both mattered here

We used a mixed-methods approach combining in-depth interviews with Tobii eye tracking. The combination was deliberate: self-reported interview data tells you what people think they noticed, and eye tracking tells you what they actually looked at. Running them together let us validate whether stated preferences aligned with actual behavior — and in several cases they did not, which was the most important finding of the study.

In-Depth Interviews (IDIs)

We conducted 25 moderated, in-person in-depth interviews with participants across a range of ages, demographics, and geographic locations. Each participant was shown six direct mail pieces from the client's existing library, including the champion letter. Sessions explored first impressions, messaging clarity, emotional resonance, perceived trustworthiness, and call-to-action comprehension. I coded and synthesized the qualitative data from all 25 sessions.

Eye Tracking

Within the same session, Tobii eye tracking captured where participants looked during the critical first eight seconds of exposure to each mailer — the window most predictive of sustained engagement. I personally analyzed the gaze patterns, fixation points, scan paths, and time-on-element data across headlines, imagery, and calls to action for each piece, alongside the qualitative coding from the interviews.

Why eight seconds? Research on direct mail and print advertising consistently identifies the first eight seconds as the make-or-break window for engagement. Eye tracking that window specifically let us understand what the champion letter was doing in the moment of highest stakes.

What the research revealed

01 — Simplicity outperformed complexity

The champion letter had the simplest visual structure of the six pieces: one clear CTA, a single benefit-focused headline, and minimal visual clutter. Eye tracking confirmed the advantage through more efficient gaze patterns and lower time-to-CTA.

02 — Visual hierarchy controlled gaze

Eye tracking showed that the champion letter guided viewers along a predictable Z-pattern scan path — brand mark, headline, primary benefit, CTA — in sequence. Mailers with competing visual elements scattered attention and led to higher drop-off rates within the first few seconds.

The divergence finding: Participants described finding two lower-performing mailers "visually appealing" in interviews. Eye tracking showed they were spending the most time on decorative imagery and consistently missing the CTA. What people said they noticed and what they actually looked at were different things — which is exactly why combining methods mattered.

03 — Trust cues were essential

Participants responded positively to specific trust signals: brand recognition elements, testimonials, and personalized addressing. Mailers lacking these were described as generic or overly promotional — reducing both perceived credibility and response intent.

04 — Information overload actively hurt performance

Mailers with more than three calls to action, dense body copy, or small fonts were the lowest performers. Eye tracking validated that participants skipped over large text blocks entirely, regardless of their informational value.


From research to prototype

I prototyped a redesigned version of the champion letter in Axure, addressing the client's cost concerns and the engagement drivers identified in research. The goal: cheaper to produce while retaining every element that contributed to the champion letter's performance.

  • One-page layout — eliminated content that eye tracking showed participants weren't reading, cutting print and postage costs by an estimated 50%
  • Optimized visual hierarchy — restructured layout to guide attention along the confirmed Z-pattern gaze path
  • Streamlined copy — shorter sentences and bulleted benefits, reducing cognitive load
  • Retained all trust elements — brand marks, testimonials, and personalization that drove the highest positive response
  • Single CTA — eliminated the decision paralysis created by multiple competing calls to action

What happened as a result

50%
Estimated reduction in print and postage costs through one-page redesign
A/B
Redesigned prototype entered active campaign testing cycle
6
Mailers evaluated across 25 participants using mixed methods
4
Attributable performance drivers identified and validated

What this project reinforced

This project reinforced that UX research methods have meaningful application well beyond digital interfaces. Combining eye tracking with qualitative interviews created a level of evidence that purely qualitative work couldn't have achieved — and the divergence between stated preferences and actual gaze behavior was the most persuasive finding in the room.

The constraint of reducing cost without reducing impact was a useful creative challenge. It forced the design toward greater simplicity, which turned out to be exactly what the research recommended anyway. The cost problem and the engagement problem had the same solution.

Tobii Eye Tracker · Axure RP

Case Study 02

Discovery and Validation Research for a Legacy Platform Redesign

Research-Led Modernization of a B2B Print Ordering System

My Role
Lead UX Researcher & Supporting Designer
Methods
Stakeholder Interviews · Observational Research · Iterative Usability Testing · UX Design
Duration
Multi-phase across full redesign lifecycle
Stakes
Key accounts at risk of churn

A platform that had outlived itself

A major enterprise client relied on a B2B ordering platform that had been in production since the early 2000s. The system was visually and functionally outdated — workflows were cumbersome, the interface was not responsive across devices, and the system showed its age in ways that had become impossible to ignore.

Several key accounts had communicated directly that a modernized platform was a prerequisite for continued partnership. Retaining these accounts required a redesign that actually worked, and doing it based on assumptions rather than evidence risked accelerating churn rather than reversing it.

I led the research program across the full redesign lifecycle — from initial discovery through pre-launch validation — ensuring design decisions were grounded in what users actually needed. I also contributed to the design work itself, supporting our head of design across wireframes, flows, and interaction design as the redesign moved from research into execution.


Why research-led mattered here

The platform had been in use for over two decades. Users had deeply internalized workarounds and had stopped consciously registering the friction they were navigating every day. Relying solely on interviews would have missed most of the actual problems because users no longer experienced them as problems — they had become part of the workflow.

The normalization problem: When people use a broken system long enough, they stop seeing it as broken. They adapt their behavior, develop workarounds, and eventually those workarounds feel like the natural way to do things. Standard interview questions stop working because the frustration has been buried under years of adaptation. Observation cuts through this.

Three phases across the full lifecycle

Phase 01

Discovery Research on the Legacy System

Before a single wireframe was drawn, we conducted discovery research to understand how users actually worked with the existing platform. This included stakeholder interviews with client-side users, internal support staff, and account managers to map workflows and surface conscious pain points. We ran observational sessions watching users complete common tasks in the live system — looking specifically for workarounds and friction that users had normalized. We synthesized findings in a workshop with the product and design team, translating discoveries into prioritized opportunity areas.

Phase 02

Iterative Validation Through Design Sprints

As design work progressed through agile sprints, we ran ongoing validation research. After key sprints covering the ordering workflow, dashboard design, and account management, we conducted moderated usability sessions with representative users from both the client-facing and internal support populations. Each round surfaced refinements: navigation labeling that didn't match user mental models, form field sequencing that created confusion, and feature placements that made logical sense in isolation but didn't fit actual user workflows. Findings fed directly back into the sprint cycle.

Phase 03

Final Validation Before Launch

Before deployment, we ran a comprehensive final validation covering the full redesigned platform end-to-end. This served as a quality gate — confirming that cumulative design decisions held together as a coherent experience and that improvements made throughout the project actually resolved the pain points identified in discovery. It also surfaced a small number of edge case issues addressed before release.


What discovery research revealed

01 — The ordering workflow was the highest-priority pain point

Across all user segments, the ordering process was identified as most critical. Too many steps, unclear sequencing, no progress indicators. Observation revealed users frequently backing out of partially completed orders and starting again — a behavior they described as normal.

02 — Mobile access was a requirement, not a preference

Several client users reported attempting to place orders from mobile and abandoning the task entirely. Responsive design was elevated from a preference to a non-negotiable design requirement.

03 — The dashboard was the most visited and least useful screen

The existing dashboard provided almost no useful information on first view despite being the landing page after login. Users wanted order status, reorder shortcuts, and account information — all requiring multiple navigational steps to find.

04 — Workarounds had become invisible

When asked what frustrated them, users consistently underreported. Watching them use the system revealed the reality: tabbed browsers held open as manual progress indicators, screenshots to track order details, paper notes to compensate for missing system memory. These weren't described as problems because they had become part of the job.


What the research program produced

100%
At-risk client accounts retained following launch
40%+
Reduction in task completion time (pre vs. post-launch comparison)
3
Research phases embedded across the full redesign lifecycle
New
Platform became a sales asset contributing to new account wins

What this project reinforced

The most important lesson was the value of observation alongside interviews in discovery. Users who had worked with the legacy system for years had genuinely stopped seeing its problems. Watching people work in the system revealed issues no interview would have surfaced.

Embedding research across all three phases meant the final product wasn't a guess. Every major design decision had been touched by user feedback at some point, which gave the team and the client genuine confidence at launch.

Moderated Usability Testing · Stakeholder Interviews · Observational Research · UX Design · Figma · Axure RP

Case Study 03

Longitudinal Rapid Iterative Usability Testing for a Telematics Platform

A Three-Year Dual-Audience Research Program Tracking Product Quality Across Multiple Release Cycles

My Role
Co-Lead Researcher
Duration
3 years · 14 sprints
Audiences
Insurance agents · Consumers
Scale
16–18 participants per audience per sprint

Building a research program, not just a study

A national insurance provider engaged our team to build and operate a sustained usability research program supporting the development and continuous improvement of their telematics platform — a connected suite of web and mobile products enabling customers to enroll in usage-based insurance programs and manage their participation over time.

The platform served two distinct audiences with fundamentally different needs: insurance agents who facilitated enrollment on behalf of customers, and consumers who went through enrollment themselves and then interacted with the active program experience. Understanding both audiences — and the divergences between them — was central to the research design from the start.

I co-led this engagement alongside one research partner over three years. We shared moderation and synthesis responsibilities across all 14 sprint cycles, structured deliberately to prevent researcher fatigue and maintain consistency across a multi-year program.


The sprint structure

Each sprint consisted of four rounds of testing structured across a six-week cycle. The re-test rounds within each sprint let us measure whether design changes were actually working before the next round of development.

Week 1

Moderated usability sessions with 16–18 insurance agents on current interface

Week 2

Moderated usability sessions with 16–18 consumers on current interface

Weeks 3–4

Design and development iteration based on findings from both audiences

Week 5

Agent re-test: same task scenarios on updated interface to validate changes

Week 6

Consumer re-test: same scenarios on updated interface

Then repeat

Next sprint begins after a 2–3 month development period

Why four rounds per sprint matters: Single-round testing tells you what is broken. Four-round testing tells you whether the fix worked — and whether it introduced new problems. The re-test rounds were the mechanism that separated this program from a standard research cadence.

Documentation discipline

Sessions used a mix of moderated remote and in-person testing. Agent sessions were structured around realistic task scenarios from common workflows. Consumer sessions were scenario-based with exploratory segments to surface unprompted reactions.

What made this program longitudinal in a meaningful sense was the documentation standard we maintained. Each sprint produced a structured findings report documenting: usability issues by severity and frequency, design changes made in response to prior findings, the validation status of those changes (resolved, partially resolved, newly introduced), and new issues in the current sprint.

By sprint 8, we could trace any current design decision back to its research origin — and identify cases where early decisions had been validated, modified, or reversed based on subsequent testing.


What the longitudinal structure made possible

01 — Dual-audience insight

Agents and consumers frequently had different mental models of the same interface. What made intuitive sense for an agent facilitating enrollment often created confusion for a consumer navigating independently. A clear example: the enrollment confirmation page used language agents understood immediately because it matched their professional vocabulary — but consumers found it ambiguous and at times alarming. The dual-audience structure surfaced this kind of divergence consistently.

02 — Tracking design decisions over time

We built an evidence base that went beyond typical usability findings. We could show not just that something was a problem, but whether a proposed solution worked — and if a fix introduced new issues, we caught those in re-test rounds before they reached users at scale.

A pattern across multiple sprints: Design changes that simplified the visual interface often resolved primary task issues but introduced new issues with secondary tasks. Simplification in one area created expectation shifts in another. Longitudinal tracking let us see these ripple effects in a way single-round testing never could.

03 — Sprint-over-sprint quality improvement

Across 14 sprints, critical usability issues declined measurably. The product team began anticipating the kinds of issues we would surface, which meant later sprint designs were more mature and the critical issue count in each round was lower. This is what a sustained research program produces that episodic studies cannot.

04 — Managing researcher fatigue

Co-leading a three-year program required deliberate management. We rotated primary moderation responsibilities across sprints, cross-checked each other's synthesis to prevent individual bias from calcifying, and maintained explicit documentation of our evolving assumptions so we could identify when prior exposure was shaping how we interpreted new findings.


What the program produced over three years

14
Research sprints completed over three years
4
Testing rounds per sprint across two distinct audience segments
2x
Platform surfaces covered — enrollment acquisition and active program experience
Down
Critical usability issues declined measurably sprint-over-sprint
  • Documented history of design changes and validated outcomes across the full program lifecycle
  • Program expanded mid-engagement to cover the active dashboard experience, reflecting growing client confidence
  • Research cited internally as a key factor in the product team's confidence shipping changes at pace without sacrificing quality

What three years of longitudinal research taught me

This engagement was formative in my understanding of what longitudinal research can do that point-in-time studies cannot. Testing the same workflows repeatedly across a product's evolution created a kind of institutional memory for the user experience — we knew what had been tried, what had worked, what had created new problems, and why.

The dual-audience structure pushed me to think carefully about how the same interface can generate entirely different experiences depending on who is using it and why. A design that works for an expert agent using the platform daily can be genuinely confusing for a consumer who may only interact with it once or twice during their policy period.

Managing a program at this scale also required discipline in documentation. Findings that aren't captured rigorously become unretrievable, and in a longitudinal program, losing that thread has compounding costs. The documentation standard we maintained wasn't overhead — it was the mechanism that made the program valuable as a whole.

Evaluative Usability Testing · Moderated Testing (Remote & In-Person) · Longitudinal Research · Dual-Audience Testing · Iterative Validation