UX Researcher · Cleveland, OH
Nearly 11 years of qualitative and mixed-methods research across financial services, healthcare, retail, and enterprise technology.
About
My entire career has been built in a paid client consulting model, which means every research engagement has to be good enough to justify the investment. That environment has made me a fundamentally rigorous researcher, because there is no internal safety net when clients are paying for credibility.
I specialize in qualitative and mixed-methods research across the full arc of the product lifecycle, from early generative discovery through evaluative usability testing and longitudinal tracking. I am equally comfortable designing a complex research program and delivering a rapid usability evaluation under a compressed timeline.
Most recently I led the discovery research for a full-scale redesign of a 30-year-old internal client management platform, enabling the successful migration of 700+ clients and mitigating significant retention risk. That work sits alongside a three-year longitudinal research program I co-led for a major insurance provider, a mixed-methods direct mail study for a national television company, and a broader body of research across banking, healthcare, fashion retail, and enterprise technology.
I am currently completing a Master of Arts in Human-Computer Interaction at SUNY Oswego, building on a career-long commitment to research craft and a genuine belief that understanding people is the most important thing a product team can do.
Selected Work
Each project reflects a different dimension of research practice. Click any study to read the full version.
Case Study 01
UX Research and Prototyping to Improve Customer Engagement and Reduce Costs
A national television provider had a direct mail piece that consistently outperformed everything else in their mailing schedule, but no insight into why. I ran a single mixed-methods study combining 25 in-person in-depth interviews with Tobii eye tracking to create a dual evidence base that informed a full redesign, reducing production costs by 50% without sacrificing engagement.
Case Study 02
Research-Led Modernization of a B2B Print Ordering System
A 30-year-old B2B ordering platform was at risk of losing major accounts. I led a three-phase research program across discovery, sprint validation, and pre-launch testing. The most critical insight came from observation: users had normalized problems they could no longer articulate.
Case Study 03
A Three-Year Dual-Audience Research Program Tracking Product Quality Across Multiple Release Cycles
A national insurance provider needed a sustained usability research program for their telematics platform. I co-led 14 research sprints over three years, testing agents and consumers in parallel with documented design rationale and validation outcomes at every stage.
Capabilities
Methods are selected based on what the question actually requires.
Experience Across
Deep experience in regulated and complex industries, with broader research exposure spanning many more.
Contact
Open to senior UX researcher and research consulting roles. Remote preferred.
Case Study 01
UX Research and Prototyping to Improve Customer Engagement and Reduce Costs
Overview
A major national television service provider came to our UX research team with an unusual problem: they had a direct mail piece that consistently outperformed everything else in their mailing schedule — a so-called champion letter — but had no insight into what made it work or how to replicate that success across their other communications.
The ask was twofold: understand, at a specific and attributable level, what made the champion letter effective, and use those insights to inform a redesign of the broader direct mail program that was cheaper to produce without sacrificing the qualities that drove its performance.
As the lead researcher and designer, I owned the full research lifecycle — study design, recruitment, moderation, analysis, and synthesis — and transitioned directly into a design intervention phase, prototyping a redesigned version of the champion letter informed by the findings.
Research Questions
Methodology
We used a mixed-methods approach combining in-depth interviews with Tobii eye tracking. The combination was deliberate: self-reported interview data tells you what people think they noticed, and eye tracking tells you what they actually looked at. Running them together let us validate whether stated preferences aligned with actual behavior — and in several cases they did not, which was the most important finding of the study.
In-Depth Interviews (IDIs)
We conducted 25 moderated, in-person in-depth interviews with participants across a range of ages, demographics, and geographic locations. Each participant was shown six direct mail pieces from the client's existing library, including the champion letter. Sessions explored first impressions, messaging clarity, emotional resonance, perceived trustworthiness, and call-to-action comprehension. I coded and synthesized the qualitative data from all 25 sessions.
Eye Tracking
Within the same session, Tobii eye tracking captured where participants looked during the critical first eight seconds of exposure to each mailer — the window most predictive of sustained engagement. I personally analyzed gaze patterns, fixation points, scan paths, and time-on-element data across headlines, imagery, and calls to action for each piece, alongside the qualitative coding from the interviews.
Key Findings
01 — Simplicity outperformed complexity
The champion letter had the simplest visual structure of the six pieces: one clear CTA, a single benefit-focused headline, and minimal visual clutter. Eye tracking confirmed the advantage, showing more efficient gaze patterns and lower time-to-CTA than any other mailer.
02 — Visual hierarchy controlled gaze
Eye tracking showed that the champion letter guided viewers along a predictable Z-pattern scan path — brand mark, headline, primary benefit, CTA — in sequence. Mailers with competing visual elements scattered attention and led to higher drop-off rates within the first few seconds.
03 — Trust cues were essential
Participants responded positively to specific trust signals: brand recognition elements, testimonials, and personalized addressing. Mailers lacking these were described as generic or overly promotional — reducing both perceived credibility and response intent.
04 — Information overload actively hurt performance
Mailers with more than three calls to action, dense body copy, or small fonts were the lowest performers. Eye tracking validated that participants skipped over large text blocks entirely, regardless of their informational value.
Design Intervention
I prototyped a redesigned version of the champion letter in Axure, addressing the client's cost concerns while preserving the engagement drivers identified in research. The goal: cheaper to produce while retaining every element that contributed to the champion letter's performance.
Outcomes
Reflection
This project reinforced that UX research methods have meaningful application well beyond digital interfaces. Combining eye tracking with qualitative interviews created a level of evidence that purely qualitative work couldn't have achieved — and the divergence between stated preferences and actual gaze behavior was the most persuasive finding in the room.
The constraint of reducing cost without reducing impact was a useful creative challenge. It forced the design toward greater simplicity, which turned out to be exactly what the research recommended anyway. The cost problem and the engagement problem had the same solution.
Case Study 02
Research-Led Modernization of a B2B Print Ordering System
Overview
A major enterprise client relied on a B2B ordering platform that had been in production since the early 2000s. The system was visually and functionally outdated — workflows were cumbersome, the interface was not responsive across devices, and the system showed its age in ways that had become impossible to ignore.
Several key accounts had communicated directly that a modernized platform was a prerequisite for continued partnership. Retaining these accounts required a redesign that actually worked, and doing it based on assumptions rather than evidence risked accelerating churn rather than reversing it.
I led the research program across the full redesign lifecycle — from initial discovery through pre-launch validation — ensuring design decisions were grounded in what users actually needed. I also contributed to the design work itself, supporting our head of design across wireframes, flows, and interaction design as the redesign moved from research into execution.
The Challenge
The platform had been in use for over two decades. Users had deeply internalized workarounds and had stopped consciously registering the friction they were navigating every day. Relying solely on interviews would have missed most of the actual problems because users no longer experienced them as problems — they had become part of the workflow.
Research Approach
Phase 01
Discovery Research on the Legacy System
Before a single wireframe was drawn, we conducted discovery research to understand how users actually worked with the existing platform. This included stakeholder interviews with client-side users, internal support staff, and account managers to map workflows and surface conscious pain points. We ran observational sessions watching users complete common tasks in the live system — looking specifically for workarounds and friction that users had normalized. We synthesized findings in a workshop with the product and design team, translating discoveries into prioritized opportunity areas.
Phase 02
Iterative Validation Through Design Sprints
As design work progressed through agile sprints, we ran ongoing validation research. After key sprints covering the ordering workflow, dashboard design, and account management, we conducted moderated usability sessions with representative users from both the client-facing and internal support populations. Each round surfaced refinements: navigation labeling that didn't match user mental models, form field sequencing that created confusion, and feature placements that made logical sense in isolation but didn't fit actual user workflows. Findings fed directly back into the sprint cycle.
Phase 03
Final Validation Before Launch
Before deployment, we ran a comprehensive final validation covering the full redesigned platform end-to-end. This served as a quality gate — confirming that cumulative design decisions held together as a coherent experience and that improvements made throughout the project actually resolved the pain points identified in discovery. It also surfaced a small number of edge case issues addressed before release.
Key Findings
01 — The ordering workflow was the highest-priority pain point
Across all user segments, the ordering process was identified as most critical. Too many steps, unclear sequencing, no progress indicators. Observation revealed users frequently backing out of partially completed orders and starting again — a behavior they described as normal.
02 — Mobile access was a requirement, not a preference
Several client users reported attempting to place orders from mobile and abandoning the task entirely. Responsive design was elevated from a preference to a non-negotiable design requirement.
03 — The dashboard was the most visited and least useful screen
The existing dashboard provided almost no useful information on first view despite being the landing page after login. Users wanted order status, reorder shortcuts, and account information — all requiring multiple navigational steps to find.
04 — Workarounds had become invisible
When asked what frustrated them, users consistently underreported. Watching them use the system revealed the reality: tabbed browsers held open as manual progress indicators, screenshots to track order details, paper notes to compensate for missing system memory. These weren't described as problems because they had become part of the job.
Outcomes
Reflection
The most important lesson was the value of observation alongside interviews in discovery. Users who had worked with the legacy system for years had genuinely stopped seeing its problems. Watching people work in the system revealed issues no interview would have surfaced.
Embedding research across all three phases meant the final product wasn't a guess. Every major design decision had been touched by user feedback at some point, which gave the team and the client genuine confidence at launch.
Case Study 03
A Three-Year Dual-Audience Research Program Tracking Product Quality Across Multiple Release Cycles
Overview
A national insurance provider engaged our team to build and operate a sustained usability research program supporting the development and continuous improvement of their telematics platform — a connected suite of web and mobile products enabling customers to enroll in usage-based insurance programs and manage their participation over time.
The platform served two distinct audiences with fundamentally different needs: insurance agents who facilitated enrollment on behalf of customers, and consumers who went through enrollment themselves and then interacted with the active program experience. Understanding both audiences — and the divergences between them — was central to the research design from the start.
I co-led this engagement alongside one research partner over three years. We shared moderation and synthesis responsibilities across all 14 sprint cycles, a structure chosen deliberately to prevent researcher fatigue and maintain consistency across a multi-year program.
Program Design
Each sprint consisted of four rounds of testing structured across a six-week cycle. The re-test rounds within each sprint let us measure whether design changes were actually working before the next round of development.
Week 1
Moderated usability sessions with 16–18 insurance agents on current interface
Week 2
Moderated usability sessions with 16–18 consumers on current interface
Weeks 3–4
Design and development iteration based on findings from both audiences
Week 5
Agent re-test: same task scenarios on updated interface to validate changes
Week 6
Consumer re-test: same scenarios on updated interface
Then repeat
Next sprint begins after a 2–3 month development period
Methodology
Sessions used a mix of moderated remote and in-person testing. Agent sessions were structured around realistic task scenarios from common workflows. Consumer sessions were scenario-based with exploratory segments to surface unprompted reactions.
What made this program longitudinal in a meaningful sense was the documentation standard we maintained. Each sprint produced a structured findings report documenting: usability issues by severity and frequency, design changes made in response to prior findings, the validation status of those changes (resolved, partially resolved, newly introduced), and new issues in the current sprint.
By sprint 8, we could trace any current design decision back to its research origin — and identify cases where early decisions had been validated, modified, or reversed based on subsequent testing.
Key Research Contributions
01 — Dual-audience insight
Agents and consumers frequently had different mental models of the same interface. What made intuitive sense for an agent facilitating enrollment often created confusion for a consumer navigating independently. A clear example: the enrollment confirmation page used language agents understood immediately because it matched their professional vocabulary — but consumers found it ambiguous and at times alarming. The dual-audience structure surfaced this kind of divergence consistently.
02 — Tracking design decisions over time
We built an evidence base that went beyond typical usability findings. We could show not just that something was a problem, but whether a proposed solution worked — and if a fix introduced new issues, we caught those in re-test rounds before they reached users at scale.
03 — Sprint-over-sprint quality improvement
Across 14 sprints, critical usability issues declined measurably. The product team began anticipating the kinds of issues we would surface, which meant later sprint designs were more mature and the critical issue count in each round was lower. This is the kind of compounding improvement a sustained research program produces and episodic studies cannot.
04 — Managing researcher fatigue
Co-leading a three-year program required deliberate management. We rotated primary moderation responsibilities across sprints, cross-checked each other's synthesis to prevent individual bias from calcifying, and maintained explicit documentation of our evolving assumptions so we could identify when prior exposure was shaping how we interpreted new findings.
Outcomes
Reflection
This engagement was formative in my understanding of what longitudinal research can do that point-in-time studies cannot. Testing the same workflows repeatedly across a product's evolution created a kind of institutional memory for the user experience — we knew what had been tried, what had worked, what had created new problems, and why.
The dual-audience structure pushed me to think carefully about how the same interface can generate entirely different experiences depending on who is using it and why. A design that works for an expert agent using the platform daily can be genuinely confusing for a consumer who may only interact with it once or twice during their policy period.
Managing a program at this scale also required discipline in documentation. Findings that aren't captured rigorously become unretrievable, and in a longitudinal program, losing that thread has compounding costs. The documentation standard we maintained wasn't overhead — it was the mechanism that made the program valuable as a whole.