B2B Sales SEO Heuristic Evaluation
Overview
A Fortune 500 client reached out to Accenture for an evaluation of the search capabilities of their internal sales tool and recommendations for UX improvements. The 4-year-old product replaced another popular sales tool but is not receiving the same approval from users.
Accenture pitched a pilot program to complete the assessment, staffing a two-person team consisting of one Product Lead and one UX Lead.
*This work is partially confidential and has been reworked to protect client confidentiality. The purpose of this case study is to emphasize process and method.
Role & Duration
UX Design Lead
4 weeks
Problem
Users are struggling to find and identify the exact result they’re searching for because of poor SEO. Users are also selecting incorrect items but aren’t notified of the error until the end of the checkout process, forcing them to start over.
Strategy
Design Constraints
Early in the project, we learned that the client intended to begin a multi-phase redesign spanning several fiscal years, which we knew would limit our current-state UI recommendations.
Execution
Internal client teams were unable to confirm how the multi-year internal redesign would affect the product at this early stage. So, instead of redesigning full-page wireframes, our team focused on usability concepts that emphasized individual page components and features.
This allowed us to highlight the key areas that most needed to be prioritized for actionable improvement within this narrow scope. It also ensured that our UX recommendations would stand independently of one another and would not become obsolete if an adjacent component was removed in a future redesign. We wanted to showcase how minor UX best practices could be applied and still make a major impact for users.
Micro Feedback
Assessing current user sentiments
The client provided us the raw data for 817 responses to a satisfaction survey where users were asked to rate their experience on a scale from 1 (Very Difficult) to 7 (Very Easy).
42.7% of users found the product some level of ‘Difficult’ on the satisfaction scale.
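As a rough illustration, a breakdown like this can be pulled straight from the raw survey export. A minimal sketch, assuming ratings of 1–3 count as some level of ‘Difficult’ and that the export has an ease_rating column (the file name and column name are hypothetical):

```python
import csv
from collections import Counter

# Hypothetical export: one row per response with a 1-7 "ease_rating" column.
with open("satisfaction_survey.csv", newline="") as f:
    ratings = [int(row["ease_rating"]) for row in csv.DictReader(f)]

counts = Counter(ratings)
total = len(ratings)  # 817 responses in this engagement

# Assumption: 1-3 = some level of "Difficult", 4 = neutral, 5-7 = some level of "Easy".
difficult = sum(counts[r] for r in (1, 2, 3))
print(f"{difficult / total:.1%} of respondents rated the product Difficult")
```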
UX Research Insights
System fetches irrelevant search results.
Errors appear too late with no clear solution.
Overwhelming number of repetitive search fields.
Too many clicks, too much time for a simple task.
Inconsistent and confusing jargon.
Low confidence in user selection.
Heuristic Evaluation
Figuring out what’s going wrong to get it going right
After completing our audit of the site’s architecture and the user interviews, we built a 40-page readout that evaluated the current state and made recommendations for improvement. This included:
A heuristic evaluation with a list of user concerns and recommendations ranked by priority
Simple wireframe improvements illustrating how best practices could be applied on the site
KPI recommendations and metric tools to track progress once implementation occurs
Match between system and real world
❌ Usability Concern
Inconsistent and technical language.
Using layman’s terms becomes increasingly important as a fail-safe when a product’s flow isn’t clear enough and users need to close the gap themselves.
✅ Recommendation
Transition away from industry jargon to more recognizable calls to action.
Terminology should be consistent throughout the entire customer journey to increase familiarity. Even though the product is in the B2B space, it operates primarily as an e-commerce platform. There’s a cognitive disconnect when a product asks the user to complete tasks similar to those from their personal life but replaces what could have been familiar words with unnecessary industry-specific language.
For example: change ‘End User’ to ‘Customer’.
Error prevention
❌ Usability Concern
Users typically get either too many results or 0 results.
There is very little middle ground. Both receiving no results and wading through endless pages of unorganized results can lead to higher bounce rates.
Users are often left guessing. And guessing incorrectly.
Users constantly select the incorrect ‘Customer Type’. The client has an internal tagging system that segments their company catalog based on these customer types. But since users aren’t clearly presented with this context, they frequently end up searching the wrong catalog.
✅ Recommendation
Limit the number of initial options and surface the most relevant fields first to return fewer results.
Based on video from user interviews provided by the client, the most critical filters are ‘Company’ and ‘Location’. We also recommend adding a new filter, ‘State’, as a required field to disambiguate cities with the same name in different states, such as Portland, Maine versus Portland, Oregon. Required fields create high-level filters that cut down the number of irrelevant search results. Based on interaction notes, all other address-related fields should be considered part of an advanced search.
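A minimal sketch of how this split could work, assuming required high-level filters gate the search while everything else sits behind an advanced panel (all field names are illustrative):

```python
REQUIRED_FILTERS = ("company", "city", "state")    # high-level filters, always required
ADVANCED_FILTERS = ("street", "zip", "suite")      # tucked away in an advanced search panel

def missing_required(filters: dict) -> list[str]:
    """Return the required fields still missing before a search is allowed to run."""
    return [field for field in REQUIRED_FILTERS if not filters.get(field)]

# Prompt for 'state' before searching, so Portland, ME and Portland, OR never collide.
print(missing_required({"company": "Acme Corp", "city": "Portland"}))  # -> ['state']
```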
Add descriptors upfront to pair content with context.
Additionally, text fields should include microcopy that guides users to the type of content the system expects. For example, noting that suite/apartment numbers should only go in Address Line 2 will weed out extraneous variations of a single company contact in the existing catalog.
Create hierarchy in data libraries.
Currently, the Customer Type breakdown is hidden behind the information symbol and labeled obscurely in the dropdown field. If a user inputs everything else correctly but selects the wrong Customer Type, they’ll get 0 results because the system was searching the wrong catalog. No one besides the client can determine what qualifies as a ‘Medium’ or ‘Large’ business. Pairing Customer Types with their qualifications will help ensure users are pulling up the correct catalog.
Flexibility and efficiency of use
❌ Usability Concern
No red routes for primary users
Users are being pushed to pages that aren’t necessary to their workflow. Even worse, they’re being asked to fill out fields on these pages in order to move on to the next one.
✅ Recommendation
Hide outlier paths until users determine they’re necessary.
Only display the pages that are relevant to the majority of your users’ workflows, instead of everything at once. Company Information, Billing Address and Shipping Address are always required. Any extraneous steps should be considered an add-on to the core process. This reduces the number of unnecessary clicks for all users.
Help users recognize, diagnose and recover from errors
❌ Usability Concern
Case of mistaken identity (error vs. info)
On the first page, users are immediately greeted by a bright, yellow text box with instructions. Historically, yellow boxes presented in this way are interpreted as an error. This is especially confusing for the user since they haven’t done anything on this page yet to prompt a message.
✅ Recommendation
Clearly distinguish information versus errors and their terms of use.
Remove the yellow box and tweak the page description so the copy doesn’t assume the user has already done something wrong, creating smoother sessions. Repurpose the yellow box for actual errors as they occur in real time. For example, if someone tries to skip ahead and click on other sections, only then will the yellow error box appear and explain why the user can’t proceed. Most importantly, it needs to identify the required action needed to move on.
Search Relevancy
Optimizing tech for human feedback
Implementing relevancy models and ‘Learning-to-Rank’
Track patterns of different user groups with click statistics to create personalized, ranked search results.
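A minimal sketch of the click-statistics half of this idea, assuming a log of (user segment, query, result, clicked) events; the aggregated click-through rates could then serve as relevance labels for a learning-to-rank model (all names below are hypothetical):

```python
from collections import defaultdict

# Hypothetical click log: (user_segment, query, result_id, clicked)
click_log = [
    ("sales_na", "acme corp", "acct-101", True),
    ("sales_na", "acme corp", "acct-245", False),
    ("sales_emea", "acme corp", "acct-245", True),
]

def relevance_labels(log):
    """Aggregate click-through rate per (segment, query, result) as a training label."""
    shown, clicked = defaultdict(int), defaultdict(int)
    for segment, query, result, was_clicked in log:
        key = (segment, query, result)
        shown[key] += 1
        clicked[key] += was_clicked
    return {key: clicked[key] / shown[key] for key in shown}

# A pairwise or listwise ranker trained on these labels, per segment, produces the
# personalized ordering described above.
labels = relevance_labels(click_log)
```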
Banning ‘No Results’ pages
Avoid completely blank 'No Results' pages. Display 'Related Results' even when there are no exact matches to strengthen click tracking for search relevancy training.
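A sketch of how that fallback could behave, where search_fn and relaxed_search_fn are hypothetical stand-ins for the exact and loosened queries:

```python
def search_with_related(query, search_fn, relaxed_search_fn):
    """Never render a blank page: fall back to 'Related Results' when exact matches fail."""
    results = search_fn(query)
    if results:
        return {"heading": "Results", "items": results}
    # Loosen the query, e.g. drop the least selective filter or allow fuzzy matches.
    related = relaxed_search_fn(query)
    # Clicks on related items still feed the tracking data used for relevancy training.
    return {"heading": "Related Results", "items": related}
```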
Integrating ‘Human-in-the-loop‘
Allow users to help train machine learning by prompting for feedback about individual results.
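One lightweight way to capture that feedback, sketched here as an append-only log of thumbs-up/down judgments on individual results (the file path and field names are hypothetical):

```python
import json
import time

def record_result_feedback(user_id, query, result_id, helpful, log_path="feedback.jsonl"):
    """Store a per-result judgment as a labeled example for the next retraining run."""
    event = {
        "ts": time.time(),
        "user": user_id,
        "query": query,
        "result": result_id,
        "label": 1 if helpful else 0,  # an explicit human judgment, stronger than a click
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")
```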
Maximizing ‘Favorites’ data
Pull metadata from a user's existing 'Favorites' tabs to track trends in their industry and region based on saved entries to push similar results to the top of their search.
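A sketch of what that boost might look like, assuming each favorite and result carries industry and region metadata (the weighting is illustrative):

```python
from collections import Counter

def favorite_profile(favorites):
    """Count the industries and regions a user has favorited."""
    industries = Counter(f["industry"] for f in favorites)
    regions = Counter(f["region"] for f in favorites)
    return industries, regions

def boosted_score(result, base_score, industries, regions, weight=0.1):
    """Nudge results matching the user's favorited industries/regions up the ranking."""
    boost = weight * (industries[result["industry"]] + regions[result["region"]])
    return base_score + boost
```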
KPI Recommendations
Establishing the right metrics to target and track new progress
At the time of this engagement, the client had yet to implement tracking metrics for this product. When the team found out, we added a list of KPI recommendations to our UX readout. We wanted to make sure our package was the complete toolbox they needed to strategize.
Retain ALL Onsite Queries
Aggregate user behavior by logging every search query to find variations that point to the same text/result. For example: 'Acenture', 'Accenture Inc' and 'AccentureGroup' searches should all redirect to a single 'Accenture' company entry.
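A minimal sketch of collapsing logged variations to a canonical entry, using only the standard library; the catalog list and similarity cutoff are illustrative:

```python
import re
from difflib import get_close_matches

CANONICAL_COMPANIES = ["Accenture", "Acme Corp"]  # illustrative catalog entries

def normalize_query(raw: str) -> str:
    """Strip punctuation and common corporate suffixes so variations collapse together."""
    q = re.sub(r"[^\w\s]", "", raw).strip()
    return re.sub(r"\b(inc|group|llc|co)\b", "", q, flags=re.IGNORECASE).strip()

def canonical_entry(raw: str) -> str | None:
    """Map a logged query ('Acenture', 'Accenture Inc', 'AccentureGroup') to one entry."""
    match = get_close_matches(normalize_query(raw).title(), CANONICAL_COMPANIES,
                              n=1, cutoff=0.7)
    return match[0] if match else None
```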
Session Duration
Track how long it takes for the user to successfully generate a quote after product selection.
Cart Abandonment Rate
Track how often users create quotes but don't submit them, and evaluate whether usability is a contributing factor.
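A minimal sketch of the calculation, with hypothetical figures just to show the ratio:

```python
def cart_abandonment_rate(quotes_created: int, quotes_submitted: int) -> float:
    """Share of quotes that were started but never submitted."""
    return 1 - quotes_submitted / quotes_created if quotes_created else 0.0

# Hypothetical figures: 480 quotes created, 312 submitted -> 35% abandonment.
print(f"{cart_abandonment_rate(480, 312):.0%}")
```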
Time-on-page
Calculate time spent on each section of checkout during a user's session to narrow down the location of blockers.
Exit Rate
Use single-page bounce rates to show which specific pages users abandon most.
Frequency/Recency Data
Segment users by categorizing their relationship with the product based on usage. From there, create targeted goals for each group to address unique needs.
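A sketch of one way to segment on frequency and recency; the segment labels and cut-offs are assumptions for illustration, not the client's:

```python
from datetime import date

def usage_segment(last_session: date, sessions_per_month: float, today: date) -> str:
    """Bucket a user by how recently and how often they use the product."""
    days_since = (today - last_session).days
    if days_since > 90:
        return "lapsed"
    if sessions_per_month >= 8:
        return "power user"
    if sessions_per_month >= 2:
        return "regular"
    return "occasional"

# Each segment then gets its own targeted goal, e.g. fewer clicks in the core flow
# for power users versus clearer guidance for occasional users.
print(usage_segment(date(2024, 5, 20), 5.0, today=date(2024, 6, 1)))  # -> 'regular'
```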