Qualified Candidates
Searching in the Dark

A heuristic evaluation of the CIA Careers application portal — identifying why the search experience was causing qualified applicants to question whether relevant roles existed, and what it would take to make job discovery reliable and trustworthy.

Client: U.S. Government Agency

Engagement: ~40-Hour Usability Audit

Role: Usability Analyst, Verint

Method: Heuristic Evaluation

Focus: Search Usability

Problem · Implication · Recommendation

Inconsistent search behavior turned a discovery problem into a trust problem

A careers portal has one primary job: connect the right applicants with the right opportunities. When the search experience is unreliable, that job fails — not just once, but every time a qualified candidate tries a slightly different search term and gets a completely different result.

Problem

The search experience was inconsistent, unguided, and poorly scoped

Applicants couldn't reliably locate relevant job listings. The portal handled common query variations unpredictably and offered little guidance when searches returned incomplete or no results.

Implication

Inconsistent results made applicants question the search — not the listings

When "engineer" and "engineers" return drastically different results, users don't assume a bug — they assume they're searching wrong, or that opportunities don't exist. Trust erodes, and qualified candidates abandon the search.

Recommendation

Strengthen search reliability, scope clarity, and results guidance end-to-end

Targeted improvements to query handling, search assistance features, filtering controls, and orientation cues can make job discovery consistent and trustworthy — keeping qualified candidates engaged through the application process.

Project Background

As the role catalog grew, the search experience didn't keep pace

The CIA Careers portal serves as the primary interface applicants use to explore and apply for opportunities within the organization. As the range and volume of available roles expanded, the search functionality that was supposed to help applicants navigate those opportunities became a source of friction instead.

The client team identified the issue through applicant feedback: candidates were struggling to locate relevant listings and reporting difficulty navigating search results effectively. The audit was scoped to diagnose precisely why — and to deliver recommendations the team could act on to restore confidence in the job discovery experience.

The Problem

Applicants couldn't reliably find relevant roles. Search behavior was inconsistent across common query variations, and the interface offered no guidance when results fell short of user expectations.

The Goal

Identify the specific search and navigation failures causing qualified candidates to abandon the portal and deliver prioritized, actionable recommendations to restore reliability and trust.

My Role

Sole usability analyst on the engagement — responsible for the full heuristic evaluation, findings development, written report, and stakeholder presentation.

Constraints

~40-hour heuristic evaluation conducted without user testing — scoped to deliver expert analysis quickly while the client team was actively planning portal improvements.

Methodology

A structured three-stage evaluation focused on search as a system

The evaluation used the Verint Usability Audit Methodology — a structured heuristic review process designed to identify friction points efficiently without requiring participant recruitment or session scheduling. For a client needing actionable findings quickly, this approach delivered expert analysis within a fixed engagement window.

Rather than evaluating the portal's general usability, the audit was deliberately scoped around search as an end-to-end system: from how applicants entered the search experience, through how queries were processed and results presented, to how the interface responded when searches failed. Every finding was evaluated against its impact on trust and task completion.

The search-system framing was critical. Many of the issues identified weren't isolated interface problems — they were interconnected failures that compounded across the search workflow. Understanding them as a system, rather than a list of individual issues, shaped how the recommendations were structured and prioritized.

40h

Structured heuristic review, findings development, written report, and stakeholder presentation

6

Distinct usability findings covering search scope, query handling, assistance, filtering, messaging, and orientation

7

Usability dimensions evaluated across the full job discovery workflow

6

Recommendation areas delivered with specific, actionable implementation guidance

Evaluation Process

01

Audit

Interface reviewed against established usability principles and best-practice indicators across the full job discovery workflow

02

Analysis

Each issue analyzed to determine its underlying cause and its specific impact on search reliability and applicant trust

03

Findings

Issues translated into practical recommendations designed to improve the effectiveness of the job discovery workflow

Evaluation Scope

Every touchpoint in the job discovery workflow — from search entry to result

The evaluation covered the end-to-end search experience: how applicants entered the portal, how they queried for roles, how results were displayed and filtered, and how the interface responded when the experience broke down.

Interface Areas Reviewed

Job search interface and entry points

Job listing results pages

Sorting and filtering controls

Search results messaging and labeling

Adjacent pages supporting job exploration

Usability Dimensions Evaluated

Search functionality and query behavior

Input tolerance and query variation handling

Navigation and orientation clarity

Filtering and sorting mechanisms

Labeling and messaging clarity

Interaction mechanisms and workflow completion

Key Insight

The search didn't fail dramatically — it failed unpredictably

The most significant finding wasn't a broken search function. It was an inconsistent one. Small variations in how applicants phrased the same query produced dramatically different results — creating the impression that the system's behavior couldn't be trusted or predicted.

Same intent — different outcomes

"engineer" → Many results

"engineers" → Significantly fewer

"IT analyst" → Returns results

"IT-analyst" → Different results

Acronym variations → Unpredictable

Why this matters

When a user searches "engineers" and gets fewer results than "engineer," they don't conclude there's a bug. They conclude they're using the wrong search term — or that the roles they're looking for don't exist.

This is the specific mechanism by which a technical inconsistency becomes a trust failure. The interface is working — but it's creating doubt about itself at the exact moment a qualified candidate is deciding whether to apply.

The Core Problem

Inconsistent query handling doesn't just produce incomplete search results — it erodes user confidence in the search tool itself. Applicants who can't trust the search start to question whether relevant opportunities exist at all, and leave before finding the roles they're actually qualified for.

Key Findings

Six interconnected failures across the search experience

The findings were not isolated interface issues — they formed a pattern of compounding friction. Each one weakened the applicant's confidence in the portal; together, they made reliable job discovery genuinely difficult.

Multiple search interfaces existed without clear differentiation

The portal presented more than one search entry point without explaining their different purposes. Applicants had no guidance about which search tool to use for finding job listings — and guessing wrong meant starting over.

Clearly distinguish search interfaces and guide applicants to the right one

Common query variations produced unpredictable results

The search system did not reliably handle singular/plural differences, acronyms and abbreviations, minor spelling variations, or spacing and hyphen inconsistencies — causing the same underlying intent to produce dramatically different outcomes depending on how it was phrased.

Normalize query variations and treat equivalent terms as equivalent inputs
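The normalization this recommendation calls for can be sketched as a small pre-processing step applied before matching. This is a hypothetical illustration only: the portal's actual search stack is not public, and the alias table and the naive singularization rule below are assumptions standing in for a proper stemmer and acronym dictionary.

```python
import re

# Assumed acronym alias table -- a real deployment would maintain this
# as part of the search configuration.
ACRONYMS = {"it": "information technology"}

def normalize_query(query: str) -> str:
    """Map equivalent phrasings of the same intent to one canonical key."""
    q = query.lower().strip()
    q = q.replace("-", " ")               # "IT-analyst" -> "it analyst"
    q = re.sub(r"\s+", " ", q)            # collapse repeated whitespace
    words = []
    for word in q.split(" "):
        word = ACRONYMS.get(word, word)   # expand known acronyms
        # Naive singularization: "engineers" -> "engineer".
        if len(word) > 3 and word.endswith("s") and not word.endswith("ss"):
            word = word[:-1]
        words.append(word)
    return " ".join(words)
```

With this step in place, "engineer" and "engineers", or "IT analyst" and "IT-analyst", resolve to the same canonical query instead of producing divergent result sets.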

The interface offered no assistance when searches came up short

Common search assistance features — autocomplete, "Did you mean…" prompts, search tips — were absent. Applicants whose initial queries failed received no help refining their approach, leaving them to troubleshoot on their own.

Introduce autocomplete, disambiguation prompts, and search guidance
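A minimal "Did you mean…" mechanism of the kind recommended here can be built on fuzzy matching against the known job-title vocabulary. The vocabulary below is invented for illustration; the matching itself uses Python's standard-library `difflib`.

```python
import difflib

# Assumed job-title vocabulary -- in practice this would be drawn from
# the live listing index.
JOB_TITLES = ["engineer", "analyst", "linguist", "economist"]

def suggest(query: str, titles=JOB_TITLES):
    """Return a 'Did you mean...' candidate for a near-miss query, or None."""
    matches = difflib.get_close_matches(query.lower(), titles, n=1, cutoff=0.6)
    return matches[0] if matches else None
```

A misspelled query like "enginer" would surface "engineer" as a suggestion rather than a silent empty results page.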

Filtering was limited to one category at a time, and results defaulted to alphabetical sort

Applicants could only select one job category filter at a time, preventing them from exploring related roles simultaneously. Default sorting by alphabetical order rather than recency made newly posted opportunities harder to identify.

Enable multi-category filtering and default sort to most recently posted
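The two changes recommended here (OR-combining category filters and defaulting to recency) amount to a small query over the listing data. The records below are fabricated for illustration; the real portal's schema is not public.

```python
from datetime import date

# Hypothetical listings -- placeholder data for illustration only.
listings = [
    {"title": "Data Engineer", "category": "Engineering", "posted": date(2024, 5, 1)},
    {"title": "Analyst", "category": "Analysis", "posted": date(2024, 6, 15)},
    {"title": "Linguist", "category": "Language", "posted": date(2024, 4, 20)},
]

def filter_listings(listings, categories):
    """Keep listings in ANY selected category; newest first by default."""
    selected = [l for l in listings if not categories or l["category"] in categories]
    return sorted(selected, key=lambda l: l["posted"], reverse=True)
```

Selecting both "Engineering" and "Analysis" returns roles from either category, with the most recently posted listing first.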

Results pages were missing key contextual information

Search terms were not reiterated on results pages, job posting dates were absent from listings, and failed searches provided minimal guidance on what to do next. Applicants had no way to confirm they were seeing the results they expected, or to understand why a search had come up empty.

Reiterate search terms, display posting dates, and provide clear no-results guidance

Navigation didn't clearly indicate the user's current location in the portal

The global navigation lacked active-state indicators, requiring applicants to spend additional effort maintaining their orientation as they moved between sections. This compounded the cognitive load already introduced by an inconsistent search experience.

Visually highlight active navigation state throughout the job discovery workflow

Recommendations

Six improvement areas — structured to address the full search system

Recommendations were organized to reflect how the issues compounded, not just how they appeared in isolation. This framing helped the client team understand both the individual fixes and the larger systemic improvements needed to make job discovery genuinely reliable.

Search Scope Clarity

Help applicants find and use the right search tool

  • Clearly distinguish search interfaces and their individual purposes

  • Guide applicants toward the most effective search entry point for job discovery

Input Tolerance

Make the search system forgiving of natural query variation

  • Support minor spelling errors and treat them as the intended term

  • Normalize singular/plural, acronym/full-term, and spacing/hyphen variations

Search Assistance

Support applicants when their first search attempt falls short

  • Introduce autocomplete suggestions to guide query construction

  • Add disambiguation prompts and search tips for failed or low-result queries

Filtering & Sorting

Give applicants more control over how they explore results

  • Allow applicants to select multiple job categories simultaneously

  • Default search results to most recently posted rather than alphabetical order

Results Messaging

Give applicants the context they need to trust what they're seeing

  • Reiterate search terms on results pages to confirm the query was received correctly

  • Display job posting dates and provide clear guidance when searches return no results

Orientation Indicators

Reduce navigation overhead throughout the portal

  • Visually highlight active navigation elements across all portal sections

  • Reinforce location cues throughout the job discovery workflow

Delivery & Presentation

Findings delivered as both a stakeholder presentation and a written report

The engagement closed with a stakeholder presentation and a written usability report documenting the full analysis — giving the client team both an interactive review session and a lasting reference document. The dual-format delivery meant findings could be discussed and acted on immediately during the presentation, then referenced over time as the team prioritized and tracked implementation.

The written report was particularly valuable given the technical specificity of the search-related findings — implementation teams working on query normalization, autocomplete logic, and filter behavior needed precise documentation, not just slide summaries.

Outcomes

Recommendations implemented — search reliability and orientation both improved

Following the presentation, the client implemented several of the recommended improvements. Changes focused primarily on strengthening search functionality, improving navigation orientation indicators, and enhancing messaging within the search experience — directly addressing the trust and reliability issues identified during the evaluation.

🔍

Search Functionality Strengthened

Improvements to search behavior made job discovery more consistent and reliable for applicants — reducing the unpredictable variation in results that had been eroding trust in the portal.

🧭

Navigation Orientation Improved

Orientation indicators were updated to help applicants maintain their bearings within the portal — reducing the additional cognitive effort required to navigate between sections during job discovery.

💬

Search Messaging Enhanced

Contextual messaging within the search experience was improved, giving applicants clearer feedback about their results and better guidance when searches came up short.

Strategic Takeaway

Inconsistency is more damaging than failure. A search that breaks is a bug. A search that works differently depending on how you phrase the query is a trust problem — and trust problems don't announce themselves. They just quietly send qualified candidates somewhere else.

— Core principle applied throughout this engagement

This engagement reinforced the value of evaluating search as a complete system rather than a collection of individual features. Input handling, results display, assistance features, filtering, and messaging don't operate in isolation — they create a cumulative experience of either confidence or doubt. When one layer is unreliable, it infects how users interpret everything else. Fixing search usability well means understanding how these layers interact, and designing each one to reinforce rather than undermine the others.

U.S. Government Agency · CIA Careers Portal Search Usability Audit · Verint

Heuristic Evaluation · Search Usability · Government UX · Navigation Design · Input Tolerance
