Universal Finder: Moving Toward One Platform for All Finders

Showing our buyers the inventory most relevant to what they want to shop for, out of eBay's vast inventory of more than a billion listings, is always a priority. One of the filtering tools we rely on regularly is what we call "Finders," which are shown in the Search and Browse experiences. This article explains our effort to unify our finders and move them toward a universal finder platform.

If you’ve ever looked for a car, a tire, or parts for your car on eBay, you have probably seen finders like these on search, listing details, category browse, product pages, the Motors home page, and elsewhere.

Figure 1. Find Results on the Tire Finder page

Figure 2. Find Results on the Vehicle Finder page

Figure 3. Finder on the Parts Finder page

Figure 4. Finder on Category Browse page

We call these finders simply because they help you "find" what you are shopping for. Inspired by the experience service architecture (see Experience Service 101), we started exploring how our existing legacy architecture should change to gain the benefits of the new stack.

The existing stack had several problems:

  • Tight integration with individual domains. The code for the finder lived in each hosting page's codebase (search, view listing, etc.), and each finder had small domain-specific tweaks, which meant any change required a rollout on each of the partner pools. For instance, adding a new field, say "Drive Type," would mean changing every partner codebase and rolling it out to the partner pools (search, view listing, etc.).
  • Higher operational and maintenance costs
  • Slower time to market for new features, experiments, and enhancements
  • Inconsistencies in the experience caused by all of the above

We wanted to fully utilize the power of experience services and build modules, not pages (see "Don't Build Pages, Build Modules").

With this in mind, we started exploring the module provider architecture to see how it would help us solve all the pain points mentioned above. Here is how it looks at a high level.

Figure 5. High-level module provider architecture

We have clients calling experience services, which call module providers; each module provider is responsible for its respective module and can, in turn, call the domain services.
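
To make this chain concrete, here is a minimal sketch of the pattern in Java. All of the names (PageContext, ModuleProvider, ExperienceService, and so on) are illustrative assumptions, not eBay's actual internal APIs:

```java
import java.util.List;

// The request context an experience service passes down to providers.
record PageContext(String pageType, String locale, String category, String keywords) {}

// A renderable UI module in the experience schema.
interface Module {
    String moduleType();
}

// Each module provider owns one module and may call domain services
// to assemble it.
interface ModuleProvider {
    boolean applies(PageContext ctx);
    Module provide(PageContext ctx);
}

// The experience service selects and invokes the providers that apply
// to this page and request.
class ExperienceService {
    private final List<ModuleProvider> providers;

    ExperienceService(List<ModuleProvider> providers) {
        this.providers = providers;
    }

    List<Module> render(PageContext ctx) {
        return providers.stream()
                .filter(p -> p.applies(ctx))
                .map(p -> p.provide(ctx))
                .toList();
    }
}
```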

Breaking each of these down further for the finder use case:

Figure 6. Universal Finder workflow

The Universal Finder module provider in the diagram above is a single experience module provider service that is called from the different experience services for the hosting pages (search, view listing, category browse, product detail, etc.), depending on the underlying page. Because it is an experience module provider, it knows the underlying page and can render the finder component accordingly. This way we have one finder that all of the client and experience services integrate with, yet it can render any finder dynamically based on the inputs, encapsulating all of the business and domain logic in one place as a component. This also aligns with the microservices architecture, since the core finder logic is separated out into a microservice whose sole purpose is to do one thing across all pages.
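
As a rough sketch of that single integration point (class, method, and page-type names here are hypothetical), the provider can use the page type carried in the request context to decide whether a finder applies at all, and then delegate to whichever finder domain is selected:

```java
import java.util.Optional;
import java.util.Set;

// Same request context shape as in the earlier sketch.
record PageContext(String pageType, String locale, String category, String keywords) {}

record FinderModule(String finderType) {}

// One implementation per vertical: tires, parts, vehicles, etc.
interface FinderDomain {
    FinderModule buildModule(PageContext ctx);
}

class UniversalFinderModuleProvider {

    // The hosting pages that are allowed to show a finder.
    private static final Set<String> FINDER_PAGES =
            Set.of("SEARCH", "VIEW_LISTING", "CATEGORY_BROWSE", "PRODUCT_DETAIL");

    // Returns a finder module for this request, or empty if none applies.
    Optional<FinderModule> provide(PageContext ctx) {
        if (!FINDER_PAGES.contains(ctx.pageType())) {
            return Optional.empty();
        }
        FinderDomain winner = selectDomain(ctx);
        return Optional.of(winner.buildModule(ctx));
    }

    private FinderDomain selectDomain(PageContext ctx) {
        // Placeholder: stands in for the rule-driven selection
        // described in the workflow below.
        return c -> new FinderModule("TIRE_FINDER");
    }
}
```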

Figure 7. Separating out core logic for finder in a microservice

At a high level, here is how the new architecture works:

  • Clients (desktop web, mobile web, native) call the relevant experience service, which is responsible for identifying the client type (web vs. native) and selecting the appropriate set of modules, taking experimentation into account.
  • For each request, these experience services in turn call the Universal Finder module provider once they know that a finder might need to be shown for the given input combination (category, keywords, experiment, etc.).
  • The Universal Finder module may have its own finer-grained experimentation, tracking, and localization data, and it calls the domain service to get the raw domain data for the finder.
  • Each of the domains (finder for parts, finder for tires, finder for vehicles, finder for electronics, etc.) registers its components and rules with the Universal Finder.
  • Each domain's registered rules trigger based on the request inputs. These rules might include:
    • Check if the input category is X
    • Check whether the search keywords are, or are not, in set Y
    • Check the region of the request (US, UK, etc.)
    • Check if the request is for an EPID (product) listing
    • Call a service to perform any complex logic as a rule
  • All of the rules configured above per domain are fired in parallel, and finally, for the given set of inputs, one domain is declared the winner (see the rule-engine sketch after this list).
  • Once a domain wins (say, the finder for tires), we rely on that component in the Universal Finder to orchestrate and make all of the necessary downstream calls to get the data it needs to serve the finder (see the orchestration sketch after this list). In the case of the tire finder, for example, it might call a series of domain services to get the list of all vehicles, the list of all valid tire sizes, and perhaps the list of any vehicles the user saved in previous sessions.
  • The Universal Finder domain service then aggregates the data and sends it back to the experience module provider, which returns it to the experience services in a schema standardized across all finder modules.
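
To illustrate the registration and parallel rule evaluation described above, here is a hedged sketch. FinderRule, DomainRegistration, and the priority-based tie-break between multiple matching domains are assumptions made for the example, not the actual implementation:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

// A single predicate over the request inputs (category, keywords, region, ...).
// PageContext is the same request context as in the earlier sketches.
interface FinderRule {
    boolean matches(PageContext ctx);
}

// What each vertical registers with the Universal Finder.
record DomainRegistration(String domain, int priority, List<FinderRule> rules) {
    boolean allRulesMatch(PageContext ctx) {
        return rules.stream().allMatch(r -> r.matches(ctx));
    }
}

class FinderRuleEngine {
    private final List<DomainRegistration> registrations;

    FinderRuleEngine(List<DomainRegistration> registrations) {
        this.registrations = registrations;
    }

    // Fires every registration's rules in parallel, then declares one winner.
    Optional<String> selectWinner(PageContext ctx) {
        List<CompletableFuture<Optional<DomainRegistration>>> evaluations =
                registrations.stream()
                        .map(reg -> CompletableFuture.supplyAsync(
                                () -> reg.allRulesMatch(ctx)
                                        ? Optional.of(reg)
                                        : Optional.<DomainRegistration>empty()))
                        .toList();

        return evaluations.stream()
                .map(CompletableFuture::join)
                .flatMap(Optional::stream)
                .max(Comparator.comparingInt(DomainRegistration::priority))
                .map(DomainRegistration::domain);
    }
}
```

A vertical's registration might then look like this, with the category ID and region checks purely hypothetical:

```java
DomainRegistration tireFinder = new DomainRegistration(
        "TIRE_FINDER", 10,
        List.of(ctx -> "12345".equals(ctx.category()),   // hypothetical tires category ID
                ctx -> "US".equals(ctx.locale())));
```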
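Once a domain has won, the orchestration step can fan out to its domain services concurrently and aggregate the results into the standardized payload. Here is a sketch along those lines; the service names and the TireFinderData shape are assumptions for illustration:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical domain services the tire finder component depends on.
interface VehicleService  { List<String> allVehicles(); }
interface TireSizeService { List<String> validSizes(); }
interface GarageService   { List<String> savedVehicles(String userId); }

// The aggregated payload handed back to the experience module provider.
record TireFinderData(List<String> vehicles,
                      List<String> tireSizes,
                      List<String> savedVehicles) {}

class TireFinderComponent {
    private final VehicleService vehicles;
    private final TireSizeService tireSizes;
    private final GarageService garage;

    TireFinderComponent(VehicleService v, TireSizeService t, GarageService g) {
        this.vehicles = v;
        this.tireSizes = t;
        this.garage = g;
    }

    TireFinderData load(String userId) {
        // The three domain calls are independent, so they run concurrently
        // to keep finder latency low.
        var allVehicles = CompletableFuture.supplyAsync(vehicles::allVehicles);
        var validSizes  = CompletableFuture.supplyAsync(tireSizes::validSizes);
        var saved       = CompletableFuture.supplyAsync(() -> garage.savedVehicles(userId));

        CompletableFuture.allOf(allVehicles, validSizes, saved).join();
        return new TireFinderData(allVehicles.join(), validSizes.join(), saved.join());
    }
}
```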

Conclusion

The above architecture strives to isolate and encapsulate the core business logic for presenting a finder out of each of the domain pages (and their teams). The idea is to have one team responsible for the finder as a service that works with the dedicated verticals (parts, tires, vehicles, etc.) to power the experience. This architecture drastically reduces time to market and release timelines for new finder features and enhancements.