What's the difference between Paris and Paros?
Dec 9, 2015
Tags: Autocomplete, Full-text Search, Backend, Frontend

If you visit pamediakopes.gr, or any one of our sites, to start looking for a flight and you type in a few letters to select the place of origin, you will see how our autocomplete feature kicks in, suggesting places based on the letters you entered.

Behind the scenes, our frontend calls our autocomplete service, which is tuned to run very fast and serve many requests at once. It’s one of the tiniest and most efficient services we have.

Figure 1: Autocomplete in our sites

Looking back, it wasn’t always so, and there is a hilarious story behind its evolution. In this post we’ll talk about how it came to its current form.

The early years

Originally we had no autocomplete service. Nada. Zilch. Zero. When web or mobile clients wanted to offer autocomplete features they just had to do it themselves. That meant, for example, that to show autocomplete for airports each client had to keep its own copy of the airport list and do the matching itself. It was a sorry state of affairs, inefficient on many levels.

The places service

We then decided to create an endpoint that returned a list of places based on partial input and embed it in one of our product APIs; this was called the places service. It was a good first step in the right direction, but the implementation left a lot to be desired:

  • For one thing, we stuffed all our data into a table in SQL Server and used a LIKE query to search it (see the sketch after this list). Performance and response times ranged from mediocre to awful, and it didn’t take much to bring the service to its knees.
  • Every search executed a query against the database, which was convenient for the service but rather bad for the database.
  • We supported only a handful of languages, which in itself reduced the usefulness of the service.
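For illustration, here is a minimal sketch of what that first LIKE-based lookup could have looked like; the Places table, its columns and the use of pyodbc are assumptions for the example, not our actual schema or stack. The leading wildcard is the killer: it prevents SQL Server from using any index on the column, so every keystroke forced a scan of the whole table.

    # Minimal sketch of the original LIKE-based lookup. Table and column
    # names (Places, Name) are illustrative, not the real schema.
    import pyodbc

    def search_places_like(conn: pyodbc.Connection, term: str, limit: int = 10):
        cursor = conn.cursor()
        cursor.execute(
            "SELECT TOP (?) Name FROM Places WHERE Name LIKE ?",
            limit,
            f"%{term}%",  # e.g. '%sa%' -- the leading % defeats any index on Name
        )
        return [row.Name for row in cursor.fetchall()]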

Upgrading the places service

It didn’t take too long to figure out that having customers wait several seconds for suggestions because they made the mistake of typing something like “sa” wasn’t a good experience. So we upgraded the places service to make use of the full-text search capability of SQL Server. Our gains were immediate on two fronts:

  • Response times were dramatically improved, ranging from 2ms to, at most, 1 second.
  • Searches started using linguistics based on the rules of each particular language.
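As a rough sketch of what the upgraded lookup might look like (again assuming a hypothetical Places table with a key column Id and a full-text index on Name): CONTAINSTABLE does prefix matching against the full-text index, returns a relevance RANK, and the LANGUAGE term selects the word breaker, so the linguistics of that language apply.

    # Sketch of a prefix search against a SQL Server full-text index.
    # Places, Id and Name are assumed names, not the real schema.
    import pyodbc

    def search_places_fulltext(conn: pyodbc.Connection, term: str, limit: int = 10):
        cursor = conn.cursor()
        cursor.execute(
            """
            SELECT TOP (?) p.Name
            FROM Places AS p
            JOIN CONTAINSTABLE(Places, Name, ?, LANGUAGE 1033) AS ft  -- 1033 = English LCID
              ON p.Id = ft.[KEY]
            ORDER BY ft.RANK DESC
            """,
            limit,
            f'"{term}*"',  # prefix term, e.g. '"sa*"'
        )
        return [row.Name for row in cursor.fetchall()]

The LCID is hard-coded to English here just for the example; the real service would pick it per request language.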

The new autocomplete and the Paris/Paros conundrum

Our third incarnation of autocomplete was performing so well that we decided to build a new, standalone service to provide autocomplete and information on other data as well. This service would be used by all of our clients and would be the single point of reference for autocomplete and the common data we use. It turned out that implementing 99% of the service was relatively straightforward. It took several heated arguments to complete the remaining 1%.

What follows is a transcript of how those went.

Backend developer: Well, the API is done and deployed to staging. Please have a look and let me know if you’re OK with it.
Frontend developer after a couple of hours: When a user searches for “Pa”, Paris is always at the top of the results. However, if the user is Greek, he is most likely searching for “Paros”. I think we should sort the results based on the language.
Backend developer: …geez…let me have a look and understand what you’re saying.

At this point the backend developer reproduced the search and saw that the full-text search did indeed return Paris first, not Paros. Since he had minimal control over the linguistics of the search, he simply sorted the results by the language of each result and returned them.

Backend developer: You should be OK now. Please have another look.
Frontend developer after a few minutes: Wow! Now it always returns “Paros” at the top of the results list!
Backend developer: Well, that’s what you wanted isn’t it?
Frontend developer: Well… not exactly. Ideally, the results would be sorted based on the language of the current user, so for the same input, say “Pa”, if the user is Greek he sees “Paros” first, while if he is not, he sees “Paris” first.
Backend developer: …darn, I didn’t think of that.

And it made sense. What we really wanted was to differentiate the results autocomplete served based on the language used by the customer. We figured that a non-Greek customer typing in “Pa” would most probably be looking for Paris, while a Greek customer typing in “Pa” would most probably be looking for Paros.

We ended up letting the full-text query do its job on the database and added a sorting step, based on the Accept-Language header of the request, before returning the results.
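As a sketch of that sorting step (the Place fields and the header parsing below are simplifications assumed for the example): results whose language matches the caller’s preferred Accept-Language move ahead of the rest, and because the sort is stable the full-text relevance order is preserved inside each group.

    # Sketch of language-aware sorting. A Greek Accept-Language header pushes
    # Greek places (Paros) ahead of the rest; anything else leaves Paris on top.
    from dataclasses import dataclass

    @dataclass
    class Place:
        name: str
        language: str   # language associated with the place, e.g. "el" for Paros
        rank: float     # relevance score coming back from the full-text search

    def sort_by_request_language(results: list[Place], accept_language: str) -> list[Place]:
        # Keep only the highest-priority tag: "el-GR,el;q=0.9" -> "el"
        preferred = accept_language.split(",")[0].split(";")[0].split("-")[0].strip().lower()
        # Stable sort: matching-language places first, then by descending relevance.
        return sorted(results, key=lambda p: (p.language.lower() != preferred, -p.rank))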

But that was not the end of it…

Backend developer: Fine, results sorting per user’s language is ready. Are we OK now?
Frontend developer after a few minutes: Now “Paros” for non-Greek users is directly under “Paris”!
Backend developer: And what’s wrong with that!??
Frontend developer: “Paris” is a shortcut for all the individual airports of Paris city: Charles-de-Gaulle, Orly etc. The autocomplete entries for these should be shown together with the general “Paris” entry, and not after “Paros”!
Backend developer: Ah, right, so we need to keep same-city airports together…

The results sorting now turned into a chain of sorters. The previous one was kept as-is and another was added that took into account the logic of same-city, or parent, airports.
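One way to picture that chain, purely as a sketch with assumed field names (code, parent_code): each pass is a stable sort on one criterion, so it refines the previous order instead of undoing it. The same-city pass below anchors every group of airports at the position of its first member in the current order, which keeps Charles de Gaulle and Orly glued to the general Paris entry.

    # Sketch of a chain of sorters plus a same-city grouping pass.
    # Field names (code, parent_code) are illustrative assumptions.
    def sort_chain(results, sorters):
        for sorter in sorters:           # apply each pass in order; each is a stable sort
            results = sorter(results)
        return results

    def keep_city_airports_together(results):
        group_of = lambda p: p.parent_code or p.code        # CDG/ORY group under PAR
        first_pos = {}
        for i, p in enumerate(results):                     # anchor each group at the
            first_pos.setdefault(group_of(p), i)            # position of its first member
        return sorted(results, key=lambda p: first_pos[group_of(p)])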

Backend developer: Same-city airport grouping is ready! Surely, now, the results come out OK, don’t they?
Frontend developer after a few minutes: Well, there is still one thing…
Backend developer: What now?
Frontend developer: Many advanced users search airports based on their code: NYC, SFO, LAX etc. So if the user enters exactly “PAR”, which is the airport code for “Paris”, then “Paris” should come before “Paros” regardless of the user’s language. Sorry…

Another sorter was added that kicked in only if the user entered exactly three letters and only if a result matching those three letters was present in the results; in that case, it was bubbled up to the top.
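A sketch of that last pass, again with assumed names and illustrative codes: it only fires for a three-letter query with an exact code match, and a stable sort bubbles that match to the top while leaving everything else in place.

    # Sketch of the exact-airport-code pass.
    from collections import namedtuple

    Hit = namedtuple("Hit", "name code")    # illustrative result type

    def exact_code_first(results, query):
        q = query.strip().upper()
        if len(q) != 3 or not any(h.code.upper() == q for h in results):
            return results                                  # pass does not apply
        return sorted(results, key=lambda h: h.code.upper() != q)  # stable: match first

    # exact_code_first([Hit("Paros", "PAS"), Hit("Paris", "PAR")], "PAR")
    # -> Paris first, regardless of the user's language.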

Going live and results

We took the autocomplete service live a few days later and our web and mobile clients started using it soon after that. The results were worth it:

  • We added 40 languages to our database with full-text being able to use them all. For the first time, we could truly serve localized results in autocomplete.
  • Clients didn’t have to keep their own copies of data anymore, significantly reducing their memory footprint.
  • Updates to our data are now centrally managed and become available to clients without requiring a release.

Performance-wise, we made heavy use of caching on the service side, minimizing the number of trips to the database and allowing the service to handle a large number of requests per second with ease. Our clients also cache results on their side to minimize trips to the service, so client-side caching is usually what kicks in, with server-side caching there to protect the database from misconfigured client caches or misbehaving clients.
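As a rough illustration of the service-side part (class and parameter names are made up for the example), a small TTL cache keyed by prefix and language is enough to get the idea across: a hit never touches the database, a miss queries it once and reuses the result until the entry expires, which is also how data updates eventually become visible.

    # Minimal sketch of a TTL cache keyed by (prefix, language).
    import time

    class AutocompleteCache:
        def __init__(self, ttl_seconds: float = 300.0):
            self.ttl = ttl_seconds
            self._entries: dict[tuple[str, str], tuple[float, list]] = {}

        def get_or_fetch(self, prefix: str, language: str, fetch):
            key = (prefix.lower(), language.lower())
            now = time.monotonic()
            cached = self._entries.get(key)
            if cached and now - cached[0] < self.ttl:
                return cached[1]                  # cache hit: the database is never touched
            results = fetch(prefix, language)     # cache miss: query once, then reuse
            self._entries[key] = (now, results)
            return results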

Today we’re seeing between a quarter and three quarters of a million hits to the service per day, with the call rate peaking at about 20 calls per second. The average response time is about 30ms, with 95% of calls served within 300ms and 99% within 600ms.
