Susan Nevelow Mart, writing for the ABA Journal, looks at how widely search results vary across legal research databases:

“At first glance, the various legal research databases seem similar. For instance, they all promote their natural language searching, so when the keywords go into the search box, researchers expect relevant results. A lawyer would also expect the results to be somewhat similar no matter which legal database is used. After all, the algorithms are all trying to solve the same problem: translating a specific query into relevant results.

The reality is much different. In a comparison of six legal databases—Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel and Westlaw—when researchers entered the identical search in the same jurisdictional database of reported cases, there was hardly any overlap in the top 10 cases returned in the results.”

Read more here.