Utilising the latest technology, CYFOR Legal can make document review simple.

Harnessing cutting-edge technology, CYFOR Legal can turn the seemingly daunting and arduous task of a large-scale review into a more manageable, time- and cost-effective exercise. By combining technology, artificial intelligence, and analytics, CYFOR Legal can greatly reduce the number of documents requiring human review. In one case study, an initial review set of 232,000 documents remained after filtering and de-duplication; search terms, date ranges, and analytics then reduced this to circa 40,000.
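The culling stage described above can be sketched in a few lines of Python. This is a simplified illustration only, not CYFOR Legal's actual tooling: the mini-corpus, search terms, and date range are all hypothetical, and real platforms de-duplicate and filter at far greater scale and sophistication.

```python
import hashlib
from datetime import date

# Hypothetical mini-corpus; fields, terms, and dates are illustrative only.
documents = [
    {"id": 1, "text": "Quarterly contract renewal terms", "date": date(2019, 3, 1)},
    {"id": 2, "text": "Lunch menu for Friday", "date": date(2019, 3, 2)},
    {"id": 3, "text": "Quarterly contract renewal terms", "date": date(2019, 3, 1)},  # exact duplicate
    {"id": 4, "text": "Contract dispute correspondence", "date": date(2021, 6, 1)},  # outside date range
]

SEARCH_TERMS = {"contract", "renewal", "dispute"}
DATE_FROM, DATE_TO = date(2019, 1, 1), date(2019, 12, 31)

def cull(docs):
    seen_hashes = set()
    kept = []
    for doc in docs:
        # De-duplicate on a hash of the document text.
        digest = hashlib.sha256(doc["text"].encode()).hexdigest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        # Keep only documents inside the agreed date range...
        if not (DATE_FROM <= doc["date"] <= DATE_TO):
            continue
        # ...that hit at least one search term.
        if not SEARCH_TERMS & set(doc["text"].lower().split()):
            continue
        kept.append(doc)
    return kept

print([d["id"] for d in cull(documents)])  # → [1]
```

Applied at scale, the same three filters (de-duplication, date range, search terms) are what reduce a 232,000-document collection to a far smaller review pool.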

This set of documents was then placed in an assisted review pool, where the reviewer coded a small sample of documents with a ‘positive’ choice (for example ‘Relevant’) or a ‘negative’ choice (for example ‘Not Relevant’). The machine then analysed this sample alongside the remaining documents in the pool to assign each document a ‘score’. The system used these scores to ‘push’ higher-scoring documents to reviewers first, on the basis that a document with a higher score is more likely to be relevant. From the reviewer’s perspective, nothing changes in how a document is coded: it is simply either Relevant or Not Relevant to the matter.
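The score-and-prioritise loop above can be sketched as follows. This is a deliberately crude, hypothetical example, not the engine CYFOR Legal uses: real assisted-review systems apply far richer statistical models, but the shape is the same, with a coded seed set training a scorer that pushes the most probably relevant documents to reviewers first.

```python
from collections import Counter

# Hypothetical seed set: the reviewer codes a small sample Relevant / Not Relevant.
seed = [
    ("contract renewal terms agreed", "Relevant"),
    ("contract dispute over payment", "Relevant"),
    ("office party friday lunch", "Not Relevant"),
]

# Hypothetical unreviewed pool, keyed by document id.
unreviewed = {
    101: "draft contract payment schedule",
    102: "friday lunch order",
    103: "renewal of payment terms",
}

def train(seed_docs):
    # Count how often each term appears in positive vs negative samples.
    pos, neg = Counter(), Counter()
    for text, label in seed_docs:
        bucket = pos if label == "Relevant" else neg
        bucket.update(text.lower().split())
    return pos, neg

def score(text, pos, neg):
    # Crude relevance score: positive-sample terms add, negative-sample terms subtract.
    return sum(pos[w] - neg[w] for w in text.lower().split())

pos, neg = train(seed)
# 'Push' the highest-scoring (most probably relevant) documents to reviewers first.
queue = sorted(unreviewed, key=lambda i: score(unreviewed[i], pos, neg), reverse=True)
print(queue)  # → [101, 103, 102]
```

As each newly coded batch is fed back in, the model's scores improve, which is what lets the system eventually conclude that the low-scoring remainder is very unlikely to be relevant.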

Result

This simplicity allows the engine to predict more effectively whether a document is relevant to the reviewer and, indeed, the matter. In the end, the client reviewed roughly 18,000 documents out of the initial 40,000-document pool, and the system determined there was a very small probability that any of the remaining 22,000 documents were relevant to the case. Cutting the review population by more than half with a relatively small review team saved around 37 days of manual review, and with it a substantial portion of the client’s budget.


Key elements

  • Document review set of 232,000 reduced to approx. 40,000
  • Documents used in assisted review pool
  • The client reviewed approx. 18,000 documents
  • 37 days of manual review saved with technology