
MongoDB Indexing for Better Performance

by Alexandru Preda, about 1 year ago, 3 min read

Fetching documents from a MongoDB collection of approximately 4 GB in WiredTiger compressed format (~18 GB in BSON) can be slow, especially when only the default “_id” index is available.

Suppose our documents have the following format for the “_id” field:

"_id": {
  "book": "{my_book_name}",
  "pages": {numer_of_pages}

When fetching documents, MongoDB’s queryPlanner automatically chooses a winningPlan for your query. In this case, it selects the ‘COLLSCAN’ stage, which scans every document in the collection, making retrieval a time-consuming process.

How can we overcome the problem?

To obtain information on query plans and execution statistics, MongoDB provides the ‘explain’ command, which can be used on a collection or cursor. Here’s an example of an explained query (the surrounding find/explain call is reconstructed; only the filter survived in the original):

    db.collection.find({
        '_id.pages': { $gte: 100, $lte: 200 }
    }).explain('executionStats')
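In the explain output, the chosen plan lives under `queryPlanner.winningPlan`, and the actual access method is the innermost stage of its `inputStage` chain. A minimal sketch for pulling out that leaf stage (the helper name and the sample document are mine, but the field layout matches real explain output):

```javascript
// Walk an explain() plan down its inputStage chain
// to find the leaf access stage (e.g. COLLSCAN or IXSCAN).
function leafStage(plan) {
  let stage = plan;
  while (stage.inputStage) {
    stage = stage.inputStage;
  }
  return stage.stage;
}

// Illustrative explain() output shape before any index exists:
const explainDoc = {
  queryPlanner: {
    winningPlan: { stage: 'COLLSCAN', direction: 'forward' }
  }
};

console.log(leafStage(explainDoc.queryPlanner.winningPlan)); // 'COLLSCAN'
```

After a suitable index is added, the same helper would report an 'IXSCAN' leaf under a 'FETCH' stage.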

Despite the size of our collection, an “IXSCAN” stage would return results almost instantly; without a suitable index, however, the planner falls back to “COLLSCAN”.
To resolve this issue, a new index should be created for the collection. In our case, a compound index is created as follows:

    db.collection.createIndex({
        '_id.book': 1,
        '_id.pages': 1
    })

When creating a compound index, follow this rule of thumb to determine the field order:

• Add fields used for Equality queries first.
• Next, index fields in the order of Sort queries.
• The last fields to be indexed should represent the Range of data to be accessed.
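The Equality–Sort–Range rule above can be sketched as a tiny helper that assembles an index key document in ESR order (the function name and example fields are illustrative, not an official API):

```javascript
// Given the fields a query uses for equality matches, sorting,
// and range filters, emit index keys in Equality-Sort-Range order.
function esrIndexKeys({ equality = [], sort = [], range = [] }) {
  const keys = {};
  for (const field of [...equality, ...sort, ...range]) {
    keys[field] = 1;
  }
  return keys;
}

console.log(esrIndexKeys({
  equality: ['_id.book'],
  range: ['_id.pages']
}));
// { '_id.book': 1, '_id.pages': 1 }
```

The resulting object can be passed straight to `createIndex`.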


By creating a compound index with the correct field order, MongoDB will use an index scan, making the process of fetching filtered documents faster and more efficient.
