| Name | Modified | Size | Downloads / Week |
|---|---|---|---|
| meilisearch-enterprise-macos-amd64 | 2026-03-02 | 134.2 MB | |
| meilisearch-macos-amd64 | 2026-03-02 | 132.3 MB | |
| meilisearch-windows-amd64.exe | 2026-03-02 | 134.5 MB | |
| meilisearch-enterprise-windows-amd64.exe | 2026-03-02 | 136.2 MB | |
| meilisearch.deb | 2026-03-02 | 88.1 MB | |
| meilisearch-enterprise-linux-amd64 | 2026-03-02 | 141.2 MB | |
| meilisearch-enterprise-macos-apple-silicon | 2026-03-02 | 129.5 MB | |
| meilisearch-linux-amd64 | 2026-03-02 | 139.3 MB | |
| meilisearch-macos-apple-silicon | 2026-03-02 | 128.0 MB | |
| meilisearch-enterprise-linux-aarch64 | 2026-03-02 | 135.4 MB | |
| meilisearch-linux-aarch64 | 2026-03-02 | 133.7 MB | |
| meilisearch-openapi.json | 2026-03-02 | 592.0 kB | |
| README.md | 2026-02-26 | 10.9 kB | |
| v1.37.0 source code.tar.gz | 2026-02-26 | 20.2 MB | |
| v1.37.0 source code.zip | 2026-02-26 | 21.2 MB | |
| Totals: 15 Items | | 1.5 GB | 0 |
> [!IMPORTANT]
> This release contains breaking changes for users of the `network` experimental feature.
Meilisearch v1.37 introduces replicated sharding, removes the `vectorStoreSetting` experimental feature, stabilizes our new vector store for best performance, adds a security fix and miscellaneous improvements.
## ✨ Improvements
### Replicated sharding
> [!NOTE]
> Replicated sharding requires Meilisearch Enterprise Edition (EE).
> - Users of Meilisearch Cloud: please contact support if you need replicated sharding.
> - Users of the Community Edition: please contact sales if you want to use replicated sharding in production.
#### Breaking changes
`network` objects sent to the `PATCH /network` route must now contain at least one `shard` object containing at least one remote when `leader` is not `null`.
Existing databases will be migrated automatically when upgraded with `--experimental-dumpless-upgrade`. When `leader` is not `null`, for each remote:
- A shard with the same name as the remote is created.
- This shard has exactly one remote in its `remotes` list: the remote with the same name as the shard.
This change will not cause any document to be resharded.
To be able to upgrade without resharding, the migration uses the same name for remotes and for shards. However, in new configurations, we recommend using different names for shards and remotes.
##### Example of migration

For instance, the following `network` object:

```jsonc
{
  "leader": "ms-00",
  "self": "ms-01",
  "remotes": {
    "ms-00": { /* .. */ },
    "ms-01": { /* .. */ }
  }
}
```

is converted to:

```jsonc
{
  "leader": "ms-00",
  "self": "ms-01",
  "remotes": {
    "ms-00": { /* .. */ },
    "ms-01": { /* .. */ }
  },
  "shards": { // ✨ NEW
    "ms-00": { // shard named like the remote
      "remotes": ["ms-00"] // is owned by the remote
    },
    "ms-01": {
      "remotes": ["ms-01"]
    }
  }
}
```

#### Addition of `network.shards`
The `network` object for the `PATCH /network` and `GET /network` routes now contains the new field `shards`: an object whose values are shard objects and whose keys are the names of the shards.
Each shard object contains a single field `remotes`, which is an array of strings, each string being the name of an existing remote.
#### Convenience fields
The shard objects in `PATCH /network` accept the additional fields `addRemotes` and `removeRemotes`, meant for convenience:

- Pass an array of remote names to `shard.addRemotes` to add these remotes to the list of remotes of a shard.
- Pass an array of remote names to `shard.removeRemotes` to remove these remotes from the list of remotes of a shard.
- If present and non-`null`, `shard.remotes` completely overrides the existing list of remotes for a shard.
- If several of these options are present and non-`null`, the order of application is `shard.remotes`, then `shard.addRemotes`, then `shard.removeRemotes`.
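As a mental model, the order of application can be sketched in a few lines of Python (an illustrative sketch, not Meilisearch's actual implementation; `apply_shard_patch` is a hypothetical helper):

```python
def apply_shard_patch(current_remotes, patch):
    """Illustrative model of resolving a shard patch from PATCH /network.

    `current_remotes` is the shard's existing list of remote names;
    `patch` may contain `remotes`, `addRemotes`, and `removeRemotes`.
    """
    remotes = list(current_remotes)
    # 1. `remotes`, if present and non-null, fully overrides the list.
    if patch.get("remotes") is not None:
        remotes = list(patch["remotes"])
    # 2. `addRemotes` is applied next.
    for name in patch.get("addRemotes") or []:
        if name not in remotes:
            remotes.append(name)
    # 3. `removeRemotes` is applied last.
    for name in patch.get("removeRemotes") or []:
        if name in remotes:
            remotes.remove(name)
    return remotes
```

For example, patching a shard owned by `ms-0` with `{"remotes": ["ms-1"], "addRemotes": ["ms-2"], "removeRemotes": ["ms-1"]}` would, under this model, leave `["ms-2"]` as the owner list.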
##### Adding a new shard with some remotes
```jsonc
// PATCH /network
{
  // assuming that remotes `ms-0`, `ms-1`, `ms-2` were sent in a previous call to PATCH /network
  "shards": {
    "s-a": { // new shard
      "remotes": ["ms-0", "ms-1"]
    }
  }
}
```

Remotes `ms-0` and `ms-1` own the new shard `s-a`.

##### Fully overriding the list of remotes owning a shard
```jsonc
// PATCH /network
{
  // assuming remotes `ms-0`, `ms-1`, `ms-2`
  // assuming shard `s-a`, owned by `ms-0` and `ms-1`
  "shards": {
    "s-a": {
      "remotes": ["ms-2"]
    }
  }
}
```

`ms-2` is now the sole owner of `s-a`, replacing `ms-0` and `ms-1`.

##### Adding a remote without overriding the list of remotes owning a shard
```jsonc
// PATCH /network
{
  // assuming remotes `ms-0`, `ms-1`, `ms-2`
  // assuming shard `s-a`, owned by `ms-2`
  "shards": {
    "s-a": {
      "addRemotes": ["ms-0"]
    }
  }
}
```

`ms-0` and `ms-2` are now the owners of `s-a`.

##### Removing a remote without overriding the list of remotes owning a shard
```jsonc
// PATCH /network
{
  // assuming remotes `ms-0`, `ms-1`, `ms-2`
  // assuming shard `s-a`, owned by `ms-0` and `ms-2`
  "shards": {
    "s-a": {
      "removeRemotes": ["ms-2"]
    }
  }
}
```

`ms-0` is now the sole owner of `s-a`.

##### Entirely removing a shard from the list of shards
Set the shard to `null`:

```jsonc
// PATCH /network
{
  "shards": {
    "s-a": null
  }
}
```

Or set its `remotes` list to the empty list:

```jsonc
// PATCH /network
{
  "shards": {
    "s-a": {
      "remotes": []
    }
  }
}
```

#### `network.shards` validity
When `network.leader` is not `null`, each shard object in `network.shards` must:
- Only contain remotes that exist in the list of `remotes`.
- Contain at least one remote.
Additionally, `network.shards` must contain at least one shard.
Failure to meet any of these conditions will cause the `PATCH /network` route to respond with `400 invalid_network_shards`.
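These validity rules can be sketched as a small checker (a hypothetical helper for illustration, not the server's actual validation code):

```python
def validate_shards(network):
    """Check the `network.shards` rules that apply when a leader is set."""
    if network.get("leader") is None:
        return  # the rules below only apply when a leader is declared
    shards = network.get("shards") or {}
    if not shards:
        raise ValueError("invalid_network_shards: at least one shard is required")
    known_remotes = set(network.get("remotes") or {})
    for name, shard in shards.items():
        remotes = shard.get("remotes") or []
        if not remotes:
            raise ValueError(f"invalid_network_shards: shard {name!r} has no remote")
        unknown = set(remotes) - known_remotes
        if unknown:
            raise ValueError(
                f"invalid_network_shards: shard {name!r} references "
                f"unknown remotes {sorted(unknown)}"
            )
```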
#### Change in sharding logic
Documents are now sharded according to the list of shards declared in the network rather than the list of remotes. All remotes owning a shard will process the documents that belong to this shard, allowing for replication.
##### Example of replication
The following configuration defines 3 remotes `0`, `1`, and `2`, and 3 shards `A`, `B`, `C`, such that each remote owns two shards, achieving replication (losing one remote does not lose any document):

```jsonc
{
  "leader": "0",
  "self": "0",
  "remotes": {
    "0": { /* .. */ },
    "1": { /* .. */ },
    "2": { /* .. */ }
  },
  "shards": {
    "A": { "remotes": ["0", "1"] },
    "B": { "remotes": ["1", "2"] },
    "C": { "remotes": ["2", "0"] }
  }
}
```

- Full replication is supported by having all remotes own all the shards.
- Unbalanced replication is supported by having some remotes own more shards than others.
- "Watcher" remotes (remotes that own no shards) are supported. Watcher remotes are not very useful in this release; a future release might upgrade them to keep all documents without indexing them, allowing shards to be "respawned" for other remotes.
#### `useNetwork` takes `network.shards` into account
When `useNetwork: true` is passed to a search query, it is expanded into multiple queries such that each shard declared in `network.shards` appears exactly once, associated with a remote that owns that shard.
This ensures that there are no missing or duplicate documents in the results.
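The expansion can be pictured with a small sketch (illustrative only; the actual remote-selection strategy is not documented here, so this version simply picks the first listed owner of each shard):

```python
def expand_query(shards, query):
    """Sketch of a `useNetwork` expansion: one sub-query per shard,
    each routed to a remote that owns that shard."""
    return [
        {**query, "filter": f'_shard = "{name}"', "remote": shard["remotes"][0]}
        for name, shard in shards.items()
    ]

# With the replication example's shards, one query becomes three,
# and each shard appears exactly once.
expanded = expand_query(
    {"A": {"remotes": ["0", "1"]}, "B": {"remotes": ["1", "2"]}, "C": {"remotes": ["2", "0"]}},
    {"q": "wizard"},
)
```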
#### `_shard` filters
When the `network` experimental feature is enabled, it becomes possible to filter documents by the shard they belong to.
Given `s-a` and `s-b`, the names of two shards declared in `network.shards`:

- `_shard = "s-a"` in a `filter` parameter to the search or document fetch routes returns the documents that belong to `s-a`.
- `_shard != "s-a"` returns the documents that do not belong to `s-a`.
- `_shard IN ["s-a", "s-b"]` returns the documents that belong to `s-a` or to `s-b`.
You can use these new filters in manual remote federated search to create a partitioning over all shards in the network.
> [!IMPORTANT]
> To avoid duplicate or missing documents in the results, manually crafted remote federated search requests should include every shard in exactly one query.
> [!TIP]
> Search requests built with `useNetwork: true` already build a correct partitioning over shards. Prefer them to manually crafted remote federated search requests in replicated sharding scenarios.
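For reference, a manually crafted partition over the shards of the earlier replication example might look like the following (a sketch only: the index name `movies`, the query text, and the exact request shape are illustrative assumptions, not taken from this release):

```jsonc
// POST /multi-search
{
  "federation": {},
  "queries": [
    // each shard appears in exactly one query, on one of its owners
    { "indexUid": "movies", "q": "wizard", "filter": "_shard = \"A\"", "federationOptions": { "remote": "0" } },
    { "indexUid": "movies", "q": "wizard", "filter": "_shard = \"B\"", "federationOptions": { "remote": "1" } },
    { "indexUid": "movies", "q": "wizard", "filter": "_shard = \"C\"", "federationOptions": { "remote": "2" } }
  ]
}
```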
#### Update instructions
When updating your Meilisearch network using dumpless upgrade, please observe the following guidelines:
- Do not call the `PATCH /network` route until all remotes of the network have finished updating.
- If using the search routes with `useNetwork: true`, call them on un-updated remotes. Calling them on already-updated remotes will cause the un-updated remotes to fail the search, as they don't know about the `_shard` filters.
By @dureuill in https://github.com/meilisearch/meilisearch/pull/6128
### Remove `vectorStoreSetting` experimental feature
The new HNSW vector store (`hannoy`) has been stabilized and is now the only supported vector store in Meilisearch.
As a result, updating to v1.37.0 will migrate all remaining legacy vector store indexes (using `arroy`) to `hannoy`, and the `vectorStoreSetting` experimental feature is no longer available.
By @Kerollmops in https://github.com/meilisearch/meilisearch/pull/6176
### Improve indexing performance for embeddings
We removed a computationally expensive step from vector indexing.
On a database with 20M documents, this removes about 300s from each 1100s indexing batch.
By @Kerollmops in https://github.com/meilisearch/meilisearch/pull/6175
## 🔒 Security
- Bump mini-dashboard (the local web interface), which:
  - now stores the API key in RAM instead of in `localStorage`
  - bumps dependencies with potential security vulnerabilities
By @Strift and @curquiza in https://github.com/meilisearch/meilisearch/pull/6186 and https://github.com/meilisearch/meilisearch/pull/6172
## Miscellaneous
- Mark Cargo.lock as not linguist-generated by @Kerollmops in https://github.com/meilisearch/meilisearch/pull/6181
Full Changelog: https://github.com/meilisearch/meilisearch/compare/v1.36.0...v1.37.0