The host is AWS Elasticsearch Service. I have 2TB of data stored across 6 nodes, in 30 indexes with 10 shards each. A simple search across all indexes is very slow and takes a few minutes.
Where did I make a mistake? Is this normal, or do I have bad settings, or maybe too much data stored?
My cluster settings:
"search": {
"max_queue_size": "1000",
"queue_size": "1000",
"size": "4",
"auto_queue_frame_size": "2000",
"target_response_time": "1s",
"min_queue_size": "1000"
},
My nodes settings:
"os": {
"refresh_interval_in_millis": 1000,
"name": "Linux",
"pretty_name": "CentOS Linux 7 (Core)",
"arch": "amd64",
"version": "4.15.0-1039-aws",
"available_processors": 32,
"allocated_processors": 2
}
Thank you!
2 Answers
It's a very broad question with very little information. Can you please provide more details?
For a start, allocated_processors is very low (2) compared to available_processors (32). You can refer to my 10 tips on improving search performance, and also tell me the values of the parameters mentioned in the tips.
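As an illustration of how to gather that information (the endpoint URL is hypothetical; the APIs are the standard Elasticsearch node-info and cat APIs), you can inspect each node's processor allocation and the search thread pool that is sized from it:

```shell
# Show OS and processor info for every node, including
# available_processors vs. allocated_processors
curl -s "https://your-es-endpoint:9200/_nodes/os?pretty"

# Show the search thread pool per node: its size is derived from
# allocated_processors, so 2 allocated cores means a tiny search pool
curl -s "https://your-es-endpoint:9200/_cat/thread_pool/search?v&h=node_name,size,queue_size,active,rejected"
```

A growing `rejected` count in the second command is a strong sign that the thread pool, not the data volume, is the bottleneck.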
That's too much.
The target size for a shard should be around 50GB; with your settings you are at around 5GB each.
You can shrink to 5 shards or fewer and force merge to 1 segment.
Performance should improve a lot.
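A sketch of that procedure using the standard shrink and force-merge APIs (the index names and endpoint are hypothetical; note the shrink API also requires a copy of every shard to sit on a single node, which step 1 arranges via an allocation filter on a node name of your choosing):

```shell
# 1. Block writes and relocate one copy of every shard onto a single node
#    ("node-1" is a placeholder for one of your node names)
curl -s -X PUT "https://your-es-endpoint:9200/my-index/_settings" \
  -H 'Content-Type: application/json' -d '{
  "index.blocks.write": true,
  "index.routing.allocation.require._name": "node-1"
}'

# 2. Shrink from 10 primary shards down to 5
#    (the target shard count must be a factor of the source count)
curl -s -X POST "https://your-es-endpoint:9200/my-index/_shrink/my-index-shrunk" \
  -H 'Content-Type: application/json' -d '{
  "settings": { "index.number_of_shards": 5 }
}'

# 3. Force merge the shrunken index down to one segment per shard
curl -s -X POST "https://your-es-endpoint:9200/my-index-shrunk/_forcemerge?max_num_segments=1"
```

Repeat per index, then point your aliases or clients at the shrunken indexes; with 30 indexes this cuts the cluster from 300 primary shards to 150 or fewer.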
After that, look at the other good advice provided by Optsters in his blog. It is all relevant.