According to the Splunk blog, the best method for sizing or scaling a search head cluster is to estimate the maximum concurrent number of searches and divide by the number of CPU cores per search head. This gives you an idea of how many search heads you need to handle the peak search load without overloading the CPU resources. The other options are false because:
Estimating the maximum daily ingest volume in gigabytes and dividing by the number of CPU cores per search head is not a good method for sizing or scaling a search head cluster, as it does not account for the complexity and frequency of the searches. The ingest volume is more relevant for sizing or scaling the indexers, not the search heads.
Estimating the total number of searches per day and dividing by the number of CPU cores available on the search heads is not a good method for sizing or scaling a search head cluster, as it does not account for the concurrency and duration of the searches. The total number of searches per day is an average metric that does not reflect the peak search load or the search performance.
Dividing the number of indexers by three to achieve the correct number of search heads is not a good method for sizing or scaling a search head cluster, as it does not account for the search load or the search head capacity. The number of indexers is not directly proportional to the number of search heads, as different types of data and searches may require different amounts of resources.
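The recommended calculation above can be sketched as a short function. This is only an illustration of the arithmetic (the function name and the example figures are hypothetical, not from Splunk documentation):

```python
import math

def search_heads_needed(max_concurrent_searches: int, cores_per_search_head: int) -> int:
    """Estimate search head count: peak concurrent searches divided by
    CPU cores per search head, rounded up so peak load never exceeds capacity."""
    if cores_per_search_head <= 0:
        raise ValueError("cores_per_search_head must be positive")
    return math.ceil(max_concurrent_searches / cores_per_search_head)

# Hypothetical example: a peak of 40 concurrent searches on 16-core search heads
print(search_heads_needed(40, 16))  # → 3
```

Rounding up matters: with 40 concurrent searches on 16-core hosts, two search heads would leave 8 searches without a dedicated core at peak, so three are needed.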