Elasticsearch unassigned shards

Elasticsearch shards across a cluster can get into many undesirable states. One such state hit us when our Jaeger collector stopped our Docker containers and Kubernetes pods from starting. Our Elasticsearch cluster had been treated harshly: both data nodes went offline at the same time, leaving the cluster in a state it could not recover from without intervention. The examples below show the steps taken to recover it.

Check cluster health for clues

Checking the cluster health showed a number of **unassigned_shards**, forcing the status of the cluster to red.
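
The check itself is a single request to the _cluster/health endpoint (the examples in this post assume Elasticsearch is reachable on localhost:9200):

#> curl -s 'localhost:9200/_cluster/health?pretty'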

{
  "cluster_name" : "tracing",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 140,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 0.0
}

Check the shard allocation explanation

Elasticsearch offers an endpoint to explain shard allocation across a cluster. Using it, it was clear the shards were unassigned because the cluster nodes had restarted.
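
With no request body the explain endpoint reports on the first unassigned shard it finds; the sketch below targets the specific shard from the output that follows (the index, shard and primary fields in the body are optional):

#> curl -s -XGET 'localhost:9200/_cluster/allocation/explain?pretty' \
  -H 'Content-Type: application/json' -d '{
  "index" : "jaeger-span-2019-01-15",
  "shard" : 2,
  "primary" : true
}'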

{
  "index" : "jaeger-span-2019-01-15",
  "shard" : 2,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2019-01-15T04:30:53.526Z",
    "last_allocation_status" : "no_valid_shard_copy"
  },
  "can_allocate" : "no_valid_shard_copy",
  "allocate_explanation" : "cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster",
  "node_allocation_decisions" : [
    {
      "node_id" : "DzyPBnARQ7yqd7x_qjIyZQ",
      "node_name" : "elasticsearch-data-1",
      "transport_address" : "10.244.3.74:9300",
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "Rd5bU7fvTbar2ZqJK6Aljw",
      "node_name" : "elasticsearch-data-0",
      "transport_address" : "10.244.5.248:9300",
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    }
  ]
}

State of shards

Checking the state of the individual shards gave us what we needed, the index name and shard number of every unassigned shard, to clear up this mess.
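
The listing below comes from the _cat/shards endpoint, restricted to the index, shard, prirep, state and unassigned.reason columns and filtered on UNASSIGNED:

#> curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | fgrep UNASSIGNED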

jaeger-service-2019-01-02 4 p UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 4 r UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 2 p UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 2 r UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 1 p UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 1 r UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 3 p UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 3 r UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 0 p UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-02 0 r UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-08 4 p UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-08 4 r UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-08 2 p UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-08 2 r UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-08 3 p UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-08 3 r UNASSIGNED CLUSTER_RECOVERED
jaeger-service-2019-01-08 1 p UNASSIGNED CLUSTER_RECOVERED

Fix this mess

Fixing the unassigned shards was straightforward with the cluster reroute endpoint, and easy to automate for all 140 shards that needed it.
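
A single shard can be forced back onto a node with an allocate_empty_primary command, sketched here for the shard from the explain output above (the node name must match one of the data nodes in the cluster, and accept_data_loss is required because no valid copy of the data remains):

#> curl -s -XPOST 'localhost:9200/_cluster/reroute?pretty' \
  -H 'Content-Type: application/json' -d '{
  "commands" : [ {
    "allocate_empty_primary" : {
      "index" : "jaeger-span-2019-01-15",
      "shard" : 2,
      "node" : "elasticsearch-data-0",
      "accept_data_loss" : true
    }
  } ]
}'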

The short shell loop below was used to rectify the issue: it walks every UNASSIGNED shard, picks one of the two data nodes at random, and forces an empty primary allocation for it.

#> range=2           # two data nodes to spread the empty primaries across
#> IFS=$'\n'         # split the curl output on newlines only

#> for line in $(curl -s 'localhost:9200/_cat/shards' | fgrep UNASSIGNED); do
  INDEX=$(echo $line | awk '{print $1}')
  SHARD=$(echo $line | awk '{print $2}')
  # pick data node 0 or 1 at random
  number=$RANDOM
  let "number %= ${range}"

  curl -XPOST 'http://localhost:9200/_cluster/reroute' \
    -H 'Content-Type: application/json' -d '{
    "commands" : [ {
      "allocate_empty_primary" : {
        "index" : "'${INDEX}'",
        "shard" : '${SHARD}',
        "node" : "undercooked-horse-elasticsearch-data-'${number}'",
        "accept_data_loss" : true
      }
    } ]
  }'
done
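
Re-running the shard listing afterwards confirmed that every shard had been assigned and started: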


#> curl -XGET 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason'
jaeger-service-2019-01-03 4 p STARTED
jaeger-service-2019-01-03 4 r STARTED
jaeger-service-2019-01-03 2 p STARTED
jaeger-service-2019-01-03 2 r STARTED
jaeger-service-2019-01-03 3 p STARTED
jaeger-service-2019-01-03 3 r STARTED
jaeger-service-2019-01-03 1 p STARTED
jaeger-service-2019-01-03 1 r STARTED
jaeger-service-2019-01-03 0 p STARTED
jaeger-service-2019-01-03 0 r STARTED
jaeger-span-2019-01-15    4 p STARTED
jaeger-span-2019-01-15    4 r STARTED
jaeger-span-2019-01-15    2 p STARTED
jaeger-span-2019-01-15    2 r STARTED
jaeger-span-2019-01-15    3 p STARTED
jaeger-span-2019-01-15    3 r STARTED
jaeger-span-2019-01-15    1 p STARTED
jaeger-span-2019-01-15    1 r STARTED
jaeger-span-2019-01-15    0 p STARTED
jaeger-span-2019-01-15    0 r STARTED
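
Once the empty primaries were allocated, Elasticsearch brought the replicas back on its own, as the r entries above show, and with all shards active the cluster health returned from red to green.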