Resolving elasticsearch exception for Parse Failure [No mapping found for [@timestamp] in order to sort on]


We use Kibana at work (a nice UI that sits on top of logstash – which, now that I check up on it, has adopted Kibana – and which in turn makes use of elasticsearch) as our visual tool for digging into the system logs from the instances we host on AWS. To my dismay, I received an alert this morning that we were running out of disk space on our dedicated logging box (it aggregates rsyslog data, with our other production boxes pumping information through to logstash). Mind you, it has been running for close to a year without missing a beat, so it was time for some maintenance.

Unfortunately I hadn’t set up a trimming cron job to deal with the eventuality of the logs blowing out on disk space, so there was very little space left by the time I finally went to take a look.

So, I first solved the space issue by killing off log indexes over 3 months old (we don’t need to keep anything older than that for the moment) and set that up as a daily cron job to run in the evening (there’s a sketch of it at the end of the post). However, I was impatient and decided to reboot the box once I thought I was done (without waiting for the index deletions to flush to disk). When the box came back up I noticed Kibana wasn’t showing any new entries, so I dived into the elasticsearch logs and found the following:

Caused by: org.elasticsearch.search.SearchParseException: [cluster][0]: query[ConstantScore(NotDeleted(cache(@timestamp:[2013-04-26T02:17:20+00:00 TO 2013-04-26T02:17:40+00:00])))],from[0],size[15]: Parse Failure [No mapping found for [@timestamp] in order to sort on]
 at org.elasticsearch.search.sort.SortParseElement.addSortField(SortParseElement.java:164)
 at org.elasticsearch.search.sort.SortParseElement.addCompoundSortField(SortParseElement.java:138)
 at org.elasticsearch.search.sort.SortParseElement.parse(SortParseElement.java:76)
 at org.elasticsearch.search.SearchService.parseSource(SearchService.java:545)
 ... 10 more
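The key bit is the missing @timestamp mapping on one of the indices. A quick way to check whether a given daily index still has that mapping is to ask elasticsearch for it directly – a minimal sketch, assuming elasticsearch is listening on the default localhost:9200:

~$ curl 'http://localhost:9200/logstash-2013.04.26/_mapping?pretty'

A healthy daily index lists an @timestamp property under its type mapping; one that comes back without it is the likely culprit for the sort failure.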

It seems I’m not the only one to encounter the problem; some quick googling turned up a bunch of others suffering the same issue. The suggestion most people offered was to delete the elasticsearch data files! This was terrible and filled me with a cold sense of dread – I really didn’t want to lose any of our production log data if I could help it. More googling eventually led me to the following bug report:

https://logstash.jira.com/browse/LOGSTASH-889

This led me to believe that if only a few indexes were affected I should be able to delete just those, right? Better to lose a day or so (since I was only missing data from the last 10 hours and the logstash indices are daily) than the whole lot. So with that in mind I plonked into my shell and sent the following (planning to go back and kill the most recent indices one at a time if needed):

~$ curl -XDELETE 'http://localhost:9200/logstash-2013.04.26'
{"ok":true,"acknowledged"}
~$

Going back to Kibana, I immediately saw the spooled rsyslog messages from the other production systems happily streaming in again.

Happy Days!

I’m keeping this handy in the event I hit this snag again – who knows, it might help someone else as well.
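For completeness, the trimming cron job I mentioned earlier boils down to something like this – a minimal sketch only, assuming daily logstash-YYYY.MM.DD indices, elasticsearch on the default localhost:9200 and GNU date; the script path and the 90-day window are just illustrative:

#!/bin/bash
# /usr/local/bin/trim-logstash-indices.sh (illustrative path)
# Delete the daily logstash index from roughly three months (90 days) ago.
# Run nightly so no more than ~3 months of indices stay on disk.
OLD=$(date --date='90 days ago' +%Y.%m.%d)
curl -s -XDELETE "http://localhost:9200/logstash-${OLD}"

And the crontab entry to fire it every evening:

0 22 * * * /usr/local/bin/trim-logstash-indices.sh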

3 thoughts on “Resolving elasticsearch exception for Parse Failure [No mapping found for [@timestamp] in order to sort on]”

  1. I had this problem this morning. So as not to lose a whole day’s worth of logs, I have found that breaking the indices up by the hour helps in case you have a corrupt index.

    Also, my Kibana 3 dashboard was automatically configured for logstash; this is the index pattern in my dashboard settings:

    [logstash-]YYYY.MM.DD

  2. I had the same errors in my Kibana/Logstash/Elasticsearch setup and found my way to your post. I ended up identifying a fix that, in my scenario, didn’t require deleting any indexes.

    The error — Parse Failure [No mapping found for [@timestamp] in order to sort on] — was being caused by Kibana defaulting to “_all” for its index queries in many of my dashboards. There’s a “kibana-int” index in ES that Kibana uses to store the dashboard settings. Because it doesn’t have a timestamp field, the query fails and causes this error.

    Solution: Go to “Configure Dashboard” (top right corner, gear icon) and open the Index tab. If you have “_all” as the default index, replace it with “logstash*”. The wildcard will include all of the daily logstash indices in ES (a quick way to see the difference by hand is sketched below).

    Hope this helps someone else… this was driving me nuts for a while!
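    If you want to see the difference by hand, you can run the same kind of sorted query against “_all” and then against only the logstash indices – a minimal sketch, again assuming the default localhost:9200 (the query body is just illustrative):

    ~$ curl -XGET 'http://localhost:9200/_all/_search?pretty' -d '{"size":1,"sort":[{"@timestamp":{"order":"desc"}}]}'

    The first query trips over the “kibana-int” index (which has no @timestamp field) and reports the same parse failure for its shards, while the one below only touches the daily log indices and happily returns the newest entry:

    ~$ curl -XGET 'http://localhost:9200/logstash-*/_search?pretty' -d '{"size":1,"sort":[{"@timestamp":{"order":"desc"}}]}'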
