Table of Contents
1. "read_only_allow_delete": "true"
2. illegal_argument_exception
3. Result window is too large

1. "read_only_allow_delete": "true"
When adding a document to an index, you may (in rare cases) run into the following error:
{ "error": { "root_cause": [ { "type": "cluster_block_exception", "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];" } ], "type": "cluster_block_exception", "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];" }, "status": 403 }
The error says the index is currently in read-only mode. Checking the index settings confirms it:
GET z1/_settings

# Response
{
  "z1" : {
    "settings" : {
      "index" : {
        "number_of_shards" : "5",
        "blocks" : {
          "read_only_allow_delete" : "true"
        },
        "provided_name" : "z1",
        "creation_date" : "1556204559161",
        "number_of_replicas" : "1",
        "uuid" : "3PEevS9xSm-r3tw54p0o9w",
        "version" : {
          "created" : "6050499"
        }
      }
    }
  }
}
可以看到"read_only_allow_delete" : "true",说明此时无法插入数据,当然,我们也可以模拟出来这个错误:
PUT z1 { "mappings": { "doc": { "properties": { "title": { "type":"text" } } } }, "settings": { "index.blocks.read_only_allow_delete": true } } PUT z1/doc/1 { "title": "es真难学" }
Now any attempt to insert data raises the error shown at the beginning. How do we fix it?
There are two ways out. One is to free up disk space until usage falls below 85% (elasticsearch applies this block as part of its disk-watermark protection; see the official documentation for details). The other is to adjust the setting manually, resetting it to null:
PUT z1/_settings
{
  "index.blocks.read_only_allow_delete": null
}
Check the index again and it is back to normal; both inserts and queries work.
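If you manage indices from code, the same reset can be done with the Python client. This is a minimal sketch, assuming a local node on localhost:9200 and the 6.x-style elasticsearch-py API; the index name z1 follows the example above.

# A minimal sketch, assuming elasticsearch-py against a local 6.x node
from elasticsearch import Elasticsearch

es = Elasticsearch()  # defaults to localhost:9200

# Inspect the current block flags on the index
settings = es.indices.get_settings(index="z1")
print(settings["z1"]["settings"]["index"].get("blocks"))

# Setting the value to None serializes to JSON null and clears the block
es.indices.put_settings(
    index="z1",
    body={"index.blocks.read_only_allow_delete": None},
)

Passing None here is equivalent to the null in the Kibana request above: it removes the override rather than setting it to false.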
2. illegal_argument_exception
Sometimes an aggregation fails with the following error:
{ "error": { "root_cause": [ { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead." } ], "type": "search_phase_execution_exception", "reason": "all shards failed", "phase": "query", "grouped": true, "failed_shards": [ { "shard": 0, "index": "z2", "node": "NRwiP9PLRFCTJA7w3H9eqA", "reason": { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead." } } ], "caused_by": { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.", "caused_by": { "type": "illegal_argument_exception", "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead." } } }, "status": 400 }
What happened? The field used in the aggregation is mapped as text, and text fields cannot be aggregated this way. Consider this example:
PUT z2/doc/1
{
  "age": "18"
}

PUT z2/doc/2
{
  "age": 20
}

GET z2/doc/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "my_sum": {
      "sum": {
        "field": "age"
      }
    }
  }
}
When we add a document to elasticsearch (if the index already exists, the document is added or updated; if not, the index is created first), the mapping type of the age field is determined first.

In the example above, when the first document is added (the z2 index does not yet exist), elasticsearch automatically creates the index and infers a mapping for the age field from its value: since "18" is a string, the field is mapped as text. That mapping is then fixed, so in the second document age is still treated as text, not the long it appears to be. We can confirm this by checking the index's mappings:
GET z2/_mapping

# Response
{
  "z2" : {
    "mappings" : {
      "doc" : {
        "properties" : {
          "age" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }
    }
  }
}
As the response shows, age is mapped as text, and that type does not support numeric aggregations, hence the error. The fix:
With dynamic mapping, the field types of an index are fixed by the first document you add. It is not the case that the field is text the first time and silently becomes long the second time just because the quotes are dropped. If that behavior is a nuisance, create the mapping manually: declare the type of each field up front, and later documents and aggregations will not run into this error, as sketched below.
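Here is a minimal sketch of the manual-mapping fix using the Python client. The index name z3 is hypothetical (chosen to avoid clashing with z2 above), and the 6.x-style API with a doc type matches the other examples in this article:

from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumes a local node on localhost:9200

# Declare the mapping up front so "age" is numeric, not text
es.indices.create(index="z3", body={
    "mappings": {
        "doc": {
            "properties": {
                "age": {"type": "long"}
            }
        }
    }
})

es.index(index="z3", doc_type="doc", id=1, body={"age": "18"})  # coerced to long
es.index(index="z3", doc_type="doc", id=2, body={"age": 20})
es.indices.refresh(index="z3")  # make the documents visible to search

# The sum aggregation now works because age is mapped as long
res = es.search(index="z3", body={
    "size": 0,
    "aggs": {"my_sum": {"sum": {"field": "age"}}}
})
print(res["aggregations"]["my_sum"]["value"])  # expected: 38.0

Note that with an explicit long mapping, the string "18" in the first document is coerced to a number on indexing, so inconsistent client input no longer corrupts the mapping.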
3. Result window is too large

Often a query matches a large number of documents, and how many elasticsearch returns in one request is controlled by the size parameter:
GET e2/doc/_search
{
  "size": 100000,
  "query": {
    "match_all": {}
  }
}
By default at most 10,000 results can be returned in a single request, so asking for more (say 100,000) raises:
Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.
It means a single request would return too many results. You can either switch to the scroll API, or manually raise the maximum allowed size by changing the index.max_result_window index setting:
# In Kibana
PUT e2/_settings
{
  "index": {
    "max_result_window": "100000"
  }
}

# In Python
from elasticsearch import Elasticsearch

es = Elasticsearch()
es.indices.put_settings(index="e2", body={"index": {"max_result_window": 100000}})
As in the example, we raise the maximum for index e2 to 100,000; a query will now return in one shot as long as its result window does not exceed 100,000.
Note that this setting is permanent for the index: the new size cap stays in effect until it is changed again.
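For genuinely large result sets, the scroll API mentioned in the error message is usually a better choice than raising max_result_window, since it streams results in batches instead of materializing one huge window. A minimal sketch with the Python client, assuming the e2 index from above:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Open a scroll context; "2m" keeps it alive for 2 minutes between batches
resp = es.search(
    index="e2",
    scroll="2m",
    body={"size": 1000, "query": {"match_all": {}}},
)
scroll_id = resp["_scroll_id"]
hits = resp["hits"]["hits"]

while hits:
    for hit in hits:
        pass  # process each document here
    # Fetch the next batch using the scroll id
    resp = es.scroll(scroll_id=scroll_id, scroll="2m")
    scroll_id = resp["_scroll_id"]
    hits = resp["hits"]["hits"]

es.clear_scroll(scroll_id=scroll_id)  # release the scroll context when done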
That concludes this walkthrough of common Elasticsearch errors and how to resolve them. For more on troubleshooting Elasticsearch, see the other related articles on 脚本之家.