ElasticSearch reports java.lang.IllegalArgumentException: Document contains at least one immense term

Translation: the document being indexed contains an immense term, i.e. a single field value whose indexed form is too large.

java.lang.IllegalArgumentException: Document contains at least one immense term in field="responseData" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[34, 91, 123, 92, 34, 99, 97, 112, 116, 105, 111, 110, 92, 34, 58, 92, 34, -26, -75, -117, -24, -81, -107, -26, -75, -117, -24, -81, -107, -29]...', original message: bytes can be at most 32766 in length; got 41756
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:772)
    at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:417)
    at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:373)
    at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:231)
    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:478)
    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1562)
    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1307)
    at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:558)
    at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:520)
    at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:409)
    at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:556)
    at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:546)
    at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:191)
    at org.elasticsearch.action.index.TransportIndexAction.onPrimaryShard(TransportIndexAction.java:144)
    at org.elasticsearch.action.index.TransportIndexAction.onPrimaryShard(TransportIndexAction.java:63)
    at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnPrimary(TransportWriteAction.java:75)
    at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnPrimary(TransportWriteAction.java:48)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:905)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:875)
    at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:323)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:258)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:855)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:852)
    at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142)
    at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1655)
    at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:864)
    at org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:90)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:275)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:254)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:246)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
    at org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:577)
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: NotSerializableExceptionWrapper[max_bytes_length_exceeded_exception: bytes can be at most 32766 in length; got 41756]
    at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:263)
    at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:149)
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:762)
    ... 38 more
A second document hit the same error with an even larger term; the stack trace is identical to the one above:

java.lang.IllegalArgumentException: Document contains at least one immense term in field="responseData" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[34, 123, 92, 34, 100, 97, 116, 97, 92, 34, 58, 91, 123, 92, 34, 105, 100, 92, 34, 58, 92, 34, 97, 57, 57, 51, 48, 48, 100, 56]...', original message: bytes can be at most 32766 in length; got 62581
Caused by: NotSerializableExceptionWrapper[max_bytes_length_exceeded_exception: bytes can be at most 32766 in length; got 62581]

The message says that a term produced from the responseData field exceeds the maximum length (the max length 32766). Lucene caps a single indexed term at 32766 bytes of UTF-8; here the whole field value is evidently being indexed as one giant term (typical of a not_analyzed field holding a large JSON payload), so any document whose responseData exceeds the cap fails to index.
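Before changing anything, it helps to confirm how the field is currently mapped. A minimal check (the index name below is a placeholder for your own):

curl -XGET 'http://localhost:9200/your_index/_mapping?pretty'

If responseData shows up as a not_analyzed string (or keyword) field with no ignore_above, any value over 32766 bytes of UTF-8 will trigger this error.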

Solution: set ignore_above: 256 on the field, so that any value longer than 256 characters is skipped at index time instead of being turned into an oversized term.

ElasticSearch already provides a setting for exactly this: ignore_above. A sample mapping:

curl -XPUT 'http://localhost:9200/twitter' -d '
{
  "mappings": {
    "tweet": {
      "properties": {
        "message": {
          "type": "string",
          "index": "not_analyzed",
          "ignore_above": 256
        }
      }
    }
  }
}'
With this mapping, the message field under tweet is not analyzed: the original value is indexed as a single term. Once a value exceeds 256 characters, ignore_above causes it to be skipped at index time (note that the value is not truncated to the first 256 characters; the whole value is simply left out of the index, though it remains available in _source). This prevents the immense-term error described above. ignore_above is intended for not_analyzed fields and should be applied deliberately, not as a blanket setting.
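Incidentally, the stack trace above appears to come from an Elasticsearch 5.x node, and on 5.x and later the idiomatic replacement for a not_analyzed string is the keyword type, which accepts the same ignore_above setting. A minimal sketch reusing the twitter/tweet/message names from the example above:

curl -XPUT 'http://localhost:9200/twitter' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "tweet": {
      "properties": {
        "message": {
          "type": "keyword",
          "ignore_above": 256
        }
      }
    }
  }
}'

Keep in mind that the mapping of an existing field cannot be changed in place: to apply ignore_above to an index that already holds data, create a new index with the corrected mapping and reindex the data into it.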

Reference: https://blog.csdn.net/iteye_6322/article/details/82647099

