IK Analyzer is a Chinese word-segmentation tool written in Java. It combines dictionary-based segmentation with statistical methods, aiming to provide efficient, accurate, and flexible Chinese tokenization.
Note: you need to build your own sensitive-word lexicon and choose a way to sync it into Elasticsearch yourself, so that the comparison below has something to match against.
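One way to sync the lexicon is Elasticsearch's `_bulk` API. The sketch below is hypothetical and only builds the NDJSON request body; the index name `hyposensitization` and field name `content` match the search code later in this post. It does not escape special JSON characters in the words, so treat it as a starting point:

```java
import java.util.List;

public class SensitiveWordSync {
    // Build the NDJSON body for Elasticsearch's _bulk API:
    // one action line plus one document line per sensitive word.
    // Note: words containing quotes/backslashes would need JSON escaping.
    static String buildBulkBody(String index, List<String> words) {
        StringBuilder sb = new StringBuilder();
        for (String w : words) {
            sb.append("{\"index\":{\"_index\":\"").append(index).append("\"}}\n");
            sb.append("{\"content\":\"").append(w).append("\"}\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String body = buildBulkBody("hyposensitization", List.of("word1", "word2"));
        System.out.println(body);
        // POST this body to http://your-server-ip:9200/_bulk
        // with Content-Type: application/x-ndjson to index the words.
    }
}
```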
Without further ado, here is the backend code.
These are the dependencies I use; pick the versions that match your own environment.
```xml
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
```
I wrapped the core logic in a utility class. The first half does the tokenization: it returns a list containing the words segmented from the input string.
```java
public static Long ikword(String str) {
    String url = "http://your-server-ip:9200/_analyze"; // Elasticsearch address
    // Build the request body: analyzer + text
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("analyzer", "ik_max_word");
    jsonObject.put("text", str);
    String json = jsonObject.toString();

    CloseableHttpClient httpClient = HttpClients.createDefault();
    HttpPost httpPost = new HttpPost(url);
    ArrayList<String> list = new ArrayList<>();
    try {
        StringEntity entity = new StringEntity(json, "UTF-8");
        entity.setContentType("application/json");
        httpPost.setEntity(entity);
        CloseableHttpResponse response = httpClient.execute(httpPost);
        try {
            HttpEntity responseEntity = response.getEntity();
            String result = EntityUtils.toString(responseEntity, "UTF-8");
            // Parse the "tokens" array out of the _analyze response
            ObjectMapper mapper = new ObjectMapper();
            JsonNode rootNode = mapper.readTree(result);
            JsonNode tokensNode = rootNode.path("tokens");
            if (tokensNode.isArray()) {
                for (JsonNode token : tokensNode) {
                    list.add(token.path("token").asText());
                }
            }
        } finally {
            response.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            httpClient.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return ikword1(list);
}
```
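For reference, the `_analyze` call above exchanges JSON shaped roughly like this (the exact tokens and offsets depend on your IK dictionary; the response here is truncated):

```json
POST /_analyze
{ "analyzer": "ik_max_word", "text": "中华人民共和国" }

{
  "tokens": [
    { "token": "中华人民共和国", "start_offset": 0, "end_offset": 7, "type": "CN_WORD", "position": 0 },
    ...
  ]
}
```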
The second half searches the Elasticsearch index for sensitive words matching the tokens in that list.
```java
public static Long ikword1(List<String> list) {
    Long count = 0L;
    // Create the client connection
    try (RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("your-server-ip", 9200, "http")))) {
        // Build the search request
        SearchRequest searchRequest = new SearchRequest("hyposensitization"); // replace with your index name
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        // Build one match clause per token (note: matchQuery, not termsQuery)
        for (String term : list) {
            boolQueryBuilder.should(QueryBuilders.matchQuery("content", term));
        }
        // Attach the query to the search request
        searchSourceBuilder.query(boolQueryBuilder);
        searchRequest.source(searchSourceBuilder);
        // Execute the search and read the response
        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
        // Total number of matching sensitive-word documents
        count = searchResponse.getHits().getTotalHits().value;
    } catch (IOException e) {
        e.printStackTrace();
    }
    return count;
}
```
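The query that the loop above assembles is equivalent to this request body (shown with two example tokens):

```json
GET /hyposensitization/_search
{
  "query": {
    "bool": {
      "should": [
        { "match": { "content": "token1" } },
        { "match": { "content": "token2" } }
      ]
    }
  }
}
```

A document matches if any `should` clause matches, so the hit count is the number of lexicon entries touched by any of the tokens.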
You can then call the utility class directly from your business code. Note that the parameter is a String; my rule here is that a count greater than 0 means the text contains flagged words, and you can handle it accordingly.
```java
public static void main(String[] args) {
    // Tokenize the input and check it against the sensitive-word index
    Long ikword = EsIkword.ikword("xx傻逼,口齿不清,右边脸明显动不了");
    // A count greater than 0 means the text contains flagged words
    if (ikword > 0) {
        log.info("Sensitive words found");
    } else {
        log.info("No sensitive words found");
    }
}
```