Jieba ("结巴") Chinese word segmentation in Python
http://blog.csdn.net/pipisorry/article/details/45311229
Using jieba for Chinese word segmentation
import jieba

sentences = ["我喜欢吃土豆", "土豆是个百搭的东西", "我不喜欢今天雾霾的北京", "costumer service"]
# Uncomment to tune the dictionary if "雾霾" or "百搭" get split apart:
# jieba.suggest_freq('雾霾', True)
# jieba.suggest_freq('百搭', True)
words = [list(jieba.cut(doc)) for doc in sentences]
print(words)

Output:
[['我', '喜欢', '吃', '土豆'], ['土豆', '是', '个', '百搭', '的', '东西'], ['我', '不', '喜欢', '今天', '雾霾', '的', '北京'], ['costumer', ' ', 'service']]

[jieba on GitHub: https://github.com/fxsjy/jieba]
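The two commented-out jieba.suggest_freq lines are the usual fix when jieba's default dictionary breaks a word such as 雾霾 apart. A minimal standalone sketch of that tuning step (the exact segmentation depends on the dictionary version shipped with jieba, so treat the printed result as indicative):

import jieba

# If the stock dictionary splits "雾霾" into "雾" / "霾",
# tune=True raises its frequency so it stays one token.
jieba.suggest_freq('雾霾', tune=True)

# jieba.cut returns a generator; materialize it for printing.
print(list(jieba.cut('我不喜欢今天雾霾的北京')))
# expected: ['我', '不', '喜欢', '今天', '雾霾', '的', '北京']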