分词

fēn cí
  • participle
分词 [fēn cí]
  • [participle] a word having the characteristics of both a verb and an adjective; especially an English verbal adjective ending in -ing, or in -ed, -d, -t, -en, or -n, that functions as an adjective while also exhibiting verbal characteristics such as tense, voice, the ability to take adverbial modifiers, and the ability to take an object

分词 [fēn cí]
  1. “reading”是现在分词还是动名词?

    Is " reading " a participle or a gerund ?

  2. 现在完成时的基本结构是“have/has+动词过去分词”。

    The present perfect is formed with "have/has + past participle".

  3. WEB文本挖掘的中文分词系统的设计与实现

    Design and Implementation of a Chinese Word Segmentation System for Web Text Mining

  4. much或very much可用以修饰过去分词

    Much or very much can modify past participles

  5. Web文本挖掘中的一种中文分词算法研究及其实现

    Research and Implementation of a Chinese Word Segmentation Algorithm for Web Text Mining

  6. GIS中文查询系统的词典设计与分词研究

    Dictionary Design and Word Segmentation Research for the Chinese Query System of GIS

  7. 基于K最短路径的中文分词算法研究与实现

    Research and Implementation of Chinese Word Segmentation Algorithm Based on K Shortest Paths

  8. 老师:“John,动词ring的过去分词是什么?”

    Teacher : " John , what is the past participle of the verb to ring ?"

  9. 分词和词性标注模块采用了最大词长匹配算法,句法分析模块采用了改进的Chart算法。

    The word segmentation and part-of-speech tagging modules adopt a maximum word-length matching algorithm, and the syntactic analysis module adopts an improved Chart algorithm.

  10. 基于PATRICIA tree的汉语自动分词词典机制

    PATRICIA-Tree-Based Dictionary Mechanism for Chinese Word Segmentation

  11. 并利用广义语词概念,设计了分词词典,改进了最大匹配分词算法(MM);

    A segmentation dictionary is designed using the generalized concept of words, and the maximum match (MM) segmentation algorithm is improved.

  12. prove和shave均有两种过去分词形式

    Prove and shave have alternative past participle forms

  13. 然后着重对Web内容分类挖掘的一些关键技术进行了阐述,这些关键技术包括:Web网页数据的采集、中文的分词和分类器的建立,它们是Web内容分类挖掘的核心。

    It then elaborates on the key techniques of Web content classification mining: collecting Web page data, Chinese word segmentation, and building the classifier, which together form the core of Web content classification mining.

  14. 用VFP实现汉语文献的自动分词

    Using VFP to Implement Automatic Word Segmentation of Chinese Documents

  15. 根据传统语法理论,介词与分词词组搭配使用是不符合语法的,但是在语言现实中,像FROM UNDER THE TABLE这类结构经常出现。

    According to traditional grammar, a preposition followed by a prepositional phrase is ungrammatical, but in actual language use such structures as FROM UNDER THE TABLE are not rare.

  16. 用户无需打开网站,无需点击链接,只需键入URL(Uniform Resource Locator,统一资源定位符),即可获取Web汉语料并切分词到汉词库中。

    Users can obtain Web Chinese corpus text and segment it into a Chinese word database simply by entering a URL (Uniform Resource Locator), without opening any website or clicking any link.

  17. 随着Internet飞速发展和网上中文信息的逐渐增多,中文信息处理应用日益广泛,而中文分词是中文信息处理的首要前提。

    With the rapid development of the Internet and the steady growth of Chinese-language information online, Chinese information processing is being applied ever more widely, and Chinese word segmentation is the essential prerequisite for Chinese information processing.

  18. 通过Spider自动抓取页面技术、中文分词等技术方法,设计了Web文本挖掘原型,对实用的Web挖掘系统的开发具有较好的参考价值。

    Using spider-based automatic page crawling and Chinese word segmentation techniques, a Web text mining prototype is designed; it provides a useful reference for the development of practical Web mining systems.

  19. 英语悬垂分词被规定语法(prescriptive grammar)视为语法错误,然而,不少例外又可以被接受,这使得这条规则不能自圆其说。

    The dangling participle is treated as a solecism in prescriptive grammar, yet the rule cannot justify itself, because many exceptions are considered acceptable.

  20. 在最大匹配法(Maximum Match)长词优先原则的基础上,提出了一种改进的最大匹配(Maximum Match)自动分词方法,并给出了相应的算法及词典设计。

    Based on the longest-word-first principle of the Maximum Match (MM) method, we put forward an improved MM automatic word segmentation method, together with the corresponding algorithm and dictionary design.

  21. 其他的比如机器翻译(MT)、语音合成、自动分类、自动摘要、自动校对等等,都需要用到分词。

    Others such as machine translation (MT), speech synthesis, automatic classification, automatic summarization, automatic proofreading, etc., require the use of word segmentation.

  22. 二是英语动词词根在过去分词后缀-t(us)前的变体,在这种变体中,除个别词根是元音字母变体外,其余词根都是辅音字母变体。

    The second is the variation of English verb roots before the past-participle suffix -t(us); in this variation, apart from a few roots that show vowel-letter variants, the remaining roots all show consonant-letter variants.

  23. 将浏览器的HTML解析过程实现为分词和词法分析两部分,实现了解析模块的分词算法和词法分析算法,分词算法具有简单的容错功能。

    This dissertation implements the browser's HTML parsing process as two parts, tokenization and lexical analysis, and realizes the tokenization and lexical-analysis algorithms of the parsing module; the tokenization algorithm has simple fault-tolerance capability.

  24. 试论书面汉语自动分词专家系统中的DKS技术

    On the DKS Technique in an Expert System for Automatic Word Segmentation of Written Chinese

  25. 通过对一般文本的表示模型的分析,并结合基于XML的军用信息的特点,对向量空间模型的权值计算方法进行了改进,并在预处理阶段提出了一种新的最大匹配分词算法。

    By analyzing the representation model of general text and combining the characteristics of XML-based military information, the paper improves the weight calculation method of the VSM (vector space model) and presents a new MM (maximum match) segmentation algorithm for the preprocessing stage.

  26. 此Ftp搜索引擎不仅能够自动生成标准格式的XML资源文档,而且采用基于字典的前向最大匹配中文分词法在Lucene中动态更新全文索引。

    The newly designed FTP search engine can not only generate XML resource documents in a standard format automatically, but also uses dictionary-based forward maximum match Chinese word segmentation to update the full-text index in Lucene dynamically.

  27. 在法语中,除être加及物动词的过去分词构成被动态外,还可通过其它形式表示被动关系。

    In French, besides être plus the past participle of a transitive verb, the passive can also be expressed in other forms.

  28. 具体介绍了如何将Html格式的文档转化为Txt格式文本,以及利用MM法来实现对文档的汉语自动分词。

    It first describes in detail how to convert documents from HTML format into TXT format, and then explains how the MM method is used to perform automatic Chinese word segmentation on the documents.

  29. 给出了一个词库维护及检索系统,它采用基于PATRICIA tree的分词词典机制及灵活的词库维护及检索方法,不仅适用于传统的机械切分,更适合于串行和并行全切分。

    A Chinese word library maintenance and retrieval system is presented; it adopts a PATRICIA-tree-based segmentation dictionary mechanism and flexible maintenance and retrieval methods, and applies not only to traditional mechanical segmentation but also to serial and parallel full segmentation.

  30. 它们是用一种叫做关联定义语言(RDL)的语言来表达的,该语言被转换引擎分词和评估。

    They are expressed in a language called the Relation Definition Language (RDL), which is parsed and evaluated by the transformation engine.
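
Examples 11, 20, 25, 26 and 28 above all refer to the maximum match (MM) method of Chinese word segmentation. As a minimal illustration of the forward maximum match idea under the "long word first" principle, the Python sketch below scans the text from left to right and always takes the longest dictionary word at the current position; the toy dictionary and the function name forward_maximum_match are illustrative assumptions, not code from any of the systems cited above.

    # Minimal sketch of forward maximum match (MM) Chinese word segmentation.
    # The dictionary contents and the function name are illustrative assumptions.

    def forward_maximum_match(text, dictionary, max_word_len=4):
        """Scan left to right, always taking the longest dictionary word."""
        words = []
        i = 0
        while i < len(text):
            matched = None
            # Try the longest candidate first ("long word first" principle).
            for length in range(min(max_word_len, len(text) - i), 0, -1):
                candidate = text[i:i + length]
                if candidate in dictionary:
                    matched = candidate
                    break
            if matched is None:
                matched = text[i]  # fall back to a single character
            words.append(matched)
            i += len(matched)
        return words

    if __name__ == "__main__":
        toy_dict = {"中文", "分词", "中文信息", "信息", "处理"}
        print(forward_maximum_match("中文信息处理", toy_dict))
        # -> ['中文信息', '处理']

Because the scan is greedy, segmentation quality depends entirely on the dictionary and the longest-match assumption; the improved MM variants and K-shortest-paths methods mentioned in the examples exist precisely to address the ambiguities this simple scheme cannot resolve.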