Matches in DBpedia 2016-04 for { <http://dbpedia.org/resource/Tokenization_(lexical_analysis)> ?p ?o }
Showing triples 1 to 46 of 46, with 100 triples per page.
- Tokenization_(lexical_analysis) abstract "In lexical analysis, tokenization is the process of breaking a stream of text up into words, phrases, symbols, or other meaningful elements called tokens. The list of tokens becomes input for further processing such as parsing or text mining. Tokenization is useful both in linguistics (where it is a form of text segmentation), and in computer science, where it forms part of lexical analysis.".
- Tokenization_(lexical_analysis) wikiPageExternalLink index.html.
- Tokenization_(lexical_analysis) wikiPageExternalLink tokenizer.tool.uniwits.com.
- Tokenization_(lexical_analysis) wikiPageExternalLink tokenization?lang=en.
- Tokenization_(lexical_analysis) wikiPageID "24517557".
- Tokenization_(lexical_analysis) wikiPageLength "2908".
- Tokenization_(lexical_analysis) wikiPageOutDegree "18".
- Tokenization_(lexical_analysis) wikiPageRevisionID "637328355".
- Tokenization_(lexical_analysis) wikiPageWikiLink Ancient_Greek.
- Tokenization_(lexical_analysis) wikiPageWikiLink Category:Tasks_of_natural_language_processing.
- Tokenization_(lexical_analysis) wikiPageWikiLink Chinese_language.
- Tokenization_(lexical_analysis) wikiPageWikiLink Emoticon.
- Tokenization_(lexical_analysis) wikiPageWikiLink Hyphen.
- Tokenization_(lexical_analysis) wikiPageWikiLink IBM_DeveloperWorks.
- Tokenization_(lexical_analysis) wikiPageWikiLink Language_model.
- Tokenization_(lexical_analysis) wikiPageWikiLink Lexical_analysis.
- Tokenization_(lexical_analysis) wikiPageWikiLink Parsing.
- Tokenization_(lexical_analysis) wikiPageWikiLink Poetic_contraction.
- Tokenization_(lexical_analysis) wikiPageWikiLink Scriptio_continua.
- Tokenization_(lexical_analysis) wikiPageWikiLink Text_mining.
- Tokenization_(lexical_analysis) wikiPageWikiLink Text_segmentation.
- Tokenization_(lexical_analysis) wikiPageWikiLink Thai_language.
- Tokenization_(lexical_analysis) wikiPageWikiLink Tokenization_(data_security).
- Tokenization_(lexical_analysis) wikiPageWikiLink Uniform_Resource_Identifier.
- Tokenization_(lexical_analysis) wikiPageWikiLink Whitespace_character.
- Tokenization_(lexical_analysis) wikiPageWikiLinkText "Tokenization (lexical analysis)".
- Tokenization_(lexical_analysis) wikiPageWikiLinkText "Tokenization".
- Tokenization_(lexical_analysis) wikiPageWikiLinkText "tokenization".
- Tokenization_(lexical_analysis) wikiPageWikiLinkText "tokenized".
- Tokenization_(lexical_analysis) wikiPageWikiLinkText "tokenizers".
- Tokenization_(lexical_analysis) wikiPageUsesTemplate Template:Compu-stub.
- Tokenization_(lexical_analysis) wikiPageUsesTemplate Template:Merge_to.
- Tokenization_(lexical_analysis) wikiPageUsesTemplate Template:Refimprove.
- Tokenization_(lexical_analysis) wikiPageUsesTemplate Template:Reflist.
- Tokenization_(lexical_analysis) subject Category:Tasks_of_natural_language_processing.
- Tokenization_(lexical_analysis) hypernym Process.
- Tokenization_(lexical_analysis) type Election.
- Tokenization_(lexical_analysis) comment "In lexical analysis, tokenization is the process of breaking a stream of text up into words, phrases, symbols, or other meaningful elements called tokens. The list of tokens becomes input for further processing such as parsing or text mining. Tokenization is useful both in linguistics (where it is a form of text segmentation), and in computer science, where it forms part of lexical analysis.".
- Tokenization_(lexical_analysis) label "Tokenization (lexical analysis)".
- Tokenization_(lexical_analysis) sameAs Q2438971.
- Tokenization_(lexical_analysis) sameAs Tokenisierung.
- Tokenization_(lexical_analysis) sameAs Tokenisasi.
- Tokenization_(lexical_analysis) sameAs m.0gn6j.
- Tokenization_(lexical_analysis) wasDerivedFrom Tokenization_(lexical_analysis)?oldid=637328355.
- Tokenization_(lexical_analysis) isPrimaryTopicOf Tokenization_(lexical_analysis).
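The abstract above describes tokenization as breaking a stream of text into words, symbols, and other meaningful elements. A minimal sketch of that process in Python, using a hypothetical regex-based tokenizer (an illustration only, not tied to DBpedia or any particular lexer):

```python
import re

def tokenize(text):
    """Split a text stream into word and punctuation tokens.

    A minimal illustration of the process described in the abstract;
    real lexers apply language-specific rules (e.g. for hyphens,
    contractions, or scripts without whitespace such as Thai).
    """
    # \w+ matches runs of word characters; [^\w\s] matches any single
    # non-word, non-space character (punctuation symbols).
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("Tokenization is useful, isn't it?")
# → ['Tokenization', 'is', 'useful', ',', 'isn', "'", 't', 'it', '?']
```

The resulting token list would then feed further processing such as parsing or text mining, as the abstract notes.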