Matches in DBpedia 2016-04 for { <http://wikidata.dbpedia.org/resource/Q2438971> ?p ?o }
Showing triples 1 to 24 of 24, with 100 triples per page.
- Q2438971 subject Q8831641.
- Q2438971 abstract "In lexical analysis, tokenization is the process of breaking a stream of text up into words, phrases, symbols, or other meaningful elements called tokens. The list of tokens becomes input for further processing such as parsing or text mining. Tokenization is useful both in linguistics (where it is a form of text segmentation), and in computer science, where it forms part of lexical analysis.".
- Q2438971 wikiPageExternalLink index.html.
- Q2438971 wikiPageExternalLink tokenizer.tool.uniwits.com.
- Q2438971 wikiPageExternalLink tokenization?lang=en.
- Q2438971 wikiPageWikiLink Q17086007.
- Q2438971 wikiPageWikiLink Q180309.
- Q2438971 wikiPageWikiLink Q194152.
- Q2438971 wikiPageWikiLink Q1948408.
- Q2438971 wikiPageWikiLink Q2386835.
- Q2438971 wikiPageWikiLink Q31963.
- Q2438971 wikiPageWikiLink Q35497.
- Q2438971 wikiPageWikiLink Q3621696.
- Q2438971 wikiPageWikiLink Q571005.
- Q2438971 wikiPageWikiLink Q591615.
- Q2438971 wikiPageWikiLink Q61694.
- Q2438971 wikiPageWikiLink Q676880.
- Q2438971 wikiPageWikiLink Q7207461.
- Q2438971 wikiPageWikiLink Q7850.
- Q2438971 wikiPageWikiLink Q835922.
- Q2438971 wikiPageWikiLink Q8831641.
- Q2438971 wikiPageWikiLink Q9217.
- Q2438971 comment "In lexical analysis, tokenization is the process of breaking a stream of text up into words, phrases, symbols, or other meaningful elements called tokens. The list of tokens becomes input for further processing such as parsing or text mining. Tokenization is useful both in linguistics (where it is a form of text segmentation), and in computer science, where it forms part of lexical analysis.".
- Q2438971 label "Tokenization (lexical analysis)".
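The abstract above describes tokenization as breaking a stream of text into meaningful elements (tokens) for later processing such as parsing. A minimal sketch of that idea in Python, using regular expressions; the token categories and patterns here are hypothetical, chosen only to illustrate the concept, not taken from any particular lexer:

```python
import re

# Hypothetical token categories: each pairs a name with a regex pattern.
# Whitespace matches no pattern and is simply skipped by the scanner.
TOKEN_PATTERNS = [
    ("NUMBER", r"\d+"),        # runs of digits
    ("WORD",   r"[A-Za-z]+"),  # runs of letters
    ("SYMBOL", r"[^\w\s]"),    # any single punctuation character
]

def tokenize(text):
    """Break a stream of text into (kind, value) tokens."""
    master = re.compile(
        "|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_PATTERNS)
    )
    # m.lastgroup is the name of the alternative that matched.
    return [(m.lastgroup, m.group()) for m in master.finditer(text)]

print(tokenize("2 cats, 1 dog"))
# → [('NUMBER', '2'), ('WORD', 'cats'), ('SYMBOL', ','), ('NUMBER', '1'), ('WORD', 'dog')]
```

The resulting token list is the kind of output that, per the abstract, becomes input to further stages such as parsing or text mining.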