Matches in DBpedia 2016-04 for { <http://dbpedia.org/resource/Focused_crawler> ?p ?o }
Showing triples 1 to 46 of 46, with 100 triples per page.
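The triple pattern in the header corresponds to a SPARQL query of roughly the following shape (a sketch; the `LIMIT 100` mirrors the stated page size and is an assumption about how the result page was produced):

```sparql
SELECT ?p ?o
WHERE { <http://dbpedia.org/resource/Focused_crawler> ?p ?o }
LIMIT 100
```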
- Focused_crawler abstract "A focused crawler is a web crawler that collects Web pages that satisfy some specific property, by carefully prioritizing the crawl frontier and managing the hyperlink exploration process. Some predicates may be based on simple, deterministic and surface properties. For example, a crawler's mission may be to crawl pages from only the .jp domain. Other predicates may be softer or comparative, e.g., \"crawl pages with large PageRank\", or \"crawl pages about baseball\". An important page property pertains to topics, leading to topical crawlers. For example, a topical crawler may be deployed to collect pages about solar power, or swine flu, while minimizing resources spent fetching pages on other topics. Crawl frontier management may not be the only device used by focused crawlers; they may use a Web directory, a Web text index, backlinks, or any other Web artifact. A focused crawler must predict the probability that an unvisited page will be relevant before actually downloading the page. A possible predictor is the anchor text of links; this was the approach taken by Pinkerton in a crawler developed in the early days of the Web. Topical crawling was first introduced by Filippo Menczer. Chakrabarti et al. coined the term focused crawler and used a text classifier to prioritize the crawl frontier. Andrew McCallum and co-authors also used reinforcement learning to focus crawlers. Diligenti et al. traced the context graph leading up to relevant pages, and their text content, to train classifiers. A form of online reinforcement learning has been used along with features extracted from the DOM tree and text of linking pages, to continually train classifiers that guide the crawl. In a review of topical crawling algorithms, Menczer et al.
show that such simple strategies are very effective for short crawls, while more sophisticated techniques such as reinforcement learning and evolutionary adaptation can give the best performance over longer crawls. Another type of focused crawler is the semantic focused crawler, which makes use of domain ontologies to represent topical maps and link Web pages with relevant ontological concepts for selection and categorization purposes. In addition, ontologies can be automatically updated in the crawling process. Dong et al. introduced such an ontology-learning-based crawler, using a support vector machine to update the content of ontological concepts when crawling Web pages. Crawlers are also focused on page properties other than topics. Cho et al. study a variety of crawl prioritization policies and their effects on the link popularity of fetched pages. Najork and Wiener show that breadth-first crawling, starting from popular seed pages, leads to collecting large-PageRank pages early in the crawl. Refinements involving detection of stale (poorly maintained) pages have been reported by Eiron et al. A kind of semantic focused crawler, making use of the idea of reinforcement learning, has been introduced by Meusel et al., using online-based classification algorithms in combination with a bandit-based selection strategy to efficiently crawl pages with markup languages like RDFa, Microformats, and Microdata. The performance of a focused crawler depends on the richness of links in the specific topic being searched, and focused crawling usually relies on a general web search engine for providing starting points. Davison presented studies on Web links and text that explain why focused crawling succeeds on broad topics; similar studies were presented by Chakrabarti et al. Seed selection can be important for focused crawlers and can significantly influence crawling efficiency.
A whitelist strategy is to start the focused crawl from a list of high-quality seed URLs and limit the crawling scope to the domains of these URLs. These high-quality seeds should be selected from a list of URL candidates accumulated over a sufficiently long period of general web crawling. The whitelist should be updated periodically after it is created.".
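The frontier prioritization the abstract describes (score unvisited links by a relevance predictor over anchor text, then fetch best-first) can be sketched as follows. This is a minimal illustration, not any cited author's method: the link graph, topic terms, and the word-overlap "classifier" are all toy stand-ins for a real fetcher and a trained text classifier.

```python
import heapq

# Toy link graph: page -> list of (anchor_text, target_page).
# All page names and anchors here are illustrative.
LINKS = {
    "seed": [("solar power basics", "a"), ("celebrity gossip", "b"),
             ("photovoltaic cells", "c")],
    "a": [("solar panel efficiency", "d"), ("sports scores", "e")],
    "b": [], "c": [], "d": [], "e": [],
}

TOPIC_TERMS = {"solar", "photovoltaic", "panel"}

def relevance(anchor_text):
    """Crude stand-in for a trained classifier: the fraction of
    topic terms that appear in the link's anchor text."""
    words = set(anchor_text.lower().split())
    return len(words & TOPIC_TERMS) / len(TOPIC_TERMS)

def focused_crawl(seed, budget):
    """Best-first crawl: always visit the frontier page whose
    inlink anchor text scored highest, within a fetch budget."""
    frontier = [(-1.0, seed)]  # max-heap via negated scores
    visited, order = set(), []
    while frontier and len(order) < budget:
        _, page = heapq.heappop(frontier)
        if page in visited:
            continue
        visited.add(page)
        order.append(page)
        for anchor, target in LINKS.get(page, []):
            if target not in visited:
                heapq.heappush(frontier, (-relevance(anchor), target))
    return order

print(focused_crawl("seed", budget=4))  # → ['seed', 'a', 'd', 'c']
```

The off-topic pages ("celebrity gossip", "sports scores") stay at the bottom of the heap and are never fetched within the budget, which is exactly the resource-saving behavior topical crawlers aim for.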
- Focused_crawler wikiPageExternalLink the-url-frontier-1.html.
- Focused_crawler wikiPageID "11442799".
- Focused_crawler wikiPageLength "8958".
- Focused_crawler wikiPageOutDegree "24".
- Focused_crawler wikiPageRevisionID "700442291".
- Focused_crawler wikiPageWikiLink Andrew_McCallum.
- Focused_crawler wikiPageWikiLink Backlink.
- Focused_crawler wikiPageWikiLink Breadth-first_search.
- Focused_crawler wikiPageWikiLink Category:Internet_search_algorithms.
- Focused_crawler wikiPageWikiLink Category:Web_crawlers.
- Focused_crawler wikiPageWikiLink Category:World_Wide_Web.
- Focused_crawler wikiPageWikiLink Document_Object_Model.
- Focused_crawler wikiPageWikiLink Domain_name.
- Focused_crawler wikiPageWikiLink Filippo_Menczer.
- Focused_crawler wikiPageWikiLink Inverted_index.
- Focused_crawler wikiPageWikiLink Microdata.
- Focused_crawler wikiPageWikiLink Microformat.
- Focused_crawler wikiPageWikiLink PageRank.
- Focused_crawler wikiPageWikiLink RDFa.
- Focused_crawler wikiPageWikiLink Reinforcement_learning.
- Focused_crawler wikiPageWikiLink Uniform_Resource_Locator.
- Focused_crawler wikiPageWikiLink Web_crawler.
- Focused_crawler wikiPageWikiLink Web_directory.
- Focused_crawler wikiPageWikiLink Web_search_engine.
- Focused_crawler wikiPageWikiLink Whitelist.
- Focused_crawler wikiPageWikiLinkText "Focused crawler".
- Focused_crawler wikiPageWikiLinkText "focused crawler".
- Focused_crawler wikiPageWikiLinkText "topical and adaptive Web crawlers".
- Focused_crawler wikiPageUsesTemplate Template:Internet_search.
- Focused_crawler wikiPageUsesTemplate Template:Reflist.
- Focused_crawler wikiPageUsesTemplate Template:Web_crawlers.
- Focused_crawler subject Category:Internet_search_algorithms.
- Focused_crawler subject Category:Web_crawlers.
- Focused_crawler subject Category:World_Wide_Web.
- Focused_crawler hypernym Crawler.
- Focused_crawler type Software.
- Focused_crawler type Algorithm.
- Focused_crawler comment "A focused crawler is a web crawler that collects Web pages that satisfy some specific property, by carefully prioritizing the crawl frontier and managing the hyperlink exploration process. Some predicates may be based on simple, deterministic and surface properties. For example, a crawler's mission may be to crawl pages from only the .jp domain. Other predicates may be softer or comparative, e.g., \"crawl pages with large PageRank\", or \"crawl pages about baseball\".".
- Focused_crawler label "Focused crawler".
- Focused_crawler sameAs Q5463958.
- Focused_crawler sameAs الزاحف_المركز.
- Focused_crawler sameAs m.02rct99.
- Focused_crawler wasDerivedFrom Focused_crawler?oldid=700442291.
- Focused_crawler isPrimaryTopicOf Focused_crawler.