Deduplication
CSCI 572: Information Retrieval and Search Engines
Summer 2010

Outline
- What is deduplication?
- Importance
- Challenges
- Approaches

What are web duplicates?
- The same page, referenced by different URLs
- What are the differences? The URL host (virtual hosts), sometimes the protocol, sometimes the page name, etc. (A URL-canonicalization sketch appears at the end of these notes.)

What are web duplicates?
- A near-identical page, referenced by the same URL
- Example: running a Google search for "search engines" twice returns near-identical pages
- What are the differences? The page is within some delta % similar to the other (where delta is a large number), but may differ in, e.g., ads, counters, timestamps, etc. (A similarity sketch appears at the end of these notes.)

Why is it important to consider duplicates?
- In search engines, URLs tell the crawlers where to go and how to navigate the information space
- Ideally, given the web's scale and complexity, we give priority to crawling content that we haven't already stored or seen before
- Saves resources (on the crawler end, as well as on the remote host)
- Increases crawler politeness
- Reduces the analysis that we'll have to do later

Why is it important to consider duplicates?
- Identification of website mirrors (copies of content used to spread load and bandwidth consumption), e.g., CPAN, Apache, etc.
- If you identify a mirror, you can omit crawling many web pages and save crawler resources

"More Like This"
- Finding content similar to what you were looking for
- As we discussed in the lecture on search engine architecture, much of the time in search engines is spent filtering through the results. Presenting similar documents can cut down on that filtering time

XML
- XML documents appear structurally very similar: what's the difference between RSS, RDF, OWL, XSL, XSLT, and any number of other XML documents out there?
- With the ability to identify similarity and reduce duplication in XML, we could identify XML documents with similar structure (a structural-similarity sketch appears at the end of these notes)
  - RSS feeds that contain the same links
  - Differentiate RSS (crawl more often) from other, less frequently updated XML documents
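Sketch: URL canonicalization (not from the slides)

The slides name the ways duplicate URLs differ (host, protocol, page name) but give no implementation. Below is a minimal Python sketch of how a crawler might collapse such variants; the specific normalization rules (lowercasing the host, dropping default ports, treating an empty path as "/", discarding fragments) are common conventions I'm assuming here, not rules the slides prescribe.

```python
from urllib.parse import urlsplit, urlunsplit

# Two URLs that normalize to the same string are treated as
# references to the same page.
DEFAULT_PORTS = {"http": 80, "https": 443}

def canonicalize(url: str) -> str:
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = parts.hostname.lower() if parts.hostname else ""
    # Keep an explicit port only if it is not the scheme's default.
    if parts.port and parts.port != DEFAULT_PORTS.get(scheme):
        host = f"{host}:{parts.port}"
    # Treat an empty path and "/" as the same page name.
    path = parts.path or "/"
    # Fragments never reach the server, so discard them.
    return urlunsplit((scheme, host, path, parts.query, ""))

# "Same page, different URLs": all three collapse to one canonical form.
urls = [
    "HTTP://Example.com:80/index.html#top",
    "http://example.com/index.html",
    "http://EXAMPLE.COM:80/index.html",
]
assert len({canonicalize(u) for u in urls}) == 1
```

Note this only catches syntactic variants; two distinct virtual hosts serving the same content can't be detected from URLs alone and require comparing page content, as in the next sketch.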
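Sketch: "delta % similar" via shingling (not from the slides)

This preview cuts off before the deck's "Approaches" section, but one standard way to make "within some delta % similar" concrete is word shingling with Jaccard similarity. The shingle size k=3 and threshold delta=0.9 below are arbitrary illustrative choices, not values from the course.

```python
def shingles(text: str, k: int = 3) -> set:
    """Set of k-word shingles; k=3 is an arbitrary choice here."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A & B| / |A | B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two fetches of the same page that differ only in a hit counter,
# like the repeated Google-search example in the slides.
page_a = ("acme search engine home page top stories markets open higher "
          "today tech shares lead gains full coverage inside sports scores "
          "schedules and standings weather five day forecast contact us "
          "visitor counter 10401")
page_b = page_a.replace("10401", "10402")

sim = jaccard(shingles(page_a), shingles(page_b))  # here ~0.94
delta = 0.9  # similarity threshold; a real deployment would tune this
print(f"similarity = {sim:.2f}")
if sim >= delta:
    print("near-duplicate: skip re-crawling / re-indexing")
```

The single changed word only perturbs the few shingles that contain it, so the similarity stays high despite the counter changing on every fetch.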
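Sketch: XML structural similarity (not from the slides)

For the XML slide's idea of "documents with similar structure," one simple illustration is to compare the sets of root-to-node tag paths, ignoring text content. This is my own minimal interpretation of the idea, not a method the deck specifies.

```python
import xml.etree.ElementTree as ET

def structure_paths(xml_text: str) -> set:
    """Set of root-to-node tag paths; text content is ignored,
    so only document structure matters."""
    paths = set()

    def walk(node, prefix):
        path = f"{prefix}/{node.tag}"
        paths.add(path)
        for child in node:
            walk(child, path)

    walk(ET.fromstring(xml_text), "")
    return paths

# Two hypothetical feeds with different text but identical structure.
feed_a = "<rss><channel><title>A</title><item><link>x</link></item></channel></rss>"
feed_b = "<rss><channel><title>B</title><item><link>y</link></item></channel></rss>"

a, b = structure_paths(feed_a), structure_paths(feed_b)
print(len(a & b) / len(a | b))  # 1.0: same structure despite different content
```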