Configuring Lucene 4.0

1. Configure Java and Tomcat

This part is straightforward: just set the environment variables. There are plenty of guides online, and I believe I have written one myself before that you can refer to.

 

2. Configure Lucene

Download it from http://www.apache.org/dyn/closer.cgi/lucene/java/4.0.0

Then extract the archive; I extracted it to the D: drive.

 

3. Set the environment variables

You need four JARs: the Lucene JAR, the queryparser JAR, the common analysis JAR, and the Lucene demo JAR. You should see the Lucene JAR file in the core/ directory you created when you extracted the archive -- it should be named something like lucene-core-{version}.jar. You should also see files called lucene-queryparser-{version}.jar, lucene-analyzers-common-{version}.jar and lucene-demo-{version}.jar under queryparser/, analysis/common/ and demo/, respectively. (This is quoted from the docs, found at lucene-4.0.0/docs/demo/overview-summary.html.)

In short, you need to add the paths of the Lucene JAR, the queryparser JAR, the common analysis JAR, and the Lucene demo JAR to the CLASSPATH environment variable. (The setup guides I had read before only mentioned adding the Lucene JAR and the Lucene demo JAR.)

In lucene-4.0.0, the paths are as follows:

the Lucene JAR: D:\lucene-4.0.0\core\lucene-core-4.0.0.jar
the queryparser JAR: D:\lucene-4.0.0\queryparser\lucene-queryparser-4.0.0.jar
the common analysis JAR: D:\lucene-4.0.0\analysis\common\lucene-analyzers-common-4.0.0.jar
the Lucene demo JAR: D:\lucene-4.0.0\demo\lucene-demo-4.0.0.jar

Add these four paths to CLASSPATH.
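On Windows this can be done for the current cmd session as sketched below (assuming the D:\lucene-4.0.0 location from above; for a permanent setting, use the system environment-variable dialog instead):

```shell
set CLASSPATH=%CLASSPATH%;D:\lucene-4.0.0\core\lucene-core-4.0.0.jar;D:\lucene-4.0.0\queryparser\lucene-queryparser-4.0.0.jar;D:\lucene-4.0.0\analysis\common\lucene-analyzers-common-4.0.0.jar;D:\lucene-4.0.0\demo\lucene-demo-4.0.0.jar
```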

 

4. Run the demo

1. Build the index

Open a cmd console and enter:

    java org.apache.lucene.demo.IndexFiles -docs XXXX

XXXX is the directory you want to index. If everything is set up correctly, you will see a long stream of console output. The index is written to a folder named index, created in whatever directory cmd is currently in.
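As a concrete sketch (D:\mydocs and D:\myindex are placeholder paths of my own), the -index option shown in the demo's usage string can be used to control where the index is written instead of relying on the current directory:

```shell
java org.apache.lucene.demo.IndexFiles -docs D:\mydocs -index D:\myindex
```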

2. Search

Enter:

    java org.apache.lucene.demo.SearchFiles

The program prompts with "Enter query: "; type a query and it prints the matching documents. That's it.

 

As for the demo's source code: the indexing code is at lucene-4.0.0/docs/demo/src-html/org/apache/lucene/demo/IndexFiles.html, and the search code is at lucene-4.0.0/docs/demo/src-html/org/apache/lucene/demo/SearchFiles.html. Both are saved as HTML, so the line numbers need to be stripped out.

Here is the cleaned-up code:

    package org.apache.lucene.demo;
    
    /*
     * Licensed to the Apache Software Foundation (ASF) under one or more
     * contributor license agreements.  See the NOTICE file distributed with
     * this work for additional information regarding copyright ownership.
     * The ASF licenses this file to You under the Apache License, Version 2.0
     * (the "License"); you may not use this file except in compliance with
     * the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.LongField;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig.OpenMode;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;
    
    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.Date;
    
    /** Index all text files under a directory.
     * <p>
     * This is a command-line application demonstrating simple Lucene indexing.
     * Run it with no command-line arguments for usage information.
     */
    public class IndexFiles {
      
      private IndexFiles() {}
    
      /** Index all text files under a directory. */
      public static void main(String[] args) {
        String usage = "java org.apache.lucene.demo.IndexFiles"
                     + " [-index INDEX_PATH] [-docs DOCS_PATH] [-update]\n\n"
                     + "This indexes the documents in DOCS_PATH, creating a Lucene index "
                     + "in INDEX_PATH that can be searched with SearchFiles";
        String indexPath = "index";
        String docsPath = null;
        boolean create = true;
        for(int i=0;i<args.length;i++) {
          if ("-index".equals(args[i])) {
            indexPath = args[i+1];
            i++;
          } else if ("-docs".equals(args[i])) {
            docsPath = args[i+1];
            i++;
          } else if ("-update".equals(args[i])) {
            create = false;
          }
        }
    
        if (docsPath == null) {
          System.err.println("Usage: " + usage);
          System.exit(1);
        }
    
        final File docDir = new File(docsPath);
        if (!docDir.exists() || !docDir.canRead()) {
          System.out.println("Document directory '" +docDir.getAbsolutePath()+ "' does not exist or is not readable, please check the path");
          System.exit(1);
        }
        
        Date start = new Date();
        try {
          System.out.println("Indexing to directory '" + indexPath + "'...");
    
          Directory dir = FSDirectory.open(new File(indexPath));
          Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_40);
          IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_40, analyzer);
    
          if (create) {
            // Create a new index in the directory, removing any
            // previously indexed documents:
            iwc.setOpenMode(OpenMode.CREATE);
          } else {
            // Add new documents to an existing index:
            iwc.setOpenMode(OpenMode.CREATE_OR_APPEND);
          }
    
          // Optional: for better indexing performance, if you
          // are indexing many documents, increase the RAM
          // buffer.  But if you do this, increase the max heap
          // size to the JVM (eg add -Xmx512m or -Xmx1g):
          //
          // iwc.setRAMBufferSizeMB(256.0);
    
          IndexWriter writer = new IndexWriter(dir, iwc);
          indexDocs(writer, docDir);
    
          // NOTE: if you want to maximize search performance,
          // you can optionally call forceMerge here.  This can be
          // a terribly costly operation, so generally it's only
          // worth it when your index is relatively static (ie
          // you're done adding documents to it):
          //
          // writer.forceMerge(1);
    
          writer.close();
    
          Date end = new Date();
          System.out.println(end.getTime() - start.getTime() + " total milliseconds");
    
        } catch (IOException e) {
          System.out.println(" caught a " + e.getClass() +
           "\n with message: " + e.getMessage());
        }
      }
    
      /**
       * Indexes the given file using the given writer, or if a directory is given,
       * recurses over files and directories found under the given directory.
       * 
       * NOTE: This method indexes one document per input file.  This is slow.  For good
       * throughput, put multiple documents into your input file(s).  An example of this is
       * in the benchmark module, which can create "line doc" files, one document per line,
       * using the
       * <a href="../../../../../contrib-benchmark/org/apache/lucene/benchmark/byTask/tasks/WriteLineDocTask.html"
       * >WriteLineDocTask</a>.
       *  
       * @param writer Writer to the index where the given file/dir info will be stored
       * @param file The file to index, or the directory to recurse into to find files to index
       * @throws IOException If there is a low-level I/O error
       */
      static void indexDocs(IndexWriter writer, File file)
        throws IOException {
        // do not try to index files that cannot be read
        if (file.canRead()) {
          if (file.isDirectory()) {
            String[] files = file.list();
            // an IO error could occur
            if (files != null) {
              for (int i = 0; i < files.length; i++) {
                indexDocs(writer, new File(file, files[i]));
              }
            }
          } else {
    
            FileInputStream fis;
            try {
              fis = new FileInputStream(file);
            } catch (FileNotFoundException fnfe) {
              // at least on windows, some temporary files raise this exception with an "access denied" message
              // checking if the file can be read doesn't help
              return;
            }
    
            try {
    
              // make a new, empty document
              Document doc = new Document();
    
              // Add the path of the file as a field named "path".  Use a
              // field that is indexed (i.e. searchable), but don't tokenize 
              // the field into separate words and don't index term frequency
              // or positional information:
              Field pathField = new StringField("path", file.getPath(), Field.Store.YES);
              doc.add(pathField);
    
              // Add the last modified date of the file a field named "modified".
              // Use a LongField that is indexed (i.e. efficiently filterable with
              // NumericRangeFilter).  This indexes to milli-second resolution, which
              // is often too fine.  You could instead create a number based on
              // year/month/day/hour/minutes/seconds, down the resolution you require.
              // For example the long value 2011021714 would mean
              // February 17, 2011, 2-3 PM.
              doc.add(new LongField("modified", file.lastModified(), Field.Store.NO));
    
              // Add the contents of the file to a field named "contents".  Specify a Reader,
              // so that the text of the file is tokenized and indexed, but not stored.
              // Note that FileReader expects the file to be in UTF-8 encoding.
              // If that's not the case searching for special characters will fail.
              doc.add(new TextField("contents", new BufferedReader(new InputStreamReader(fis, "UTF-8"))));
    
              if (writer.getConfig().getOpenMode() == OpenMode.CREATE) {
                // New index, so we just add the document (no old document can be there):
                System.out.println("adding " + file);
                writer.addDocument(doc);
              } else {
                // Existing index (an old copy of this document may have been indexed) so 
                // we use updateDocument instead to replace the old one matching the exact 
                // path, if present:
                System.out.println("updating " + file);
                writer.updateDocument(new Term("path", file.getPath()), doc);
              }
              
            } finally {
              fis.close();
            }
          }
        }
      }
    }
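The comment in indexDocs above suggests indexing the modified date at a coarser resolution, e.g. the long 2011021714 meaning February 17, 2011, 2-3 PM, instead of raw milliseconds. A minimal plain-Java sketch of that encoding (the class and method names are mine, not part of the demo; UTC is fixed here just to make the output deterministic):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateResolution {
    // Encode a timestamp at hour resolution as a single long,
    // e.g. 2011021714L for February 17, 2011, 2-3 PM.
    static long toHourResolution(long epochMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHH");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // deterministic output
        return Long.parseLong(fmt.format(new Date(epochMillis)));
    }

    public static void main(String[] args) {
        // 2011-02-17T14:30:00Z in epoch milliseconds:
        long millis = 1297953000000L;
        System.out.println(toHourResolution(millis)); // prints 2011021714
    }
}
```

A value produced this way can then be stored in the "modified" LongField in place of file.lastModified().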

 

    package org.apache.lucene.demo;
    
    /*
     * Licensed to the Apache Software Foundation (ASF) under one or more
     * contributor license agreements.  See the NOTICE file distributed with
     * this work for additional information regarding copyright ownership.
     * The ASF licenses this file to You under the Apache License, Version 2.0
     * (the "License"); you may not use this file except in compliance with
     * the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    
    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.Date;
    
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;
    
    /** Simple command-line based search demo. */
    public class SearchFiles {
    
      private SearchFiles() {}
    
      /** Simple command-line based search demo. */

      public static void main(String[] args) throws Exception {
        String usage =
          "Usage:\tjava org.apache.lucene.demo.SearchFiles [-index dir] [-field f] [-repeat n] [-queries file] [-query string] [-raw] [-paging hitsPerPage]\n\nSee http://lucene.apache.org/java/4_0/demo.html for details.";
        if (args.length > 0 && ("-h".equals(args[0]) || "-help".equals(args[0]))) {
          System.out.println(usage);
          System.exit(0);
        }
    
        String index = "index";
        String field = "contents";
        String queries = null;
        int repeat = 0;
        boolean raw = false;
        String queryString = null;
        int hitsPerPage = 10;
        
        for(int i = 0;i < args.length;i++) {
          if ("-index".equals(args[i])) {
            index = args[i+1];
            i++;
          } else if ("-field".equals(args[i])) {
            field = args[i+1];
            i++;
          } else if ("-queries".equals(args[i])) {
            queries = args[i+1];
            i++;
          } else if ("-query".equals(args[i])) {
            queryString = args[i+1];
            i++;
          } else if ("-repeat".equals(args[i])) {
            repeat = Integer.parseInt(args[i+1]);
            i++;
          } else if ("-raw".equals(args[i])) {
            raw = true;
          } else if ("-paging".equals(args[i])) {
            hitsPerPage = Integer.parseInt(args[i+1]);
            if (hitsPerPage <= 0) {
              System.err.println("There must be at least 1 hit per page.");
              System.exit(1);
            }
            i++;
          }
        }
        
        IndexReader reader = DirectoryReader.open(FSDirectory.open(new File(index)));
        IndexSearcher searcher = new IndexSearcher(reader);
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_40);
    
        BufferedReader in = null;
        if (queries != null) {
          in = new BufferedReader(new InputStreamReader(new FileInputStream(queries), "UTF-8"));
        } else {
          in = new BufferedReader(new InputStreamReader(System.in, "UTF-8"));
        }
        QueryParser parser = new QueryParser(Version.LUCENE_40, field, analyzer);
        while (true) {
          if (queries == null && queryString == null) {                        // prompt the user
            System.out.println("Enter query: ");
          }
    
          String line = queryString != null ? queryString : in.readLine();
    
          if (line == null || line.length() == -1) {
            break;
          }
    
          line = line.trim();
          if (line.length() == 0) {
            break;
          }
          
          Query query = parser.parse(line);
          System.out.println("Searching for: " + query.toString(field));
                
          if (repeat > 0) {                           // repeat & time as benchmark
            Date start = new Date();
            for (int i = 0; i < repeat; i++) {
              searcher.search(query, null, 100);
            }
            Date end = new Date();
            System.out.println("Time: "+(end.getTime()-start.getTime())+"ms");
          }
    
          doPagingSearch(in, searcher, query, hitsPerPage, raw, queries == null && queryString == null);
    
          if (queryString != null) {
            break;
          }
        }
        reader.close();
      }
    
      /**
       * This demonstrates a typical paging search scenario, where the search engine presents 
       * pages of size n to the user. The user can then go to the next page if interested in
       * the next hits.
       * 
       * When the query is executed for the first time, then only enough results are collected
       * to fill 5 result pages. If the user wants to page beyond this limit, then the query
       * is executed another time and all hits are collected.
       * 
       */
      public static void doPagingSearch(BufferedReader in, IndexSearcher searcher, Query query, 
                                         int hitsPerPage, boolean raw, boolean interactive) throws IOException {
     
        // Collect enough docs to show 5 pages
        TopDocs results = searcher.search(query, 5 * hitsPerPage);
        ScoreDoc[] hits = results.scoreDocs;
        
        int numTotalHits = results.totalHits;
        System.out.println(numTotalHits + " total matching documents");
    
        int start = 0;
        int end = Math.min(numTotalHits, hitsPerPage);
            
        while (true) {
          if (end > hits.length) {
            System.out.println("Only results 1 - " + hits.length +" of " + numTotalHits + " total matching documents collected.");
            System.out.println("Collect more (y/n) ?");
            String line = in.readLine();
            if (line.length() == 0 || line.charAt(0) == 'n') {
              break;
            }
    
            hits = searcher.search(query, numTotalHits).scoreDocs;
          }
          
          end = Math.min(hits.length, start + hitsPerPage);
          
          for (int i = start; i < end; i++) {
            if (raw) {                              // output raw format
              System.out.println("doc="+hits[i].doc+" score="+hits[i].score);
              continue;
            }
    
            Document doc = searcher.doc(hits[i].doc);
            String path = doc.get("path");
            if (path != null) {
              System.out.println((i+1) + ". " + path);
              String title = doc.get("title");
              if (title != null) {
                System.out.println("   Title: " + doc.get("title"));
              }
            } else {
              System.out.println((i+1) + ". " + "No path for this document");
            }
                      
          }
    
          if (!interactive || end == 0) {
            break;
          }
    
          if (numTotalHits >= end) {
            boolean quit = false;
            while (true) {
              System.out.print("Press ");
              if (start - hitsPerPage >= 0) {
                System.out.print("(p)revious page, ");  
              }
              if (start + hitsPerPage < numTotalHits) {
                System.out.print("(n)ext page, ");
              }
              System.out.println("(q)uit or enter number to jump to a page.");
              
              String line = in.readLine();
              if (line.length() == 0 || line.charAt(0)=='q') {
                quit = true;
                break;
              }
              if (line.charAt(0) == 'p') {
                start = Math.max(0, start - hitsPerPage);
                break;
              } else if (line.charAt(0) == 'n') {
                if (start + hitsPerPage < numTotalHits) {
                  start+=hitsPerPage;
                }
                break;
              } else {
                int page = Integer.parseInt(line);
                if ((page - 1) * hitsPerPage < numTotalHits) {
                  start = (page - 1) * hitsPerPage;
                  break;
                } else {
                  System.out.println("No such page");
                }
              }
            }
            if (quit) break;
            end = Math.min(numTotalHits, start + hitsPerPage);
          }
        }
      }
    }
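The paging in doPagingSearch above boils down to simple window arithmetic over start and end. A standalone plain-Java sketch of that arithmetic (PagingWindow and window are my own names, not part of the demo):

```java
public class PagingWindow {
    // Mirrors the slice computation in doPagingSearch: given the total hit
    // count and a page size, compute the [start, end) window for a 1-based page.
    static int[] window(int numTotalHits, int hitsPerPage, int page) {
        int start = (page - 1) * hitsPerPage;
        int end = Math.min(numTotalHits, start + hitsPerPage);
        return new int[] { start, end };
    }

    public static void main(String[] args) {
        // 23 hits, 10 per page: page 1 covers hits 0-9, page 3 covers hits 20-22.
        int[] p1 = window(23, 10, 1);
        int[] p3 = window(23, 10, 3);
        System.out.println(p1[0] + ".." + p1[1]); // prints 0..10
        System.out.println(p3[0] + ".." + p3[1]); // prints 20..23
    }
}
```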

 

Posted on 2013-10-26 10:57 by JimSow