TODO: Formatting. This page is badly formatted; any ideas on how to format code effectively?
== Overview ==
CLucene is a full-text indexing engine that stores its index as B+ Trees in files. It uses a very efficient method for storage and retrieval and has excellent support for CJK languages. However, the Places system is a new addition to Firefox, so it is important that, during its initial stages, all the code that is written or used is flexible, small, and tightly integrated with it; tighter integration would allow future enhancements specific to Firefox. Since CLucene is an external engine that would not integrate tightly with Places, this approach was dropped.
SQLite's FTS1 and FTS2 modules are open-source implementations of full-text indexing integrated with SQLite. FTS1 and FTS2 store the text in B+ Trees, and the full-text index is accessed through SQL. However, there are a number of shortcomings for our usage. FTS2 is still at a stage where the API and data format might change without backward compatibility. FTS1 does not support custom tokenizers, which means no CJK support. Also, FTS1 stores the entire page, duplicating what is already stored in the cache.
A custom implementation using a B+ Tree is also a good option, but it would require an additional B+ Tree engine. In light of the availability of an efficient algorithm for implementing full-text indexing on top of a relational database, that approach is used instead.
A naive implementation of full-text indexing is very costly in terms of storage. Briefly: define a term as any word that appears in a page. The relational database then contains a table with two columns, term and term id, and a second table with two columns, term id and doc id (the id of the document the term appeared in). The GNU manuals were analyzed [1]: they comprise 5.15 MB of text containing 958,774 word occurrences, of which 27,554 are unique. The second table requires a row for every occurrence, so if the term id is stored as an int, the space required for that column alone is 958,774 * 4 bytes, roughly 3.7 MB. A B+ Tree implementation is at least 3 MB more efficient. However, the encoding scheme and storage model proposed in [2] is almost as efficient as a B+ Tree implementation, and it leverages the capabilities of the relational database system without losing too much in storage or performance. Hence I propose to implement this algorithm.
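For concreteness, the naive schema described above would look roughly like this (table and column names are illustrative only, not part of the actual design):

<pre>
-- Sketch only: the naive two-table layout whose storage cost is analyzed above.
CREATE TABLE terms (
  term   TEXT,                    -- the word itself (27,554 unique terms for the GNU manuals)
  termid INTEGER PRIMARY KEY      -- 4-byte id
);

CREATE TABLE occurrences (
  termid INTEGER,                 -- one 4-byte id per occurrence: 958,774 rows, ~3.7 MB
  docid  INTEGER                  -- id of the document the term appeared in
);
</pre>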
According to Scott Hess, the FTS developer, "It sounds like a lot of what you're discussing here matches what we did for fts1 in SQLite. fts2 was a great improvement in terms of performance, and has no breaking changes expected." Hence FTS2 is a great option. I have built Mozilla with SQLite and FTS2, and it was easy. Moreover, FTS2 integrates nicely with SQLite, requiring no changes to the sqlite3file.h and sqlite.def files, which are essentially the exports. One can give queries like
<pre>CREATE VIRTUAL TABLE history_index USING fts2(title, meta, content)</pre>
and the index gets created. Further, to insert,
<pre>INSERT INTO history_index(title, meta, content) VALUES('some value', 'some value', 'some value')</pre>
and to search,
<pre>SELECT * FROM history_index WHERE content MATCH 'value'</pre>
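As a rough sketch of how search results might be tied back to the rest of Places, the virtual table's rowid could serve as the join key against the table that holds the page URLs. The table and column names below (url_table, id, url) are placeholders for illustration only, not the actual Places schema:

<pre>
-- Sketch only: find matching documents and join them back to a hypothetical
-- url_table keyed by the same rowid.
SELECT h.rowid, u.url, h.title
FROM history_index h, url_table u
WHERE u.id = h.rowid
  AND h.content MATCH 'value';
</pre>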
== Use Case ==

The use cases above will be used to validate the design.
== Database Design ==

TODO: Check the url_table and put it here. The url_table acts as the document table; it will additionally contain the document length. A rough placeholder sketch is given below.
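Until the actual url table is filled in, the document side of the index might look roughly like the following. The names and columns are illustrative only; the real table is the existing Places url table extended with a document-length column:

<pre>
-- Sketch only: a stand-in for the url table acting as the document table.
-- The real Places table and its columns should be substituted here.
CREATE TABLE url_table (
  id         INTEGER PRIMARY KEY,  -- doc id referenced by the posting table
  url        LONGVARCHAR,          -- page URL
  doc_length INTEGER               -- number of terms in the page, used for ranking
);
</pre>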
{| border="1" cellpadding="2"
|+'''Word Table'''
|-
!column!!type!!bytes!!description
|-
|word||varchar||<=100||term for indexing (shouldn't it be Unicode? How do I store Unicode?)
|-
|wordnum||integer||4||unique id; an integer works because the number of unique words will be at most a million (what about non-English languages?)
|-
|doc_count||integer||4||number of documents the word occurred in
|-
|word_count||integer||4||number of occurrences of the word
|}

<br>

{| border="1" cellpadding="2"
|+'''Posting Table'''
|-
!column!!type!!bytes!!description
|-
|wordnum||integer||4||foreign key matching wordnum in the word table
|-
|firstdoc||integer||4||lowest doc id referenced in the block
|-
|flags||tinyint||1||indicates the block type, the length of the doc list, and the sequence number
|-
|block||varbinary||<=255||contains encoded document and/or position postings for the word
|}
To Do
# We might need a table or two more for ranking efficiently.
# Check whether SQLite has a varbinary data type; there is a BLOB data type, I am sure (see the sketch after this list).

Note that the table structure is subject to change to improve efficiency. New tables might be added and/or existing tables might gain or lose columns.
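A minimal sketch of how these two tables might be declared in SQLite, assuming BLOB is used in place of varbinary (exact types, constraints, and indexes are still to be decided):

<pre>
-- Sketch only: possible SQLite declarations for the word and posting tables above.
CREATE TABLE words (
  word       TEXT UNIQUE,          -- term for indexing
  wordnum    INTEGER PRIMARY KEY,  -- unique id for the term
  doc_count  INTEGER,              -- number of documents containing the word
  word_count INTEGER               -- total number of occurrences of the word
);

CREATE TABLE postings (
  wordnum  INTEGER,  -- foreign key into words
  firstdoc INTEGER,  -- lowest doc id referenced in the block
  flags    INTEGER,  -- block type / doc-list length / sequence number
  block    BLOB      -- delta-encoded doc and/or position postings
);

CREATE INDEX postings_word_idx ON postings(wordnum, firstdoc);
</pre>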
== Detailed Design ==
===nsNavFullTextIndex===

This class interacts with the SQLite database. It implements the algorithms for adding a document to the index, removing a document from the index, and searching for a given term (see [2] for the algorithm used); it is responsible for executing the queries that insert content into the index and for returning the URIs of matching documents on a search request. The search function is used by nsNavHistoryQuery::search. The class is private and is not exposed outside. A search request also generates text snippets for the UI to display.
Block is a struct used to encode and decode a block. A block is variable-length delta encoded: each value is stored as the difference from the previous one, and each difference is written as a sequence of 7-bit groups with a continuation bit (for example, the doc ids 5, 12, 14 are stored as the deltas 5, 7, 2). Variable-length delta encoding compresses very efficiently, balancing speed and storage requirements.

<pre>
struct Block {
  // The width of the block field is 255 bytes. The return value is the number
  // of elements of data that were actually encoded into the out byte array.
  // How can this be encoded more efficiently? Any ideas?
  int encode(in int[] data, out byte[] encodedBlock) {
    byte[] groups;   // 7-bit groups of the current delta, least significant first
    int k = 0;       // next free byte in encodedBlock
    int prev = 0;    // previous (un-deltaed) value
    for (int i = 0; i < data.length; i++) {
      int delta = data[i] - prev;
      prev = data[i];
      // Split the delta into 7-bit groups, least significant group first.
      int j = 0;
      do {
        groups[j++] = delta % 128;
        delta /= 128;
      } while (delta != 0);
      // Stop if this value would overflow the 255-byte block.
      if (k + j > 255)
        return i;
      // Emit the more significant groups with the continuation bit (1 << 7) set,
      // then the least significant group without it, so decode knows where a value ends.
      for (j--; j > 0; j--, k++)
        encodedBlock[k] = (1 << 7) | groups[j];
      encodedBlock[k++] = groups[0];
    }
    return data.length;
  }

  void decode(in byte[] encodedBlock, out int[] data) {
    int j = 0;       // index of the value currently being decoded
    int value = 0;   // accumulator for the current delta
    for (int i = 0; i < encodedBlock.length; i++) {
      if (encodedBlock[i] & (1 << 7)) {
        // Continuation bit set: accumulate another 7-bit group.
        value = value * 128 + (encodedBlock[i] & ((1 << 7) - 1));
      } else {
        // Last group of this value: finish the delta and undo the delta coding.
        value = value * 128 + encodedBlock[i];
        data[j] = value + (j > 0 ? data[j - 1] : 0);
        j++;
        value = 0;
      }
    }
  }
}
</pre>
AddDocument works in two passes: scan the document for terms, then invert. We already have the doc id. We will require a hash map library to make this efficient; term is a hash map used as term['termname'] = array of positions.

<pre>
AddDocument(connection, document, analyzer) {
  while (analyzer.hasTerms()) {
    cTerm = analyzer.nextTerm();
    term[cTerm.name].add(cTerm.pos);
  }
  iterator i = term.iterator();
  while (i.hasNext()) {
    termName = i.next();
    termPos[] = term[termName];

    // Hopefully SQLite caches query results; this will be inefficient otherwise.
    if (term already in the word table) {
      // [2] gives the firstdoc condition as >=, but == is correct in my opinion.
      record = executeQuery("SELECT firstdoc, flags, block FROM postings
                             WHERE word = termName
                             AND firstdoc ==
                               (SELECT max(firstdoc) FROM postings
                                WHERE word = termName AND flags < 128)");
      // Refer to [2] for an explanation of this query.
      // Only one record is retrieved, with flags == 0 or 0 < flags < 128.
      // When flags == 0, the block contains only the document list.
      if (flags == 0) {
        1) Decode the block.
        2) See if one more document can be fitted into it.
        3) Yes? Add it:
           i)   Find the position list:
                positionList = executeQuery("SELECT firstdoc, flags, block FROM postings
                                             WHERE firstdoc =
                                               (SELECT max(firstdoc) FROM postings
                                                WHERE word = termName
                                                AND firstdoc >= record.firstdoc AND flags >= 128)");
           ii)  Try to add as many positions as possible to this block.
           iii) When the block is full, create a new row with firstdoc == currentdoc
                and flags == 128 or 129, depending on whether the previous firstdoc
                was the same as this one.
           iv)  Go to ii) if there are positions left.
        4) No?
           i)   Create a new row with firstdoc = docid and flags = 2.
           ii)  To the block add the doc id, the doc frequency, and all the positions.
                Note that position listings are never split when flags == 2; we must try
                to fit the entire position listing in this block. In 99% of cases this is
                possible (a small calculation will confirm this).
           iii) In the rare case it is not possible, create two rows, one with flags == 0
                and the other with flags == 128.
      }
      else {
        // This case is slightly more complex because the block holds both the document
        // list and the position list. Decode the block and try to add the doc id and all
        // the positions to it. If that is not possible, create two new rows, one with
        // flags == 0 and the other with flags == 128.
      }
      update the word count in the word table appropriately
    }
  }
  commit to the database
}
</pre>
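The write statements themselves are not spelled out above. A minimal sketch of what they might look like, assuming the words/postings declarations sketched in the Database Design section and using ? for bound values:

<pre>
-- Sketch only: possible write statements issued by AddDocument.
-- Insert a brand-new term into the word table.
INSERT INTO words(word, doc_count, word_count) VALUES (?, 1, ?);

-- Add a new block row for a term.
INSERT INTO postings(wordnum, firstdoc, flags, block) VALUES (?, ?, ?, ?);

-- Rewrite an existing block after new postings were merged into it.
UPDATE postings SET block = ?, flags = ?
WHERE wordnum = ? AND firstdoc = ? AND flags = ?;

-- Keep the per-term statistics up to date.
UPDATE words SET doc_count = doc_count + 1, word_count = word_count + ?
WHERE wordnum = ?;
</pre>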
RemoveDocument is inherently inefficient because of the structure we have adopted and needs to be optimized. The general algorithm revolves around finding the records whose firstdoc is immediately less than or equal to the doc id we want to remove.

<pre>
RemoveDocument(docId) {
  // e.g. say we want to delete docId = 20 and we have two records:
  //   firstdoc = 18, block = "blahblah"
  //   firstdoc = 22, block = "blahblah"
  // We select the record with firstdoc = 18, immediately before docId = 20.
  // (<= rather than < so that a block whose firstdoc equals docId is also found.)
  records = executeQuery("SELECT word, firstdoc, block FROM postings
                          WHERE firstdoc =
                            (SELECT max(firstdoc) FROM postings
                             WHERE firstdoc <= docId)");
  // This returns a number of records with flags == 0, 0 < flags < 128,
  // flags == 128, or flags > 128.
  for each record {
    docAndPostingTable = Block::decode(record.block);
    if (docAndPostingTable.find(docId)) {   // does the decoded block contain our doc id?
      when flags == 0 {                     // block contains only the document list
        remove the document and its frequency from the block
        update the word count for the term in the word table
        update the delta coding for the other docs in the block
        if (docId == firstdoc) update firstdoc with the immediately following doc
        if no more docs are left in this row, delete the row
      }
      when 0 < flags < 128 {                // block contains both document list and postings
        remove the document from the block
        update the word count for the term in the word table
        update the delta coding for the other docs in the block
        remove all postings of the document for the term from the block
        if (docId == firstdoc) update firstdoc with the immediately following doc
        if no more docs are left in this row, delete the row
      }
      when flags >= 128 {                   // block contains only postings
        remove all postings corresponding to the doc
        update firstdoc for the record
        delete the record if the block is empty
      }
    }
  }
}
</pre>
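As with AddDocument, the write-back statements are left implicit above. A rough sketch, again assuming the words/postings declarations from the Database Design section:

<pre>
-- Sketch only: possible write statements issued by RemoveDocument.
-- Rewrite a block after the doc (and its postings) were removed from it.
UPDATE postings SET firstdoc = ?, block = ?
WHERE wordnum = ? AND firstdoc = ? AND flags = ?;

-- Drop a row whose block became empty.
DELETE FROM postings WHERE wordnum = ? AND firstdoc = ? AND flags = ?;

-- Keep the per-term statistics up to date.
UPDATE words SET doc_count = doc_count - 1, word_count = word_count - ?
WHERE wordnum = ?;
</pre>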
SearchDocument(terms) performs the search. terms is something like "mac apple panther", i.e. a collection of terms. The ranking algorithm described at http://lucene.apache.org/java/2_1_0/api/org/apache/lucene/search/Similarity.html is used to rank the matching documents.
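The pseudocode above addresses postings by term name directly; with the tables as designed, fetching the postings for one search term would in practice go through the word table. A minimal sketch, assuming the words/postings declarations from the Database Design section:

<pre>
-- Sketch only: fetch all posting blocks for a single search term.
SELECT p.firstdoc, p.flags, p.block
FROM words w JOIN postings p ON p.wordnum = w.wordnum
WHERE w.word = 'apple'
ORDER BY p.firstdoc, p.flags;
</pre>

The blocks are then decoded with Block::decode, and the per-term document lists are intersected and ranked in memory.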
===nsNavFullTextIndexHelper===