carbondata-commits mailing list archives

From ajan...@apache.org
Subject [carbondata] branch master updated: [CARBONDATA-3680] Add Secondary Index Document
Date Thu, 02 Apr 2020 12:24:18 GMT
This is an automated email from the ASF dual-hosted git repository.

ajantha pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
     new 72b9b05  [CARBONDATA-3680] Add Secondary Index Document
72b9b05 is described below

commit 72b9b059e1e3c2f9f66e957bd3daa0b902ed89f2
Author: akashrn5 <akashnilugal@gmail.com>
AuthorDate: Tue Mar 31 17:32:09 2020 +0530

    [CARBONDATA-3680] Add Secondary Index Document
    
    Why is this PR needed?
    No documentation present for Secondary index.
    
    What changes were proposed in this PR?
    Added documentation for secondary index.
    
    Does this PR introduce any user interface change?
    No
    
    Is any new testcase added?
    No
    
    This closes #3689
---
 README.md                           |   1 +
 docs/configuration-parameters.md    |   3 +
 docs/index/secondary-index-guide.md | 189 ++++++++++++++++++++++++++++++++++++
 3 files changed, 193 insertions(+)

diff --git a/README.md b/README.md
index d54b913..c7f935d 100644
--- a/README.md
+++ b/README.md
@@ -59,6 +59,7 @@ CarbonData is built using Apache Maven, to [build CarbonData](https://github.com
  * [CarbonData BloomFilter DataMap](https://github.com/apache/carbondata/blob/master/docs/datamap/bloomfilter-datamap-guide.md)

  * [CarbonData Lucene DataMap](https://github.com/apache/carbondata/blob/master/docs/datamap/lucene-datamap-guide.md)

  * [CarbonData MV DataMap](https://github.com/apache/carbondata/blob/master/docs/datamap/mv-datamap-guide.md)
+* [CarbonData Secondary Index](https://github.com/apache/carbondata/blob/master/docs/index/secondary-index-guide.md)
 * [SDK Guide](https://github.com/apache/carbondata/blob/master/docs/sdk-guide.md) 
 * [C++ SDK Guide](https://github.com/apache/carbondata/blob/master/docs/csdk-guide.md)
 * [Performance Tuning](https://github.com/apache/carbondata/blob/master/docs/performance-tuning.md)

diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index a570d4c..dafb992 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -117,6 +117,7 @@ This section provides the details of all the configurations required for the Car
 | carbon.compaction.prefetch.enable | false | Compaction operation is similar to Query + data load, wherein data from qualifying segments is queried and data loading is performed to generate a new single segment. This configuration determines whether to query ahead data from segments and feed it for data loading. **NOTE:** This configuration is disabled by default as it needs extra resources for querying extra data. Based on the memory availability on the cluster, user can enable it to imp [...]
 | carbon.merge.index.in.segment | true | Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into the driver and are used subsequently for pruning of data during queries. These CarbonIndex files are very small in size (a few KB) and are many. Reading many small files from HDFS is not efficient and leads to slow IO performance. Hence the CarbonIndex files belonging to a segment can be combined into a sin [...]
 | carbon.enable.range.compaction | true | To configure whether range-based compaction is used for RANGE_COLUMN. If true, the data will still be present in ranges after compaction. |
+| carbon.si.segment.merge | false | Setting this to true degrades LOAD performance. When the number of small files increases for SI segments (this can happen as the number of columns is less and we store only the position id and reference columns), the user can either set this to true, which merges the data files for upcoming loads, or run the SI rebuild command (REBUILD INDEX <index_table>), which does this job for all segments. |
 
 ## Query Configuration
 
@@ -147,6 +148,8 @@ This section provides the details of all the configurations required for the Car
 | carbon.query.stage.input.enable | false | Stage input files are data files written by external applications (such as Flink) but not yet loaded into the carbon table. Enabling this configuration makes queries include these files, thus querying the latest data. However, since these files are not indexed, queries may be slower as a full scan is required for these files. |
 | carbon.driver.pruning.multi.thread.enable.files.count | 100000 | To prune in multiple threads when the total number of segment files for a query increases beyond the configured value. |
 | carbon.load.all.segment.indexes.to.cache | true | Setting this configuration to false will prune and load only matched segment indexes to cache, using segment metadata information such as column id and its min/max values, which decreases the usage of driver memory. |
+| carbon.secondary.index.creation.threads | 1 | Specifies the number of threads used to concurrently process segments during secondary index creation. This property helps in fine-tuning the system when there are many segments in a table. The value range is 1 to 50. |
+| carbon.si.lookup.partialstring | true | When true, secondary index lookup is used for starts-with, ends-with and contains filters. When false, it is used only for starts-with filters. |
 
 ## Data Mutation Configuration
 | Parameter | Default Value | Description |
diff --git a/docs/index/secondary-index-guide.md b/docs/index/secondary-index-guide.md
new file mode 100644
index 0000000..e588ed9
--- /dev/null
+++ b/docs/index/secondary-index-guide.md
@@ -0,0 +1,189 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+
+# CarbonData Secondary Index
+
+* [Quick Example](#quick-example)
+* [Secondary Index Table](#secondary-index-introduction)
+* [Loading Data](#loading-data)
+* [Querying Data](#querying-data)
+* [Compaction](#compacting-si-table)
+* [DDLs on Secondary Index](#ddls-on-secondary-index)
+
+## Quick example
+
+Start spark-sql in a terminal and run the following queries:
+```
+CREATE TABLE maintable(a int, b string, c string) STORED AS carbondata;
+INSERT INTO maintable SELECT 1, 'ab', 'cd';
+CREATE INDEX index1 ON TABLE maintable(c) AS 'carbondata';
+SELECT a FROM maintable WHERE c = 'cd';
+-- NOTE: run the EXPLAIN query and check whether the query hits the SI table in the plan
+EXPLAIN SELECT a FROM maintable WHERE c = 'cd';
+```
+
+## Secondary Index Introduction
+  Secondary index tables are created as indexes and managed internally by CarbonData as child
+  tables. Users can create a secondary index on a column of the main table (recommended for
+  columns that are frequently filtered on), and queries that filter on that column use the index
+  to improve filter query performance.
+  
+  SI tables are always loaded in a non-lazy way. Once an SI table is created, CarbonData's
+  CarbonOptimizer, with the help of `CarbonSITransformationRule`, transforms the query plan to hit
+  the SI table based on the filter condition or set of filter conditions present in the query.
+  The first level of pruning is done on the SI table, as it stores blocklets, and the main/parent
+  table pruning is then based on the SI output, which gives faster query results with better
+  pruning.
+
+  A Secondary Index table can be created with the following syntax:
+
+   ```
+   CREATE INDEX [IF NOT EXISTS] index_name
+   ON TABLE maintable(index_column)
+   AS
+   'carbondata'
+   [TBLPROPERTIES('table_blocksize'='1')]
+   ```
+  For instance, consider a main table called **sales**, defined as
+
+  ```
+  CREATE TABLE sales (
+    order_time timestamp,
+    user_id string,
+    sex string,
+    country string,
+    quantity int,
+    price bigint)
+  STORED AS carbondata
+  ```
+
+  The user can create an SI table using the CREATE INDEX DDL:
+
+  ```
+  CREATE INDEX index_sales
+  ON TABLE sales(user_id)
+  AS
+  'carbondata'
+  TBLPROPERTIES('table_blocksize'='1')
+  ```
+ 
+ 
+#### How SI tables are selected
+
+When a user executes a filter query, during the query planning phase CarbonData, with the help of
+`CarbonSITransformationRule`, checks whether any index tables are present on the filter column of
+the query. If there are, the filter query plan is transformed in such a way that execution first
+hits the corresponding SI table, whose output is given to the main table for further pruning.
+
+
+For the main table **sales** and SI table **index_sales** created above, the following queries
+```
+SELECT country, sex from sales where user_id = 'xxx'
+
+SELECT country, sex from sales where user_id = 'xxx' and country = 'INDIA'
+```
+
+will be transformed by CarbonData's `CarbonSITransformationRule` to query against the SI table
+**index_sales** first, whose output is then used as input to the main table **sales**.
+
+
+## Loading data
+
+### Loading data to Secondary Index table(s)
+
+*Case 1:* The SI table is created while the main table does not yet have any data. In this case,
+every subsequent load loads to the SI table once the main table data load is finished.
+
+*Case 2:* The SI table is created when the main table already contains some data. In this case, SI
+creation also loads the SI table with the same number of segments as the main table. Thereafter,
+every subsequent load to the main table loads to the SI table as well.
+
+ **NOTE**:
+ * If a data load to the SI table fails, the SI table is disabled by setting a Hive serde
+ property. The subsequent main table load loads the previously failed loads along with the current
+ load, re-enables the SI table, and makes it available for query.
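+
+As a sketch (the sample row values are illustrative), a load on the main table can be followed by
+`SHOW SEGMENTS` on both the **sales** and **index_sales** tables from the earlier example to
+confirm that the SI table stays in sync:
+
+  ```
+  INSERT INTO sales SELECT '2020-01-01 00:00:00', 'user1', 'M', 'INDIA', 1, 100;
+  SHOW SEGMENTS FOR TABLE sales;
+  SHOW SEGMENTS FOR TABLE index_sales;
+  ```
+Both tables should report the same number of segments.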
+
+## Querying data
+Direct queries can be made on SI tables to see the data present in the position reference columns.
+When a filter query is fired, if the filter column is a secondary index column, the plan is
+transformed accordingly to hit the SI table first, which enables better pruning with the main
+table and in turn gives faster query results.
+
+Users can verify whether a query leverages an SI table by executing the `EXPLAIN`
+command, which shows the transformed logical plan, from which they can check whether the SI
+table is selected.
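+
+For example, with the **sales** table and **index_sales** SI table created above:
+
+  ```
+  EXPLAIN SELECT country, sex FROM sales WHERE user_id = 'xxx'
+  ```
+If the printed plan contains a scan of **index_sales**, the SI table was selected for pruning.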
+
+
+## Compacting SI table
+
+### Compacting SI table through Main Table compaction
+Running the compaction command (`ALTER TABLE COMPACT`) [compaction type -> MINOR/MAJOR] on the
+main table automatically deletes all the old segments of the SI table and creates a new segment
+with the same name as the compacted main table segment, loading data into it.
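+
+For instance, a minor compaction on the main table **sales** defined above (the SI segments are
+handled automatically):
+
+  ```
+  ALTER TABLE sales COMPACT 'MINOR'
+  ```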
+
+### Compacting SI table's individual segment(s) through REBUILD command
+When there are many small files present in the SI table, the REBUILD command can be used to
+compact the files within an SI segment and avoid many small files.
+
+  ```
+  REBUILD INDEX sales_index
+  ```
+This command merges the data files in each segment of the SI table.
+
+  ```
+  REBUILD INDEX sales_index WHERE SEGMENT.ID IN(1)
+  ```
+This command merges the data files within the specified segment(s) of the SI table.
+
+## How to skip Secondary Index?
+When secondary indexes are created on a table, data fetching always happens through the secondary
+indexes created on the main table, for better performance. However, fetching data through a
+secondary index can sometimes degrade query performance, for instance when the data is sparse and
+most of the blocklets need to be scanned. To skip such secondary indexes, use NI as a function on
+the filters within the WHERE clause.
+
+  ```
+  SELECT country, sex from sales where NI(user_id = 'xxx')
+  ```
+The above query ignores the secondary index on column user_id and fetches data from the main table.
+
+## DDLs on Secondary Index
+
+### Show index Command
+This command is used to get information about all the secondary indexes on a table.
+
+Syntax
+  ```
+  SHOW INDEXES ON [db_name.]table_name
+  ```
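+
+For example, to list the indexes created on the **sales** table above:
+
+  ```
+  SHOW INDEXES ON sales
+  ```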
+
+### Drop index Command
+This command is used to drop an existing secondary index on a table.
+
+Syntax
+  ```
+  DROP INDEX [IF EXISTS] index_name ON [db_name.]table_name
+  ```
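+
+For example, to drop the **index_sales** index created above:
+
+  ```
+  DROP INDEX IF EXISTS index_sales ON sales
+  ```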
+
+### Register index Command
+This command registers the secondary index with the main table in compatibility scenarios
+where we have old stores.
+
+Syntax
+  ```
+  REGISTER INDEX TABLE index_name ON [db_name.]table_name
+  ```
\ No newline at end of file

