phoenix-dev mailing list archives

From ramkrish86 <...@git.apache.org>
Subject [GitHub] phoenix pull request: Phoenix-180
Date Wed, 17 Sep 2014 08:54:48 GMT
Github user ramkrish86 commented on a diff in the pull request:

    https://github.com/apache/phoenix/pull/12#discussion_r17653038
  
    --- Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
    @@ -462,6 +471,60 @@ public MutationState createTable(CreateTableStatement statement, byte[][] splits
             return connection.getQueryServices().updateData(plan);
         }
     
    +    public MutationState updateStatistics(UpdateStatisticsStatement updateStatisticsStmt) throws SQLException {
    +        String tableName = updateStatisticsStmt.getTable().getName().getTableName();
    +        // Check before updating the stats if we have reached the configured time to reupdate the stats once again
    +        long minTimeForStatsUpdate = connection.getQueryServices().getProps()
    +                .getLong(StatisticsConstants.MIN_STATS_FREQ_UPDATION, StatisticsConstants.DEFAULT_STATS_FREQ_UPDATION);
    +        ColumnResolver resolver = FromCompiler.getResolver(updateStatisticsStmt, connection);
    +        PTable table = resolver.getTables().get(0).getTable();
    +        PName physicalName = table.getPhysicalName();
    +        byte[] tenantIdBytes = ByteUtil.EMPTY_BYTE_ARRAY;
    +        KeyRange analyzeRange = KeyRange.EVERYTHING_RANGE;
    +        if (connection.getTenantId() != null) {
    +            tenantIdBytes = connection.getTenantId().getBytes();
    +            List<List<KeyRange>> tenantIdKeyRanges = Collections.singletonList(Collections.singletonList(KeyRange
    +                    .getKeyRange(tenantIdBytes)));
    +            byte[] lowerRange = ScanUtil.getMinKey(table.getRowKeySchema(), tenantIdKeyRanges,
    +                    ScanUtil.SINGLE_COLUMN_SLOT_SPAN);
    +            byte[] upperRange = ScanUtil.getMaxKey(table.getRowKeySchema(), tenantIdKeyRanges,
    +                    ScanUtil.SINGLE_COLUMN_SLOT_SPAN);
    +            analyzeRange = KeyRange.getKeyRange(lowerRange, upperRange);
    +        }
    +        byte[] schemaNameBytes = ByteUtil.EMPTY_BYTE_ARRAY;
    +        if (connection.getSchema() != null) {
    +            schemaNameBytes = Bytes.toBytes(connection.getSchema());
    +        }
    +
    +        Long scn = connection.getSCN();
    +        // Always invalidate the cache
    +        long clientTS = connection.getSCN() == null ? HConstants.LATEST_TIMESTAMP : scn;
    +        connection.getQueryServices().clearCacheForTable(schemaNameBytes, Bytes.toBytes(tableName),
    +                clientTS);
    +        String schema = Bytes.toString(schemaNameBytes);
    +        // Clear the cache also. So that for cases like major compaction also we would be able to use the stats
    +        updateCache(schema, tableName, true);
    --- End diff --
    
    Do you mean the updateCache and clearCache calls before issuing the query should be
    removed? They may still be needed: suppose a major compaction happens and updates the
    stats time. If an analyze is then issued before the minimum stats-update interval has
    elapsed, we would not allow it to run. If a query is issued after that, we may still
    need to use the stats produced by the major compaction, hence the update and clear.
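
    To make the timing concrete, here is a minimal sketch of the guard being described
    (illustrative only: names such as StatsUpdateGuard, lastStatsUpdateTimeMs,
    clearCacheForTable(), and recollectStats() are hypothetical stand-ins, not
    Phoenix's actual API):

        // Hypothetical sketch, not Phoenix code: models the point that cache
        // invalidation must happen even when stats recollection is skipped.
        public class StatsUpdateGuard {

            private final long minStatsUpdateIntervalMs; // cf. MIN_STATS_FREQ_UPDATION
            private long lastStatsUpdateTimeMs;          // advanced by analyze or major compaction

            public StatsUpdateGuard(long minStatsUpdateIntervalMs) {
                this.minStatsUpdateIntervalMs = minStatsUpdateIntervalMs;
            }

            /** Called on ANALYZE; returns true only if stats were actually recollected. */
            public boolean maybeAnalyze(long nowMs) {
                // Invalidate the cached table metadata unconditionally, so stats written
                // by a recent major compaction become visible to queries even when the
                // recollection below is skipped.
                clearCacheForTable();

                if (nowMs - lastStatsUpdateTimeMs < minStatsUpdateIntervalMs) {
                    return false; // minimum interval not elapsed; skip recollection
                }
                lastStatsUpdateTimeMs = nowMs;
                recollectStats();
                return true;
            }

            private void clearCacheForTable() { /* drop the client-side table cache entry */ }

            private void recollectStats() { /* scan the table and rewrite its stats rows */ }
        }

    The point is that the invalidation runs on both paths: a query issued after a major
    compaction but before the minimum interval elapses still picks up the
    compaction-generated stats, even though the analyze itself was skipped.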


