phoenix-dev mailing list archives

From "James Taylor (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (PHOENIX-1147) Add test cases to cover more index update failure scenarios
Date Thu, 07 Aug 2014 00:21:15 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-1147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088502#comment-14088502 ]

James Taylor edited comment on PHOENIX-1147 at 8/7/14 12:20 AM:
----------------------------------------------------------------

So the root of the problem (as you've found) is that our ALTER INDEX call does not cause the
data table to be sent over again next time. I think the easiest way to solve this is to do a
Put on the data table by adding it to tableMetadata before you call
region.mutateRowsWithLocks(tableMetadata, Collections.<byte[]> emptySet()), here in
updateIndexState():

{code}
@@ -1497,7 +1505,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocessor
                     region.mutateRowsWithLocks(tableMetadata, Collections.<byte[]> emptySet());
                     // Invalidate from cache
                     Cache<ImmutableBytesPtr,PTable> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
-                    metaDataCache.invalidate(cacheKey);
+                    PTable table = metaDataCache.getIfPresent(cacheKey);
+                    if(table != null){
+                        if(table.getParentName() != null){
+                            byte[] parentName = SchemaUtil.getTableKeyFromFullName(table.getParentName().getString());
+                            metaDataCache.invalidate(new ImmutableBytesPtr(parentName));
+                        }
+                        metaDataCache.invalidate(cacheKey);
+                    }
                 }
{code}
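To make the intent concrete, here's a toy, plain-Java model (no HBase; the class, field, and qualifier names are made up for illustration, not Phoenix APIs) of why putting the empty key value on the data table's header row forces clients to pick up the new state: the client re-fetches whenever the server-side table timestamp advances past the one it cached.

```java
import java.util.*;

// Toy model: the data table's "header row" as qualifier -> timestamp.
// After ALTER INDEX, only the index row is written, so the data table
// timestamp doesn't move and clients keep their stale cached PTable.
// Putting the empty key value ("_0") at the new ts fixes that.
public class EmptyKeyValuePutModel {
    static Map<String, Long> dataTableHeader = new HashMap<>();

    // The table timestamp is the max timestamp across the header row's KVs.
    static long serverTimestamp() {
        return dataTableHeader.values().stream().max(Long::compare).orElse(0L);
    }

    public static void main(String[] args) {
        dataTableHeader.put("TABLE_TYPE", 100L);
        dataTableHeader.put("_0", 100L);          // empty key value
        long clientCachedTs = serverTimestamp();  // client cached the table @100

        // ALTER INDEX without touching the data table header: no change seen.
        System.out.println(serverTimestamp() > clientCachedTs); // false

        // Suggested fix: also Put the empty key value (its value is a known
        // empty byte array, so no lookup is needed) at the new timestamp.
        dataTableHeader.put("_0", 200L);
        System.out.println(serverTimestamp() > clientCachedTs); // true -> re-fetch
    }
}
```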

Maybe you can just do a Put on the empty key value (since you know the value is an empty byte
array, you don't have to look it up)? You might need to tweak the logic that calculates the
timeStamp in getTable() so the max is taken for the else block as well, like this, so that if
the empty key value has the biggest ts, it'll be used too:

{code}
    private PTable getTable(RegionScanner scanner, long clientTimeStamp, long tableTimeStamp) {
        ...

        while (i < results.size() && j < TABLE_KV_COLUMNS.size()) {
            Cell kv = results.get(i);
            timeStamp = Math.max(timeStamp, kv.getTimestamp()); // Find max timestamp of table header row
            Cell searchKv = TABLE_KV_COLUMNS.get(j);
            int cmp =
                    Bytes.compareTo(kv.getQualifierArray(), kv.getQualifierOffset(),
                        kv.getQualifierLength(), searchKv.getQualifierArray(),
                        searchKv.getQualifierOffset(), searchKv.getQualifierLength());
            if (cmp == 0) {
                tableKeyValues[j++] = kv;
                i++;
            } else if (cmp > 0) {
                tableKeyValues[j++] = null;
            } else {
                i++; // shouldn't happen - means unexpected KV in system table header row
            }
        }
{code}
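A self-contained sketch of what hoisting the Math.max buys you (plain Java; the toy Kv type stands in for HBase's Cell, and this simplified merge drops the tableKeyValues bookkeeping — it is not Phoenix code): qualifiers that never match an entry in TABLE_KV_COLUMNS, like the empty key value, still advance the computed table timestamp.

```java
import java.util.*;

// Simplified version of getTable()'s sorted-merge over the header row:
// taking the max timestamp before the qualifier comparison means every KV
// counts toward the table timestamp, matched or not.
public class MaxTimestampSketch {
    record Kv(String qualifier, long ts) {}

    static long headerTimestamp(List<Kv> results, List<String> tableKvColumns) {
        long timeStamp = 0;
        int i = 0, j = 0;
        while (i < results.size() && j < tableKvColumns.size()) {
            Kv kv = results.get(i);
            // Hoisted: max over every KV in the header row, matched or not
            timeStamp = Math.max(timeStamp, kv.ts());
            int cmp = kv.qualifier().compareTo(tableKvColumns.get(j));
            if (cmp == 0) { i++; j++; }        // expected column present
            else if (cmp > 0) { j++; }         // expected column missing
            else { i++; }                      // KV not in TABLE_KV_COLUMNS
        }
        // KVs sorting after the last known column (where the empty key
        // value can land) still contribute their timestamps.
        for (; i < results.size(); i++) {
            timeStamp = Math.max(timeStamp, results.get(i).ts());
        }
        return timeStamp;
    }

    public static void main(String[] args) {
        List<Kv> results = List.of(new Kv("TABLE_TYPE", 100L), new Kv("_0", 200L));
        System.out.println(headerTimestamp(results, List.of("TABLE_TYPE"))); // 200
    }
}
```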







> Add test cases to cover more index update failure scenarios
> -----------------------------------------------------------
>
>                 Key: PHOENIX-1147
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1147
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.0.0, 5.0.0
>            Reporter: Jeffrey Zhong
>            Assignee: Jeffrey Zhong
>         Attachments: Phoenix-1147-v1.patch
>
>
> Add one test to cover RegionServer being killed while the index is being updated
> Add steps to make sure UPSERT & SELECT still work after the index is disabled.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
