kylin-issues mailing list archives

From "Xiaoxiang Yu (Jira)" <>
Subject [jira] [Commented] (KYLIN-4427) Wrong FileSystem error when trying to enable system cubes and Dashboard in Kylin 2.6.4
Date Sun, 05 Apr 2020 03:07:00 GMT


Xiaoxiang Yu commented on KYLIN-4427:

Hi, I think you should use the property key "fs.defaultFS"; it is case-sensitive.
And use wasb://<containername>@<accountname> as its value.


If it still does not work, please let me know; I will try to ask for an account and run a test on 4/7.
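As a sketch, the suggested property might look like the following in the cluster's core-site.xml (the container and account names are the same placeholders used above; the exact file and value depend on your Hadoop/Azure setup):

```xml
<!-- Hypothetical core-site.xml fragment; replace the placeholders with
     the actual Azure storage container and account names. -->
<property>
  <name>fs.defaultFS</name>
  <value>wasb://<containername>@<accountname></value>
</property>
```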

> Wrong FileSystem error when trying to enable system cubes and Dashboard in Kylin 2.6.4
> --------------------------------------------------------------------------------------
>                 Key: KYLIN-4427
>                 URL:
>             Project: Kylin
>          Issue Type: Bug
>          Components: Metrics
>    Affects Versions: v2.6.4
>            Reporter: Preeti V
>            Assignee: Xiaoxiang Yu
>            Priority: Major
>         Attachments: KylinMetrics.JPG, image-2020-04-03-10-45-15-290.png, image-2020-04-03-10-45-20-859.png,
>  I am trying to enable system cubes for the Dashboard using Kylin version 2.6.4. The
> tables are created correctly and the cube builds successfully, but there is no query or job
> data on the dashboard; it shows 0.
> We use Azure storage for Hive (wasb:// file system). I can see that there is no data being
> updated on the Hive_Metrics tables in Azure. In the Kylin logs I see the below error:
> 2020-03-12 20:02:41,790 ERROR [metrics-blocking-reservoir-scheduler-0] hive.HiveReservoirReporter:119
: Wrong FS: wasb://*****,
expected: hdfs://*****-prod-bn01
> java.lang.IllegalArgumentException: Wrong FS: wasb://*****,
expected: hdfs://*****-prod-bn01
>         at org.apache.hadoop.fs.FileSystem.checkPath(
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(
>         at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(
>         at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(
>         at org.apache.hadoop.fs.FileSystem.exists(
>         at org.apache.kylin.metrics.lib.impl.hive.HiveProducer.write(
>         at org.apache.kylin.metrics.lib.impl.hive.HiveProducer.send(
>         at org.apache.kylin.metrics.lib.impl.hive.HiveReservoirReporter$HiveReservoirListener.onRecordUpdate(
>         at org.apache.kylin.metrics.lib.impl.BlockingReservoir.notifyListenerOfUpdatedRecord(
> I checked the Hive configs, and the warehouse metastore dir correctly points
> to Azure. I found another thread with a similar problem where they are trying to use S3 instead
> of HDFS. []
> I also followed the recommendations here []
> and enabled all the necessary config values.
>  Is this a bug in Kylin or a configuration issue on my cluster? Any help or guidance
> is appreciated.

This message was sent by Atlassian Jira
