Query INFORMATION_SCHEMA.SCHEMATA for the storage plugin and workspace information.
It will give you each schema's name, type, and whether it is mutable.
For example:
0: jdbc:drill:> select * from SCHEMATA;
+---------------+---------------------+---------------+----------------+-------------+
| CATALOG_NAME  | SCHEMA_NAME         | SCHEMA_OWNER  | TYPE           | IS_MUTABLE  |
+---------------+---------------------+---------------+----------------+-------------+
| DRILL         | INFORMATION_SCHEMA  | <owner>       | ischema        | NO          |
| DRILL         | dfs.clicks          | <owner>       | file           | NO          |
| DRILL         | dfs.data            | <owner>       | file           | NO          |
| DRILL         | dfs.default         | <owner>       | file           | NO          |
| DRILL         | dfs.orders          | <owner>       | file           | NO          |
| DRILL         | dfs.orders_camp     | <owner>       | file           | NO          |
| DRILL         | dfs.root            | <owner>       | file           | NO          |
| DRILL         | dfs.tmp             | <owner>       | file           | YES         |
| DRILL         | dfs.twitter         | <owner>       | file           | NO          |
| DRILL         | dfs.views           | <owner>       | file           | YES         |
| DRILL         | maprdb              | <owner>       | hbase          | NO          |
| DRILL         | sys                 | <owner>       | system-tables  | NO          |
+---------------+---------------------+---------------+----------------+-------------+
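
To answer the storage-plugin question directly, you can filter on the TYPE column. A sketch (the schema name here is just an example workspace taken from the listing above):

```sql
-- Which plugin type backs a given schema, and is it writable?
SELECT SCHEMA_NAME, TYPE, IS_MUTABLE
FROM INFORMATION_SCHEMA.SCHEMATA
WHERE SCHEMA_NAME = 'dfs.tmp';
```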
--Andries
> On Dec 4, 2016, at 5:55 PM, Charles Givre <cgivre@gmail.com> wrote:
>
> That’s exactly what I was looking for. I’m working on an adapter for a BI tool. Is there a way to also see what type of storage plugin is being used?
>
>> On Dec 4, 2016, at 20:54, Aman Sinha <amansinha@apache.org> wrote:
>>
>> Charles,
>> Drill does not have a metastore for tables, so unless you have defined a
>> view with CAST or are querying Hive tables (Hive has a metastore), the
>> column types are determined at run-time. Have you tried the typeof()
>> function ?
>> SELECT typeof(column) FROM dfs.`data.json` LIMIT 1;
>>
>>
>> On Sun, Dec 4, 2016 at 8:47 AM, Charles Givre <cgivre@gmail.com> wrote:
>>
>>> Hello all,
>>> I’m trying to figure out if there is some way to determine what types of
>>> columns a given data source contains. It looks like this should exist in
>>> the INFORMATION_SCHEMA somewhere, but I’m not clear how to do it.
>>>
>>> For instance, let’s say that I have a file called data.json. I could
>>> execute:
>>> SELECT *
>>> FROM dfs.`data.json`
>>> LIMIT 0
>>> and get the column names, but is there some way to get at the data types
>>> for these columns?
>>> TIA,
>>> — Charles
>