spark-user mailing list archives

From Terry Kim <>
Subject Re: [Spark SQL]: Is a namespace name always needed in queries for tables from a user-defined catalog plugin?
Date Mon, 02 Dec 2019 03:50:57 GMT
Hi Xufei,
I also noticed the same behavior while looking into relation resolution
(see Appendix A in this doc).
I created SPARK-30094 <> and will follow up.


On Sun, Dec 1, 2019 at 7:12 PM xufei <> wrote:

> Hi,
> I'm trying to write a catalog plugin based on spark-3.0-preview, and I
> found that even when I use 'use catalog.namespace' to set the current
> catalog and namespace, I still need to use a qualified name in the query.
> For example, I add a catalog named 'example_catalog', which contains a
> database named 'test', which in turn contains a table 't'. I can query
> the table using 'select * from example_catalog.test.t' under the default
> catalog (which is spark_catalog). After I run 'use example_catalog.test'
> to change the current catalog to 'example_catalog' and the current
> namespace to 'test', I can query the table using 'select * from test.t',
> but 'select * from t' fails with a table-not-found exception.
> I want to know: is this expected behavior? If so, it seems a little
> weird, since I would expect that after 'use example_catalog.test', all
> unqualified identifiers are interpreted as
> 'example_catalog.test.identifier'.
> Attachment is a test file that you can use to reproduce the problem I met.
> Thanks.
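For readers following the thread, the resolution behavior the poster expects can be sketched outside Spark. The snippet below is a hypothetical illustration (not Spark's actual resolution code, whose behavior is exactly what SPARK-30094 tracks): an unqualified table name resolves against the current catalog and current namespace, a two-part name resolves within the current catalog, and a fully qualified name is taken as-is.

```python
# Hypothetical sketch of the expected identifier resolution after
# `USE example_catalog.test`. This is NOT Spark's implementation; it
# only illustrates the rule the poster describes.
from typing import List, Tuple


def resolve(identifier: List[str],
            current_catalog: str,
            current_namespace: List[str]) -> Tuple[str, List[str], str]:
    """Return (catalog, namespace, table) for a possibly-unqualified name."""
    if len(identifier) >= 3:
        # Fully qualified: catalog.namespace(...).table
        return identifier[0], identifier[1:-1], identifier[-1]
    if len(identifier) == 2:
        # namespace.table, resolved within the current catalog
        return current_catalog, identifier[:-1], identifier[-1]
    # Bare table name: expected to pick up current catalog + namespace
    return current_catalog, current_namespace, identifier[0]


# After `USE example_catalog.test`, 'select * from t' would be expected
# to resolve like this:
print(resolve(["t"], "example_catalog", ["test"]))
# → ('example_catalog', ['test'], 't')
```

Under this rule, `select * from t` and `select * from test.t` would both reach `example_catalog.test.t`; the reported behavior shows that in spark-3.0-preview only the latter does.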
