flink-issues mailing list archives

From "xiezhiqiang (JIRA)" <j...@apache.org>
Subject [jira] [Created] (FLINK-12633) flink sql-client throw No context matches
Date Mon, 27 May 2019 09:20:00 GMT
xiezhiqiang created FLINK-12633:
-----------------------------------

             Summary: flink sql-client throw No context matches
                 Key: FLINK-12633
                 URL: https://issues.apache.org/jira/browse/FLINK-12633
             Project: Flink
          Issue Type: Bug
    Affects Versions: 1.7.2
            Reporter: xiezhiqiang


test.yaml
{code:java}
tables:
  - name: Test
    type: source-table
    schema:
      - name: id
        type: INT
      - name: cnt
        type: INT
    connector:
      type: filesystem
      path: "/data/demo/test.csv"
    format:
      type: csv
      fields:
        - name: id
          type: INT
        - name: cnt
          type: INT
      line-delimiter: "\n"
      comment-prefix: "#"
      field-delimiter: ","

# Define user-defined functions here.

#functions:
#  - name: myUDF
#    from: class
#    class: foo.bar.AggregateUDF
#    constructor:
#      - 7.6
#      - false

# Execution properties allow for changing the behavior of a table program.

execution:
  type: streaming                    # required: execution mode either 'batch' or 'streaming'
  result-mode: table                 # required: either 'table' or 'changelog'
  time-characteristic: event-time    # optional: 'processing-time' or 'event-time' (default)
  parallelism: 1                     # optional: Flink's parallelism (1 by default)
  periodic-watermarks-interval: 200  # optional: interval for periodic watermarks (200 ms by default)
  max-parallelism: 16                # optional: Flink's maximum parallelism (128 by default)
  min-idle-state-retention: 0        # optional: table program's minimum idle state time
  max-idle-state-retention: 0        # optional: table program's maximum idle state time

# Deployment properties allow for describing the cluster to which table programs are submitted.

deployment:
  response-timeout: 5000
{code}
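For reference, the filesystem/CSV source-table examples in the Flink 1.7 SQL Client documentation also declare an explicit update mode on the table, which is absent above. A minimal sketch of the same table in that layout follows; the added {{update-mode: append}} line is an assumption about what the streaming factory lookup expects, not something confirmed in this report:
{code:yaml}
tables:
  - name: Test
    type: source-table
    update-mode: append     # assumed missing piece; present in the 1.7 SQL Client docs example
    connector:
      type: filesystem
      path: "/data/demo/test.csv"
    format:
      type: csv
      fields:
        - name: id
          type: INT
        - name: cnt
          type: INT
      line-delimiter: "\n"
      comment-prefix: "#"
      field-delimiter: ","
    schema:
      - name: id
        type: INT
      - name: cnt
        type: INT
{code}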
test.csv
{code:java}
id,cnt
1,2
2,3
{code}
The SQL client throws:
{code:java}
Searching for '/usr/local/Cellar/apache-flink/1.7.2/libexec/conf/sql-client-defaults.yaml'...found.
Reading default environment from: file:/usr/local/Cellar/apache-flink/1.7.2/libexec/conf/sql-client-defaults.yaml
Reading session environment from: file:/data/demo/test.yaml
Validating current environment...

Exception in thread "main" org.apache.flink.table.client.SqlClientException: The configured environment is invalid. Please check your environment files again.
    at org.apache.flink.table.client.SqlClient.validateEnvironment(SqlClient.java:140)
    at org.apache.flink.table.client.SqlClient.start(SqlClient.java:99)
    at org.apache.flink.table.client.SqlClient.main(SqlClient.java:187)
Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could not create execution context.
    at org.apache.flink.table.client.gateway.local.LocalExecutor.getOrCreateExecutionContext(LocalExecutor.java:488)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.validateSession(LocalExecutor.java:316)
    at org.apache.flink.table.client.SqlClient.validateEnvironment(SqlClient.java:137)
    ... 2 more
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.StreamTableSourceFactory' in the classpath.

Reason: No context matches.

The following properties are requested:
connector.path=/data/demo/test.csv
connector.type=filesystem
format.comment-prefix=#
format.field-delimiter=,
format.fields.0.name=id
format.fields.0.type=INT
format.fields.1.name=cnt
format.fields.1.type=INT
format.line-delimiter=\n
format.type=csv
schema.0.name=id
schema.0.type=INT
schema.1.name=cnt
schema.1.type=INT

The following factories have been considered:
org.apache.flink.formats.json.JsonRowFormatFactory
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
    at org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
    at org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
    at org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:100)
    at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.scala)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:236)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$new$0(ExecutionContext.java:121)
    at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:119)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.getOrCreateExecutionContext(LocalExecutor.java:484)
    ... 4 more
{code}
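"No context matches" means that, for every factory listed above, at least one of its required context properties is not present among the requested properties. In streaming mode the lookup targets a StreamTableSourceFactory, so of the listed CSV factories only org.apache.flink.table.sources.CsvAppendTableSourceFactory is a candidate, and the requested properties contain no {{update-mode}} key (see the sketch after the YAML file above). As a cross-check, and assuming the batch factory has no such requirement, running the same table in batch mode should route discovery to CsvBatchTableSourceFactory instead; a sketch of that change, not a confirmed workaround:
{code:yaml}
execution:
  type: batch          # instead of 'streaming'; batch mode looks up a BatchTableSourceFactory
  result-mode: table   # kept as 'table'; 'changelog' is a streaming-only result mode
{code}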
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
