trafodion-dev mailing list archives

From Suresh Subbiah <suresh.subbia...@gmail.com>
Subject Re: HammerDB issues
Date Sat, 12 Sep 2015 18:10:29 GMT
Hi Radu,

Q1. I got the Java source files from the HammerDB source code:
/home/trafodion/HammerDB-2.18/hdb-components/hdb_tpcc.tcl
They are also available in the same file on the HammerDB SourceForge site. I
was not sure about the version on the web, so I used the file from the
install.
The Java source is stored in some kind of variable inside the Tcl script. I
cut and pasted it out into separate files.
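For illustration, a minimal sketch of that extraction step. The variable name and brace layout (`set delivery_java { ... }`) are assumptions about how hdb_tpcc.tcl stores the source, not the real layout, so the script writes its own stand-in file first:

```shell
#!/bin/sh
# Hypothetical layout: the Tcl script keeps each SPJ's Java source in a
# variable such as "set delivery_java { ... }". Write a stand-in file so
# the extraction can be shown end to end.
cat > /tmp/hdb_tpcc_sample.tcl <<'EOF'
set delivery_java {
    public class DELIVERY {
        public static void DELIVERY() { }
    }
}
EOF
# Emit the lines between the opening "set delivery_java {" and the
# closing brace that sits alone at column 0.
awk '/^set delivery_java \{/ {keep=1; next} /^\}$/ {keep=0} keep' \
    /tmp/hdb_tpcc_sample.tcl > /tmp/DELIVERY.java
cat /tmp/DELIVERY.java
```

With the real file, one such block per SPJ (DELIVERY, NEWORDER, ORDERSTATUS, PAYMENT, STOCKLEVEL) would be cut out into its own .java file.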

Q2. Let me try testing the HammerDB install on a small cluster. I have
previously played with it only on a single node. We can ask the HammerDB
author if we both see the same issue.

Q3. I think running one instance on the first node will be fine. Please keep
an eye on the process with top. If we have several virtual users with threads
it could max out that node. Another possibility, now that the build stage is
over, is that HammerDB could be installed on a client machine away from the
Traf cluster; we point HammerDB at the Traf cluster and drive the run from
the remote machine (if we find that first-node resources are being hogged by
HammerDB). Traf has a restriction that the SPJs can only be built with a
local HammerDB install, but otherwise HammerDB can be off-platform.

Thank you.
Suresh


On Sat, Sep 12, 2015 at 4:30 AM, Radu Marias <radumarias@gmail.com> wrote:

> Some questions:
>
> 1. With Approach A, where did you get the Java files for the SPJs (like
> DELIVERY.java) to build the jars?
> 2. Yesterday, before you resolved the issue, I also tried Approach B: I
> selected "Copy Stored Procedures to Remote Nodes" with the hostnames from
> $MY_NODES, but it didn't copy the only jar it generated, NEWORDER.jar.
> 3. Now, when I continue to run the tests, do I need to set up master/slave
> HammerDB instances, or will running just one instance on the first node be
> fine? Assuming we are OK with the workload one instance generates.
>
> On Sat, Sep 12, 2015 at 12:05 PM, Radu Marias <radumarias@gmail.com>
> wrote:
>
> > Thanks for all the help; I will continue running HammerDB from Monday.
> >
> > I noticed there are only 2 indexes; is this expected? I assume it is
> > because TPC-C mostly inserts data, and when needed all other tables are
> > accessed by primary key.
> >
> > On Sat, Sep 12, 2015 at 3:03 AM, Suresh Subbiah
> > <suresh.subbiah60@gmail.com> wrote:
> >
> >> Hi Radu,
> >>
> >> With Amanda's help I think we have everything ready to run HammerDB.
> >> Both indexes are created and the 5 SPJs are created (and pushed to all
> >> nodes).
> >> We also verified that the indexes have the same rowcount as their base
> >> tables.
> >>
> >> The previous problems, including the cores, may have been because the
> >> Trafodion stack was only partially up.
> >> The stack trace in the cores shows this:
> >> #0  0x00007f619ae22b56 in __rawmemchr_sse2 () from /lib64/libc.so.6
> >> #1  0x00007f619ae0d710 in _IO_str_init_static_internal () from
> >> /lib64/libc.so.6
> >> #2  0x00007f619ae016b5 in vsscanf () from /lib64/libc.so.6
> >> #3  0x00007f619adfb728 in sscanf () from /lib64/libc.so.6
> >> #4  0x000000000040aa40 in MemoryMonitor::update (this=0x7fffe7279c10,
> >> scale=@0x7f618ceb047c) at ../cli/memorymonitor.cpp:207
> >> #5  0x000000000040adf3 in memMonitorUpdateThread (param=0x7fffe7279c10) at
> >> ../cli/memorymonitor.cpp:75
> >> #6  0x00007f6197cdea51 in start_thread () from /lib64/libpthread.so.0
> >> #7  0x00007f619ae809ad in clone () from /lib64/libc.so.6
> >>
> >> We do need to look at this core more and understand it. However, when you
> >> restarted the Trafodion stack, most of our problems went away.
> >>
> >> I used Approach A to get the SPJs. I do think checking the last 3 boxes
> >> in Figure 8 of the HammerDB-Trafodion quick start guide will allow us to
> >> create the SPJs as part of the build step.
> >>
> >> Thanks
> >> Suresh
> >>
> >> >>get tables ;
> >>
> >> Tables in Schema TRAFODION.TPCC
> >> ===============================
> >>
> >> CUSTOMER
> >> DISTRICT
> >> HISTORY
> >> ITEM
> >> NEW_ORDER
> >> ORDERS
> >> ORDER_LINE
> >> STOCK
> >> WAREHOUSE
> >>
> >> --- SQL operation complete.
> >> >>get indexes ;
> >>
> >> Indexes in Schema TRAFODION.TPCC
> >> ================================
> >>
> >> CUSTOMER_I2
> >> ORDERS_I2
> >>
> >> --- SQL operation complete.
> >> >>get procedures ;
> >>
> >> Procedures in Schema TRAFODION.TPCC
> >> ===================================
> >>
> >> DELIVERY
> >> NEWORDER
> >> ORDERSTATUS
> >> PAYMENT
> >> STOCKLEVEL
> >>
> >> --- SQL operation complete.
> >>
> >> select count(*) from table(index_table orders_i2) ;
> >>
> >> (EXPR)
> >> --------------------
> >>
> >>                30000
> >>
> >> --- 1 row(s) selected.
> >> >>select count(*) from table(index_table customer_i2) ;
> >>
> >> (EXPR)
> >> --------------------
> >>
> >>                30000
> >>
> >> --- 1 row(s) selected.
> >>
> >>
> >> On Fri, Sep 11, 2015 at 2:56 PM, Dave Birdsall <dave.birdsall@esgyn.com>
> >> wrote:
> >>
> >> > I'm guessing Suresh and Amanda are on top of this, so I'll bug out on
> >> > the cores...
> >> >
> >> > -----Original Message-----
> >> > From: Suresh Subbiah [mailto:suresh.subbiah60@gmail.com]
> >> > Sent: Friday, September 11, 2015 12:54 PM
> >> > To: dev@trafodion.incubator.apache.org
> >> > Subject: Re: HammerDB issues
> >> >
> >> > Hi Radu,
> >> >
> >> > I think multiple instances of HammerDB are needed only if we find that
> >> > a single instance is not sufficient to drive the system. We have not
> >> > tested this configuration (HammerDB with remote nodes). So far in our
> >> > tests, all parallelism is handled at the Trafodion and HBase level. A
> >> > single instance of HammerDB issues one of the 5 TPC-C transactions
> >> > against the database, and when that completes the next transaction is
> >> > issued. A single instance of HammerDB supports multiple virtual users
> >> > (presumably with threads), so at any given instant there are several
> >> > users/threads issuing one of the 5 transactions against the same
> >> > database. I think we need to consider a second HammerDB instance only
> >> > if we find that the first instance is not able to saturate the system.
> >> >
> >> > For the jar file copy, under the trafodion id we could do something
> >> > like:
> >> > pdcp $MY_NODES /home/trafodion/HammerDB-2.18/*.jar
> >> > /home/trafodion/HammerDB-2.18/
> >> > pdsh $MY_NODES "ls -l /home/trafodion/HammerDB-2.18/"
> >> >
> >> > But before this we do have to get all the jar files onto the first
> >> > node. Broadly speaking, I think we have two approaches.
> >> >
> >> > Approach A) "Do it yourself", outside HammerDB. We create the remaining
> >> > 4 jar files on the first node and pdcp them to the other nodes.
> >> > javac DELIVERY.java
> >> > jar cvf DELIVERY.jar DELIVERY.class
> >> > -- repeat for each SPJ and move the jar files to the HammerDB-2.18
> >> > folder
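Spelled out for all five procedures, Approach A might look like the dry-run sketch below. The paths are taken from this thread, and the script only echoes the commands so they can be reviewed before running them for real on a node that has the JDK and the extracted .java sources:

```shell
#!/bin/sh
# Dry run of Approach A: print the build commands for each of the five
# SPJ jars, plus the pdcp/pdsh replication step. Remove the echoes to
# actually execute on the first node.
HDB_DIR=/home/trafodion/HammerDB-2.18
for spj in DELIVERY NEWORDER ORDERSTATUS PAYMENT STOCKLEVEL; do
    echo "javac $HDB_DIR/$spj.java"
    echo "jar cvf $HDB_DIR/$spj.jar -C $HDB_DIR $spj.class"
done
# Replicate the jars to the other nodes and verify they arrived:
echo "pdcp \$MY_NODES $HDB_DIR/*.jar $HDB_DIR/"
echo "pdsh \$MY_NODES ls -l $HDB_DIR/"
```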
> >> >
> >> > Approach B) Let HammerDB do this.
> >> > This approach may require work already done to be discarded. In
> >> > http://hammerora.sourceforge.net/hammerdb_quickstart_trafodion.pdf
> >> > (Pg 6, Figure 8) I see three items that must be set for the SPJs to
> >> > build correctly: "Build Java Stored Procedures Locally" should be
> >> > checked, "Copy Stored Procedures to Remote Nodes" should be checked,
> >> > and "Node List (Space separated value)" should have the output of
> >> > echo $MY_NODES without all the -w option characters.
> >> > I was not aware of this, but it looks like HammerDB can deploy jar
> >> > files in a multi-node Trafodion cluster.
> >> > Note that "remote node" here means something slightly different from
> >> > the previous usage, in that HammerDB does not need to be installed on
> >> > these nodes for this line to work.
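The Node List conversion mentioned above can be sketched as follows; the sample $MY_NODES value is made up for illustration (on a real cluster, use the actual variable):

```shell
#!/bin/sh
# Strip the pdsh "-w" option characters from $MY_NODES to get the
# space-separated list HammerDB's "Node List" field expects.
# The sample value here is an assumption, not from a real cluster.
MY_NODES="-w node1 -w node2 -w node3"
NODE_LIST=$(echo "$MY_NODES" | sed 's/-w //g')
echo "$NODE_LIST"   # -> node1 node2 node3
```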
> >> >
> >> > Without the SPJs, HammerDB cannot really be run; it does all DML
> >> > through these 5 SPJs.
> >> >
> >> > On the CREATE INDEX error, core files would be great.
> >> > Otherwise, can we try the failed CREATE INDEX again with cqd
> >> > TRAF_LOAD_USE_FOR_INDEXES 'OFF' ; I see that the index has now been
> >> > created successfully, so maybe this is not an issue anymore. I will
> >> > check with Amanda if all the tables and indexes look good.
> >> >
> >> > Thanks
> >> >
> >> > On Fri, Sep 11, 2015 at 2:25 PM, Radu Marias <radumarias@gmail.com>
> >> wrote:
> >> >
> >> > > Dave, Narendra
> >> > >
> >> > > BTW, Amanda has access to our cluster. Please feel free to take a
> >> > > look if you think it helps in debugging the issue.
> >> > >
> >> > > On Fri, Sep 11, 2015 at 10:24 PM, Radu Marias <radumarias@gmail.com>
> >> > > wrote:
> >> > >
> >> > > > $ file core.36479
> >> > > > core.36479: ELF 64-bit LSB core file x86-64, version 1 (SYSV),
> >> > > > SVR4-style, from 'tdm_arkesp SQMON1.1 00000 00000 036479 $Z000US9
> >> > > > <IP>:35596 00004 0000'
> >> > > >
> >> > > >
> >> > > > On Fri, Sep 11, 2015 at 10:22 PM, Narendra Goyal
> >> > > > <narendra.goyal@esgyn.com> wrote:
> >> > > >
> >> > > >> Hi Radu,
> >> > > >>
> >> > > >> Could you please run 'file core*' in the same directory. At least
> >> > > >> it will identify the program that core'd.
> >> > > >>
> >> > > >> Thanks,
> >> > > >> -Narendra
> >> > > >>
> >> > > >> -----Original Message-----
> >> > > >> From: Radu Marias [mailto:radumarias@gmail.com]
> >> > > >> Sent: Friday, September 11, 2015 12:15 PM
> >> > > >> To: dev <dev@trafodion.incubator.apache.org>
> >> > > >> Subject: Re: HammerDB issues
> >> > > >>
> >> > > >> yes,
> >> > > >>
> >> > > >> $ pwd
> >> > > >> /home/trafodion/trafodion-20150828_0830/sql/scripts
> >> > > >> $ ls -ltrh | grep core.
> >> > > >> -rwxr-x--- 1 trafodion trafodion 3.3K Aug 28 08:30 sqcorefile
> >> > > >> -rw------- 1 trafodion trafodion 152M Sep 11 13:27 core.42133
> >> > > >> -rw------- 1 trafodion trafodion 162M Sep 11 17:36 core.9560
> >> > > >> -rw------- 1 trafodion trafodion 162M Sep 11 17:39 core.10408
> >> > > >> -rw------- 1 trafodion trafodion 152M Sep 11 18:27 core.36479
> >> > > >>
> >> > > >> All are about 5.3 MB archived. Do you want to connect to the
> >> > > >> cluster and debug them? Or should I share them over email? I don't
> >> > > >> know whether they include sensitive information, so I'm wary of
> >> > > >> posting them to the mailing list.
> >> > > >>
> >> > > >> A quick look, but I don't think it helps without a symbol file:
> >> > > >>
> >> > > >> Program terminated with signal 11, Segmentation fault.
> >> > > >> #0  0x00007f619ae22b56 in ?? ()
> >> > > >> (gdb) bt
> >> > > >> #0  0x00007f619ae22b56 in ?? ()
> >> > > >> #1  0x00007f619ae0d710 in ?? ()
> >> > > >> #2  0x00007f618ceafb30 in ?? ()
> >> > > >> #3  0x00007fffe7279c10 in ?? ()
> >> > > >> #4  0x000000000040bd4d in ?? ()
> >> > > >> #5  0x00007f618ceafb30 in ?? ()
> >> > > >> #6  0x0000000000000000 in ?? ()
> >> > > >>
> >> > > >> On Fri, Sep 11, 2015 at 9:53 PM, Dave Birdsall
> >> > > >> <dave.birdsall@esgyn.com> wrote:
> >> > > >>
> >> > > >> > Sorry, I meant core dumps from abending processes.
> >> > > >> >
> >> > > >> > See any files with file name "core.<node name>.<pid>.<executable
> >> > > >> > name>" out there?
> >> > > >> >
> >> > > >> > If so, it might be interesting to use gdb to see the stack trace.
> >> > > >> >
> >> > > >> > -----Original Message-----
> >> > > >> > From: Radu Marias [mailto:radumarias@gmail.com]
> >> > > >> > Sent: Friday, September 11, 2015 11:44 AM
> >> > > >> > To: dev <dev@trafodion.incubator.apache.org>
> >> > > >> > Subject: Re: HammerDB issues
> >> > > >> >
> >> > > >> > By "core files" do you mean mxssmp processes? If so, then this
> >> > > >> > is present on all nodes:
> >> > > >> >
> >> > > >> > $ ps x | grep mxssmp
> >> > > >> > 46157 ?        SNl    0:02 mxssmp SQMON1.1 00004 00004 046157
> >> > > >> > $ZSM004 188.138.61.180:35762 00011 00004 00005 SPARE
> >> > > >> >
> >> > > >> > I tried this:
> >> > > >> >
> >> > > >> > sqstop (then needed to use ckillall because it was stuck)
> >> > > >> > restarted hdp
> >> > > >> > sqstart
> >> > > >> >
> >> > > >> > Now this works from sqlci:
> >> > > >> >
> >> > > >> > CREATE UNIQUE INDEX CUSTOMER_I100 ON CUSTOMER (C_W_ID, C_D_ID,
> >> > > >> > C_LAST, C_FIRST, C_ID);
> >> > > >> >
> >> > > >> > But when I run HammerDB without SPJs, it inserts data into some
> >> > > >> > tables and then fails again with:
> >> > > >> >
> >> > > >> > Vuser 1:Creating Index CUSTOMER_I2...
> >> > > >> > Vuser 1:Failed to Create Index
> >> > > >> >
> >> > > >> > And now if I run the CREATE statement in sqlci I get the same
> >> > > >> > error, "Operating system error 201".
> >> > > >> >
> >> > > >> > *So it seems that while HammerDB was inserting data, some
> >> > > >> > Trafodion processes went away unexpectedly?*
> >> > > >> >
> >> > > >> > On Fri, Sep 11, 2015 at 8:47 PM, Dave Birdsall
> >> > > >> > <dave.birdsall@esgyn.com>
> >> > > >> > wrote:
> >> > > >> >
> >> > > >> > > The first one ("Operating system error 201") means a process
> >> > > >> > > went away unexpectedly. Do you see any core files?
> >> > > >> > >
> >> > > >> > > -----Original Message-----
> >> > > >> > > From: Radu Marias [mailto:radumarias@gmail.com]
> >> > > >> > > Sent: Friday, September 11, 2015 10:46 AM
> >> > > >> > > To: dev <dev@trafodion.incubator.apache.org>
> >> > > >> > > Subject: Re: HammerDB issues
> >> > > >> > >
> >> > > >> > > This is what I get for the DDL you mentioned:
> >> > > >> > >
> >> > > >> > > >>CREATE UNIQUE INDEX CUSTOMER_I2 ON CUSTOMER (C_W_ID, C_D_ID,
> >> > > >> > > >>C_LAST, C_FIRST, C_ID);
> >> > > >> > >
> >> > > >> > > *** ERROR[2034] $Z00083Z:144: Operating system error 201 while
> >> > > >> > > communicating with server process $Z0201A7:26.
> >> > > >> > >
> >> > > >> > > *** ERROR[1081] Loading of index TRAFODION.TPCC.CUSTOMER_I2
> >> > > >> > > failed unexpectedly.
> >> > > >> > >
> >> > > >> > > --- SQL operation failed with errors.
> >> > > >> > >
> >> > > >> > > The one for ORDERS is working:
> >> > > >> > >
> >> > > >> > > >>CREATE UNIQUE INDEX ORDERS_I2 ON ORDERS (O_W_ID, O_D_ID,
> >> > > >> > > >>O_C_ID, O_ID);
> >> > > >> > >
> >> > > >> > > --- SQL operation complete.
> >> > > >> > >
> >> > > >> > > $ sqcheck
> >> > > >> > > Checking if processes are up.
> >> > > >> > > Checking attempt: 1; user specified max: 2. Execution time in
> >> > > >> > > seconds: 0.
> >> > > >> > >
> >> > > >> > > The SQ environment is up!
> >> > > >> > >
> >> > > >> > >
> >> > > >> > > Process         Configured      Actual      Down
> >> > > >> > > -------         ----------      ------      ----
> >> > > >> > > DTM             5               5
> >> > > >> > > RMS             10              10
> >> > > >> > > MXOSRVR         20              20
> >> > > >> > >
> >> > > >> > > I don't see any errors in the Trafodion logs.
> >> > > >> > >
> >> > > >> > > On Fri, Sep 11, 2015 at 8:19 PM, Radu Marias
> >> > > >> > > <radumarias@gmail.com>
> >> > > >> > wrote:
> >> > > >> > >
> >> > > >> > > > pdsh is available, but how should the jar files be replicated?
> >> > > >> > > >
> >> > > >> > > > I also installed HammerDB on the other 4 nodes and set up
> >> > > >> > > > the first node as master and the 4 nodes as slaves, as
> >> > > >> > > > described here:
> >> > > >> > > > http://hammerora.sourceforge.net/hammerdb_remote_modes.pdf
> >> > > >> > > > I also checked "*Copy Stored Procedures to Remote Nodes*" and
> >> > > >> > > > added the list of all 5 nodes, but I don't see *NEWORDER.jar*
> >> > > >> > > > on the other 4 nodes in the *HammerDB-2.18* folder. I assume
> >> > > >> > > > all the jars are generated on the first node and copied to
> >> > > >> > > > the others. But it only generates *NEWORDER.jar* on the first
> >> > > >> > > > node, it fails, and the jar is not present on the other
> >> > > >> > > > nodes. Also, the rest of the stored procedure jars are not
> >> > > >> > > > created.
> >> > > >> > > >
> >> > > >> > > > For the CREATE INDEX DDL used, I changed the config.xml file
> >> > > >> > > > from HammerDB to enable logs in /tmp, and in that file I can
> >> > > >> > > > only see the trimmed statement.
> >> > > >> > > > Is there anywhere I can see additional logs? Or can I see in
> >> > > >> > > > the Trafodion logs the last statement executed from HammerDB
> >> > > >> > > > over ODBC?
> >> > > >> > > >
> >> > > >> > > > On Fri, Sep 11, 2015 at 6:38 PM, Suresh Subbiah
> >> > > >> > > > <suresh.subbiah60@gmail.com> wrote:
> >> > > >> > > >
> >> > > >> > > >> Hi Radu,
> >> > > >> > > >>
> >> > > >> > > >> Thanks for trying this out.
> >> > > >> > > >>
> >> > > >> > > >> I think what is happening here is that the jar file did not
> >> > > >> > > >> get pushed to all the nodes. HammerDB has so far been tested
> >> > > >> > > >> on single-node instances, or on multiple nodes where we did
> >> > > >> > > >> some intermediate steps manually. I don't think HammerDB
> >> > > >> > > >> uses pdsh to push the jar file to all nodes, or maybe pdsh
> >> > > >> > > >> is somehow not available. Could you please check whether the
> >> > > >> > > >> jar file is available on every node in the cluster? If this
> >> > > >> > > >> does turn out to be the problem, I promise to add more info
> >> > > >> > > >> about HammerDB to the Trafodion wiki and mention a
> >> > > >> > > >> workaround for these issues.
> >> > > >> > > >>
> >> > > >> > > >> For the Failed to Create Index issue, is there an error
> >> > > >> > > >> message? If not, we can take the CREATE INDEX statement
> >> > > >> > > >> HammerDB is using and issue it from sqlci. I think this is
> >> > > >> > > >> the DDL that is used:
> >> > > >> > > >>
> >> > > >> > > >> CREATE UNIQUE INDEX CUSTOMER_I2 ON CUSTOMER (C_W_ID,
> >> > > >> > > >> C_D_ID, C_LAST, C_FIRST, C_ID) ;
> >> > > >> > > >> CREATE UNIQUE INDEX ORDERS_I2 ON ORDERS (O_W_ID, O_D_ID,
> >> > > >> > > >> O_C_ID, O_ID) ;
> >> > > >> > > >>
> >> > > >> > > >> Thanks
> >> > > >> > > >> Suresh
> >> > > >> > > >>
> >> > > >> > > >>
> >> > > >> > > >>
> >> > > >> > > >> On Fri, Sep 11, 2015 at 9:27 AM, Radu Marias
> >> > > >> > > >> <radumarias@gmail.com> wrote:
> >> > > >> > > >>
> >> > > >> > > >> > Hi,
> >> > > >> > > >> >
> >> > > >> > > >> > I'm trying to run HammerDB TPC-C from this tutorial:
> >> > > >> > > >> > http://hammerora.sourceforge.net/hammerdb_quickstart_trafodion.pdf
> >> > > >> > > >> >
> >> > > >> > > >> > I have this environment:
> >> > > >> > > >> >
> >> > > >> > > >> >
> >> > > >> > > >> > *CentOS release 6.7 (Final)*
> >> > > >> > > >> > *Ambari 2.1.1*
> >> > > >> > > >> > *HDP 2.2*
> >> > > >> > > >> > *trafodion-20150828_0830*
> >> > > >> > > >> >
> >> > > >> > > >> > *HammerDB-2.18*
> >> > > >> > > >> >
> >> > > >> > > >> > *java version "1.7.0_79"*
> >> > > >> > > >> >
> >> > > >> > > >> > On *Schema Build* I get errors when stored procedures are
> >> > > >> > > >> > created.
> >> > > >> > > >> >
> >> > > >> > > >> > *This is from the hammerdb logs:*
> >> > > >> > > >> >
> >> > > >> > > >> > Vuser 1:CREATING TPCC STORED PROCEDURES
> >> > > >> > > >> > Vuser 1:Failed to create library
> >> > > >> > > >> > /home/trafodion/HammerDB-2.18/NEWORDER.jar
> >> > > >> > > >> >
> >> > > >> > > >> >
> >> > > >> > > >> > *This is what I see in the UI (attached is also a
> >> > > >> > > >> > screenshot):*
> >> > > >> > > >> >
> >> > > >> > > >> > loading history file ... 0 events added
> >> > > >> > > >> > Main console display active (Tcl8.6.0 / Tk8.6.0)
> >> > > >> > > >> > The xml in config.xml is well-formed, applying variables
> >> > > >> > > >> > Error in Virtual User 1: [Trafodion ODBC Driver][Trafodion
> >> > > >> > > >> > Database] SQL ERROR:*** ERROR[1382] JAR or DLL file
> >> > > >> > > >> > /home/trafodion/HammerDB-2.18/NEWORDER.jar not found.
> >> > > >> > > >> > [2015-09-11 07:50:53] (garbled binary output elided)
> >> > > >> > > >> > (executing the statement)
> >> > > >> > > >> > (HammerDB-2.18) 1 %
> >> > > >> > > >> >
> >> > > >> > > >> > By looking in the tpcc schema I see some tables are
> >> > > >> > > >> > created and are populated with data:
> >> > > >> > > >> >
> >> > > >> > > >> > >>set schema tpcc;
> >> > > >> > > >> >
> >> > > >> > > >> > --- SQL operation complete.
> >> > > >> > > >> > >>get tables;
> >> > > >> > > >> >
> >> > > >> > > >> > Tables in Schema TRAFODION.TPCC
> >> > > >> > > >> > ===============================
> >> > > >> > > >> >
> >> > > >> > > >> > CUSTOMER
> >> > > >> > > >> > DISTRICT
> >> > > >> > > >> > HISTORY
> >> > > >> > > >> > ITEM
> >> > > >> > > >> > NEW_ORDER
> >> > > >> > > >> > ORDERS
> >> > > >> > > >> > ORDER_LINE
> >> > > >> > > >> > STOCK
> >> > > >> > > >> > WAREHOUSE
> >> > > >> > > >> >
> >> > > >> > > >> > The file '*/home/trafodion/HammerDB-2.18/NEWORDER.jar*'
> >> > > >> > > >> > exists; attached is the NEWORDER.java.
> >> > > >> > > >> >
> >> > > >> > > >> > *When trying to create the stored procedure from sqlci I
> >> > > >> > > >> > get this:*
> >> > > >> > > >> >
> >> > > >> > > >> > set schema tpcc;
> >> > > >> > > >> > create library testrs file
> >> > > >> > > >> > '/home/trafodion/HammerDB-2.18/NEWORDER.jar';
> >> > > >> > > >> > create procedure NEWORD()
> >> > > >> > > >> >        language java
> >> > > >> > > >> >        parameter style java
> >> > > >> > > >> >        external name 'NEWORDER.NEWORD'
> >> > > >> > > >> >        dynamic result sets 1
> >> > > >> > > >> >        library testrs;
> >> > > >> > > >> >
> >> > > >> > > >> > *** ERROR[11239] No compatible Java methods named
> >> > > >> > > >> > 'NEWORD' were found in Java class 'NEWORDER'.
> >> > > >> > > >> >
> >> > > >> > > >> > *** ERROR[1231] User-defined routine
> >> > > >> > > >> > TRAFODION.TPCC.NEWORDER could not be created.
> >> > > >> > > >> >
> >> > > >> > > >> > Is there an API change in the method signatures for
> >> > > >> > > >> > stored procedures in the latest Trafodion, with HammerDB
> >> > > >> > > >> > using the older syntax? This is the method:
> >> > > >> > > >> >
> >> > > >> > > >> > public static void NEWORD (int no_w_id, int no_max_w_id,
> >> > > >> > > >> > int no_d_id, int no_c_id, int no_o_ol_cnt,
> >> > > >> > > >> > BigDecimal[] no_c_discount, String[] no_c_last,
> >> > > >> > > >> > String[] no_c_credit, BigDecimal[] no_d_tax,
> >> > > >> > > >> > BigDecimal[] no_w_tax, int[] no_d_next_o_id,
> >> > > >> > > >> > Timestamp tstamp, ResultSet[] opres) throws SQLException
> >> > > >> > > >> >
> >> > > >> > > >> > Also, if I disable "*Build Java Stored Procedures
> >> > > >> > > >> > Locally*" in HammerDB I get:
> >> > > >> > > >> >
> >> > > >> > > >> > Vuser 1:CREATING TPCC INDEXES
> >> > > >> > > >> > Vuser 1:Creating Index CUSTOMER_I2...
> >> > > >> > > >> > Vuser 1:Failed to Create Index
> >> > > >> > > >> >
> >> > > >> > > >> > --
> >> > > >> > > >> > And in the end, it's not the years in your life that
> >> > > >> > > >> > count. It's the life in your years.
> >> > > >> > > >> >
> >> > > >> > > >>
> >> > > >> > > >
> >> > > >> > > >
> >> > > >> > > >
> >> > > >> > > >
> >> > > >> > >
> >> > > >> > >
> >> > > >> > >
> >> > > >> > >
> >> > > >> >
> >> > > >> >
> >> > > >> >
> >> > > >> >
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> >
> >>
> >
> >
> >
> >
>
>
>
>
