tika-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (TIKA-2322) Video labeling using existing ObjectRecognition
Date Fri, 28 Apr 2017 05:04:04 GMT

    [ https://issues.apache.org/jira/browse/TIKA-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988207#comment-15988207 ]
ASF GitHub Bot commented on TIKA-2322:

chrismattmann commented on issue #168: fix for TIKA-2322 contributed by msharan@usc.edu
URL: https://github.com/apache/tika/pull/168#issuecomment-297907725
   OK I was able to build your latest Docker @smadha from https://github.com/apache/tika/pull/168/commits/434736be63373e8caa85fd8c9bd117e6edbec555 and
https://github.com/apache/tika/pull/168/commits/58a116c2123d9c01ba054969121244364059c0d2, and found the following:
   == Running the Tika App Client Command
   LMC-053601:smadha-tika mattmann$ java -jar tika-app/target/tika-app-1.15-SNAPSHOT.jar --config=tika-parsers/src/test/resources/org/apache/tika/parser/recognition/tika-config-tflow-video-rest.xml
   WARN  JBIG2ImageReader not loaded. jbig2 files will be ignored
   INFO  Available = true, API Status = HTTP/1.0 200 OK
   INFO  minConfidence = 0.015, topN=4
   INFO  Recogniser = org.apache.tika.parser.recognition.tf.TensorflowRESTVideoRecogniser
   INFO  Recogniser Available = true
   WARN  Response = <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
   <title>500 Internal Server Error</title>
   <h1>Internal Server Error</h1>
   <p>The server encountered an internal error and was unable to complete your request.
 Either the server is overloaded or there is an error in the application.</p>
   WARN  NO objects
   LMC-053601:smadha-tika mattmann$ 
   == Results (from Tensorflow Video Docker Server)
    * Running on (Press CTRL+C to quit) - - [28/Apr/2017 05:01:26] "GET /inception/v4/ping HTTP/1.1" 200 -
   [2017-04-28 05:01:26,287] ERROR in app: Exception on /inception/v4/classify/video [POST]
   Traceback (most recent call last):
     File "/opt/conda/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
       response = self.full_dispatch_request()
     File "/opt/conda/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
       rv = self.handle_user_exception(e)
     File "/opt/conda/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
       reraise(exc_type, exc_value, tb)
     File "/opt/conda/lib/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_request
       rv = self.dispatch_request()
     File "/opt/conda/lib/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
       return self.view_functions[rule.endpoint](**req.view_args)
     File "/usr/bin/inceptionapi.py", line 489, in classify_video
       classids, classnames, confidence = zip(*classes)
    ValueError: need more than 0 values to unpack
    - - [28/Apr/2017 05:01:26] "POST /inception/v4/classify/video?mode=fixed&ext=.mp4 HTTP/1.1" 500
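   The 500 above comes from `zip(*classes)` in `classify_video` being called with an empty `classes` list: when no detections clear the confidence threshold, there are zero values to unpack into three names. A minimal reproduction plus a possible guard (the actual `classify_video` handler is not shown in this thread, so the guard is only a sketch):

```python
# Reproduce the failure: unpacking zip(*[]) into three names raises ValueError.
classes = []  # no detections above minConfidence
try:
    classids, classnames, confidence = zip(*classes)
except ValueError as e:
    print(e)  # Python 2.7: "need more than 0 values to unpack"

# A defensive guard the handler could use instead (sketch, not the actual fix):
if classes:
    classids, classnames, confidence = zip(*classes)
else:
    classids, classnames, confidence = (), (), ()
```

   With the guard in place the endpoint could return an empty result set instead of a 500, which would also let the Tika client report "NO objects" without logging the HTML error page.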
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

> Video labeling using existing ObjectRecognition
> -----------------------------------------------
>                 Key: TIKA-2322
>                 URL: https://issues.apache.org/jira/browse/TIKA-2322
>             Project: Tika
>          Issue Type: Improvement
>          Components: parser
>            Reporter: Madhav Sharan
>            Assignee: Chris A. Mattmann
>              Labels: memex
>             Fix For: 1.15
> Currently TIKA supports ObjectRecognition in images. I am proposing to extend this to support videos.
> The idea is:
> 1. Extract frames from the video and run IncV3 to get labels for these frames.
> 2. Average the confidence scores of the same labels across frames.
> 3. Return the results sorted by confidence score in descending order.
> I am writing code for different modes of frame extraction:
> 1. Extract the center frame.
> 2. Extract frames at a fixed interval.
> 3. Extract N frames equally divided across the video.
> We used this approach in [0]. Code is in [1].
> [0] https://github.com/USCDataScience/hadoop-pot
> [1] https://github.com/USCDataScience/video-recognition
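
The averaging-and-sorting steps described in the quoted proposal can be sketched as follows. The per-frame `label -> confidence` dict structure is an assumption for illustration, not the parser's actual data model, and labels absent from a frame are assumed to contribute zero to the average:

```python
from collections import defaultdict

def aggregate_labels(frame_results):
    """frame_results: one dict per frame mapping label -> confidence.
    Averages each label's confidence over all frames (missing labels
    count as 0) and returns (label, avg) pairs sorted by descending avg."""
    totals = defaultdict(float)
    for frame in frame_results:
        for label, conf in frame.items():
            totals[label] += conf
    n = len(frame_results)
    averages = {label: total / n for label, total in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: "cat" seen in both frames, "dog" in only one.
frames = [{"cat": 0.9, "dog": 0.2}, {"cat": 0.7}]
ranked = aggregate_labels(frames)
# "cat" averages 0.8 across the two frames; "dog" averages 0.1.
```

Whether a label missing from a frame should count as zero confidence or be excluded from that label's denominator is a design choice the proposal leaves open; the sketch takes the simpler zero-fill interpretation.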

This message was sent by Atlassian JIRA
