tika-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (TIKA-2322) Video labeling using existing ObjectRecognition
Date Thu, 27 Apr 2017 17:08:05 GMT

    [ https://issues.apache.org/jira/browse/TIKA-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987045#comment-15987045
] 

ASF GitHub Bot commented on TIKA-2322:
--------------------------------------

chrismattmann commented on a change in pull request #168: fix for TIKA-2322 contributed by
msharan@usc.edu
URL: https://github.com/apache/tika/pull/168#discussion_r113751331
 
 

 ##########
 File path: tika-parsers/src/main/resources/org/apache/tika/parser/recognition/tf/inceptionapi.py
 ##########
 @@ -310,6 +320,32 @@ def index():
                 </td></tr>
                 </table>
             </li>
+            <li> <code>/inception/v3/classify/video</code> - <br/>
+                <table>
+                <tr><th align="left"> Description </th><td> This is a classifier service that can classify videos </td></tr>
+                <tr><td></td> <td>Query Params : <br/>
+                   <code>topk </code>: type = int : number of top classes to return; default : 10 <br/>
+                   <code>human </code>: type = boolean : human-readable class names; default : true <br/>
+                   <code>mode </code>: options = <code>{"center", "interval", "fixed"}</code> : mode of frame extraction; default : center <br/>
+                    &emsp; <code>"center"</code> - Extracts just the one frame at the center. <br/>
+                    &emsp; <code>"interval"</code> - Extracts frames at a fixed interval. <br/>
+                    &emsp; <code>"fixed"</code> - Extracts a fixed number of frames. <br/>
+                   <code>frame-interval </code>: type = int : interval for frame extraction, used with the "interval" mode. If frame-interval=10, every 10th frame is extracted; default : 10 <br/>
+                   <code>num-frame </code>: type = int : number of frames to extract from the video in "fixed" mode. If num-frame=10, 10 frames equally spaced across the video are extracted; default : 10 <br/>
+                 </td></tr>
+                <tr><th align="left"> How to supply Video Content </th></tr>
+                <tr><th align="left"> With HTTP GET : </th> <td>
+                    Include a query parameter <code>url </code> which is a path on the file system <br/>
+                    Example: <code> curl "localhost:8764/inception/v3/classify/video?url=filesystem/path/to/video"</code><br/>
+                </td></tr>
+                <tr><th align="left"> With HTTP POST :</th><td>
+                    POST the video content as binary data in the request body. Any video that OpenCV can decode should work; tested with mp4 and avi on macOS. <br/>
+                    Include a query parameter <code>ext </code>; this extension tells OpenCV which decoder to use, default is ".mp4" <br/>
+                    Example: <code> curl -X POST "localhost:8764/inception/v3/classify/video?topk=10&human=false" --data-binary @example.mp4 </code>
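
For context, the three frame-extraction modes documented in the hunk could be sketched as a pure index-selection helper. This is an illustrative sketch only, not code from the patch; the name `select_frame_indices` and its signature are hypothetical, and the real service would use OpenCV to actually read the chosen frames.

```python
def select_frame_indices(total_frames, mode="center", frame_interval=10, num_frames=10):
    """Pick which frame indices to pull from a video of `total_frames` frames.

    Mirrors the documented modes (hypothetical helper, not from the patch):
      "center"   - the single middle frame
      "interval" - every frame_interval-th frame
      "fixed"    - num_frames indices spread evenly across the video
    """
    if total_frames <= 0:
        return []
    if mode == "center":
        # Just one frame in the center.
        return [total_frames // 2]
    if mode == "interval":
        # Every frame_interval-th frame: 0, frame_interval, 2*frame_interval, ...
        return list(range(0, total_frames, frame_interval))
    if mode == "fixed":
        # num_frames indices equally spaced across the whole video.
        n = min(num_frames, total_frames)
        step = total_frames / n
        return [int(i * step) for i in range(n)]
    raise ValueError("mode must be 'center', 'interval', or 'fixed'")
```

For a 100-frame clip, "center" yields index 50, "interval" with frame-interval=10 yields indices 0, 10, ..., 90, and "fixed" with num-frame=10 yields ten evenly spaced indices.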
 
 Review comment:
   v4
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> Video labeling using existing ObjectRecognition
> -----------------------------------------------
>
>                 Key: TIKA-2322
>                 URL: https://issues.apache.org/jira/browse/TIKA-2322
>             Project: Tika
>          Issue Type: Improvement
>          Components: parser
>            Reporter: Madhav Sharan
>            Assignee: Chris A. Mattmann
>              Labels: memex
>             Fix For: 1.15
>
>
> Currently Tika supports ObjectRecognition in images. I am proposing to extend this to support videos.
> The idea is:
> 1. Extract frames from the video and run IncV3 to get labels for each frame.
> 2. Average the confidence scores of matching labels across frames.
> 3. Return the results sorted by confidence score.
> I am writing code for different modes of frame extraction:
> 1. Extract the center frame.
> 2. Extract frames at a fixed interval.
> 3. Extract N frames equally spaced across the video.
> We used this approach in [0]; code is in [1].
> [0] https://github.com/USCDataScience/hadoop-pot
> [1] https://github.com/USCDataScience/video-recognition
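
The average-and-sort aggregation described in the issue (average each label's confidence over all frames, then return labels sorted by mean confidence) could be sketched as follows. This is a hedged illustration, not the actual parser code; `aggregate_labels` and its input shape are assumptions, and a label absent from a frame is counted as confidence 0 for that frame.

```python
from collections import defaultdict

def aggregate_labels(per_frame_predictions, topk=10):
    """Average each label's confidence over all frames, then return the
    topk (label, mean_confidence) pairs sorted by descending mean.

    per_frame_predictions: list of frames, each a list of (label, confidence)
    pairs, e.g. the per-frame output of an Inception V3 classifier.
    Illustrative sketch only; not the actual Tika parser implementation.
    """
    sums = defaultdict(float)
    for frame in per_frame_predictions:
        for label, confidence in frame:
            sums[label] += confidence
    # A label missing from a frame contributes 0 to that frame's score,
    # so we divide by the total number of frames for every label.
    n = len(per_frame_predictions)
    means = {label: total / n for label, total in sums.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)[:topk]
```

For example, if "cat" scores 1.0 in one frame and 0.5 in another, its aggregated score is 0.75, and it ranks above a label seen only once at 0.5.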



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
