lucene-dev mailing list archives

From "Steve Rowe (JIRA)" <>
Subject [jira] [Commented] (LUCENE-5685) Add file:// support to
Date Mon, 19 May 2014 14:55:38 GMT


Steve Rowe commented on LUCENE-5685:

Robert, I think it should be possible to just skip the crawler script step if you have a local copy.

bq. Or alternatively, perhaps we could just change the instructions on
to work with a local copy of the release?

They already say that?  Feel free to fix if it's not clear:

bq. Download the Lucene/Solr Maven artifacts *_(if you don't already have them)_*

bq. The crawler script intentionally excludes certain file types, so I'm not sure what happens during the publishing if they are present, or if that's just a small optimization...

The crawler script excludes {{"\*.md5,\*.sha1,maven-metadata.xml\*,index.html\*"}}: the {{\*.md5}},
{{\*.sha1}} and {{maven-metadata.xml\*}} files are excluded because Maven Ant Tasks re-generates
them when it uploads artifacts to the staging repository, so they aren't needed; and the
{{index.html\*}} files are auto-generated by the web server (and won't be present in a
copy zipped up on the server).  In all cases the excluded files are unnecessary, but their
presence wouldn't actually cause trouble, because the artifact uploading process explicitly
names every file to be uploaded.
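As a rough illustration of the point above, a local copy step could apply the same exclude list the crawler uses. This is just a sketch, not part of the actual release scripts; {{copy_release}} is a hypothetical name, and it uses portable {{find}}/{{cp}} rather than whatever the crawler actually does:

```shell
# Hypothetical sketch: copy a local release tree while skipping the same
# patterns the crawler excludes (*.md5, *.sha1, maven-metadata.xml*,
# index.html*). Not the real crawler code; copy_release is illustrative.
copy_release() {
  src="$1"; dest="$2"   # dest must be an absolute path (we cd into src)
  mkdir -p "$dest"
  ( cd "$src" && find . -type f \
      ! -name '*.md5' ! -name '*.sha1' \
      ! -name 'maven-metadata.xml*' ! -name 'index.html*' \
      -exec sh -c 'mkdir -p "$0/${1%/*}" && cp "$1" "$0/$1"' "$dest" {} \; )
}
```

Since the uploading process names every file explicitly, skipping the excludes entirely would also work; this just keeps the local copy identical to what the crawler would have fetched.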

> Add file:// support to
> --------------------------------------------------
>                 Key: LUCENE-5685
>                 URL:
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Robert Muir
> During the release process I always zip up and download the _entire_ voted-on RC folder
locally, so I can commit the release artifacts. This is just the simplest way to avoid mistakes.
> Maven publishing is a mystery to me; I just follow the instructions exactly, because I'm
not totally sure what directory structure the scripts expect.
> Currently this means I have to do a large file transfer over the internet again, because
the crawl script won't work with a file:// URL (the unzipped contents of the release folder
I just downloaded).
> It would be great if it could just use 'cp -r' or something for that, rather than wget,
to save another large transfer.
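The requested change amounts to branching on the URL scheme. A minimal sketch, assuming a hypothetical {{fetch_rc}} wrapper (not the script's actual function or flags):

```shell
# Hypothetical sketch of LUCENE-5685: use 'cp -r' for file:// URLs instead
# of crawling with wget. fetch_rc is an illustrative name, not the real one.
fetch_rc() {
  url="$1"; dest="$2"
  case "$url" in
    file://*)
      # Local RC folder: strip the scheme and copy; no network transfer.
      cp -r "${url#file://}" "$dest"
      ;;
    *)
      # Remote RC folder: mirror it over HTTP as before.
      wget -r -np -nH -P "$dest" "$url"
      ;;
  esac
}
```

The wget options here are only indicative of a recursive mirror; the real script's invocation may differ.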

This message was sent by Atlassian JIRA

