View this message in context: http://www.nabble.com/sorting-issue-with-un-tokenized-field-tf3029674.html#a8418417
Sent from the Lucene - Java Users mailing list archive at Nabble.com.

Is there a way to index/search so that a query could be written to
search on a field using arithmetic comparison operators?
What I mean is: if I had a date/time field called CREATEDATE, I would
search...

---------- Forwarded message ----------
From: Scott Green <smallbadguy@(protected)>
Date: Jan 17, 2007 11:15 AM
Subject: How to index in real time?
Firstly, i...

Hi everyone!
I'm trying to index .jsp pages.
I don't want to index the jsp pages as the user would see them, but the pure jsp
code before translation to html pages.
Is there any way? Can I simply use the h...

Hi,
I am confused using IndexReader.docFreq...
I am using Lucene 1.9; my code snippet is

    int noofdoc = mreader.docFreq(new Term("TITLE", "friends"));

where mreader is a MultiReader...
few doc f...

Hi all,
I'm confused by the two arguments "required" and "prohibited" in

    public void add(Query query, boolean required, boolean prohibited)
There are two statement...
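For reference, the two flags select among three clause behaviours (in later Lucene versions these became BooleanClause.Occur.MUST, MUST_NOT, and SHOULD). Below is a plain-Java sketch of the matching semantics, with no Lucene dependency; the class and method names are illustrative only, not Lucene API.

```java
public class BooleanClauseDemo {
    /**
     * Models which documents a BooleanQuery matches, given per-clause results.
     * matches[i]    - whether clause i matched the document
     * required[i]   - the "required" flag passed to add(...)
     * prohibited[i] - the "prohibited" flag passed to add(...)
     *
     * required=true,  prohibited=false -> clause MUST match (AND)
     * required=false, prohibited=true  -> clause MUST NOT match (NOT)
     * required=false, prohibited=false -> clause is optional (OR)
     */
    public static boolean documentMatches(boolean[] matches,
                                          boolean[] required,
                                          boolean[] prohibited) {
        boolean hasOptional = false, optionalMatched = false, hasRequired = false;
        for (int i = 0; i < matches.length; i++) {
            if (required[i] && prohibited[i])
                throw new IllegalArgumentException(
                        "a clause cannot be both required and prohibited");
            if (prohibited[i] && matches[i]) return false; // MUST_NOT violated
            if (required[i]) {
                hasRequired = true;
                if (!matches[i]) return false;             // MUST clause missed
            }
            if (!required[i] && !prohibited[i]) {          // optional clause
                hasOptional = true;
                if (matches[i]) optionalMatched = true;
            }
        }
        // With no required clauses, at least one optional clause must match;
        // with required clauses present, optional ones only influence scoring.
        return hasRequired || !hasOptional || optionalMatched;
    }
}
```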
Hi all,
The PDF of "Lucene in Action" I'm reading now talks about Lucene
1.4. Is the book updated for Lucene 2.0? I don't have any information.
Appreciate your help.

It seems that a range query does not go through the tokenization process.
E.g. I have a field called "iso" which contains the photographic iso number,
such as 100, 200, 400 ... I have a special tokenizer...

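That is expected: range query bounds are compared as raw terms, lexicographically, and are not analyzed. The usual workaround for numeric fields is to index a zero-padded form so that string order equals numeric order. A minimal sketch (the class name is illustrative):

```java
public class PaddedNumberDemo {
    /** Left-pad a non-negative number to a fixed width so that
     *  lexicographic term order matches numeric order, e.g. for
     *  an "iso" field queried with a range query. */
    public static String pad(int value, int width) {
        return String.format("%0" + width + "d", value);
    }
}
```

Unpadded, "1600" sorts before "400" as strings; padded, "01600" correctly sorts after "00400", so a range like [00100 TO 00400] behaves numerically.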
I am quite new to lucene so forgive me if I cannot see something obvious.
I have the following code...

Is it possible to specify a sort on a field using standard Lucene search
query syntax? I was not able to find it in the query doc, so I assume not,
but I would like to make sure before going on to use...

Hi,
Has anyone encountered significant amounts of Websphere Dark Matter
generation when using Lucene?
We have a scenario where a web search app using Lucene causes
Websphere 5.1 allocated memory to ...

Hi Lucene Users!
I've been playing around with dotLucene on a few projects for about 4
months, and I've found Lucene to be exceptionally powerful, speedy, and,
thanks to LIA, really easy to us...

I looked through the archive a bit and found some Q&A's regarding this,
but I didn't see anything definitive, so I thought I'd ask again...
Basically, I have a web page that can search through a data...

default order is different between L1 and

My search a...

Hi all,
I want to use Lucene to store GData-format data; the use case is:
1. The data is only stored in Lucene; Lucene is used for store and index.
2. Each data item has different att...

Hi all,
I want to first erase the original index and then create an index for
appending. I use the following python code, using the PyLucene port.

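In the Java API the usual way to do this is the IndexWriter create flag: `new IndexWriter(dir, analyzer, true)` overwrites any existing index at that location, and PyLucene mirrors this constructor. As a plain-Java stand-in (no Lucene dependency, names illustrative), here is what "erase the old index files, then start fresh" amounts to:

```java
import java.io.File;
import java.io.IOException;

public class IndexDirUtil {
    /** Delete every regular file in a directory; returns how many were removed.
     *  Roughly what the IndexWriter create flag does to an old index. */
    public static int clearDirectory(File dir) {
        File[] files = dir.listFiles();
        if (files == null) return 0;           // missing dir or I/O error
        int deleted = 0;
        for (File f : files) {
            if (f.isFile() && f.delete()) deleted++;
        }
        return deleted;
    }

    /** Self-contained demo: scratch dir with two fake index files, then clear it. */
    public static int demo() {
        try {
            File dir = new File(System.getProperty("java.io.tmpdir"),
                                "lucene-clear-demo-" + System.nanoTime());
            dir.mkdirs();
            new File(dir, "segments").createNewFile();
            new File(dir, "_1.cfs").createNewFile();
            int removed = clearDirectory(dir);
            dir.delete();
            return removed;
        } catch (IOException e) {
            return -1;
        }
    }
}
```

In practice, prefer the create flag over deleting files by hand, since Lucene knows which files belong to the index.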
I'm wondering what will happen if I perform indexing and have 10
people searching at the same time? Can I retrieve results while I
index, and the other way around?

In a project where we want to use Lucene, we are running into
performance problems with regard to building filter sets.
Let me give you a quick overview of what we need to do:
We are indexing info...

Hi there,
I'm having some strange behaviour using the highlighter, and I'm wondering if
it is a bug or whether I should take a different approach.
I want to highlight the search terms that were used to exec...

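For readers unfamiliar with it: the contrib Highlighter re-analyzes the stored text and wraps matched terms (by default in <B> tags) while selecting the best fragments. As a much-simplified, Lucene-free stand-in for that wrapping step (class name illustrative, whole-word regex instead of real analysis):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimpleHighlighter {
    /** Wrap every case-insensitive whole-word occurrence of term in <B>...</B>,
     *  similar in spirit to the contrib Highlighter's default formatter. */
    public static String highlight(String text, String term) {
        Pattern p = Pattern.compile("\\b" + Pattern.quote(term) + "\\b",
                                    Pattern.CASE_INSENSITIVE);
        Matcher m = p.matcher(text);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // preserve the original casing of the matched term
            m.appendReplacement(sb, "<B>" + Matcher.quoteReplacement(m.group()) + "</B>");
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

The real Highlighter is preferable because it sees the same tokens the analyzer produced, so stemmed or stop-filtered terms highlight correctly; this sketch only shows the markup mechanics.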
Hello, I'd like to index a web forum (phpBB) with Lucene. I wonder how to
best map the forum document model (topics and their messages) to the Lucene...
Usually some forum member creat...

Hi,
I have problems searching wildcard terms containing _ or -.
When I search for non-linear I see the query as title:"non linear", but
when I search for non-lin* I get the query as title:non-lin* and...

Can somebody please let me know if it is possible to get the source
code for Lucene version 1.3 and other earlier versions. I need them for...

How does Lucene give each document an ID when the document is added,
and how do we retrieve a document by document ID? Appreciate your help!

Can someone please tell me where the most appropriate place to report
bugs might be - in this case, for the hit-highlighter contribution.

[uuencoded attachment: luke_diffs.dat]

My experience tonight is that the stock 1.9-based Luke won't open my 2.0
indices. So I fixed up a version of the source.
Anyone else want it?

I need to modify the StandardAnalyzer so that it will tokenize zip codes
that look like this...
I think the part I need to modify is in here - specifically:
<HAS_DIGIT> <P...
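An alternative to editing the StandardTokenizer.jj grammar (the <HAS_DIGIT> and related rules) is to pre-extract zip-like tokens yourself and index them as Field.Keyword, letting StandardAnalyzer handle the rest of the text. A plain-Java sketch, assuming US ZIP or ZIP+4 values such as "90210" or "90210-1234" (the class name and format assumption are mine, since the original example was cut off):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ZipTokenDemo {
    // Assumes US ZIP or ZIP+4, e.g. "90210" or "90210-1234".
    private static final Pattern ZIP = Pattern.compile("\\b\\d{5}(?:-\\d{4})?\\b");

    /** Extract zip-code-like substrings as single tokens, instead of letting
     *  the tokenizer split "90210-1234" into "90210" and "1234". */
    public static List<String> zipTokens(String text) {
        List<String> out = new ArrayList<>();
        Matcher m = ZIP.matcher(text);
        while (m.find()) out.add(m.group());
        return out;
    }
}
```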
Hi,
We are having some trouble with the results that we get from certain...
Basically, we have documents that we index; each document has a bunch of
tags, and the tags could be of the sort...

I have to index 37 million documents retrieved from the database.
I was trying to do it by loading intervals of 10000 records, but it is too slow.
Could anybody suggest a better way to ge...

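One common culprit at this scale is OFFSET-based paging, which re-scans skipped rows and gets slower as the offset grows; keyset paging ("WHERE id > lastId ORDER BY id LIMIT n") stays fast at any depth. (Tuning IndexWriter with setMergeFactor/setMaxBufferedDocs also helps.) A plain-Java sketch of the keyset loop, with the data source simulated in memory and all names illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchIndexDemo {
    /** Simulated data source: up to batchSize ids greater than lastId,
     *  standing in for a JDBC query like
     *  "SELECT ... WHERE id > ? ORDER BY id LIMIT ?".
     *  Assumes the source list is sorted ascending. */
    static List<Integer> fetchBatch(List<Integer> source, int lastId, int batchSize) {
        List<Integer> batch = new ArrayList<>();
        for (int id : source) {
            if (id > lastId) {
                batch.add(id);
                if (batch.size() == batchSize) break;
            }
        }
        return batch;
    }

    /** Walk the whole source in keyset-paged batches; returns total docs seen. */
    public static int indexAll(List<Integer> source, int batchSize) {
        int total = 0, lastId = Integer.MIN_VALUE;
        List<Integer> batch;
        while (!(batch = fetchBatch(source, lastId, batchSize)).isEmpty()) {
            total += batch.size();                 // real job: writer.addDocument(...)
            lastId = batch.get(batch.size() - 1);  // resume after the last id seen
        }
        return total;
    }
}
```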
I would like to draw your attention to an open and rather devious
long-standing index corruption issue that we've only now finally
gotten to the bottom of...