Re: Updated guidelines; Request for comment on shot features by 27. March
- Subject: Re: Updated guidelines; Request for comment on shot features by 27. March
- From: "Arnon Amir" <firstname.lastname@example.org>
- Date: Mon, 25 Mar 2002 21:01:51 -0800
- Content-type: text/plain; charset=us-ascii
- Importance: Normal
I have some concern regarding the Manual task - if one does the manual
run as described, then looks at the results, notices some errors, changes
the query a little bit "because it was not such a good query - I could have
done it better in the first place", and then repeats the experiment "to
improve the results" and submits it as manual_run_2 - this will no longer
be a manual run... Each group will use different queries - what part of the
difference in results is due to a better query, and what part is due to a
better system? Overall, I am not sure how much difference it would make
compared to the interactive run.
However, if NIST provides the manually crafted queries to be used by all
groups, then we can compare those results across systems. This can be done
in the following way: all the groups report which features they might want
to use for this run, say, two weeks after NIST publishes the set of topics.
NIST then publishes the set of features and assigns each topic the relevant
feature values (keywords="space rocket launch", faces_in_image="2",
indoor="0", water="0"). Then everyone would use only those feature values
as the input for the run.
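To make the idea concrete, here is a minimal sketch in Python of what such
a published per-topic assignment could look like; the topic ID, the table
layout, and the helper name are hypothetical, chosen only for illustration:

    # Hypothetical sketch: NIST publishes one fixed set of feature
    # values per topic, and every system uses only these values as
    # its input for the manual run.
    topic_features = {
        "topic_01": {
            "keywords": "space rocket launch",
            "faces_in_image": 2,
            "indoor": 0,
            "water": 0,
        },
    }

    def manual_run_input(topic_id):
        # The only permitted query input for the given topic.
        return topic_features[topic_id]

That way every system starts from identical query inputs, and any remaining
difference in results can be attributed to the system rather than the query.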
Other minor comments:
- It says "One must be designated at submission as the best for system
How can we tell beforehand which run would be the best? Can't we compare
evaluated runs, like we did last year? or let each group to select its
run after they get evaluated.
- Note that the SB test set and the Search test set should share no videos
in common - otherwise there will be a conflict when we provide the common
SB set for the Search task (a quick disjointness check is sketched below).
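Such a check is easy to automate; the file names below are assumptions,
standing in for whatever format the collection definition ends up using:

    # Hypothetical check that the two test sets are disjoint;
    # assumes one video ID per line in each file.
    sb_test = set(open("sb_test_videos.txt").read().split())
    search_test = set(open("search_test_videos.txt").read().split())
    assert not (sb_test & search_test), "SB and Search test sets overlap"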
Dr. Arnon Amir
Research Staff Member
IBM Almaden Research Center
650 Harry Rd., San Jose, CA 95120, USA.
Paul Over <email@example.com> on 03/22/2002 12:42:24 PM
Please respond to firstname.lastname@example.org
Sent by: email@example.com
To: Multiple recipients of list <firstname.lastname@example.org>
Subject: Updated guidelines; Request for comment on shot features by 27. March
I've posted a revised set of guidelines after running them by Alan.
They attempt to strike reasonable compromises, where required, as
we juggle various desires/limitations and try to reach closure very
soon. The track needn't be perfect, just as useful and efficient
as we can practically make it this year. It still says "draft", but
major objections only, please.
As for outstanding issues:
- We hope to have a final collection definition by the end of next week.
- We need input on the idea of extending the shot detection task to
include extraction of additional features.
=====> If you are really interested in this, please post a
prioritized list of the 3-5 features (clearly defined)
that you would like to see evaluated. Please keep in
mind one consideration will be the cost of annotation.
Thanks for all your help!
Have a good weekend,
Paul Over - Retrieval Group
Information Access Division
Information Technology Laboratory
National Institute of Standards and Technology
Bldg. 225 Rm. A211 (Mailstop 8940)
Gaithersburg, MD 20899-8940 USA
Voice: 301 975-6784 Fax: 301 975-5287