References

1   Adam Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.

2   Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92-100, Madison, Wisconsin, July.

3   Branimir K. Boguraev and Mary S. Neff. 2000. The effects of analysing cohesion on document summarisation. In Proceedings of the 18th Conference on Computational Linguistics (COLING), pages 76-82, Saarbrücken, Germany, July-August.

4   Branimir K. Boguraev and Mary S. Neff. 2000. Discourse segmentation in aid of document summarization. In Proceedings of the 33rd Hawaii International Conference on System Sciences, Volume 3, page 3004, January.

5   Eugene Charniak. 1999. A maximum-entropy-inspired parser. Technical report, Brown University, Providence, RI.

6   Stanley F. Chen and Ronald Rosenfeld. 1999. A Gaussian prior for smoothing maximum entropy models. Technical Report CMU-CS-99-108, Carnegie Mellon University. 

7   Pedro Domingos and Michael Pazzani. 1997. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29(2-3):103-130, November-December.

8   Jade Goldstein, Mark Kantrowitz, Vibhu Mittal, and Jaime Carbonell. 1999. Summarizing text documents: sentence selection and evaluation metrics. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 121-128, Berkeley, California, August.

9   Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 68-73, Seattle, Washington, July.

10   Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393, April.

11   Daniel Marcu. 1999. The automatic construction of large-scale corpora for summarization research. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 137-144, Berkeley, California, August.

12   Andrew McCallum and Kamal Nigam. 1998. A comparison of event models for naive Bayes text classification. In AAAI-98 Workshop on Learning for Text Categorization.

13   Kamal Nigam, John Lafferty, and Andrew McCallum. 1999. Using maximum entropy for text classification. In IJCAI-99 Workshop on Machine Learning for Information Filtering.

14   William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 1992. Numerical Recipes in C: The Art of Scientific Computing, 2nd edition. Cambridge University Press, New York, NY.

15   Adwait Ratnaparkhi. 1996. A maximum entropy part-of-speech tagger. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, University of Pennsylvania, May. Tagger available at ftp://ftp.cis.upenn.edu/pub/adwait/jmx.

16   Simone Teufel and Marc Moens. 1997. Sentence extraction as a classification task. In ACL/EACL-97 Workshop on Intelligent and Scalable Text Summarization, Madrid, Spain.

17   Simone Teufel. 2001. Task-based evaluation of summary quality: describing relationships between scientific papers. In NAACL Workshop on Automatic Summarization, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, June.
