<?xml version="1.0" standalone="yes"?> <Paper uid="W04-2401"> <Title>A Linear Programming Formulation for Global Inference in Natural Language Tasks</Title> <Section position="6" start_page="0" end_page="0" type="concl"> <SectionTitle> 5 Discussion </SectionTitle> <Paragraph position="0"> We presented a linear programming-based approach to global inference in which decisions depend on the outcomes of several different but mutually dependent classifiers. Even in the presence of a fairly general constraint structure, deviating from the sequential structure typically studied, this approach can find the optimal solution efficiently. Contrary to general search schemes (e.g., beam search), which do not guarantee optimality, the linear programming approach provides an efficient way to find the optimal solution. The key advantage of the linear programming formulation is its generality and flexibility; in particular, it supports the ability to incorporate classifiers learned in other contexts and &quot;hints&quot; supplied as constraints at decision time, and to reason with all of these for the best global prediction. In sharp contrast with the typically used pipeline framework, our formulation does not blindly trust the results of any single classifier, and is therefore able to overcome classifier mistakes with the help of constraints.</Paragraph> <Paragraph position="1"> Our experiments have demonstrated these advantages by considering the interaction between entity and relation classifiers. In fact, more classifiers can be added and used within the same framework. For example, if coreference resolution is available, it can be incorporated in the form of constraints that force the labels of coreferred entities to be the same (while, of course, allowing the global solution to reject the suggestions of these classifiers).
Consequently, this may enhance the performance of entity/relation recognition and, at the same time, correct possible coreference resolution errors. Another example is the use of chunking information for better relation identification; suppose, for example, that available chunking information identifies Subj+Verb and Verb+Object phrases. Given a sentence containing the verb &quot;murder&quot;, we may conclude that the subject and object of this verb are in a &quot;kill&quot; relation. Since the chunking information is used in the global inference procedure, it contributes to the procedure's performance and robustness by supplying additional constraints and overcoming possible mistakes by some of the classifiers.</Paragraph> <Paragraph position="2"> Moreover, in an interactive environment where a user can supply new constraints (e.g., a question answering situation), this framework can make use of the new information to enhance performance at decision time, without retraining the classifiers.</Paragraph> <Paragraph position="3"> As we show, our formulation yields not only improved accuracy but also decisions of a more &quot;human-like&quot; quality. We believe that it has the potential to be a powerful way to support natural language inferences.</Paragraph> <Paragraph position="4"> Acknowledgements This research has been supported by NSF grants CAREER IIS-9984168, ITR IIS-0085836, EIA-0224453, an ONR MURI Award, and an equipment donation from AMD. We also thank the anonymous referees for their useful comments.</Paragraph> </Section> </Paper>
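To make the kind of global inference discussed above concrete, the following is a minimal brute-force sketch, not the paper's implementation: all entity names, labels, and scores here are hypothetical, and the exhaustive search over joint assignments stands in for the efficient linear programming formulation the paper actually uses. It shows two entity classifiers and one relation classifier producing (log-probability) scores independently, a hard constraint tying relation labels to entity labels, and the selection of the best globally consistent assignment.

```python
from itertools import product

# Hypothetical local classifier scores (log-probabilities); in the paper's
# setting these come from independently trained entity and relation classifiers.
entity_scores = {
    "e1": {"person": -0.2, "location": -1.8},
    "e2": {"person": -1.5, "location": -0.4},
}
relation_scores = {
    ("e1", "e2"): {"kill": -0.3, "live_in": -1.0, "none": -2.0},
}

def consistent(entities, relations):
    """Hard constraints linking relation labels to entity labels, e.g. a
    'kill' relation requires two person arguments, and 'live_in' requires
    a person living in a location."""
    for (a, b), r in relations.items():
        if r == "kill" and not (entities[a] == "person" and entities[b] == "person"):
            return False
        if r == "live_in" and not (entities[a] == "person" and entities[b] == "location"):
            return False
    return True

def global_inference():
    """Exhaustively score all joint label assignments and return the best
    one satisfying every constraint (the paper replaces this exponential
    search with an efficient linear programming formulation)."""
    e_labels = {e: list(scores) for e, scores in entity_scores.items()}
    r_labels = {p: list(scores) for p, scores in relation_scores.items()}
    best, best_score = None, float("-inf")
    for e_choice in product(*e_labels.values()):
        entities = dict(zip(e_labels, e_choice))
        for r_choice in product(*r_labels.values()):
            relations = dict(zip(r_labels, r_choice))
            if not consistent(entities, relations):
                continue
            score = sum(entity_scores[e][lab] for e, lab in entities.items())
            score += sum(relation_scores[p][r] for p, r in relations.items())
            if score > best_score:
                best, best_score = (entities, relations), score
    return best

entities, relations = global_inference()
```

In this toy instance the relation classifier locally prefers &quot;kill&quot;, but the constraint that &quot;kill&quot; takes two person arguments rules that out for the best entity labels, so the globally optimal assignment falls back to &quot;live_in&quot;, illustrating how constraints let the global solution overcome a local classifier's mistake.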