<?xml version="1.0" standalone="yes"?> <Paper uid="W98-0208"> <Title>ACL-COLING Workshop on Content Visualization and Intermedia Representations</Title> <Section position="6" start_page="57" end_page="61" type="concl"> <SectionTitle> 6. Sensor Visualization </SectionTitle> <Paragraph position="0"> The Multisource Integrated Information Analysis (MSIIA) project, led by Steve Hansen at MITRE, is exploring effective mechanisms for sensor and battlefield visualization. For example, national and military intelligence analysts are charged with monitoring and exploiting dozens of sources of information in real time. These range from sensors which capture images (infrared, electrooptical, multispectral) to moving target indicators characterized by a point and some features (e.g., tracked vs. wheeled vehicle) to signals intelligence characterized by centroids and error elipses. Knowing which source to select and which sensors to task is paramount to successful situation assessment. An integrated view into what sensors are where when, as well as a fused picture of their disparate information types and outputs, would be invaluable. Figure 8 illustrates one such visualization. The x-y dimension of the upper display captures the coordinates of a geospatial area whereas the y coordinate displays time. This enables the user to view which areas are being sensed by which type of sensor (encoded by color or implicitly by the resultant characteristic shape). For example, a large purple cylinder represents the area over time imaged by a geosychronous satellite, the green cylinders are images taken over time of spots on the surface of the earth, whereas the wavy blue line is the ground track of a sensor flying across an area (e.g., characteristic of a unmanned air vehicle such as predator). If we take a slice at a particular time of the upper display in Figure 8 we get the coverage of particular areas from a specific time. If we project all sensor coverages over an area downward to the surface, we obtain the image shown in the lower display of Figure 8.</Paragraph> <Paragraph position="1"> military for planning and training. The sand can be sculpted to match the terrain in a specific geographic region. People standing around the table can place plastic or metal models of vehicles and other assets over this terrain model to indicate force deployment and move them around the terrain to indicate and/or rehearse force movements.</Paragraph> <Paragraph position="2"> A user can utilize this display to determine what material is available for a given time and space, analyze unattended coverage areas, and plan future collections. MSIIA is also investigating georegistration and display of the results of collections in an integrated, synthetic view of the world (e.g., fusing maps with images with radar returns). We consider next another example of synthetic views of the world.</Paragraph> <Paragraph position="3"> 7. Collaboration and Battlefield Visualization Just as visualization plays an important role in information space visualization for MSIIA, MITRE's research on the Collaborative Omniscient Sandtable Metaphor (COSM) seeks to define a new generation of human-machine interfaces for military Command and Control (C2). The &quot;sandtable&quot; underlying COSM is a physical table whose top is rimmed with short walls and filled with sand. It is used in the In defining COSM, we expanded the functionality of a sandtable and moved it into an electronic domain. 
<Paragraph position="2"> 7. Collaboration and Battlefield Visualization Just as visualization plays an important role in information space visualization for MSIIA, MITRE's research on the Collaborative Omniscient Sandtable Metaphor (COSM) seeks to define a new generation of human-machine interfaces for military Command and Control (C2). The &quot;sandtable&quot; underlying COSM is a physical table whose top is rimmed with short walls and filled with sand. It is used in the military for planning and training. The sand can be sculpted to match the terrain in a specific geographic region. People standing around the table can place plastic or metal models of vehicles and other assets over this terrain model to indicate force deployment and move them around the terrain to indicate and/or rehearse force movements.</Paragraph>
<Paragraph position="3"> In defining COSM, we expanded the functionality of a sandtable and moved it into an electronic domain. It now taps into global gigabyte databases of C2 information which range from static data on airfield locations to real-time feeds from hundreds of ground-, air-, and space-based sensors. This data is used to synthesize macroscopic or microscopic views of the world that form the foundation of a collaborative visualization system. Manipulating these views leads not only to modifying data, but also to directing the actions of the underlying physical assets (e.g., moving an icon causes an aircraft to be redirected from point A to point B). A conceptual view of COSM is shown in Figure 9, where participants at air, land, and sea locations collaborate over an electronic sandtable. Some users are physically present, while others are represented by their avatars.</Paragraph>
<Paragraph position="4"> The key elements of COSM are geographic independence (transparent access to people, data, software, or assets regardless of location), a multimodal, direct manipulation interface with an initial emphasis on the visual modality, heterogeneous platform support (enabling users to tailor data depictions to a range of platform capabilities), and data linkage (maintaining all parent, child, and peer relationships in the data). A first instantiation of COSM was implemented using Virtual Reality (VR) technology, as illustrated in Figure 10. The table is a stereoscopic projection system driven by a graphics workstation. It uses a horizontal display surface approximately 6 feet wide and 4 feet deep to display maps, imagery, and models of the terrain and objects upon or above the terrain.</Paragraph>
<Paragraph position="5"> Since it is stereoscopic, objects above the terrain, such as airborne aircraft, appear to be above the surface of the table. The vertical screen behind the table is a rear-projection display used primarily for collaboration support. At the top, we see a panel of faces representing all the remote users who have similar systems and are currently connected to this one with audio, video, and data links. The table serves as a shared whiteboard that is visible to all the users and can be manipulated by them. The larger faces at the bottom of the vertical screen are two users who have &quot;stepped up to the podium&quot; and currently have control of what is being seen on the table.</Paragraph>
<Paragraph position="6"> The figure shows the user interacting with the table through the use of two magnetic position trackers. The first is attached to a pair of stereoscopic glasses; as the user moves his head and walks around the table, the computer determines his eyepoint location from the tracker and recomputes his view accordingly. The second tracker is attached to a glove that serves as an input device. The user's gloved hand becomes a cursor, and he can use his fingers to touch an object to indicate selection or grab and move an object to indicate an action.</Paragraph>
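As a rough illustration of the two-tracker interaction just described (a sketch only; the object names, coordinates, and pick radius are hypothetical, and a real implementation would compute full stereo view matrices), the following shows how a head-tracked eyepoint can drive the rendered view direction and how the gloved hand can act as a 3D cursor for selecting objects.

# Minimal sketch of the interaction described above: the head tracker drives the
# rendered eyepoint, and the glove tracker acts as a 3D cursor for picking.
# Object names, positions (meters), and the pick radius are hypothetical.
import math

def look_at(eye, target=(0.0, 0.0, 0.0)):
    """Recompute the viewing direction whenever the head tracker reports a new eyepoint."""
    dx, dy, dz = (target[i] - eye[i] for i in range(3))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)   # unit view vector toward the table center

def pick(glove_pos, objects, radius=0.05):
    """Return the first object whose center lies within `radius` of the gloved fingertip."""
    for name, center in objects.items():
        if math.dist(glove_pos, center) <= radius:
            return name
    return None

objects = {"aircraft_1": (0.2, 0.3, 0.15), "sam_site": (-0.4, 0.1, 0.0)}
print(look_at(eye=(1.2, 0.8, 0.9)))                          # head moved: new view vector
print(pick(glove_pos=(0.21, 0.29, 0.16), objects=objects))   # fingertip touches aircraft_1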
<Paragraph position="7"> Several different kinds of information can be displayed on the table. Figure 11 illustrates a display of current air and ground information.</Paragraph>
<Paragraph position="8"> There are several aircraft depicted as realistic models, with the relative scale of the models representing the relative sizes of the respective aircraft. They move in real time, with the stereoscopic display making them appear to be flying above the table. Conceptually, the positions of the aircraft are provided in real time by a radar system, and the user has the option of displaying them as symbols or models. Remote users worldwide have real-time access to the data. The hemisphere in the upper left is a simple, unclassified representation of the threat dome of a Surface to Air Missile (SAM) emplacement. The large arrow is a cursor that is controlled by a remote user who is collaborating over this display. The amorphous blob in the lower left is a depiction of a small storm cell that is also moving through the region. This weather data is visually integrated in real time with the current air picture data. The aircraft position, weather, and threat information are all provided by different sensor systems. However, they share a common spatiotemporal reference that allows them to be fused in this real-time synthetic view of the world. Every object in this synthetic view also serves as a visual index into the underlying global C2 database. Selecting an aircraft would let us determine its current status (airborne with a certain speed and heading) and plans (origin, destination, and mission), as well as associated information such as logistics at its base of origin. (Figure 11: Synthetic View of the World.) Our current research is focused on the use of aggregation and deaggregation of data within visual depictions, in order to support a wide range of users. A weaponeer wants to study the details of a target (e.g., construction material, distance below ground) that is only a few hundred feet by a few hundred feet in size. A commander wants an overview of all airborne assets, targets, etc. for a region that is several hundred by several hundred miles in size.</Paragraph>
<Paragraph position="9"> However, those examining an overview will frequently wish to &quot;drill down&quot; for maximum detail in certain areas, while those examining a detailed area may wish to examine a more global view to retain context. Allowing the visualization of data with this wide range of geographic scopes, as well as iterative travel between detail and overview, poses challenges in data depiction, data simplification, and intuitive navigation techniques.</Paragraph>
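The aggregation and deaggregation behavior discussed above can be sketched as two complementary queries over the same asset set (a sketch only; the asset names, positions, and grid-cell size are hypothetical): the overview collapses assets into per-cell counts for the commander's wide-area view, while drill-down recovers the individual assets in a chosen cell for detailed study.

# Minimal sketch of aggregation (overview) and deaggregation (drill-down).
# Asset names, positions (in miles), and the cell size are hypothetical.
from collections import defaultdict

assets = [
    ("F-16 #1", 12.0, 340.0), ("F-16 #2", 15.0, 355.0),
    ("tanker", 480.0, 120.0), ("target A", 481.0, 118.0),
]

def aggregate(assets, cell=100.0):
    """Overview: collapse individual assets into counts per grid cell."""
    cells = defaultdict(list)
    for name, x, y in assets:
        cells[(int(x // cell), int(y // cell))].append(name)
    return {key: len(names) for key, names in cells.items()}

def deaggregate(assets, cell_key, cell=100.0):
    """Drill down: recover the individual assets inside one grid cell."""
    return [name for name, x, y in assets
            if (int(x // cell), int(y // cell)) == cell_key]

print(aggregate(assets))            # commander's several-hundred-mile overview
print(deaggregate(assets, (4, 1)))  # weaponeer drills into one cell for detail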
<Paragraph position="10"> 8. Conclusion and Research Areas The above varied and rich application spaces - e.g., visualizing search results; topics, relations, and events in news broadcasts; and battlefield activities - provide a number of challenges for visualization research. Fundamental issues include: 1. What are effective information encoding/visualization techniques for static and dynamic information visualization, including complex semantic objects such as properties, relations, and events? 2. What are the most effective methods for utilizing geospatial, temporal, and other contexts in synthetic displays of real world events that facilitate interface tasks (e.g., location, navigation), comprehension, analysis, and inference? 3. What kinds of interactive devices (e.g., visual and spatial query) are most effective for which kinds of tasks (e.g., anomaly detection, trend analysis, comparative analysis)?</Paragraph>
<Paragraph position="11"> 4. What new evaluation methods, metrics, and measures are necessary for these new visualization methods? In visualization, we tend to deal with complexity through methodologies involving abstraction, aggregation, filtering, and focusing. Insights from natural language processing promise to help extract semantic information from text channels, to provide a richer, task-relevant characterization of the information space. Visualization can certainly benefit from other aspects of natural language processing in achieving economy of interaction, such as notions of context in reference (e.g., &quot;fast_forward <the next week>&quot;) or relation (e.g., move &quot;<enemy_icon> behind <Bunker Hill_icon>&quot; in the currently focused display). An investigation of many applications, tasks, and interaction methods will be required to make progress in better understanding and answering these and other fundamental questions.</Paragraph> </Section> </Paper>