Final Literature Review Summary Report


Surveys to Collect Data on Use of the NOAA National Weather Service Cone of Uncertainty


OMB: 0648-0791








Cone of Uncertainty

Social and Behavioral Science Research


GSA Contract: GS-00F-079CA

NOAA BPA #EA-133C-14-BA-0040


TO #32


Literature Review Summary Report





April 10, 2019














Cone of Uncertainty: Literature Review Summary Report

  1. Introduction

The NOAA National Hurricane Center’s (NHC’s) tropical cyclone forecast track graphic, commonly referred to as the cone of uncertainty (COU), may be both the most viewed and the most misinterpreted product within the tropical cyclone product suite. Designed to convey the probable track of a tropical cyclone’s center, the cone’s visual features have come under scrutiny with many studies and reports pointing to an array of misunderstandings that have important consequences on decision-making.

To understand these issues, ERG conducted a thorough literature review of both government and academic works. These include social, behavioral, and economic science studies that examine misunderstandings of the cone, implications on decision-making, and visualization concerns and alternatives.

  2. Background

The COU was introduced to the NHC website in 2002 (E. Rappaport, personal communication, October 24, 2018). The cone (see sample in Figure 1) represents the probable track of the center of a tropical cyclone and is formed by enclosing the area swept out by a set of circles (not shown) along the forecast track (at 12, 24, 36 hours, etc.). The size of each circle is set so that two-thirds of historical official forecast errors over a five-year sample fall within the circle (https://www.nhc.noaa.gov/aboutcone.shtml).
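
As a rough illustration of the construction described above (a sketch only, not NHC's actual algorithm; the error samples and lead times below are hypothetical placeholders), each circle's radius can be taken as the value that two-thirds of the historical track errors fall within:

```python
# Sketch of how the cone's circle radii are sized: at each forecast lead
# time, the radius is set so that two-thirds of historical official track
# errors over a five-year sample fall within the circle. The error values
# below are hypothetical, not real NHC statistics.

def cone_radius(errors_nm):
    """Return the radius (nautical miles) enclosing two-thirds of the
    historical track errors for one lead time."""
    ranked = sorted(errors_nm)
    # index of the value at the two-thirds point of the sample
    k = max(0, int(round(2 / 3 * len(ranked))) - 1)
    return ranked[k]

# Hypothetical samples of track error (nm) keyed by lead time (hours)
historical_errors = {
    12: [20, 25, 30, 35, 40, 45],
    24: [35, 45, 55, 60, 70, 80],
    36: [50, 65, 75, 90, 100, 120],
}

radii = {h: cone_radius(e) for h, e in historical_errors.items()}
# Radii grow with lead time, which is why the cone widens downstream.
```

Because forecast errors grow with lead time, the circles (and hence the cone) widen away from the storm's current position; as five-year average errors have shrunk, so has the cone.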

The visualization has changed in important ways over the years. For example, as forecast track errors have decreased over time, the size of the cone has also decreased. In 2003, a five-day track and cone were added. A distance scale was introduced in 2005 and was replaced in 2011 by a statement that the cone contains the probable path of the center of the storm, does not represent the size of the storm, and that hazardous conditions can occur outside of the cone. In 2007, an option was added to view the cone without the center track line; in 2009, the no-track-line view became the default on the NHC website. Other major changes to the graphic have included adding symbols to indicate forecast intensity for sustained winds less than 39 miles per hour (mph), 39–73 mph, and greater than 73 mph (2004); adding a symbol for sustained winds greater than 110 mph (2009); adding post-tropical and extratropical symbols (also in 2009); adding initial tropical-storm-force and hurricane-force wind fields (2017); and adding potential tropical cyclone symbols (also in 2017).

  3. Methodology

Our literature review primarily focused on studies from 2013 to 2018 to cover research findings on the most recent iterations of the cone (thus containing the most relevant data to inform future cone developments). However, ERG did consult earlier, seminal papers and NWS service assessments to see how frequently the cone is referenced and to assess its performance over time. An accompanying Excel spreadsheet to this report includes full publication information, research topics, sample and method information, and abstracts for the listed studies (as well as related studies that we did not include in the summary report because they are out of date; full copies of most studies are available upon request).

Our literature review search terms included, but were not limited to:

  • (“cone of uncertainty” OR “track forecast cone”) AND (“hurricane” OR “tropical cyclone”)

  • (“cone of uncertainty” OR “track forecast cone”) AND (“misinterpret*”)

  • (“cone of uncertainty” OR “track forecast cone”) AND (“emergency manage*”)

  • (“cone of uncertainty” OR “track forecast cone”) AND (“public”)

  • (“cone of uncertainty” OR “track forecast cone”) AND (“evacuat*”)

  • (“cone of uncertainty” OR “track forecast cone”) AND (“decision*”)

  • (“cone of uncertainty” OR “track forecast cone”) AND (“broadcast*”)

  • (“cone of uncertainty” OR “track forecast cone”) AND (“visualiz*”)


After reviewing our initial list of studies, ERG looked at salient reports that were cross-referenced in studies and added them to our review. From here, we coded the studies by major topic (i.e., misunderstandings, implications on decision-making, and/or visualization). A number of studies covered more than one topic. We then summarized key research points from each study and identified common themes among them. Our findings are described below.

  4. Common Misunderstandings

The literature reveals several types of misunderstandings of the COU graphic, primarily documented among members of the public or nonexperts. These misunderstandings arise both from the way people comprehend the map’s salient visual features (e.g., track line, cone boundary, legend) and from the way they perceive and process this information (Losee et al., 2017). Judgments of probability can be influenced by cognitive biases as well as by factors such as past experience, the severity or frequency of relevant events, and the salience and memorability of those events (Milch et al., 2018; Padilla et al., 2017; Demuth et al., 2016; Lazo et al., 2015; Wu et al., 2014). Common misunderstandings documented in the literature are explained in more detail below.


Misinterpreting the cone as an impact visualization. The COU is widely misunderstood as the storm impact area. A study of the 2004 hurricane season, in which four named storms struck Florida, found that Florida residents and media reporters believed that the COU represented an area of impact rather than a distribution of potential hurricane tracks (Broad et al., 2007). Even as recently as 2016, the NWS service assessment of Hurricane Matthew found that many NWS partners and the public view the COU as an impact area/zone. Part of the issue may be that people often conflate uncertainty about a hurricane’s arrival with ambiguity about the magnitude of the potential impacts (Milch et al., 2018). Additionally, the COU is a binary representation, which may cause people to feel a false sense of security outside of the cone or an exaggerated sense of insecurity inside of it (Liu et al., 2015). In fact, one study found that participants believed the COU communicates certainty instead of uncertainty: the study’s interviewees indicated they believed they will certainly experience impacts if they live within the “cone of death” (Bostrom et al., 2018).

Confounding uncertainty with storm size or intensity. Studies have also found that some viewers of the COU incorrectly believe that it depicts the size or intensity of the hurricane, growing as the storm approaches land (Padilla et al., 2017; Ruginski et al., 2016). A potential source of this misinterpretation is the cone’s border, the most salient feature of the visualization (i.e., the element that attracts attention) (Padilla et al., 2017). The border may contribute to viewers’ belief that the storm is growing in size over time, compared with other kinds of visualizations that lack this salient feature (Padilla et al., 2017; Ruginski et al., 2016).

Misperceptions related to the center track line. Studies that examined earlier iterations of the cone repeatedly emphasize the misperception caused by the center line. The NWS service assessment of Hurricane Charley, which made landfall on the southwest coast of Florida just west of Fort Myers on August 13, 2004, indicated that “Many people focused on the specific forecast track which indicated the projected path of the center of Hurricane Charley making landfall near Tampa Bay instead of the cone of uncertainty” (NOAA, 2006). As part of the service assessment, approximately 100 people in the Fort Myers and Punta Gorda areas were interviewed. Over 90 percent said they heard the hurricane was going to hit Tampa and did not realize they could be impacted until it was too late to prepare (NOAA, 2006).

In Broad et al.’s (2007) study, meteorologists and public officials in Florida criticized the track line for causing confusion and leading residents to believe they were safe, particularly during Hurricane Charley. Several meteorologists indicated that they remove the black line altogether from their broadcasts because it “conveys more certainty than actually exists” (Broad et al., 2007). Although the NHC removed the center line from the default view in 2009, the NWS service assessment of Hurricane Matthew found that “Significant confusion continues regarding the official NHC tropical cyclone track forecast. NWS partners and the public alike continue to focus on the ‘skinny black line’” (NOAA, 2017).

Focusing on the forecast track may also have implications on risk perception and preparedness actions. Sherman-Morris and Del Valle-Martinez (2017) found that people’s attention (measured by web searches and visits to a weather website) was greatest at the location near the endpoint of the five-day track forecast but decreased as the track forecast moved away from the location. If a person perceives the forecast track as shifting away from their location, but they are still in an area of potential impact, they may perceive lower risk or lose interest, which could also decrease their interest in evacuating or taking hurricane preparedness actions (Meyer et al., 2013).

Although the literature has shown ongoing confusion around the center track line, there also is evidence that people do realize that hurricanes might not always follow the forecast track or even fall within the uncertainty cone (Wu et al., 2014). One study even found that the track line might help in preparedness decisions. In an experiment where participants used a ‘‘virtual living room’’ to search for information from simulated television, radio, newspaper, internet, and peers as a hurricane approaches, participants who viewed forecast graphics containing the track line had higher levels of preparation than those who saw only uncertainty cones, and this was true even for people living far from the predicted center path (Meyer et al., 2013). 

Anchoring to storm intensity. Wu et al. (2014) also found that participants’ judgments about the likelihood of a hurricane striking an area were affected by hurricane intensity expressed as the Saffir-Simpson wind categories. They thought that a storm was more likely to strike along the given forecast track and in all sectors on a map (even those not in the COU) with a category 4 storm versus a category 1 storm. Thus, people thought the storm was more likely to strike everywhere when the forecast was for a more intense storm.

Misinterpreting the near-term forecast track. At times, the storm’s current movement text description at the bottom of the COU can differ significantly from the near-term forecast track that is depicted visually for the following hours, as was the case with Hurricane Joaquin in 2015 (NTSB, 2017). This discrepancy occurs because when the NHC issues intermediate public advisories (providing only updated current storm information with no new forecast), the storm position can unexpectedly deviate from the forecast the NHC issued three hours earlier. This visual discrepancy may confuse users “who look at the product to follow a storm’s near-term progress” (NTSB, 2017).

  5. Implications on Decision-Making

Members of the public, public officials, broadcasters, and other impacted groups make a variety of protective and preparatory decisions leading up to and during a storm. More information on what role the COU plays in the decisions of these specific groups is provided below.

Members of the Public

Some of the most common short-term decisions members of the public make in preparing for an imminent storm include protecting homes (mostly from wind impacts; e.g., shuttering windows), buying emergency supplies and backup generators, securing pets and loose objects (e.g., on porch/outside), making evacuation plans, and actually evacuating (Meyer et al., 2014; Bostrom et al., 2018). Long-term protective actions (done well in advance of a storm) include installing features to make homes more resilient to hurricane damage (e.g., installing wind-resistant glass windows and doors) or purchasing flood insurance (Meyer et al., 2014; Bostrom et al., 2018).

The literature shows that the public’s risk perception and decision framework for taking the aforementioned protective actions is holistic, comprising information from a multitude of sources, including NOAA watches and warnings, the NHC website, forecast tracks, text discussions, and probabilistic wind field and storm surge maps (Meyer et al., 2013; Milch et al., 2018). Other information sources include private weather products (derived from NOAA’s offerings), public officials, television and radio broadcasts, the internet and social media, past experience with hurricane impacts and evacuation, and cultural worldviews (Meyer et al., 2013, 2014; Bostrom et al., 2016; Morss et al., 2016).

Although members of the public may consult these myriad sources, the literature repeatedly suggests that the COU is the most well-known and pervasive (and sometimes the only) NOAA product they access. For example, in Bostrom et al.’s (2018) study of how coastal residents in hurricane-prone Miami-Dade County, Florida, interact with hurricane forecasts, no members of the public mentioned viewing any other NHC product besides the cone—supporting earlier research that suggests it is one of the primary products the public sees when making hurricane response decisions and determining personal risk (Sherman-Morris and Del Valle-Martinez, 2017; Cox et al., 2013; Broad et al., 2007).

However, despite being the most familiar NOAA product to the public, the literature has found that the cone does not always positively impact the public’s decision to prepare for a storm or evacuate. For example, in a study of how Mid-Atlantic residents perceive hurricane hazards, Saunders and Senkbeil (2017) showed participants eight scenarios that featured the cone. Although storm track influenced their level of concern about personal risk (particularly in coastal areas), it did not have a major impact on their decision to evacuate, with the majority of participants indicating they would not evacuate for any of the eight scenarios. Sixty-one percent of the study participants also stated that they “never or only occasionally use hurricane graphics” in their evacuation decision-making (Saunders and Senkbeil, 2017). Furthermore, in the same Bostrom et al. (2018) study in which the cone was the only NHC product mentioned by the public, most interviewees chose not to evacuate, deciding instead to “ride out” hurricanes at home.

The literature provides two prevailing reasons for this inaction. First, the underlying concepts of probability and uncertainty confuse the public and can create ambiguity that causes them to misinterpret messaging. For example, they may perceive high forecast uncertainty regarding a storm’s location as meaning it has a low probability of occurring (despite potentially high impacts), thus decreasing their motivation to prepare and their likelihood to evacuate (Hogan Carr et al., 2016; Martinez, 2017). As mentioned earlier, members of the public may also place undue confidence in the center track line (Broad et al., 2007; Cox et al., 2013), potentially resulting in people who are on the outer edges of the COU being less likely to take preparedness or evacuation actions (Pugh et al., 2017).


The second reason is that the cone does not provide the types of information (storm surge, size, impacts, calls-to-action) that are more likely to influence the public’s decision-making, particularly in regard to evacuation (Bostrom et al., 2018; Milch et al., 2018; Saunders and Senkbeil, 2017). If the COU continues to be the primary or only NOAA hurricane product the public sees and uses, its lack of storm surge information may be particularly problematic, as surge is often the greatest threat to life and property during a hurricane. The ubiquity of the cone, but perceived absence of other types of key information, highlights a broader challenge for NOAA: the NHC and NWS weather forecast offices (WFOs) do create products that provide these other decision-making information needs, but members of the public may not know about them or be able to understand them.

For example, Bostrom et al. (2016) estimated that as Hurricane Katrina progressed from Louisiana to Mississippi, the NHC issued “35 weather advisories, including five hurricane watches and warnings and over 10 tropical storm watches and warnings.” During Sandy, “the NHC and WFOs issued over 500 different forecast, warning, and advisory products; the NHC alone issued 12 different communication product types and over 300 graphical products.” Yet despite the availability of this NOAA information (which was augmented by an abundance of hurricane information from broadcasters and public officials), members of the public still failed to receive key information about their storm surge risk (Bostrom et al., 2016). Milch et al. (2018) reiterated this point, noting that although the NHC does provide storm surge information via the Potential Storm Surge Flooding Map, it is several clicks away from the COU graphic on the NHC website.

The research indicates that without other hurricane information, the public may inaccurately perceive risks and make the wrong decisions. For example, in a real-time study of risk perception leading up to Hurricanes Isaac and Sandy, 100 percent of surveyed residents indicated they were aware of the approaching storms, yet they had poor mental models of personal impacts, often underestimating surge and overestimating wind (Meyer et al., 2014). These poor mental models often result in only a small fraction of residents taking appropriate actions to protect themselves (Meyer et al., 2014; Milch et al., 2018).

Furthermore, the research repeatedly emphasizes the public’s tendency to focus on wind (Bostrom et al., 2018; Milch et al., 2018; Saunders and Senkbeil, 2017; Meyer et al., 2014). According to Milch et al. (2018), the cone partially reinforces this misperception—perhaps due to its textual description of current wind speeds and graphical depiction of the storm’s initial wind field. In one study with Mid-Atlantic residents, even when shown a multiple-pane graphic showing the forecast track, potential storm surge, and potential damaging winds, participants indicated they were most concerned with damaging winds, followed by falling trees and the size of the storm (Saunders and Senkbeil, 2017). Most participants found wind hazards more threatening than water hazards (Saunders and Senkbeil, 2017).

Public Officials and Broadcasters

Leading up to a hurricane, public officials (e.g., emergency managers, local law enforcement, and government workers) must make many decisions about communicating risks and protective actions to the public. They also take many preparedness actions (such as opening shelters and emergency operations centers, coordinating resources, and making evacuation decisions), while broadcasters primarily focus on interpreting and communicating forecast information to the public. As with the public, the literature suggests that the COU is the most well-known hurricane product to both public officials and broadcasters. One study even suggested that other forecast and warning products are only salient to the NOAA forecasters who produce them (Bostrom et al., 2016).

Broadcasters often convey uncertainty using the cone or their own version of it. However, because the COU confuses the public, some broadcasters also rely on body language and verbal messaging to communicate (un)certainty, while others avoid conveying uncertainty information as much as possible. Public officials must also contend with a great deal of uncertainty information while making protective action and preparedness decisions, but they do not rely on the cone graphic alone. To understand uncertainty, they speak with trusted forecasters at the NHC and WFOs and look at all the information available (Bostrom et al., 2016). Many coastal emergency management divisions also use decision-support tools, such as HURREVAC, to aid their planning and decision-making.

  6. Visualizations

Visualizations have desirable properties that can enhance the understanding of risk. They can reveal data patterns, attract and hold people’s attention, and allow an end-user to process information more effectively than when data are presented alone. Visualizations of uncertainty in weather forecasts are often used by public officials, weather experts, and members of the public who need to make critical decisions, such as those related to evacuations or allocation of emergency resources in advance of a storm.

There are various ways to visualize uncertainty. One approach is to use a summary display, such as the COU. Summary displays are created by plotting statistical parameters (e.g., mean, median, standard deviations) instead of the actual data points. Summary displays are often used for public presentations of data (Pang, 2008) because they are simple to understand and enable viewers to recognize patterns easily (Harrower and Brewer, 2003; Dobson, 1973, 1980).

However, researchers have also documented several drawbacks to summary displays. For example, summary displays require an explanation and legend, placing additional cognitive load on the viewer (Liu et al., 2015). They can also hide important features in the data such as outliers (Whitaker et al., 2013). Additionally, studies have demonstrated that the public, students, and even trained experts can misinterpret summary displays (Savelli and Joslyn, 2013).

As a summary display, the COU does not follow foundational cartographic principles of hierarchy, which require the level of salience to correspond with the level of importance. One of the most salient features of the COU is its border, which has been misinterpreted as a display of size instead of spatial uncertainty (Padilla et al., 2017). As mentioned earlier, studies have found the changing diameter causes novice viewers of the cone to incorrectly believe that it depicts a hurricane growing over time (Cox et al., 2013). Given these drawbacks, researchers suggest not using summary displays for applications in which spatial boundaries of uncertainty can be inherently misunderstood as physical boundaries, as is the case with the COU (Padilla et al., 2017).

Alternative Visualizations

Ensemble displays are another method of visualizing uncertainty. Over the last decade, a wealth of ensemble visualization techniques have been introduced in many scientific domains, including weather prediction (Sanyal et al., 2010), and encouraging results have been reported (Wang et al., 2018). Ensemble displays are created by generating or collecting multiple data values, or “ensemble members,” and then plotting all, or a subset of, the ensemble members on a plane.
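
The construction of an ensemble display's underlying data can be sketched as follows (a hedged illustration only; the simple perturbation model, track points, spread values, and member count are all assumptions for demonstration, not a published method):

```python
import random

# Sketch: building the data behind an ensemble display, assuming a toy
# model in which each member perturbs a central forecast track by random
# offsets that grow with lead time (mimicking increasing uncertainty).
# All numbers are illustrative, not real forecast data.

random.seed(0)

central_track = [(25.0, -80.0), (26.5, -81.0), (28.0, -82.5)]  # (lat, lon)

def make_member(track, spread_deg):
    """One ensemble member: the central track plus offsets that widen
    at each successive forecast step."""
    member = []
    for step, (lat, lon) in enumerate(track):
        s = spread_deg * (step + 1)
        member.append((lat + random.uniform(-s, s),
                       lon + random.uniform(-s, s)))
    return member

ensemble = [make_member(central_track, 0.3) for _ in range(50)]

# A display might plot every member, or a thinned subset to reduce the
# visual crowding discussed below.
subset = ensemble[::5]  # every 5th member
```

Plotting all 50 tracks conveys the spread directly; thinning to a subset trades some fidelity for legibility, which anticipates the crowding drawback discussed below.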

Supporters of ensemble displays cite many benefits to this kind of visualization, including the ability to depict all (or the majority) of the ensemble data, making a representative portion of the data visually available (Liu et al., 2017). Unlike summary displays, ensemble visuals preserve discrete cluster and outlier information that reveal non-normal spatial relationships. Viewers also can, in some cases, accurately derive statistical information depicted by these displays, including probability distributions (Cox et al., 2013). For example, ensemble displays of the COU appear to be less susceptible to the misconception that the cone grows in size or intensity over time (Ruginski et al., 2016).

However, there are also some drawbacks to ensemble displays. The primary issue is visual crowding, which happens when ensemble members are plotted too closely together and cannot be easily differentiated (Liu et al., 2015). In addition, research has shown that ensemble visualizations can provoke their own unintended bias (Padilla et al., 2017). For example, viewers may overweight individual ensemble members when they overlap with a point of interest (Padilla et al., 2017). This could lead to situations where individuals may be more likely to evacuate or take precautionary actions if a hurricane forecast track overlaps with their own town but feel less concerned or do not take action if the track does not (Padilla et al., 2017).

Examples of Alternative Visualizations

One of the earliest attempts at using geospatially displayed path ensembles was an experiment (Cox et al., 2013) that presented an ensemble of continuously updated forecast tracks, demonstrating the range of possible hurricane outcomes. The ensemble display (see Figure 3) was preferred over the COU by all but one of the 24 participants who took part in the experiment (Cox et al., 2013).

Another study (Liu et al., 2015) proposed an approach for generating smooth scalar fields from a predicted storm path ensemble so that a user could examine the predicted state of the storm at any chosen time. The research supported a visualization that allows a user to display the predicted storm position, including its uncertainty, at any time in the forecast (see Figure 4).

Liu et al. (2015) envisioned that the visualization could lead to a set of interactive tools for exploring a hurricane prediction in both time and space. It also has the potential to enable an integrated visualization application of other storm parameters, such as storm speed, bearing, wind speed, and flood risk (Liu et al., 2015). However, the researchers also acknowledged impediments remain to developing this kind of interactive application; in particular, the speed of the current algorithm is too slow for a typical system of 1,000 or more samples (Liu et al., 2015). The visualization also would need to be studied to see how users evaluate hurricane risk using this version versus other alternatives (including the current cone).

A follow-up study by Liu et al. (2017) involved a cognitive experiment of four different ensemble visualizations (see Figure 5). Two of the four displays were static; the other two were animated, showing visualizations that continuously added new ensemble members to the display while fading out those that had been on the screen for a while. The study showed that for all four displays, study participants understood 1) that the area being shown has a chance of being damaged, 2) where the center of the storm is likely to be, and 3) that the forecasters are uncertain of the storm’s location (Liu et al., 2017). Of the four displays, the animated icon visualization (second from right) was most effective in reducing the tendency to confound uncertainty with storm size, thereby improving viewers’ ability to estimate the potential for storm damage (Liu et al., 2017). However, the study revealed other misconceptions with these visuals; for example, many viewers mistakenly saw the animated displays as indicating the passage of time (Liu et al., 2017).

Radford et al. (2013) conducted several related field studies of different visualizations in Pensacola and Jacksonville, Florida, and found a color probability cone (shown as “e” in Figure 6) was preferred by study respondents, who praised the visualization for being simple and communicative. Most respondents said they wanted to be able to glance at the graphic in 20 seconds or less and be able to glean everything they need to understand the map. The study also found that some segments of the population desire to see additional storm attributes, such as hurricane size, color threat levels, and post-landfall hazards (Radford et al., 2013).

  7. Research Limitations

While much work has been done in recent years to understand which visual features of the COU lead to misinterpretations and errors in judgment, researchers also have identified limitations in their work and areas where additional research would be beneficial. For example, many of the studies conducted have been controlled laboratory experiments, often with small samples and students who may have a different decision-making process or lower perception of risk (because they are young, do not own homes, and are not caregivers) (Sherman-Morris and Del Valle-Martinez, 2017). There may also be regional differences in information-seeking and decision-making, and in how these relate to hurricane graphics (Saunders and Senkbeil, 2017). Generalizing results to the larger public could be a useful future direction.

Furthermore, study participants are also often non-experts who are not provided with any information concerning the conventions of the display they may be evaluating or any social information. Meyer et al. (2013) is an exception; in this study, participants experienced the approach of a hurricane in a computer-based environment that allowed them to gather storm information through various media and even hear neighbors’ opinions. In addition to providing a social context, some researchers are also looking to examine the effects of providing additional information about a forecast visualization (such as verbal instructions about the meaning of the cone) on participants’ understanding and damage judgments. On a similar note, participants in many experiments did not have direct experience with hurricanes. Generalizing results to individuals with direct experience of hurricanes and to domain experts is another useful future direction.

Another limitation is that study participants frequently make judgments based on a single piece of information in a controlled environment. In a real-world setting, people would get information from multiple sources—and this information would change as the storm approaches land. Losee et al. (2017) and Meyer et al. (2013) are a few of the researchers who have examined people’s storm judgments in the context of receiving updates about an approaching hurricane.

Additionally, the technology for producing alternative visualizations, such as ensemble displays, is still advancing. Challenges remain in creating uncluttered visualizations that preserve spatial distributions and other information such as storm size and strength (Liu et al., 2017) and in developing interactive visualizations due to the speed of the algorithms used (Liu et al., 2015).


  8. Conclusions and Recommendations

The literature reveals several types of misinterpretations of the COU graphic (see Table 1).

Table 1. Common COU Misinterpretations

Problem: Misinterpreting the cone as the swath of damage from the storm (i.e., an impact visualization).

Implication:

  • Believing a person is “safe” if located outside of the cone, or having an exaggerated sense of not being safe if located inside the cone.

Problem: Misinterpreting the cone as the actual size and/or intensity of the hurricane.

Implication:

  • Believing the hurricane is growing in size or strength as it approaches land.

Problem: Focusing on the forecast track.

Implications:

  • Failing to recognize that landfall/impacts could occur at adjacent locations.

  • Losing interest in a hurricane when the track shifts away from a person’s location.

  • Failing to recognize that a storm’s present track could change in the future.

Problem: Anchoring to storm intensity.

Implication:

  • Perceiving a higher intensity may occur even if the storm intensity is later downgraded.

Problem: Misinterpreting the near-term track.

Implication:

  • If a storm’s current movement (in the text description) differs notably from the near-term forecast track (in the visual), it may confuse users who follow the product for a storm’s near-term progress.

Despite being the best-known hurricane product the public sees before a storm, the COU does not always positively influence decision-making. Instead, the cone may function more as one information source (among many) that elevates people's level of concern but does not ultimately lead them to evacuate or prepare; in some cases, it has even been shown to lower perceptions of personal risk. This is most likely due to misinterpretations of uncertainty (noted in Table 1), as well as the cone's lack of information about localized impacts, which are more likely to influence decision-making. As a result, many researchers have examined how the cone's design as a summary display contributes to this continued misunderstanding, and they have tested alternative designs such as ensemble displays.

Summary and ensemble visualizations have both advantages and drawbacks. While summary visualizations such as the cone are often easier to understand, they can also be misinterpreted, place additional cognitive load on the viewer, and hide important elements in the data. Ensemble displays can reduce some interpretation problems, such as the misconception that the cone represents a storm’s size or intensity, but they can also be visually cluttered and introduce their own biases (such as overweighting individual ensemble members when they overlap with a point of interest). Researchers are exploring ways to avoid potential unintended biases in visualizations, develop intuitive and accurate graphic displays, and integrate additional storm attributes or use techniques such as interactivity and animation to better convey important temporal and spatial tropical cyclone storm information.
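The cone itself is the summary display in this comparison: per NHC's published description, each circle's radius is sized so that two-thirds of official forecast errors over the prior five-year sample fall within it. The sizing rule can be sketched as a simple percentile computation; the function name and error samples below are illustrative only, not NHC's actual code or data.

```python
import math

def cone_radius(errors_nm, coverage=2/3):
    """Smallest radius (nautical miles) enclosing `coverage` of the
    historical track errors, mirroring NHC's two-thirds sizing rule.

    `errors_nm` is a list of historical error distances for one
    forecast hour (synthetic values here, not real NHC statistics).
    """
    ranked = sorted(errors_nm)
    # Index of the smallest error value that covers the requested fraction.
    k = math.ceil(coverage * len(ranked)) - 1
    return ranked[k]

# Illustrative (made-up) error samples by forecast hour.
historical_errors = {
    12: [10, 15, 20, 25, 30, 40],
    24: [20, 30, 40, 50, 60, 80],
    48: [40, 60, 80, 100, 120, 160],
}

radii = {hour: cone_radius(errs) for hour, errs in historical_errors.items()}
```

Because forecast errors grow with lead time, the computed radii grow as well, which is why the cone widens; an ensemble display would instead draw the individual tracks that these error statistics summarize.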

Possible Next Steps

Based on the research conducted to date, the literature suggests several observations and gaps that may be worth considering as NHC weighs possible changes to the COU or the messaging around the graphic. These possible next steps are presented below in no particular order of importance.

Understand the end-users of the COU and what kinds of tasks they complete with the graphic. A key challenge with the COU is that the graphic is widely disseminated and reproduced across many types of media (e.g., television, weather apps, social media) and used by both sophisticated partners and members of the public (Bostrom et al., 2016). Yet these different audiences use the COU in different ways to address different needs. Experts and non-experts also have very different levels of scientific and spatial acuity, training, and experience. The tasks that different users complete also change as a hurricane approaches land. While members of the public may simply be monitoring the situation three to five days out, at the 48-hour and 36-hour timeframes when watches and warnings are in effect, they should be listening to local officials to stay safe. Emergency managers and other local officials, on the other hand, use the COU (along with other information) to make localized planning and preparedness decisions; for this reason, they want time- and location-specific information as far in advance as possible (Liu et al., 2015). A key consideration is whether a single graphic can address the needs and tasks of these different users.

Consider alternative visualizations. The literature repeatedly shows that uncertainty graphics confuse users, with many researchers pointing out drawbacks in summary displays like the cone (Liu et al., 2015; Savelli and Joslyn, 2013; Whitaker et al., 2013). Furthermore, the center line continues to produce an anchoring effect (NOAA, 2017) despite no longer being the default, and research has shown that simple cosmetic changes to the current COU, such as changing the color and opacity to convey uncertainty, cannot address the fundamental misinterpretations (Gedminas, 2011). NOAA should thus consider whether alternative visualizations would better convey uncertainty information.

NOAA can make use of technological advances and the mainstreaming of more visual techniques (e.g., simulations, virtual reality, interactive displays) to develop and test alternative cone visualizations. As mentioned earlier, ensemble displays have had some success in recent studies (Cox et al., 2013; Ruginski et al., 2016; Liu et al., 2015, 2017). NOAA could also explore creating an integrated visualization of multiple storm attributes (possibly animated or with layered levels of information) that may help minimize the negative effects of decision biases while better communicating the time-series aspect of the graphic and the growth in uncertainty at longer lead times. Another approach is to develop a visualization that more readily combines information about a hurricane's multiple hazards. NHC's hazard information is currently displayed on separate maps, and research has shown that people tend to pay the most attention to wind.

Focus more on messaging content. One of the drawbacks of the cone is that it does not provide all of the information that is important to decision-making for different users. If the COU continues to be the only or primary hurricane product the public sees before a storm, NOAA should consider adding more information to the graphic—especially localized messaging about potential impacts—as this would help the public more easily relate the forecast to their personal risk and increase their protective decision-making (Meyer et al., 2014; Hogan Carr et al., 2016; Milch et al., 2018).

However, we caution against attempting to create a cone that visually presents all of this information in one static graphic, which could overwhelm the end user and cause them to misinterpret or miss key information (Milch et al., 2018). Instead, an interactive or dynamic visualization would make it possible to customize the COU so that users could obtain localized impact information, as discussed above. Also, impact messaging alone may evoke fear and lead to defensive thinking instead of protective action (Morss et al., 2016). Adding a call-to-action statement that encourages members of the public to listen to public officials (e.g., for evacuation information) may increase protective decision-making and their perception of strike probability (Hogan Carr et al., 2016; Morss et al., 2016; Wu et al., 2015b).

Alternatively, NOAA could consider redesigning the NHC website so that users can more easily access the other products that fulfill their decision-making needs. For example, NOAA might consider a website that allows users to input their zip code and obtain the forecast products relevant to them—with graphics depicting the hazards of most concern for the current storm, along with alerting language letting them know they are in a vulnerable area and should listen to public officials (Milch et al., 2018). NOAA should thoroughly test and obtain user feedback on any visual or messaging changes it considers making to the cone or its website.

Determine whether the day-6 to day-7 timeframe is useful to stakeholders. As its forecast accuracy improves, the NHC is considering extending the cone forecast track out to 6 or 7 days. However, Hogan Carr et al. (2016) found that the public is most interested in cone track information in the T-5 to T-3 timeframe (especially if it encompasses their area), because T-7 is too far out for people to relate the cone to their personal risk. Before extending the cone forecast to 7 days, NOAA should conduct additional research to determine how useful this timeframe is to the public, as well as to more sophisticated users, such as core partners or workers in vulnerable industries, who are more likely to benefit from information further out.

Determine whether removing the center line altogether helps or hinders decision-making. The literature disagrees on how much the center line affects risk perceptions and decision-making. Some research suggests that including the line actually increases preparation and protective behavior compared with seeing a cone alone (Meyer et al., 2013; Morss et al., 2016). Other work finds “no appreciable differences” in strike-probability judgments when viewing a cone and track, a cone only, or a track only, and that people prefer having both types of information (Wu et al., 2014, 2015a).


Research the use of NOAA's larger suite of hurricane products. The literature largely lacks data about how stakeholders use the cone in conjunction with other NOAA hurricane communication products. One study did suggest that products other than the cone appear to be useful only to the NOAA forecasters who produce them (Bostrom et al., 2016). Furthermore, the literature disagrees on how important hurricane graphics in general are to public decision-making: Radford et al.'s (2013) Florida study indicated that most residents used hurricane graphics, while Saunders and Senkbeil (2017) found the opposite in the Mid-Atlantic. Some partners also use decision-support tools, such as HURREVAC (supported by NOAA and other federal agencies), that ingest information from multiple sources and tailor it to end users' needs; ensuring that NOAA provides the information these platforms require is therefore also critical. To avoid spending resources on products and graphics that provide little or no value to stakeholders, NOAA could research how important its larger suite of hurricane products is to the public and core partners.




  1. References

Bostrom, A., Morss, R.E., Lazo, J.K., Demuth, J., and Lazrus, H. (2018). Eyeing the Storm: How Residents of Coastal Florida See Hurricane Forecasts and Warnings. International Journal of Disaster Risk Reduction. https://www.sciencedirect.com/science/article/pii/S221242091830219X.

Bostrom, A., Morss, R.E., Lazo, J.K., Demuth, J., Lazrus, H., and Hudson, R. (2016). A Mental Models Study of Hurricane Forecast and Warning Production, Communication, and Decision-Making. Weather, Climate, and Society, 8: 111–129. https://journals.ametsoc.org/doi/pdf/10.1175/WCAS-D-15-0033.1.

Broad, K., Leiserowitz, A., Weinkle, J., and Steketee, M. (2007). Misinterpretations of the “Cone of Uncertainty” in Florida during the 2004 Hurricane Season. Bulletin of the American Meteorological Society, 88(5): 651–667. https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-88-5-651

Cox, J., House, D., and Lindell, M. (2013). Visualizing Uncertainty in Predicted Hurricane Tracks. International Journal for Uncertainty Quantification, 3(2): 143–156. http://www.dl.begellhouse.com/download/article/7d41c3a64ba14ca8/IJUQ-3966.pdf

Demuth, J.L., Morss, R.E., Lazo, J.K., and Trumbo, C. (2016). The Effects of Past Hurricane Experiences on Evacuation Intentions through Risk Perception and Efficacy Beliefs: A Mediation Analysis. Weather, Climate and Society, 8: 327–344. https://journals.ametsoc.org/doi/full/10.1175/WCAS-D-15-0074.1

Dobson, M.W. (1973). Choropleth Maps Without Class Intervals?: A Comment. Geographical Analysis, 5(4): 358–360. https://doi.org/10.1111/j.1538-4632.1973.tb00498.x

Dobson, M.W. (1980). Perception of Continuously Shaded Maps. Annals of the Association of American Geographers, 70(1): 106–107. https://doi.org/10.1111/j.1467-8306.1980.tb01301.x

Gedminas, L. (2011). Evaluating hurricane advisories using eye-tracking and biometric data. Master’s thesis, Faculty Dept. Geography, East Carolina University, Greenville, NC. http://thescholarship.ecu.edu/bitstream/handle/10342/3644/Gedminas_ecu_0600M_10461.pdf?sequence=1&isAllowed=y

Harrower, M., and Brewer, C.A. (2003). ColorBrewer.org: An Online Tool for Selecting Colour Schemes for Maps. The Cartographic Journal, 40(1): 27–37. https://doi.org/10.1179/000870403235002042

Hogan Carr, R., Montz, B., Maxfield, K., Hoekstra, S., Semmens, K., and Goldman, E. (2016). Effectively Communicating Risk and Uncertainty to the Public: Assessing the National Weather Service’s Flood Forecast and Warning Tools. Bulletin of the American Meteorological Society, 97: 1649–1665. https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-14-00248.1

Joslyn, S., Savelli, S., and Nadav-Greenberg, L. (2011). Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects. Journal of Experimental Psychology Applied, 17(4): 342–53. http://dx.doi.org/10.1037/a0025901

Lazo, J.K., Bostrom, A., Morss, R.E., Demuth, J.L., and Lazrus, H. (2015). Factors Affecting Hurricane Evacuation Intentions. Risk Analysis, 35(10): 1837–1857, https://doi.org/10.1111/risa.12407

Liu, L., Boone, A.P., Ruginski, I.T., Padilla, L., Hegarty, M., Creem-Regehr, S.H., Thompson, W.B., Yuksel, C., and House, D.H. (2017). Uncertainty Visualization by Representative Sampling from Prediction Ensembles. IEEE Transactions on Visualization and Computer Graphics, 23(9): 2165–2178. https://doi.org/10.1109/TVCG.2016.2607204

Liu, L.Y., Mirzargar, M., Kirby, R.M., Whitaker, R.T., and House, D.H. (2015). Visualizing Time-Specific Hurricane Predictions, with Uncertainty, from Storm Path Ensembles. Computer Graphics Forum, 34(3): 371–380. https://doi.org/10.1111/cgf.12649

Losee, J.E., Naufel, K.Z., Locker, L., and Webster, G.D. (2017). Weather Warning Uncertainty: High Severity Influences Judgment Bias. Weather, Climate, and Society, 9: 441–454. https://doi.org/10.1175/WCAS-D-16-0071.1

Martinez, A. (2017). How Quickly Can We Adapt to Change? An Assessment of Hurricane Damage Mitigation Efforts Using Forecast Uncertainty. University of Oxford Department of Economics Discussion Paper Series. https://www.economics.ox.ac.uk/materials/papers/15156/831-martinez.pdf

Meyer, R., Baker, J., Broad, K., Czajkowski, J., and Orlove, B. (2014). The Dynamics of Hurricane Risk Perception: Real-Time Evidence from the 2012 Atlantic Hurricane Season. Bulletin of the American Meteorological Society, 95: 1389–1404. https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-12-00218.1

Meyer, R., Broad, K., Orlove, B., and Petrovic, N. (2013). Dynamic Simulation as an Approach to Understanding Hurricane Risk Response: Insights from the Stormview Lab. Risk Analysis, 33(8): 1532–1552. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1539-6924.2012.01935.x

Milch, K., Broad, K., Orlove, B., and Meyer, R. (2018). Decision Science Perspectives on Hurricane Vulnerability: Evidence from the 2010–2012 Atlantic Hurricane Seasons. Atmosphere, 9(1): [32]. http://www.mdpi.com/2073-4433/9/1/32

Morss, R.E., Demuth, J., Lazo, J.K., Dickinson, K., Lazrus, H., and Morrow, B.H. (2016). Understanding Public Hurricane Evacuation Decisions and Responses to Forecast and Warning Messages. Weather Forecasting, 31: 395–417. https://journals.ametsoc.org/doi/pdf/10.1175/WAF-D-15-0066.1

NOAA. (2006). Service Assessment: Hurricane Charley, August 9–15, 2004. National Oceanic and Atmospheric Administration, National Weather Service. Silver Spring, MD. https://www.weather.gov/media/publications/assessments/Charley06.pdf

NOAA. (2017). Service Assessment: October 2016 Hurricane Matthew. National Oceanic and Atmospheric Administration, National Weather Service. Silver Spring, MD. https://www.weather.gov/media/publications/assessments/HurricaneMatthew8-17.pdf

NTSB. (2017). Safety Recommendation Report: Tropical Cyclone Information for Mariners (accident number DCA16MM001). National Transportation Safety Board. NTSB/MSR-17/02. https://www.ntsb.gov/investigations/AccidentReports/Reports/MSR1702.pdf

Padilla, L.M., Ruginski, I.T., and Creem-Regehr, S.H. (2017). Effects of Ensemble and Summary Displays on Interpretations of Geospatial Uncertainty Data. Cognitive Research: Principles and Implications, 2(1): 40. https://link.springer.com/content/pdf/10.1186%2Fs41235-017-0076-1.pdf

Pang, A. (2008). Visualizing Uncertainty in Natural Hazards. In: Risk Assessment, Modeling and Decision Support (pp. 261–294). New York: Springer. https://link.springer.com/chapter/10.1007/978-3-540-71158-2_12

Pugh, A.J., Wickens, C.D., Herdener, N., Clegg, B.A., Smith, C.A.P. (2017). Effect of Visualization on Spatial Trajectory Prediction under Uncertainty. Proceedings of the Human Factors and Ergonomics Society Annual Meeting: 61(1): 297–301. https://doi.org/10.1177/1541931213601555

Radford, L., Senkbeil, J.S., and Rockman, M. (2013). Suggestions for Alternative Tropical Cyclone Warning Graphics in the USA. Disaster Prevention and Management: An International Journal, 22(3): 192–209. https://www.emeraldinsight.com/doi/abs/10.1108/DPM-06-2012-0064

Ruginski, I.T., Boone, A.P., Padilla, L.M., Liu, L., Heydari, N., Kramer, H.S., Hegarty, M., Thompson, W.B., House, D.H., and Creem-Regehr, S.H. (2016). Non-Expert Interpretations of Hurricane Forecast Uncertainty Visualizations. Spatial Cognition & Computation, 16(2): 154–172. https://www.tandfonline.com/doi/full/10.1080/13875868.2015.1137577

Sanyal, J., Zhang, S., Dyer, J., Mercer, A., Amburn, P., and Moorhead, R. (2010). Noodles: A Tool for Visualization of Numerical Weather Model Ensemble Uncertainty. IEEE Transactions on Visualization and Computer Graphics, 16: 1421–30. https://doi.org/10.1109/TVCG.2010.181

Saunders, M.E., and Senkbeil, J.C. (2017). Perceptions of Hurricane Hazards in the Mid‐Atlantic Region. Meteorological Applications, 24(1): 120–134. https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/met.1611

Savelli, S., and Joslyn, S. (2013). The Advantages of Predictive Interval Forecasts for Non‐Expert Users and the Impact of Visualizations. Applied Cognitive Psychology, 27(4): 527–541. https://doi.org/10.1002/acp.2932  

Sherman-Morris, K., and Del Valle-Martinez, I. (2017). Optimistic Bias and the Consistency of Hurricane Track Forecasts. Natural Hazards, 88(3): 1523–1543. https://doi.org/10.1007/s11069-017-2931-2

Tversky, A., and Kahneman, D. (1974). Judgment Under Uncertainty: Heuristics and Biases. Science, 185: 1124–1130. https://www.jstor.org/stable/1738360?seq=1#page_scan_tab_contents

Wang, J., Hazarika, S., Li, C., and Shen, H.-W. (2018). Visualization and Visual Analysis of Ensemble Data: A Survey. IEEE Transactions on Visualization and Computer Graphics, pp. 1–1. https://doi.org/10.1109/TVCG.2018.2853721

Whitaker, R.T., Mirzargar, M., and Kirby, R.M. (2013). Contour Boxplots: A Method for Characterizing Uncertainty in Feature Sets from Simulation Ensembles. IEEE Transactions on Visualization and Computer Graphics, 19(12): 2713–2722. https://doi.org/10.1109/TVCG.2013.143

Wu, H.-C., Lindell, M.K., and Prater, C.S. (2015a). Process Tracing Analysis of Hurricane Information Displays. Risk Analysis, 35(12): 2202–2220. https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.12423

Wu, H.-C., Lindell, M.K., and Prater, C.S. (2015b). Strike Probability Judgments and Protective Action Recommendations in a Dynamic Hurricane Tracking Task. Natural Hazards, 79(1): 355–380. https://link.springer.com/content/pdf/10.1007%2Fs11069-015-1846-z.pdf

Wu, H.-C., Lindell, M.K., Prater, C.S., and Samuelson, C.D. (2014). Effects of Track and Threat Information on Judgments of Hurricane Strike Probability. Risk Analysis, 34(6): 1025–1039. https://onlinelibrary.wiley.com/doi/epdf/10.1111/risa.12128



1 The anchoring effect refers to a failure to adequately adjust an estimate away from an initial piece of quantitative information (Tversky and Kahneman, 1974). Research on anchoring and weather judgments shows that people exhibit an anchoring-like bias when processing information about severe weather, but only under certain circumstances (Joslyn et al., 2011).
