Decision analysis using decision trees is a tool that can help individuals or organizations make decisions by mapping out various possible outcomes and their probabilities based on information they have collected. It is most commonly accomplished using a graphical representation of a decision problem that allows the decision-maker to visualize the potential outcomes of their choices.
A decision tree consists of nodes and branches that represent different decision points and possible outcomes. Each branch represents a different choice or option, and the nodes represent the points at which the decision-maker must choose between options. At the end of each branch, there is a terminal node representing the final outcome or result – also sometimes called the payoff.
To construct a decision tree, the decision-maker must first identify the decision to be made and the possible options. They must then identify the possible outcomes and the probabilities associated with each outcome. This information is used to calculate the expected value of each option, which is the sum of the products of the probability and outcome value for each possible outcome.
Once the decision tree is constructed, the decision-maker can use it to determine the best course of action. They can do this by working backwards from the terminal nodes, calculating the expected value of each node and selecting the option with the highest expected value.
Decision analysis using decision trees can be used in a variety of contexts, such as business, finance, healthcare, and engineering. It is a powerful tool that can help decision-makers make informed choices by considering all possible outcomes and their probabilities. Decision analysis can also be programmed, enabling automated and even autonomous decision making.
As an example, consider the role of a program manager tasked with selecting the optimal project from a set of potential options. Each project presents unique risks, costs, and potential benefits. Decision trees assist in mapping out these options in a visual format, integrating cost-benefit analyses, and applying probabilities to different outcomes, thereby facilitating an informed and quantitative decision-making process. Likewise, software developers, business analysts, and professionals in other fields often use decision trees to evaluate alternative actions and their possible impacts and outcomes.
Probability is essential for decision making using decision trees because it is necessary to calculate the expected value of each possible outcome and decision path. The expected value is the sum of the products of the probability and the outcome value for each possible outcome.
In a decision tree, each branch represents a different option or decision path based on some probable event. Each option has a set of possible outcomes, and the probability of each outcome occurring may be different for each option. By assigning probabilities to each possible outcome, decision-makers can estimate the expected value of each option and make an informed choice.
Probability is also important in decision trees because it allows decision-makers to quantify the uncertainty associated with each possible outcome. By assigning probabilities to each outcome, decision-makers can estimate the likelihood of each outcome occurring and make decisions based on this information.
In addition, probability can help decision-makers identify the best course of action when faced with complex decision problems. By evaluating the expected values of each option, decision-makers can compare the potential outcomes and choose the option with the highest expected value.
Overall, probability is a critical component of decision trees as it allows decision-makers to consider all possible outcomes and their probabilities when making important decisions.
Probability is a measure of the likelihood that an event will occur. It is a number between 0 and 1, with 0 representing an impossible event and 1 representing a certain event. Probability is used in many fields, including information science, machine learning, artificial intelligence, statistics, physics, engineering, and business, among others.
The calculation of probability depends on the situation and the type of event being considered. There are three common ways to calculate or assess the probability of an event:
Classical or Mathematical Probability: In this approach, the probability of an event is calculated by dividing the number of favorable outcomes by the total number of possible outcomes. This is known as the classical, mathematical or theoretical probability. For example, suppose you are rolling a fair six-sided die. The probability of rolling a 1 is 1/6, because there is only one favorable outcome (rolling a 1) and six possible outcomes (rolling a 1, 2, 3, 4, 5, or 6).
Subjective Probability: Probability can also be calculated using subjective probability, which is based on personal beliefs or judgments. This is often used when objective data is not available or when making predictions about uncertain events.
Empirical Probability: Another way to calculate probability is through empirical or experimental probability. This involves conducting experiments or observations and recording the number of times an event occurs. The probability of an event is then calculated as the number of times the event occurs divided by the total number of trials. For example, if you roll a die 100 times and get a 1 on 20 of those rolls, the experimental probability of rolling a 1 is 20/100 or 0.2.
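As a quick illustration of the difference between the classical and empirical approaches, the sketch below simulates die rolls in Python; the trial count and seed are arbitrary choices, not values from the text.

```python
import random

random.seed(7)  # arbitrary seed, just to make the run reproducible

# Classical probability: one favorable outcome out of six possible.
classical = 1 / 6

# Empirical probability: count how often a 1 actually comes up.
trials = 10_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 1)
empirical = hits / trials  # converges toward 1/6 as trials grow
```

With enough trials, the empirical estimate lands close to the classical value of about 0.167, which is the law-of-large-numbers intuition behind using observed frequencies as probabilities.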
Overall, probability is a fundamental concept that can be employed to quantify the likelihood of events. The calculation of probability depends on the situation and the type of event being considered, and can be determined using different methods including classical, empirical, and subjective probability.
Situation I: No Information
What is the probability that an activity in a process is carried out incorrectly by Patrick? If we do not have any prior information about how often Patrick makes a mistake and we do not know Patrick, then we need to assume equal likelihood and use the classical approach: with two equally likely outcomes (mistake or no mistake), \(p = 1/2 = 0.5\).
Situation II: Subjective Knowledge
For the same situation above, we might believe that it is more likely than not that Patrick makes mistakes, so we might (subjectively) estimate the probability of him carrying out an activity or task incorrectly at a value above 0.5; perhaps we “guess” \(p = 0.7\), i.e., we set the likelihood at 70%. Of course, this is a personal belief, but it might be more accurate and realistic than the probability obtained from the classical approach.
Situation III: Prior Information
Let’s say that, for the past six months, we have kept a log of “mistakes” and “accidents” and found that Patrick made 21 mistakes on a particular task while carrying out that task 762 times. The ratio of mistakes to total tasks is the frequency of mistakes and can be interpreted as the probability of making a mistake. In this situation, the empirical probability, i.e., one based on observation, is \(p = \frac{21}{762} \approx 0.0276\), or about 2.8%.
A decision tree is a graphical representation of decision-making processes where choices branch out into different outcomes, each associated with certain probabilities and costs or benefits. It is a structured approach to decision-making that helps to clarify the relationships between different decisions and their possible consequences.
The decision tree begins with a root node where the primary decision is made. From this node, branches extend to either decision nodes or chance nodes, depending on whether the path involves a subsequent decision or a chance outcome. Each branch stemming from a chance node will lead to new decision nodes or directly to endpoints, depending on the complexity of the decision scenario.
Decision trees are generally drawn as a diagram with the root on the left and the payoff nodes on the right, and the decision paths flowing from left to right.
The diagram below illustrates a typical decision tree layout. It shows the decision whether to park in the parking garage at $35 for the day or park on the street at no cost but risk getting a parking violation, which would cost $50. The decision maker does not know the probability of getting a ticket and has no information to make an estimate, so the chance of getting a ticket or not getting one is assumed equal, and thus the probability of each event (getting a ticket versus not getting a ticket) is 0.5. The “rational” decision based on maximizing economic payoff (in this case spending the least amount of money) is to park on the street. In a later section, we will explain how to arrive at this decision.
As the tree expands, it illustrates all possible routes a decision could take, along with the corresponding outcomes and their likelihoods. This branching structure allows decision-makers to visualize multiple pathways and outcomes, facilitating a comprehensive analysis of potential decisions.
Sequential Decisions: In more complex scenarios, a decision tree can represent multiple stages of decisions, with each stage dependent on the outcomes of previous stages. This is typical in project management and strategic business decisions where initial choices influence future options.
Simultaneous Decisions: Sometimes, decision trees need to represent decisions that are not sequential but rather concurrent, affecting each other. This advanced concept may be introduced through scenarios involving negotiations or competitive strategies where multiple parties make decisions at the same time.
Constructing a decision tree involves several steps, starting from defining the decision problem to analyzing the potential outcomes. This section will guide students through the process of building a decision tree from the ground up, emphasizing the sequential and logical approach needed to address complex decision-making scenarios.
The first step in constructing a decision tree is clearly defining the decision problem. This involves understanding the scope of the decision, the objectives to be achieved, and the constraints involved. It is crucial to articulate what decision needs to be made and why it is important. For example, a company might need to decide whether to launch a new product line, considering market potential and production costs.
Once the problem is defined, the next step is to list all possible decision alternatives. These are the different options available to the decision-maker. For the new product launch example, the alternatives might include: - Proceeding with the launch of the new product. - Enhancing an existing product instead of launching a new one. - Postponing the decision until more market data is available.
For each decision alternative, identify possible outcomes. Outcomes should include all foreseeable results of each decision, which could range from highly successful to moderate to failure, depending on external factors such as market acceptance or production challenges.
Each outcome should have a probability assigned to it, which quantifies the likelihood of each outcome occurring. These probabilities are based on available data, market research, historical data, expert judgment, or statistical models. For instance, the success of the new product might have a 60% probability based on market trends and consumer research.
Assign a payoff value to each endpoint in the decision tree. Payoffs represent the results of each outcome, which can be profits, costs saved, or other relevant metrics. Calculating these values involves estimating the financial or strategic benefits or losses associated with each outcome, or some other quantifiable utility. For example, a successful product launch might result in certain profits whose value can be estimated, whereas failure might result in losses due to sunk costs.
Begin constructing the decision tree by drawing the root node that represents the initial decision to be made. From this node, draw branches for each decision alternative, leading to further branches for chance outcomes based on the assigned probabilities. Continue expanding the tree until all decision paths are outlined, ending in endpoints that display the computed payoff values.
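One lightweight way to make these construction steps concrete is to encode the tree as a plain data structure. The sketch below uses nested Python dictionaries; the node names, probabilities, and payoffs are illustrative assumptions, not values from the text.

```python
# A decision tree as nested dictionaries. Decision nodes map option
# names to subtrees; chance nodes hold (probability, subtree) pairs;
# a bare number is an endpoint payoff. All figures are hypothetical.
launch_tree = {
    "type": "decision",
    "branches": {
        "launch new product": {
            "type": "chance",
            "branches": [
                (0.6, 120_000),  # high market acceptance
                (0.3, 30_000),   # moderate acceptance
                (0.1, -40_000),  # failure: sunk development costs
            ],
        },
        "do nothing": 0,  # doing nothing is always an option
    },
}

# Sanity check from the text: branch probabilities must add up to 1.
chance = launch_tree["branches"]["launch new product"]
assert abs(sum(p for p, _ in chance["branches"]) - 1.0) < 1e-9
```

An encoding like this keeps the drawing and the calculation in sync: the same structure that would be drawn left to right can be evaluated programmatically.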
Consider the decision tree for a company deciding whether to launch a new product. The root node presents the initial decision: launch or not. If the decision is to launch, a chance node follows, estimating the market’s reaction:
Each of these branches leads to an endpoint that shows the payoff value, computed based on projected sales and costs. The diagram below shows how this can be visualized as a decision tree, with some estimates for the payoffs. The estimates might come from customer surveys, Delphi sessions with sales executives, or from customer interviews. If the product has a multi-year development cycle, then the expected revenue per year and the costs per year must be added up, possibly converting future values to their present value. The probabilities are estimated using the same techniques. Remember that the probabilities must add up to 1; if the probabilities cannot be estimated, then equal likelihood must be assumed.
The “do-nothing” option, or decision path, is part of every decision tree, as doing nothing is always an option.
Each “event node” contains the expected value (EV) calculated from its branches based on their probabilities. In general, the EV is the sum of the payoffs multiplied by their respective probabilities, so the EV for an event node with \(n\) branches, where branch \(i\) has probability \(p_i\) and payoff \(P_i\), is:
\(\text{EV}=\sum_{i=1}^{n}(p_i \times P_i)\)
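As a sketch, this formula can be written directly in Python; the event node below, with its three probability/payoff pairs, is a made-up example rather than one from the text.

```python
def expected_value(branches):
    """EV of an event node: sum of probability * payoff over its branches.

    `branches` is a list of (probability, payoff) pairs.
    """
    probs = [p for p, _ in branches]
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("branch probabilities must add up to 1")
    return sum(p * payoff for p, payoff in branches)

# Hypothetical event node with three outcome branches:
ev = expected_value([(0.6, 120_000), (0.3, 30_000), (0.1, -40_000)])
# 0.6*120000 + 0.3*30000 + 0.1*(-40000) = 77,000 (up to rounding)
```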
Once the decision tree is fully constructed, review it for completeness and accuracy. Ensure all possible decisions and outcomes are represented, and that the probabilities and payoff values are based on the most reliable information available. This review should involve seeking feedback from stakeholders or subject matter experts.
After constructing a decision tree, the next critical step is to calculate the Expected Values (EV) for the various decision pathways. Expected Value is a fundamental concept in decision analysis, representing the average outcome of a decision if it could be repeated multiple times under the same conditions. Calculating EV helps decision-makers evaluate the potential effectiveness of different alternatives by quantifying the probable outcomes.
Expected Value (EV) is calculated by weighing each possible outcome of a decision by the probability of its occurrence and summing these values. It effectively balances the benefits and risks associated with each decision alternative, providing a single numeric value that represents the “average” result of the decision.
The general formula for Expected Value at any decision or chance node is:
\[ \text{EV} = (p_1 \times P_1) + (p_2 \times P_2) + \ldots + (p_n \times P_n) = \sum_{i=1}^{n}(p_i \times P_i) \]
Where: - \(p_i\) = Probability of outcome i occurring - \(P_i\) = Payoff value of outcome i - \(n\) = Total number of possible outcomes
Step 1: Identify Payoff Values and Probabilities
From the decision tree, identify the payoff values and their respective probabilities for each outcome from a chance node.
Step 2: Calculate EV for Each Node
Using the formula, calculate the Expected Value for each node by multiplying each outcome’s payoff by its probability and summing these products. This calculation should be done starting from the endpoints and moving towards the root of the tree, a method known as “rollback.”
Step 3: Rollback the Decision Tree
Continue calculating the EV for each preceding node, using the EVs from the forward nodes as the payoff values in your calculations. This process will eventually lead you to calculate the EV for the decisions at the root node.
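The rollback described in these steps is naturally recursive: an endpoint is worth its payoff, a chance node is worth the probability-weighted sum of its branch values, and a decision node is worth its best branch. A minimal sketch follows; the tree encoding and all of its numbers are assumptions for illustration.

```python
def rollback(node):
    """Value a decision tree by rolling back from the endpoints.

    Endpoints are plain numbers; chance nodes average their branches
    by probability; decision nodes take the branch with the highest EV.
    """
    if isinstance(node, (int, float)):  # endpoint: its payoff
        return node
    if node["type"] == "chance":        # EV over (probability, subtree)
        return sum(p * rollback(sub) for p, sub in node["branches"])
    if node["type"] == "decision":      # choose the best option
        return max(rollback(sub) for sub in node["branches"].values())
    raise ValueError(f"unknown node type: {node['type']!r}")

# Hypothetical root decision with two risky options and "do nothing":
tree = {
    "type": "decision",
    "branches": {
        "invest in project": {"type": "chance",
                              "branches": [(0.7, 100_000), (0.3, -50_000)]},
        "smaller project":   {"type": "chance",
                              "branches": [(0.5, 80_000), (0.5, -10_000)]},
        "do nothing": 0,
    },
}
best = rollback(tree)  # 0.7*100000 + 0.3*(-50000) = 55,000 beats the rest
```

Because the recursion bottoms out at the endpoints and works back toward the root, it mirrors the manual rollback procedure exactly.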
Example
Suppose a company must decide whether to invest in a new project. The decision tree shows two outcomes: success and failure. Success has a 70% probability with a payoff of 100,000, and failure has a 30% probability with a payoff of -50,000. The EV for this decision would be:
\[ \text{EV} = (0.7 \times 100000) + (0.3 \times -50000) = 70000 - 15000 = 55000 \]
This EV tells us that, on average, the company can expect to gain 55,000 from the project.
A helpful way to visualize these calculations is through annotated decision trees where each node’s EV is clearly labeled. This aids in understanding how each decision and its possible outcomes contribute to the overall expected utility of the decision process.
Once all EVs are calculated, the decision alternative with the highest EV at the root node is typically considered the optimal choice, assuming the goal is to maximize returns or minimize losses. This decision will have taken into account all the probabilities and payoffs of the subsequent decisions and outcomes. The calculation of Expected Values is critical in rational decision-making as it provides a quantifiable basis for comparing different decision alternatives.
Sensitivity analysis examines how the output of a decision model changes in response to variations in its input variables. The “input variables” for a decision analysis using decision trees are the payoffs and the event probabilities. In simple terms, it helps analyze how the different uncertainties within a decision tree, such as changes in probabilities of outcomes or variations in payoff values, affect the overall decision recommendations and the threshold at which the decision changes. This can be helpful in determining risk and assessing the confidence in a decision.
Testing the Robustness of Decisions: Sensitivity analysis tests the robustness and reliability of the decision outcomes derived from a decision tree. By altering one or more input variables while keeping others constant, decision-makers can see how sensitive the recommended decision is to changes in those inputs.
Identifying Critical Assumptions: It helps identify which assumptions or inputs have the most significant impact on the outcome of the decision tree. By understanding which variables are most influential, decision-makers can focus their efforts on gathering more precise data for those variables and can better manage the risks associated with their decisions.
Exploring Different Scenarios: This analysis allows decision-makers to explore a range of scenarios and understand the potential range of outcomes. For example, changing the probability of success of a new product launch or adjusting the expected revenue from a project can provide insights into how these changes affect the expected value (EV) of the decision.
Facilitating Decision Making under Uncertainty: In real-world conditions, uncertainty is a common challenge. Sensitivity analysis provides a systematic approach to deal with uncertainty by allowing decision-makers to see how outcomes vary with changes in input values. This is particularly useful in strategic decision-making where outcomes are uncertain and the stakes are high.
Supporting Stakeholder Discussions: By presenting how outcomes vary with different assumptions, sensitivity analysis can be a powerful tool in discussions with stakeholders. It helps in building consensus by demonstrating the impact of different assumptions on the outcomes, making it easier to discuss and align on risk tolerance and decision-making strategies.
Select the Variable: Start by selecting one key input variable to modify. This could be the probability of an event, the cost associated with a decision, or the revenue from a successful outcome.
Define the Range: Define a realistic range for this variable—what are the plausible highest and lowest values it could take?
Recalculate EVs: Adjust the variable within the specified range and recalculate the expected values for each scenario within the decision tree.
Analyze Outcomes: Compare how the overall decision recommendation changes with each variation of the input. Note any scenarios where the decision recommendation changes significantly.
This process can be repeated for various key inputs to get a comprehensive understanding of all sensitive variables within the decision tree. By performing sensitivity analysis, decision-makers can not only make more informed decisions but also build more resilient strategies that can withstand variations in key assumptions and inputs. Varying multiple parameters simultaneously requires computer simulation, often using Monte Carlo techniques.
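The four steps above can be sketched as a simple parameter sweep over a single input. Everything in this snippet (the tree shape, the 0.6/0.3/0.1 probabilities, and the payoffs) is a hypothetical stand-in, not data from the text.

```python
def ev(branches):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in branches)

def recommendation(best_case_payoff):
    """Recommended option as the best-case payoff is varied."""
    launch = ev([(0.6, best_case_payoff), (0.3, -35_000), (0.1, -15_000)])
    return "launch" if launch > 0 else "do not launch"  # do-nothing EV is 0

# Steps 2-4: vary the input over a plausible range, recalculate the EV,
# and note where the recommendation flips.
threshold = next(x for x in range(500, 200_001, 1_000)
                 if recommendation(x) == "launch")
# The flip occurs between $19,500 and $20,500, i.e. near x = $20,000.
```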
To demonstrate sensitivity analysis, let’s revisit the prior example below:
The current decision is to launch the product. At which payoff amount for the best case scenario (currently $100,000) would the decision change to not launch the product? In other words, how “sensitive” is the decision outcome to the payoff value for the most optimistic case?
To perform a sensitivity analysis, we assign a variable to a parameter that we are analyzing. So, in this situation, let’s assign x as the payoff amount for the most optimistic scenario rather than the value 100,000. With that, the EV becomes:
\(\text{EV} = (0.6)(x) + (0.3)(-35000) + (0.1)(-15000) = 0.6x - 12000\)

The decision changes when the other decision (“do not launch product”, with a payoff of 0) has a larger expected value, i.e., when

\(0.6x - 12000 < 0\)

The threshold is where \(0.6x - 12000 = 0\), so solving for x yields x = 20,000. This means that as long as the most optimistic payoff is above $20,000, the decision remains the same. Of course, a change in that payoff might also affect the other two cases, in which case the decision analysis would need to be repeated. A decision maker would say that it is unlikely to only make $20,000, so the decision to launch the product is not very risky. A similar analysis, using a variable, can be done for a probability as well.
In decision analysis, the Value of Perfect Information (VPI) is a concept that quantifies the benefit of having complete and accurate information before making a decision. VPI represents the maximum amount a decision-maker should be willing to pay for information that would remove all uncertainty about a particular variable or outcome in the decision process. This value provides insight into the potential improvement in the decision outcome if one were to have perfect knowledge beforehand.
VPI helps in understanding the economic worth of acquiring additional information to eliminate uncertainty in decision-making scenarios. This is particularly relevant in situations where decisions must be made under conditions of uncertainty and where the decision outcomes significantly impact the decision-maker.
The VPI is calculated by comparing the expected value of a decision with perfect information (knowing the future outcomes beforehand) against the expected value of the decision made under uncertainty (without such perfect information). The formula to calculate VPI is:
\[ \text{VPI} = \text{EV with perfect information} - \text{EV without perfect information} \]
Where: - EV with perfect information is the expected value when the outcomes of uncertain events are known in advance, allowing for the optimal decision to be made in each possible scenario. - EV without perfect information is the expected value of the best decision that can be made using only the known probabilities of the outcomes.
In the context of decision trees, VPI provides crucial insights: - Assessment of Information Value: VPI helps in assessing whether the cost of acquiring additional information (e.g., market research, data analysis, expert opinions) is justified by the potential increase in value from the decision outcome. If the cost of obtaining the information is less than the VPI, it would be considered economically rational to acquire that information.
Prioritization of Information Gathering: By identifying the variables with the highest VPI, organizations can prioritize their resources towards gathering information where the impact on decision outcomes is greatest.
Improving Decision Quality: Understanding the VPI allows decision-makers to enhance the quality of their decisions by focusing on reducing the most critical uncertainties.
Imagine a pharmaceutical company deciding whether to invest in the development of a new drug. The company faces uncertainties regarding regulatory approval and market acceptance. By calculating the VPI, the company can determine how much it should reasonably spend on activities like additional clinical trials or market research to obtain perfect information about these uncertainties. If the VPI indicates a substantial potential improvement in decision outcomes with perfect information, the company might choose to invest significantly in reducing these uncertainties.
The concept of the Value of Perfect Information enables decision-makers to quantify the benefits of reducing uncertainty and to make more informed choices about where to allocate resources in the decision-making process. It puts a price on the maximum that should be spent on further analysis to obtain “better” information. The effort to get more information (which is, of course, never “perfect”) should cost substantially less than the value of perfect information; if it does not, the cost of gathering more information would outweigh the maximum possible gain.
The Value of Perfect Information (VPI) is calculated by determining the difference between the expected value of making the optimal decision with perfect information and the expected value of making the best decision possible with the information currently available. Essentially, VPI quantifies how much better one could do if the uncertainty were eliminated before making a decision.
Calculate the Expected Value without Perfect Information (EV): This involves building a decision tree based on the current information and calculating the expected value (EV) of the best decision according to this tree. This calculation integrates the probabilities and payoffs of each possible outcome.
Calculate the Expected Value with Perfect Information (EVP): This assumes that you have perfect knowledge of future events. The process involves identifying the best decision for each possible outcome of the uncertain events and weighting the payoff of each of those best decisions by the probability of that outcome.
Calculate VPI: The Value of Perfect Information is then calculated as: \[ \text{VPI} = \text{EVP} - \text{EV} \] This result tells you the maximum amount you should be willing to pay to eliminate uncertainty entirely before making a decision.
Scenario: An agricultural producer must decide whether to plant corn or soybeans in the upcoming season. The decision depends heavily on the upcoming weather conditions—whether it will be a wet or a dry season—both of which impact the yield of the crops differently.
If planting corn: \[ \text{EV}_{\text{corn}} = (0.6 \times \$200,000) + (0.4 \times \$50,000) = \$120,000 + \$20,000 = \$140,000 \]
If planting soybeans: \[ \text{EV}_{\text{soybeans}} = (0.6 \times \$100,000) + (0.4 \times \$150,000) = \$60,000 + \$60,000 = \$120,000 \]
Best decision without perfect information (choose the highest EV): Plant corn \[ \text{EV} = \$140,000 \]
In a wet season (choose the best crop for wet conditions): Plant corn for $200,000
In a dry season (choose the best crop for dry conditions): Plant soybeans for $150,000
Calculate EVP: \[ \text{EVP} = (0.6 \times \$200,000) + (0.4 \times \$150,000) = \$120,000 + \$60,000 = \$180,000 \] Calculate VPI: \[ \text{VPI} = \text{EVP} - \text{EV} = \$180,000 - \$140,000 = \$40,000 \]
The VPI of $40,000 indicates that the agricultural producer should be willing to spend up to $40,000 to know with certainty whether the season will be wet or dry before deciding on the crop. This value represents the potential financial benefit of making a decision based on perfect information about weather conditions, compared to the best decision that could be made using only the probabilities of weather outcomes.
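The arithmetic above is easy to verify in a few lines of Python. The payoffs and probabilities are taken from the scenario; the dictionary layout is just one convenient encoding.

```python
p_wet, p_dry = 0.6, 0.4
payoffs = {"corn":     {"wet": 200_000, "dry": 50_000},
           "soybeans": {"wet": 100_000, "dry": 150_000}}

# EV without perfect information: commit to one crop up front.
ev = {crop: p_wet * v["wet"] + p_dry * v["dry"]
      for crop, v in payoffs.items()}
ev_best = max(ev.values())  # plant corn: 140,000

# EV with perfect information: pick the best crop in each weather state.
evp = (p_wet * max(v["wet"] for v in payoffs.values())
       + p_dry * max(v["dry"] for v in payoffs.values()))  # 180,000

vpi = evp - ev_best  # 40,000: the most worth paying for a perfect forecast
```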
Scenario: A software development company, “DevTech,” is deciding whether to develop a new project management tool aimed at small businesses. The decision is heavily influenced by the competitive response—whether a major competitor will release a similar tool within the next year.
Calculate the expected value (EV) based on the current uncertainty:
If DevTech launches the tool: \[ \text{EV}_{\text{launch}} = (0.4 \times \$200,000) + (0.6 \times \$500,000) = \$80,000 + \$300,000 = \$380,000 \]
If DevTech does not launch the tool: \[ \text{EV}_{\text{not launch}} = \$0 \]
DevTech should choose to launch based on the highest EV: \[ \text{EV} = \$380,000 \]
Assuming DevTech could know in advance whether the competitor will launch a similar tool, they could perfectly time their decision:
Calculate the expected value with perfect information: \[ \text{EVP} = (0.4 \times \$0) + (0.6 \times \$500,000) = \$0 + \$300,000 = \$300,000 \]
The Value of Perfect Information is: \[ \text{VPI} = \text{EVP} - \text{EV} = \$300,000 - \$380,000 = -\$80,000 \]
Interestingly, the VPI calculation suggests a negative value, which indicates a logical inconsistency or a misinterpretation of the situation because VPI cannot be negative. The mistake here is in the calculation of the EVP, where the decisions taken were not the optimal ones based on perfect information. Let’s correct that:
Correct EVP calculation: \[ \text{EVP} = (0.4 \times \$200,000) + (0.6 \times \$500,000) = \$80,000 + \$300,000 = \$380,000 \]
Hence, VPI should be: \[ \text{VPI} = \text{EVP} - \text{EV} = \$380,000 - \$380,000 = \$0 \]
The corrected VPI of $0 suggests that knowing the competitor’s actions in advance wouldn’t change the profitability of DevTech’s decision—they should launch the tool regardless. This example highlights the importance of accurate payoff and probability estimations and choosing the optimal decisions when calculating the EVP in sensitivity analysis within the context of information technology investments.
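The corrected DevTech numbers can be checked the same way. With the payoffs from the scenario, launching is the better action in both competitive states, so perfect information adds nothing.

```python
p_comp, p_none = 0.4, 0.6  # competitor launches a similar tool, or not
payoff = {"launch":     {"comp": 200_000, "none": 500_000},
          "not launch": {"comp": 0,       "none": 0}}

ev = {a: p_comp * v["comp"] + p_none * v["none"]
      for a, v in payoff.items()}
ev_best = max(ev.values())  # launch: 380,000

# With perfect information, take the better action in each state;
# here that is "launch" both times, so EVP equals the EV of launching.
evp = (p_comp * max(v["comp"] for v in payoff.values())
       + p_none * max(v["none"] for v in payoff.values()))

vpi = evp - ev_best  # 0: the competitor's plans have no decision value
```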
Decision trees are essential for decision analysis, but their effectiveness largely depends on how well they are constructed and analyzed. This section summarizes key best practices.
Start with a Clear Objective: Before constructing a decision tree, clearly define the decision problem and the objectives you aim to achieve. A well-defined problem helps in identifying relevant decision alternatives and potential outcomes.
Involve Stakeholders: Engage stakeholders in defining the problem to ensure all perspectives are considered. This involvement can provide insights into factors that might otherwise be overlooked.
List All Possible Alternatives: Ensure that all viable decision alternatives are considered. Overlooking potential options can lead to suboptimal decisions.
Anticipate Possible Outcomes: Identify as many outcomes as possible for each decision alternative to capture the full range of scenarios. This includes best-case, worst-case, and most likely outcomes.
Use Reliable Data: Base probability estimates and payoff values on reliable data sources, such as historical data, expert opinions, or market research. Accurate data reduces the risk of bias and improves the decision tree’s reliability.
Review and Update Regularly: Keep the decision tree updated with the latest information as new data becomes available or as the situation evolves.
Keep It Simple and Structured: Design the decision tree to be simple and easy to follow. Use clear labels and maintain a logical flow from left to right.
Use Software Tools: Utilize decision tree software like Microsoft Excel, Lucidchart, or specialized decision analysis tools. These tools often include features that enhance the creation, analysis, and presentation of decision trees.
Perform Sensitivity Analysis: Conduct sensitivity analysis to understand how changes in probabilities or payoffs affect the decision outcomes. This helps identify which inputs have the most significant impact on the decision and tests the robustness of the decision under various scenarios.
Look Beyond the Highest EV: While the decision with the highest Expected Value is generally preferred, consider other factors such as risk tolerance and strategic alignment. Sometimes, a decision with a lower EV might be more suitable due to lower risk or better alignment with long-term goals.
Communicate Clearly with Stakeholders: Use the decision tree to clearly communicate the decision process, the rationale behind decisions, and the implications of various choices to stakeholders. Clear communication ensures alignment and supports effective decision-making.
Document Assumptions and Decisions: Keep a detailed record of the assumptions made, the data used, and the rationale behind each decision. This documentation is crucial for reviewing decisions and for future reference.
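The sensitivity-analysis advice above can be sketched in a few lines of code. This is an illustrative example only; the probabilities and payoffs are hypothetical placeholders, not figures from the lesson. It varies the probability of the favorable outcome and shows how one alternative's expected value responds:

```python
# Illustrative sensitivity analysis for one decision alternative.
# All figures are hypothetical placeholders chosen for demonstration.

def expected_value(p_high, payoff_high, payoff_low):
    """EV of an alternative with two outcomes: favorable and unfavorable."""
    return p_high * payoff_high + (1 - p_high) * payoff_low

# Sweep the probability of the favorable outcome to see how sensitive
# the EV is to that single input (payoffs in $M, say 8.0 vs. 2.0).
for p in [0.4, 0.5, 0.6, 0.7]:
    ev = expected_value(p, 8.0, 2.0)
    print(f"P(high)={p:.1f} -> EV=${ev:.1f}M")
```

If the preferred alternative changes within the plausible range of an input, that input deserves the most attention when gathering data.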
Imagine that a software development company, “Code Innovate,” faces a strategic decision about whether to develop a new software product. The decision involves evaluating the potential profitability of the product while considering market uncertainties and development challenges. The business analysis team, led by Willow Haden, has been conducting interviews with key sales executives and regional sales directors, holding Delphi estimation sessions, and using customer survey data to determine likely outcomes and payoffs. Their findings are summarized below:
Code Innovate, Inc., a Canadian ERP software company, needs to decide whether to develop a new software tool designed to enhance cybersecurity for small businesses or to enhance its current suite of ERP products. The primary objectives are to maximize profitability over the next five years while managing development risk.
The alternatives include:
For each alternative, possible outcomes need to be identified:
Based on market research and expert opinion, probabilities and expected payoffs are estimated:
The decision tree starts with the root node where the decision to choose between A, B, or C is made. From each option, branches spread to their respective outcomes based on the assigned probabilities.
Calculate the EV for each option using the formula: \(\text{EV} = (P_1 \times V_1) + (P_2 \times V_2) + \ldots\)
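The EV formula can be applied directly in code. The probabilities and payoffs below are hypothetical stand-ins, since the actual figures come from the team's market research; the structure of the calculation is what matters:

```python
# Hedged sketch of the EV calculation: EV = P1*V1 + P2*V2 + ...
# The outcome data here is hypothetical, for illustration only.

def expected_value(outcomes):
    """Sum of probability * payoff over all outcomes of one alternative."""
    return sum(p * v for p, v in outcomes)

# Example: an alternative with high acceptance (P=0.6, payoff $8M)
# and low acceptance (P=0.4, payoff $2M).
ev = expected_value([(0.6, 8.0), (0.4, 2.0)])
print(f"EV = ${ev:.1f}M")  # 0.6*8 + 0.4*2 = 5.6
```

Repeating this calculation for each alternative yields the EVs that are compared in the next step.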
Based on the calculated EVs:
Code Innovate should pursue the development of the new software product (Option A), as it offers the highest expected profitability. However, the company should also prepare for potential risks associated with lower market acceptance by setting aside resources for marketing and customer engagement to maximize the chances of high market acceptance.
This example illustrates how a decision tree can provide a structured and quantitative framework for making complex business decisions, incorporating various scenarios, probabilities, and potential payoffs.
In this narrated presentation, Khoury Boston’s Prof. Schedlbauer explains how to use decision trees to enable rational decision making processes.
Decision trees are a common method for visualizing a decision path that incorporates chance. In this context, decision trees are employed for rational decision making and not for classification of an object. The latter is a common application of decision trees in machine learning, in which an object’s type is determined through a series of decisions based on properties of the object. The decision-making process presumes a rational decision-maker who bases their decision on some aspect of utility (economic utility, money, time, happiness, etc.) and that this utility can be quantified.
Slide Deck: Decision Analysis and Decision Trees
Decision trees bring structure and clarity to the decision-making process, especially when facing complex choices with uncertain outcomes. They help decision-makers visualize the consequences of their choices by presenting alternatives, potential outcomes, associated probabilities, and payoffs in a clear graphical format.
A decision tree starts with a root node, representing the initial decision to be made. From this node, various branches spread out, representing decision alternatives and subsequent outcomes. Each branch leads to either further decision nodes, where additional choices can be made, or to endpoints, which display the final outcomes and their respective payoffs. These components are crucial as they systematically break down and display complex decisions in a manageable and understandable way.
The process of constructing a decision tree involves several critical steps. Initially, it is imperative to clearly define the decision problem and understand its objectives. Next, all possible decision alternatives need to be identified, followed by outlining all potential outcomes for each alternative. Probabilities and payoff values are then assigned to each outcome based on available data or expert opinion, ensuring that each possible scenario is accounted for and quantified. The decision tree is drawn visually to map out these alternatives and outcomes, providing a comprehensive overview of the entire decision-making process.
Once constructed, the decision tree is analyzed by calculating the Expected Value (EV) for each decision path. This calculation helps in assessing the average expected outcome of each alternative, taking into account all associated risks and benefits. This step is pivotal as it guides the decision-makers in choosing the option that maximizes the expected utility or minimizes potential losses.
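The backward analysis described above can be sketched as a small recursive function. The tree representation here is an assumption made for illustration (terminal payoffs as numbers, chance and decision nodes as tagged tuples), not a format prescribed by the lesson:

```python
# Minimal rollback (backward induction) sketch. A node is either:
#   - a terminal payoff (a number),
#   - ("chance", [(probability, subtree), ...]), or
#   - ("decision", [subtree, ...]).
# This representation is an assumption for illustration purposes.

def rollback(node):
    """Work backwards from the endpoints: take the EV at chance nodes
    and the maximum EV over alternatives at decision nodes."""
    if isinstance(node, (int, float)):
        return node  # terminal payoff
    kind, branches = node
    if kind == "chance":
        return sum(p * rollback(child) for p, child in branches)
    if kind == "decision":
        return max(rollback(child) for child in branches)
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical tree: a risky venture versus a sure payoff (in $M).
tree = ("decision", [
    ("chance", [(0.6, 10.0), (0.4, -2.0)]),  # risky: EV = 5.2
    4.0,                                     # safe alternative
])
print(rollback(tree))  # 5.2 -> the risky branch maximizes EV
```

Rolling back from the endpoints in this way is exactly the procedure the lesson describes: each node's value summarizes everything downstream of it, so the root's value identifies the EV-maximizing choice.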
To ensure the effectiveness of decision trees, it is essential to collect reliable data for probabilities and payoffs, to define the problem and alternatives clearly, and to update the tree regularly as new information becomes available.
Decision trees are indispensable in rational decision-making, offering a systematic approach to analyzing complex scenarios and making informed choices.