Problem Solving in Math
Document information
| Author | David Lippman |
| Instructor/Editor | David Lippman |
| School/University | Pierce College |
| Subject/Major | Mathematics |
| Document Type | Textbook |
| Language | English |
| Size | 8.02 MB |
Summary
I. Problem Solving in Mathematics
This section emphasizes the practical application of mathematical problem-solving skills, moving beyond rote algebraic manipulations. It highlights the importance of identifying necessary information and choosing appropriate algorithms and procedures to tackle real-world challenges. Examples include calculating relative change (percent change), analyzing conflicting claims (like those regarding Medicare spending), and exploring scenarios involving advertising effectiveness and statistical analysis. The section stresses a working-backward approach to problem solving, starting with the desired outcome and working backward to identify the necessary steps. Specific examples such as calculating the height of a tree using similar triangles and estimating the number of customers gained from an advertising campaign are included.
1. Real World Problem Solving vs. Textbook Problems
The section contrasts the nature of mathematical problem-solving in textbooks versus real-world scenarios. Textbook problems typically provide all necessary information and explicitly state the required formula or procedure. Real-life problems, however, demand a more nuanced approach. They require identifying relevant information, selecting appropriate formulas or procedures, and often involve a series of interconnected steps to reach a solution. The text emphasizes a shift from passively applying known formulas to actively strategizing and determining the necessary path to solve a problem. This is illustrated through several examples, demonstrating that effective problem-solving requires a systematic approach rather than simply recalling a formula.
2. Understanding and Applying Percent Change
A key concept introduced is the difference between absolute and relative change. Absolute change refers to the raw difference in quantities and retains the same units as the original quantity. Relative change, on the other hand, represents the change as a percentage of the original quantity. This distinction is crucial for interpreting data accurately, particularly when dealing with percentages. The example of the misleading statistic about marijuana use and subsequent harder drug use highlights the criticality of understanding the base used in a percentage calculation. This shows how a seemingly factual claim can be misrepresented depending on what the initial (base) number is.
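The distinction can be made concrete with a short sketch. The starting and ending values (50 and 70) are hypothetical figures chosen only to illustrate the two formulas:

```python
def absolute_change(old, new):
    """Raw difference, in the same units as the original quantity."""
    return new - old

def relative_change(old, new):
    """Change expressed as a fraction of the starting (base) value."""
    return (new - old) / old

# Hypothetical figures: a quantity rising from 50 to 70.
print(absolute_change(50, 70))   # 20 (same units as the original)
print(relative_change(50, 70))   # 0.4, i.e. a 40% increase relative to 50
```

Note that the same absolute change of 20 would be a 20% relative change if the base were 100, which is exactly why the choice of base matters when interpreting percentage claims.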
3. Analyzing Conflicting Information and Real World Examples
The section presents real-world problems requiring critical analysis of available information. One example involves conflicting political statements about Medicare spending, where one candidate claims a $716 billion cut, while the other argues for cost-saving measures without reducing current spending. The text emphasizes the importance of carefully scrutinizing claims, understanding the context and underlying assumptions, and acknowledging the complexity of real-world issues. The example highlights the challenges of interpreting numerical data within complex systems, showing the difficulty in definitively evaluating whether statements are contradictory, complementary, or simply incomparable due to differing perspectives. The text also includes several other practical examples, including how many customers a company might gain from increased advertising spending, and how to use similar triangles to calculate the height of a tree. These exemplify the process of solving complex, multi-step real-world problems using a variety of mathematical concepts.
4. The Importance of a Systematic Approach to Problem Solving
The section advocates a working-backward problem-solving strategy. Instead of starting with available data and searching for applicable methods, the recommended approach is to begin by clearly defining the desired outcome or goal. From there, one works backward, identifying the information and procedures required to reach the final answer. This is contrasted with a traditional approach in which one identifies a known formula or method and proceeds directly to the answer. The backward strategy is presented as particularly important in complex situations where multiple steps must be coordinated and combined to reach a solution, and as the most effective method for tackling real-world challenges that cannot be solved with a single mathematical operation.
II. Voting Systems and Fair Division
This section delves into various voting methods and their implications. It explores different systems, including plurality, instant runoff voting (IRV), Borda count, and approval voting, analyzing their strengths, weaknesses, and susceptibility to insincere voting. The section also examines the Electoral College as a specific case of weighted voting. Discussions of fair division methods are included, using examples like dividing a pizza or chores amongst roommates and highlighting the importance of fairness in resource allocation. The concepts of closed and open primaries and their effects are also discussed.
1. Overview of Voting Systems
This section introduces several voting systems, analyzing their mechanisms and potential consequences. It begins by discussing the concept of insincere voting, where voters strategically cast ballots that don't reflect their true preferences in order to achieve a desired outcome. The example of a party's leadership endorsing one candidate over another to avoid splitting the vote and losing the election illustrates this point. Different voting methods are then explained, including plurality voting (the most common in the US), instant-runoff voting (IRV), and the Borda count. The text compares and contrasts these systems, highlighting how each prioritizes voter preferences and how the same ballots can produce different results under different methods. For instance, the Borda count is described as a consensus-based system, in contrast with plurality and IRV, which focus on first-choice votes. The section also explains approval voting, where voters select all candidates they approve of, and its vulnerability to strategic voting. Different primary election systems (closed, open, caucuses) and their respective advantages and disadvantages are explained.
2. Analysis of Specific Voting Systems: Plurality, IRV, Borda Count, and Approval Voting
A deeper dive into specific voting systems is provided. Plurality voting, where the candidate with the most first-place votes wins, is examined, noting potential issues where third-party candidates might significantly alter the outcome (Ross Perot in 1992 and Ralph Nader in 2000 are cited as examples). Instant-runoff voting (IRV), where voters rank their preferences, is presented as an alternative, highlighting its use in Australia and previously in Pierce County, Washington. The Borda count, which totals points assigned to ranked candidates, is analyzed for its ability to select a more broadly acceptable option rather than just the majority choice. Finally, approval voting, allowing voters to select multiple candidates, is discussed, showing its susceptibility to strategic, insincere voting, where voters might withhold their preference for a candidate to increase the chances of another preferred candidate winning. The examples demonstrate the varying outcomes that different systems can produce and the potential complexities in deciding on the most appropriate system.
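A small sketch can make the contrast concrete. The preference schedule below is hypothetical, chosen so that plurality and the Borda count disagree: the plurality winner has the most first-place votes, while the Borda winner is ranked highly by nearly everyone.

```python
from collections import Counter

# Hypothetical preference schedule: each entry is (ranked ballot, how many
# voters cast it), with the first choice listed first.
ballots = [
    (("A", "B", "C"), 6),
    (("B", "C", "A"), 5),
    (("C", "B", "A"), 4),
]

def plurality_winner(ballots):
    """Candidate with the most first-place votes wins."""
    firsts = Counter()
    for ranking, count in ballots:
        firsts[ranking[0]] += count
    return firsts.most_common(1)[0][0]

def borda_winner(ballots):
    """Each ballot awards n-1 points to its first choice, down to 0 for last."""
    scores = Counter()
    n = len(ballots[0][0])
    for ranking, count in ballots:
        for place, candidate in enumerate(ranking):
            scores[candidate] += (n - 1 - place) * count
    return max(scores, key=scores.get)

print(plurality_winner(ballots))  # A: 6 first-place votes beats 5 and 4
print(borda_winner(ballots))      # B: ranked first or second on every ballot
```

Here A wins under plurality despite being ranked last by a 9-voter majority, while the Borda count elevates B, the broadly acceptable compromise, which mirrors the section's description of Borda as consensus-based.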
3. Weighted Voting and the Electoral College
The concept of weighted voting, where votes carry different weights depending on factors like share ownership or state population, is introduced. The most familiar example to Americans, the Electoral College system for electing the President, is used to illustrate this. The section explains how the Electoral College functions as a weighted voting system, with each state's number of votes based on its number of congressional representatives. The complexity of the Electoral College, with its billions of possible winning coalitions, is highlighted. The use of computational methods for determining power indices in such systems is also discussed.
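One such computational approach is the Banzhaf power index, which measures a voter's power by counting the winning coalitions in which that voter's weight is decisive. A minimal sketch, brute-forcing all coalitions of a hypothetical three-player weighted system (weights 4, 3, 2 with quota 6); this enumeration is exactly what becomes infeasible by hand for the Electoral College's billions of coalitions:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Fraction of critical-player instances belonging to each player."""
    players = list(weights)
    critical = {p: 0 for p in players}
    for r in range(1, len(players) + 1):
        for coalition in combinations(players, r):
            total = sum(weights[p] for p in coalition)
            if total >= quota:  # a winning coalition
                for p in coalition:
                    if total - weights[p] < quota:  # p's defection would break it
                        critical[p] += 1
    total_critical = sum(critical.values())
    return {p: critical[p] / total_critical for p in players}

# Hypothetical weighted system [6: 4, 3, 2].
print(banzhaf({"P1": 4, "P2": 3, "P3": 2}, quota=6))
# P1 is critical in 3 of the 5 critical instances, so its index is 0.6
```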
4. Fair Division Methods
The section introduces the concept of fair division—dividing items of value between two or more parties. It begins by describing the divider-chooser method for two parties, explaining how it guarantees a fair division in which each party receives a piece worth, by their own valuation, at least 50% of the total. The section explores scenarios beyond two-party divisions, mentioning methods like the lone-divider method and illustrating how they ensure fairness even when dealing with multiple individuals and conflicting preferences. The text then extends the concept of fair division to scenarios involving undesirable items, such as allocating household chores among roommates. This illustrates the broader applicability of fair-division principles beyond the distribution of tangible goods to situations concerning the allocation of responsibilities or tasks. Several examples, including dividing a pizza among classmates with different preferences, serve to clarify the principles of fair division.
III. Apportionment Methods
This section examines different apportionment methods used to allocate resources fairly, focusing primarily on the allocation of legislative seats. It contrasts the Hamilton, Jefferson, and Webster methods, highlighting the potential for paradoxes such as the Alabama Paradox. The section discusses the implications of these methods for different sized states and the inherent difficulties in achieving perfectly fair representation. The section also touches upon the process of drawing legislative districts to achieve approximately equal populations in each district.
1. Introduction to Apportionment Methods
The section introduces the concept of apportionment, focusing on the allocation of legislative seats. It establishes that various methods exist for this purpose and that these methods can lead to different outcomes and may not always produce perfectly equitable results. The text highlights the importance of understanding the underlying principles and potential biases inherent in each method. The goal of apportionment is to distribute resources (in this case, legislative seats) proportionally among different entities (e.g., states), but achieving perfect proportionality can be challenging due to the inherent constraints of working with whole numbers of representatives. The section sets the stage by emphasizing the complexity and potential for unexpected results in such allocation processes.
2. Hamilton's Method and the Alabama Paradox
Hamilton's method is presented as one approach to apportionment. The explanation focuses on the method's process and then introduces the Alabama Paradox as a significant problem. The Alabama Paradox illustrates a counterintuitive outcome where a state can lose a representative even when the total number of representatives increases and the state's population remains unchanged. This paradox, which occurred after the 1880 census, highlights a flaw in Hamilton's method. The paradox demonstrates that even seemingly straightforward apportionment methods can produce unpredictable and unfair results, leading to the need for further investigation into alternative methods that might avoid such issues. The discussion implicitly emphasizes the importance of carefully selecting the method of apportionment based on its properties and potential consequences.
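Hamilton's method is simple to state in code, and a small example makes the paradox visible. The populations below are illustrative figures (not the 1880 census data) chosen so that the paradox appears when the house grows from 10 to 11 seats:

```python
from math import floor

def hamilton(populations, seats):
    """Hamilton's method: give each state the floor of its standard quota,
    then hand the leftover seats to the states with the largest fractional
    parts."""
    total = sum(populations.values())
    quotas = {s: p * seats / total for s, p in populations.items()}
    alloc = {s: floor(q) for s, q in quotas.items()}
    leftovers = seats - sum(alloc.values())
    by_fraction = sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)
    for s in by_fraction[:leftovers]:
        alloc[s] += 1
    return alloc

# Illustrative populations (total 1000) exhibiting the Alabama paradox:
pops = {"A": 54, "B": 243, "C": 703}
print(hamilton(pops, 10))  # {'A': 1, 'B': 2, 'C': 7} -- A gets a seat
print(hamilton(pops, 11))  # {'A': 0, 'B': 3, 'C': 8} -- A loses it!
```

With 10 seats, A's fractional part (0.54) is the largest, so A receives the single leftover seat; with 11 seats the fractions of B and C grow faster than A's, and both leftover seats go to them, costing A its seat even though its population never changed.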
3. Alternative Apportionment Methods: Jefferson's and Webster's Methods
The section introduces alternative apportionment methods proposed by Jefferson and Webster. Jefferson's method, adopted in Congress from 1791 to 1842, favors larger states. The text highlights that Jefferson's preference for his method had both political and mathematical rationales. It favored larger states, which benefited his own state of Virginia, and it was argued to more accurately reflect the ratio of people to representatives. Webster's method, adopted at different times throughout the 19th and early 20th centuries, is described as similar to Jefferson's but uses rounding to the nearest whole number rather than truncating decimals. The section emphasizes that there's no single 'correct' method of apportionment, as each method has its own advantages and disadvantages and tends to favor certain sized states over others. The discussion makes it clear that the choice of apportionment method is a complex decision that has both mathematical and political implications.
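Both are divisor methods: each searches for a divisor so that the rounded state quotas sum exactly to the house size, and they differ only in the rounding rule (Jefferson truncates; Webster rounds to the nearest whole number). A sketch, assuming a bisection search on the divisor and hypothetical populations:

```python
from math import floor

def divisor_method(populations, seats, rounding):
    """Search for a divisor making the rounded quotas sum to the house size."""
    lo, hi = 1e-9, float(sum(populations.values()))
    for _ in range(200):  # bisection: smaller divisor -> larger quotas
        d = (lo + hi) / 2
        alloc = {s: rounding(p / d) for s, p in populations.items()}
        total = sum(alloc.values())
        if total == seats:
            return alloc
        if total < seats:
            hi = d  # divisor too large; shrink it
        else:
            lo = d  # divisor too small; grow it
    return alloc

def jefferson(populations, seats):
    return divisor_method(populations, seats, floor)            # truncate

def webster(populations, seats):
    return divisor_method(populations, seats,
                          lambda q: floor(q + 0.5))             # round to nearest

# Illustrative populations (total 1000), hypothetical figures.
pops = {"A": 54, "B": 243, "C": 703}
print(jefferson(pops, 10))  # {'A': 0, 'B': 2, 'C': 8} -- large state C favored
print(webster(pops, 10))    # {'A': 1, 'B': 2, 'C': 7}
```

Note how Jefferson's truncation shifts a seat from the smallest state to the largest one, matching the section's observation that the method favors larger states.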
4. Apportionment of Legislative Districts
This subsection focuses on the application of apportionment to drawing legislative districts. Unlike previous examples of apportioning resources directly to entities (e.g., states), this involves creating geographical districts to ensure roughly equal representation based on population. The text points out that this approach can lead to discrepancies in representation based on geographical factors, as a small, densely populated city might have multiple representatives while a large, sparsely populated rural area may only have one. This section demonstrates how the principles of apportionment apply not only to the allocation of representatives, but also to the physical drawing of district boundaries to achieve equitable representation of populations in different geographical areas. The goal is still to create equal representation per population but with the additional complexity of geographical considerations.
IV. Graph Theory and Optimization
This section explores concepts within graph theory, focusing on Euler circuits, Hamiltonian circuits, and the Traveling Salesman Problem (TSP). It examines algorithms for finding efficient routes, such as Dijkstra's algorithm for finding the shortest path and algorithms for eulerization. The optimization of routes for various scenarios, like postal delivery and package delivery, is discussed. The concept of spanning trees and the use of algorithms for their efficient construction are also addressed. Specific examples involve finding optimal routes for a lawn inspector and designing networks to minimize cost. The concept of Levenshtein distance for measuring the similarity between words is mentioned as an application of graph theory.
1. Euler Circuits and Eulerization
This section introduces Euler circuits, which are paths that traverse every edge of a graph exactly once and return to the starting vertex. The concept of Eulerization is discussed—modifying a graph by duplicating edges to create an Euler circuit. This process is relevant to optimization problems where the goal is to find the most efficient route that covers all edges. The text provides an example of eulerization, highlighting that multiple eulerizations are possible and that the optimal eulerization (minimizing edge duplication or total added weight) depends on the specific characteristics of the graph and any associated weights (costs or distances) on the edges. The complexities of finding the optimal eulerization for larger, weighted graphs are mentioned.
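The existence test behind all of this is Euler's theorem: a connected graph has an Euler circuit exactly when every vertex has even degree (which is what eulerization restores by duplicating edges). A minimal sketch; the adjacency-list representation and the sample graph are assumptions for illustration:

```python
def has_euler_circuit(graph):
    """graph: dict mapping vertex -> list of neighbours (undirected)."""
    # Every vertex must have even degree...
    if any(len(neighbours) % 2 != 0 for neighbours in graph.values()):
        return False
    # ...and the graph must be connected (simple depth-first search).
    start = next(iter(graph))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(graph)

# A 4-cycle: every vertex has degree 2, so an Euler circuit exists.
square = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}
print(has_euler_circuit(square))  # True
```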
2. Hamiltonian Circuits and the Traveling Salesman Problem
This section contrasts Euler circuits with Hamiltonian circuits. While Euler circuits cover every edge of a graph exactly once, Hamiltonian circuits visit every vertex exactly once and return to the starting vertex. The Traveling Salesman Problem (TSP) is introduced as a classic optimization problem involving Hamiltonian circuits; the goal is to find the shortest possible route that visits all specified locations (vertices) and returns to the starting point. The section notes that, unlike Euler circuits, the focus is not on the mere existence of a Hamiltonian circuit but on finding the optimal circuit with the lowest total weight (cost or distance) if the edges have weights assigned to them. The TSP is presented as a more complex optimization problem compared to finding Euler circuits.
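For small graphs the TSP can be solved by brute force, checking every ordering of the cities; the factorial growth of that search is precisely what makes the problem hard at scale. A sketch using a hypothetical four-city distance table:

```python
from itertools import permutations

def tsp_brute_force(dist, start="A"):
    """Try every ordering of the remaining cities; feasible only for small n."""
    cities = [c for c in dist if c != start]
    best_route, best_cost = None, float("inf")
    for perm in permutations(cities):
        route = (start, *perm, start)  # Hamiltonian circuit back to start
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

# Hypothetical symmetric distances between four cities.
dist = {
    "A": {"B": 4, "C": 2, "D": 7},
    "B": {"A": 4, "C": 3, "D": 5},
    "C": {"A": 2, "B": 3, "D": 6},
    "D": {"A": 7, "B": 5, "C": 6},
}
print(tsp_brute_force(dist))  # shortest tour costs 17
```

With 4 cities there are only 3! = 6 orderings to check, but 20 cities would already require about 1.2 × 10^17, which is why practical work relies on heuristics rather than exhaustive search.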
3. Dijkstra's Algorithm for Shortest Paths
Dijkstra's algorithm is presented as a method for finding the shortest path between two vertices in a weighted graph. The text highlights its optimality (guaranteeing the shortest path) and efficiency (computational feasibility), contrasting it with methods that might only find relatively short, but not necessarily the shortest, path. The section explains that the algorithm's efficiency allows for practical applications such as GPS navigation. The number of calculations required by Dijkstra's algorithm is estimated, showing its scalability for graphs with a reasonably large number of vertices. This demonstrates the algorithm's importance in practical applications of graph theory.
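A minimal sketch of the algorithm using a priority queue; the road network and its travel-time weights are hypothetical, not taken from the text:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest-path distance in a weighted graph.
    graph: dict vertex -> {neighbour: edge weight}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, v = heapq.heappop(heap)  # closest unsettled vertex
        if v == target:
            return d
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for w, weight in graph[v].items():
            nd = d + weight
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return float("inf")  # target unreachable

# Hypothetical road network with travel times as weights.
roads = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 2},
    "D": {"B": 4, "C": 2},
}
print(dijkstra(roads, "A", "D"))  # 5, via A -> B -> C -> D
```

The key to the optimality guarantee is that vertices are settled in order of increasing distance, so when the target is popped from the queue no shorter path to it can remain undiscovered.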
4. Spanning Trees and Network Optimization
The concept of spanning trees is introduced in the context of network optimization. A spanning tree connects all the vertices of a graph using the minimum number of edges, which is crucial for designing efficient networks. The section shows how to use weighted graphs to represent network costs. An example of a company needing to lease dedicated lines to connect multiple offices is provided, illustrating the problem of minimizing the total cost of establishing reliable connectivity. The section demonstrates how graph theory is applicable to real-world problems in areas like network design and telecommunications, and the need to find optimal configurations that minimize costs or distances. The use of SONET rings as a method for ensuring service reliability in fiber-optic installations is also briefly discussed.
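One standard way to build a minimum-cost spanning tree is Kruskal's algorithm: sort the edges by weight and keep each edge that does not create a cycle. The office network below is hypothetical, and the choice of Kruskal's algorithm is an assumption for illustration, not necessarily the method the text presents:

```python
def kruskal(vertices, edges):
    """Minimum-cost spanning tree via Kruskal's algorithm.
    edges: list of (weight, u, v) tuples."""
    parent = {v: v for v in vertices}  # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree, total = [], 0
    for weight, u, v in sorted(edges):   # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                     # edge joins two separate components
            parent[ru] = rv
            tree.append((u, v))
            total += weight
    return tree, total

# Hypothetical leased-line costs between four offices.
offices = ["A", "B", "C", "D"]
lines = [(4, "A", "B"), (1, "A", "C"), (3, "B", "C"), (2, "C", "D"), (5, "B", "D")]
print(kruskal(offices, lines))  # total cost 6, using lines A-C, C-D, B-C
```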
5. Levenshtein Distance as a Graph Theory Application
This subsection briefly introduces the Levenshtein distance as an application of graph theory. The Levenshtein distance measures the similarity between two strings (words) by counting the minimum number of edits (substitutions, insertions, or deletions) needed to transform one string into another. This is relevant to spell-checking algorithms in word processing programs. This brief explanation shows how graph theory concepts find applications in seemingly unrelated areas like natural language processing and computer science. It highlights the wide-ranging applicability of graph theory concepts beyond route optimization and network design.
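The distance has a compact dynamic-programming implementation; a sketch, keeping only one row of the edit-distance table at a time:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits (insertions, deletions,
    substitutions) needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute (or match)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

A spell checker can use exactly this measure to rank candidate corrections by how few edits separate them from the misspelled word.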
V. Project Scheduling and Critical Path Analysis
This section describes methods for scheduling projects and determining the critical path – the sequence of tasks that determines the minimum project completion time. The section discusses algorithms for creating efficient project schedules, highlighting the concepts of task dependencies, prioritizing tasks, and identifying potential idle time. The decreasing time algorithm and the backflow algorithm are mentioned, and a comparison of schedule efficiency against the critical path time is presented.
1. Project Scheduling and the Concept of Idle Time
This section introduces the problem of project scheduling, where the goal is to efficiently sequence tasks to minimize overall completion time. The text highlights the possibility of idle time in a schedule, where resources (people or equipment) are not utilized while waiting for other tasks to complete. The section uses an example of a project schedule where a decreasing-time algorithm resulted in a total completion time of 35 units. The presence of idle time in the resulting schedule is noted, indicating that the schedule might not be fully optimized. The concept of idle time is introduced as a key factor to consider when evaluating and improving project schedules. The example lays the groundwork for understanding how to analyze and assess the efficiency of a project schedule.
2. Identifying the Critical Path
The critical path is defined as the sequence of tasks whose total completion time determines the minimum time required to complete the entire project. To identify this path, the text explains the process of examining sequences of tasks and identifying the one with the highest total completion time. The provided example shows a critical path with a total completion time of 28 units, suggesting that the schedule with a 35 unit completion time has room for improvement. The identification of the critical path provides a benchmark against which to measure the efficiency of a project schedule. A schedule that takes longer than the critical path time has inefficiency in the form of idle time. The method for identifying the critical path, though described briefly, is fundamental to optimizing project scheduling.
3. Algorithms for Project Scheduling and the Backflow Algorithm
The section discusses algorithms for creating efficient project schedules. A decreasing time algorithm, which prioritizes tasks based on their duration, is mentioned. This algorithm, however, is shown to lead to inefficient schedules with substantial idle time. To overcome this, the section suggests the backflow algorithm, an approach that begins from the end of the project and works backward. The backflow algorithm is explained as a more efficient method for determining the critical path and creating optimized schedules, especially when dealing with complex projects involving many tasks. The section contrasts and compares algorithms for project scheduling and the process for identifying the critical path to improve project schedule efficiency.
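The critical-path computation amounts to finding the longest chain of dependent tasks: a task's earliest finish time is its own duration plus the latest finish among its prerequisites. A sketch with a hypothetical task list (durations and prerequisites invented for illustration):

```python
def critical_time(tasks):
    """Minimum completion time = length of the longest dependency chain.
    tasks: dict name -> (duration, list of prerequisite task names)."""
    memo = {}

    def finish(name):
        """Earliest time this task can finish, given its prerequisites."""
        if name not in memo:
            duration, prereqs = tasks[name]
            memo[name] = duration + max((finish(p) for p in prereqs), default=0)
        return memo[name]

    return max(finish(t) for t in tasks)

# Hypothetical task list: (duration, prerequisites).
tasks = {
    "T1": (6, []),
    "T2": (4, []),
    "T3": (8, ["T1"]),
    "T4": (10, ["T1", "T2"]),
    "T5": (9, ["T3"]),
    "T6": (5, ["T4"]),
}
print(critical_time(tasks))  # 23, from the chain T1 -> T3 -> T5
```

Any schedule completing in more than 23 time units necessarily contains idle time, which is exactly how the section proposes measuring a schedule's room for improvement.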
VI. Recursive Relationships and Explicit Forms
This final section contrasts recursive relationships and closed-form expressions for describing mathematical sequences. It demonstrates how recursive formulas can concisely describe how a quantity changes over time, but closed-form expressions are often more useful for making predictions and solving problems involving extended timeframes. An example using a recursive formula for a growing bottle collection is included to illustrate the calculation and conversion to an explicit form.
1. Recursive Relationships Definition and Limitations
The section begins by introducing recursive relationships as a method for describing how a quantity changes over time. A recursive relationship defines a term in a sequence based on preceding terms. While this approach can concisely and elegantly describe the pattern of change, the text emphasizes its limitations for making predictions far into the future: calculating values for large n with a purely recursive approach becomes computationally cumbersome and inefficient. The example of calculating the number of bottles in a collection after five years shows how easily a recursive formula describes the change in a quantity over time, but answering the question of how long it takes to reach a certain number of bottles requires much more calculation.
2. Explicit Forms Advantages for Prediction and Problem Solving
The section contrasts recursive relationships with explicit (or closed-form) expressions. An explicit equation provides a direct formula for calculating any term in a sequence without relying on previous terms. This is presented as significantly more convenient for making predictions or solving problems that extend far into the future. The text argues that explicit equations offer a much more efficient way to calculate values for large n. The text explains this advantage with the example of needing to calculate how many bottles someone would have in their collection after a very long time; while the recursive formula is simple to define, the explicit form is far more effective at answering the question. The section highlights the tradeoff between the simplicity of description and the efficiency of calculation when choosing between recursive and explicit forms.
3. Deriving Explicit Forms from Recursive Relationships
The section demonstrates the process of deriving an explicit form from a recursive relationship. The text uses an example to illustrate the steps involved. It starts with a recursive formula and shows how, by selectively manipulating the equations without simplifying them prematurely, it is possible to identify a pattern and derive a general explicit formula. This formula eliminates the need for iterative calculations that are needed to utilize the recursive formula. This shows a common mathematical technique for converting between different representations of a sequence. The example uses logarithms, showing how even complex recursive formulas can sometimes be expressed as simple, explicit formulas.
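The contrast can be sketched directly. The starting size (10 bottles) and growth rate (50% per year) below are illustrative assumptions, not the book's figures; the final lines show why the explicit form, combined with a logarithm, answers a "how long until…" question in one step while the recursion would need to be run year by year:

```python
from math import log

# Recursive description: each year the collection grows by 50%.
def bottles_recursive(n, p0=10, r=0.5):
    if n == 0:
        return p0
    return bottles_recursive(n - 1, p0, r) * (1 + r)

# Explicit (closed) form of the same relationship: P_n = P0 * (1 + r)**n
def bottles_explicit(n, p0=10, r=0.5):
    return p0 * (1 + r) ** n

print(bottles_recursive(5))  # 75.9375 -- five iterated multiplications
print(bottles_explicit(5))   # 75.9375 -- one direct evaluation

# "When does the collection reach 1000 bottles?" Solve 10 * 1.5**n = 1000
# by taking logarithms of the explicit form:
n = log(1000 / 10) / log(1.5)
print(n)  # about 11.36, so during the 12th year
```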
