Quantitative Literacy and Reasoning of Freshman Students with Different Senior High School Academic Backgrounds Pursuing STEM-Related Programs

This paper investigates the quantitative literacy and reasoning (QLR) of freshman students pursuing a Science, Technology, Engineering, and Mathematics (STEM)–related degree who do not necessarily have a Senior High School (SHS) STEM background. QLR is described as a multi-faceted skill focused on the application of Mathematics and Statistics rather than mere mastery of the content domains of these fields. This article compares QLR performance between STEM and non-STEM SHS graduates. This quantitative-correlational study involves 255 freshman students, of whom 140 have a non-STEM academic background from the SHS. Results reveal that students with an SHS STEM background had significantly higher QLR performance. Nevertheless, this difference does not cloud the fact that their overall QLR performance is the lowest when compared with the results of similar studies. This paper also examines whether achievement in SHS courses such as General Mathematics, and Statistics and Probability significantly predicts QLR. Multivariate regression analysis discloses that achievement in the latter significantly relates to QLR. However, the low coefficient of determination (10.30%) suggests that achievement in these courses alone does not account for the students' QLR. As supported by a deeper investigation of the students' answers, it is concluded that QLR indeed involves complex processes and is more than just being proficient in Mathematics and Statistics.


Introduction
With the advent of the recently implemented Philippine basic education curriculum (K-12), high school students are required to take number-driven courses such as Mathematics and Statistics. Immersed in theoretical content and computational drills, students then graduate with the hope of being able to apply this knowledge and these skills in solving problems encountered in their future workplace or, more importantly, in the next stage of their education. Higher education institutions would welcome students who have achieved sufficient quantitative literacy and reasoning (QLR) and are ready to comply with the demands of their educational programs.
Quantitative reasoning, quantitative literacy, and numeracy are three overlapping terms used interchangeably. Distinguished international organizations such as the Mathematical Association of America (Sons, 1996) and the Organization for Economic Co-operation and Development (OECD, 2016), as well as reputable authors including Rhodes (2010), Adelman et al. (2014), and Roohr et al. (2014), have provided several definitions for this construct. One major similarity among these definitions is the emphasis on the ability to apply Mathematics and Statistics to contextualized problems, a higher-order skill, rather than mere mastery of the content domains of these disciplines. It is one thing to excel in the abstractions of Mathematics or the computational and theoretical dimensions of Statistics; it is another to recognize their applications to the vast complexities of the modern world. It is because of this nature of QLR that the researcher strongly advocates the development of this ability, especially for students planning to enroll in degree programs aligned with Science, Technology, Engineering, and Mathematics (STEM).
Many educational groups worldwide have in fact classified quantitative literacy and reasoning as one of the core competencies targeted to be advanced among students (Association of American Colleges and Universities, 2007). In the case of the Philippines, in its new basic education curriculum, numeracy, along with critical thinking and problem-solving, forms the core of its mathematics framework (Science Education Institute-Department of Science and Technology & Philippine Council of Mathematics Teacher Education Inc. [SEI-DOST & MATHTED], 2011; Republic of the Philippines Department of Education, 2016). Put simply, QLR is a widely accepted set of skills that people are expected to be proficient in. Communicating information from numbers is a skill as important as reading and writing (Gaze et al., 2014).
Consequently, with this glaring evidence on the importance of QLR, there arises a need to assess the quality of quantitative skills that students develop during their postsecondary education (Roohr et al., 2014). Students enrolling in STEM degrees at the tertiary level, in particular, are expected to have developed sufficient quantitative skills, as science in general is a quantitative endeavour (Follette et al., 2015). Unfortunately, literature on the conduct of such assessment appears to be limited, especially for Southeast Asian countries. For most institutions in the Philippines, for example, QLR appears to remain an uncharted territory for educational researchers despite the consensus that assessment serves as local and global evidence to support a progressing teaching-learning process (Capraro et al., 2012). Surely, student development always begins with information about students' capabilities. For instance, improving the quantitative literacy and reasoning of students would be difficult without clear measures of such ability. This concern served as the very heart of this research study, which aims to provide a measure of the QLR of students.
After assessing the QLR performance of students, the next logical step is the identification of factors that relate to such performance. In this particular study, the factors investigated were the students' Senior High School (SHS) backgrounds, namely their SHS strand (STEM or non-STEM) and their achievement in SHS General Mathematics, and in SHS Statistics and Probability. This is especially relevant because, in the university where this study was conducted, admission to STEM-related degrees was not restricted to SHS STEM graduates. This study discloses whether the QLR of students from different SHS academic strands is at different levels. Moreover, it shows whether previous academic achievements in General Mathematics, and Statistics and Probability relate to the quality of their quantitative skills. As mentioned, mathematical and statistical concepts have been integrated into the new basic education curriculum, but whether they were helpful in the development of students' QLR is revealed in this paper.
Results from the QLR assessment and the exploration of related factors may shed light on the contribution of the existing basic education to the development of the quantitative abilities of incoming college students. Subsequently, one can reflect on strategies that can potentially improve such abilities through curriculum development and pedagogical innovations, not only at the Senior High School level but also in the higher education institutions in which students are enrolled. Research that emphasizes QLR in education has the potential to reduce the widespread perception that mathematics courses are irrelevant and that mathematical and statistical excellence is an innate ability. Finally, the results of this study may serve as a starting point for QLR assessment not only in the university in which this research was conducted but possibly in the entire country. As mentioned earlier, despite the consensus of most countries on its vital educational impact, QLR remains an unexplored idea in the Philippines, as evidenced by the apparent scarcity of data on this construct.

Definition of Quantitative Literacy and Reasoning (QLR)
Quantitative literacy (QL), quantitative reasoning (QR), and numeracy are related terms that have been used to describe the quantitative ability of an individual. We first look at how these terms were defined in the last twenty-five years. Below, respectively, are the definitions from the frameworks of four international organizations, namely (1) the Association of American Colleges and Universities' Liberal Education and America's Promise, (2) the Lumina Foundation's Degree Qualifications Profile 2.0, (3) the Mathematical Association of America, and (4) the Organisation for Economic Co-Operation and Development:

1. Quantitative literacy (QL) - also known as numeracy or quantitative reasoning - is a 'habit of mind,' competency, and comfort in working with numerical data. Individuals with strong QL skills possess the ability to reason and solve quantitative problems from a wide array of authentic contexts and everyday life situations. They understand and can create sophisticated arguments supported by quantitative evidence and they can clearly communicate those arguments in a variety of formats (using words, tables, graphs, mathematical equations, etc., as appropriate). (Rhodes, 2010, p. 25)

2. The student with strong QL translates verbal problems into mathematical algorithms so as to construct valid arguments using the accepted symbolic system of mathematical reasoning and presents the resulting calculations, estimates, risk analyses or quantitative evaluations of public information in papers, projects or multimedia presentations. The student constructs mathematical expressions for complex issues most often described in non-quantitative terms. (Adelman et al., 2014, p. 22)

3. A college student who is considered quantitatively literate should be able to: (a) interpret mathematical models such as formulas, graphs, tables, and schematics, and draw inferences from them; (b) represent mathematical information symbolically, visually, numerically, and verbally; (c) use arithmetical, algebraic, geometric and statistical methods to solve problems; (d) estimate and check answers to mathematical problems in order to determine reasonableness, identify alternatives, and select optimal results; (e) recognize that mathematical and statistical methods have limits. (Sons, 1996, Part II, para. 6)

4. The ability to access, use, interpret and communicate mathematical information and ideas in order to engage in and manage the mathematical demands of a range of situations in adult life. To this end, numeracy involves managing a situation or solving a problem in a real context, by responding to mathematical content/information/ideas represented in multiple ways. (Organisation for Economic Co-operation and Development [OECD], 2012b, p. 20)
Apart from these definitions, researchers who have made significant contributions in this area have provided key elements of what comprises QLR. Steen (2001), for example, lists these elements as (1) confidence with Mathematics, (2) cultural appreciation, (3) interpreting data, (4) logical thinking, (5) making decisions, (6) Mathematics in context, (7) number sense, (8) practical skills, (9) prerequisite knowledge, and (10) symbol sense. He further identified the key skills of quantitative literacy as (1) arithmetic, (2) data, (3) statistics, and (4) reasoning. Grawe (2011) adds that when students (1) acquire a certain command of mathematical concepts, (2) are able to apply them in context, (3) communicate them, and (4) recognize the limitations of the data presented, they are most likely to be successful with QLR problems. Moreover, Roohr et al. (2014) conducted a similar review of existing frameworks and came up with the following definition.
Quantitative literacy is the comprehension of mathematical information in everyday life, and the ability to detect and solve mathematics problems in authentic contexts across a variety of mathematical content areas. Solving these applied mathematical problems includes (a) interpreting information, (b) strategically evaluating, inferring, and reasoning, (c) capturing relationships between variables by mapping, interpreting, and modelling, (d) manipulating mathematical expressions and computing quantities, and (e) communicating these ideas in various forms (p. 14).
One striking similarity among these definitions is the emphasis on how disciplines learned in formal schooling, such as Mathematics and Statistics, are integrated into problems with varied and authentic contexts. While it is true that content-specific skills learned from these courses play a role in successfully answering the problems in a QLR assessment, the students' ability to apply them in making decisions is of prime importance. It is also emphasized that QLR encompasses multiple skills rather than proficiency in a single course.
While the terms QL, QR, and numeracy are related, their subtle differences have also been pointed out. Vacher (2014) proposed a vocabulary matrix corresponding to these terms. Accordingly, numeracy is centered on numbers and math skills. Both numeracy and QL describe the ability to read, write, and understand quantitative information such as graphs, tables, mathematical relations, and descriptive statistics. Coherent and logical thinking about this information is covered by both QL and QR, but all three terms point to a disposition to engage with and use one's mathematical and statistical skills to make decisions. Karaali et al. (2016), in their definitional review, showed a hierarchical relation among these terms. At the foundational level is numeracy, where arithmetic skills and ease with numbers are emphasized; then comes QL, which concerns fluency in comprehending quantitative information. At the top of the hierarchy is QR, a higher-order skill that allows one to be critical of quantitative arguments.
In consideration of all the frameworks and definitions reviewed, and adopting the definition of Gaze et al. (2014), quantitative literacy and reasoning is synthesized as "the skill set necessary to process quantitative information and the capacity to critique, reflect upon, and apply quantitative information in making decisions" (p. 3). This definition shall be adopted in this study since it captures the dimensions of both QL and QR, hence QLR.

QLR, Mathematics, and Statistics
QLR is distinct from Mathematics and Statistics (Steen, 2001) as the former is more than formulas and equations. Hughes-Hallett (2001) further adds that "mathematics focuses on climbing the ladder of abstraction while quantitative literacy clings to context" (p.94) and that "mathematics is about general principles that can be applied in a range of contexts; quantitative literacy is about seeing every context through a quantitative lens" (p.94). QLR serves as a bridge between Mathematics and the real world (Manaster, 2001).
These comparisons suggest that while QLR does not necessarily equate to skills acquired in the traditional Mathematics classroom, the two are not entirely exclusive. Success in QLR assessments still requires a base knowledge of Mathematics. A good QLR course is one that focuses on critical thinking and problem-solving supported by mathematical tools (Briggs, 2021). A poor mathematical foundation may also pose a threat to the development of students' QLR.
For example, in a study conducted by Wang and Wilder (2015), teachers teaching a QR-infused course reported that students had difficulty with high school, if not lower-level, mathematics operations.
It is also worth mentioning that Mathematics and Statistics, although related, are treated here as two separate variables. A plethora of literature has established that Mathematics and Statistics are two distinct disciplines, as opposed to the idea that Statistics is a branch or even a type of Mathematics. Moore (1992) argued that Statistics is a mathematical science but not a branch of Mathematics. Gal and Garfield (1997) further added that mathematical concepts and procedures are part of many possible solutions to statistical problems. Ben-Zvi and Garfield (2004) finally provided a model showing that Mathematics knowledge is, among many factors, key to statistical literacy. It is for these reasons that Mathematics ability was treated separately from Statistics ability.
With all the above literature explored, this article establishes whether students with more exposure to advanced mathematics courses such as Pre-Calculus and Calculus, as is the case for graduates of the Philippine Senior High School STEM strand, have different QLR performance. Moreover, this study investigates how achievement in General Mathematics, and in Statistics and Probability predicts students' QLR. As mentioned before, QLR problems require some level of Mathematics and Statistics proficiency. Figure 1 illustrates the interplay between the variables involved in the study. Specifically, it suggests a relationship that treats QLR ability as the dependent variable affected by independent variables reflecting students' varying SHS academic backgrounds. These backgrounds are categorized as (1) SHS strand and (2) SHS General Mathematics, and Statistics and Probability grades.

Research Design
This research is purely quantitative-correlational. It assesses the QLR of freshman students enrolled in STEM programs and then relates it to several variables, namely (1) SHS strand, and (2) SHS General Mathematics, and Statistics and Probability grades. This establishes the magnitude of the contribution of basic education to the development of students' QLR.

Sample and Data Collection
The study was conducted at a university in the Philippines. During the Academic Year 2019-2020, eight hundred thirty-five senior high school graduates were admitted to pursue a STEM program in the university. These STEM programs include engineering, teacher education major in Mathematics and the Sciences, Statistics, Environmental Science, Nursing, Veterinary Medicine, Agriculture, and Forestry. Moreover, of this total number, 55% are graduates of a Senior High School non-STEM strand.
A sample size of 263 was computed through the online calculator at surveysystem.com (Creative Research Systems, 2018), using a 95% confidence level and a 5% margin of error. Stratified random sampling was then used to select the sample and ensure that the proportion of STEM to non-STEM students was maintained. After data cleaning, however, eight outliers and influential observations were detected and removed. In total, 255 respondents were included in the study, of whom 115 were SHS STEM graduates.
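The reported sample size of 263 is consistent with the standard Cochran formula with a finite population correction, which online calculators of this kind typically implement. A minimal sketch (the function name and defaults are illustrative, not taken from the study):

```python
import math

def cochran_sample_size(population, z=1.96, p=0.5, margin=0.05):
    """Cochran sample size with finite population correction.

    z: z-score for the confidence level (1.96 for 95%)
    p: assumed proportion (0.5 maximizes the required size)
    margin: margin of error (0.05 for 5%)
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)          # finite population correction
    return round(n)

# With the study's figures: 835 admitted students, 95% confidence, 5% margin
print(cochran_sample_size(835))  # 263
```

With the population of 835 admitted students, the formula reproduces the 263 reported above; for very large populations it approaches the familiar 384.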
Professor Eric Gaze of Bowdoin College led the project that produced an instrument measuring QLR, and he granted permission to adapt this instrument for this study. The Quantitative Literacy and Reasoning Assessment (QLRA) is a 20-item multiple-choice test that has been scrutinized for validity. Problem contexts were adjusted to make the instrument appropriate for Filipino students. Moreover, a pilot study was conducted to confirm that the instrument met reliability requirements before it was administered to the actual respondents of the study. The Kuder-Richardson (KR-20) coefficient was used to assess the internal consistency of the pilot scores; the obtained coefficient (KR-20 = 0.84) indicates that the scores are reliable (Glen, 2020). The paper-and-pencil test was administered to the respondents within one hour.

Data Analysis
Both descriptive and inferential techniques were used in this study. The descriptive part involves the computation of the mean QLR scores and the SHS mean grades. Table 1 provides the scale for describing students' SHS achievement in General Mathematics, and Statistics and Probability as reflected in their average grades. The inferential part involves the comparison of QLR scores between two groups, the STEM and non-STEM groups. To do this, the t-test for two independent samples was employed. Before proceeding to this test, the QLR scores were log-transformed since the original QLR distribution is skewed (non-normal). Further, to test the relationship between SHS grades and QLR scores, multiple linear regression analysis was employed. To ensure the validity of inferences derived from the model, the data were freed from outliers and influential observations. Important regression assumptions, namely (1) multivariate normality, (2) absence of multicollinearity, and (3) homoscedasticity, were tested. Multivariate normality and homoscedasticity were checked using a probability plot (P-P plot) and a residual scatter plot, as presented in Figures 2 and 3 respectively. The linear behaviour of the P-P plot supports multivariate normality, while the absence of any systematic pattern in the standardized residuals across the QLR scale favours the homoscedasticity assumption (Williams et al., 2013). Multicollinearity, on the other hand, was tested through the Variance Inflation Factor (VIF) and Tolerance (T). These tests provided statistics (VIF < 10, T > 0.1) that consistently indicate that the data are free from severe multicollinearity issues (Alexopoulos, 2010; Sarstedt & Mooi, 2014).
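For a two-predictor model such as the one used here, the VIF of either predictor reduces to 1/(1 - r²), where r is the correlation between the two predictors, and tolerance is simply 1/VIF. A sketch with illustrative data (not the study's grades):

```python
import numpy as np

def vif_two_predictors(x1, x2):
    """VIF for either predictor in a two-predictor model: 1 / (1 - r^2)."""
    r = np.corrcoef(x1, x2)[0, 1]
    return 1.0 / (1.0 - r ** 2)

# Uncorrelated predictors give VIF = 1, the no-multicollinearity baseline;
# the conventional cut-offs are VIF < 10 and tolerance (1/VIF) > 0.1.
vif = vif_two_predictors([1.0, 2.0, 3.0, 4.0], [1.0, -1.0, -1.0, 1.0])
print(vif, 1 / vif)
```

As the two grade predictors become more correlated, r² rises, the VIF grows, and the tolerance shrinks towards the 0.1 cut-off.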

The QLRA Scores of the Students
To describe the performance of students in the QLR assessment, the overall and item-by-item results are presented first, followed by a graph showing the distribution of students' overall scores. As presented in Table 2, the mean score of 5.65 out of 20 (28.25%) reflects a dismal performance on the QLRA. The success rate (proportion of correct responses) on the majority of items (16 items) was only about 40% or lower. The remaining three items were correctly answered by not more than 62% of the students.

Figure 4. Distribution of Students' QLRA Scores
Consistent with the relatively low success rate on the majority of the QLRA items, Figure 4 displays a positively skewed distribution of students' overall scores. This distribution further confirms that more students scored towards the lower end of the QLRA spectrum.

Table 3 presents the QLR score difference between students in the two SHS strands. The mean difference of 6.42% suggests that students with an SHS STEM background scored higher than their non-STEM counterparts, and the inferential test conducted reveals that this observed difference is statistically significant. The standard deviations also reveal that the QLR scores of the STEM group are generally more spread about the mean than those of the non-STEM group. Hedges' g, an unbiased estimate of effect size, was further computed to measure the magnitude of the mean difference between the two groups (Cumming, 2011). The obtained value of 0.56 indicates a "medium" effect size based on the conventions provided by Cohen (1988); that is, the difference between the two groups is of medium practical importance.

Presented in Table 4 is the academic achievement of the respondents in General Mathematics and in Statistics and Probability during Senior High School. The data show that these achievements are "outstanding" based on the grade criteria set by the Department of Education in the Philippines. While this holds for both strands, slightly higher mean grades in the two courses are observed for the STEM students. This seems reasonable because STEM students are expected to perform better in STEM-related courses. Moreover, between these two quantitative courses, STEM students marked a higher mean grade in Statistics and Probability than in General Mathematics.
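Hedges' g is Cohen's d computed with the pooled standard deviation and then multiplied by a small-sample bias correction. A sketch with toy data (not the study's scores):

```python
import numpy as np

def hedges_g(group_a, group_b):
    """Hedges' g: bias-corrected standardized mean difference."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd          # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # small-sample bias correction
    return correction * d

# Toy example: the means differ by one raw unit against a pooled SD of sqrt(2)
print(hedges_g([2, 4], [1, 3]))
```

Under Cohen's (1988) conventions, values around 0.2, 0.5, and 0.8 are read as small, medium, and large, which is how the study's 0.56 is classified as "medium".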

Relationship of Students' SHS Grades in General Mathematics, and in Statistics and Probability to Their QLR
Multiple linear regression analysis was run to investigate the predictive power of the students' grades in the two courses on their QLR. Results reveal that achievement in Statistics and Probability serves as a significant predictor of QLR, as evidenced by the significant coefficients shown in Table 4. The General Mathematics grade, on the other hand, does not appear to be empirically related to the QLR scores of the students. The significant unstandardized coefficient further means that, on average, when achievement in Statistics and Probability increases by one unit, QLR increases by 0.66. The coefficient of determination, however, indicates that achievement in these courses accounts for only 10.30% of the observed variability in the students' QLR scores. In other words, the remaining 89.70% of QLR variability is explained by other factors.
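The coefficient of determination reported above measures the share of QLR-score variance that the two grade predictors explain jointly. A minimal ordinary-least-squares sketch on illustrative data (not the study's dataset) showing how the coefficients and R² are obtained:

```python
import numpy as np

def ols_r_squared(X, y):
    """Fit y = b0 + X @ b by least squares and return (coefficients, R^2)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    A = np.column_stack([np.ones(len(y)), X])     # add an intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ beta
    ss_res = (residuals ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return beta, 1 - ss_res / ss_tot

# Illustrative grades (General Mathematics, Statistics and Probability) and
# hypothetical QLR scores; beta[2] would be the Statistics coefficient.
X = [[90, 88], [92, 91], [88, 95], [91, 93], [89, 90], [93, 94]]
y = [4, 6, 8, 7, 5, 9]
beta, r2 = ols_r_squared(X, y)
```

A low R², as in the study's 10.30%, means the fitted line leaves most of the score variability in the residuals, even when a coefficient is statistically significant.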

Discussion
The results presented in Table 2 represent by far the lowest student achievement when compared with the results of other studies that used a similar instrument. These international studies, summarized in Table 5, serve as a point of comparison for the current study; they reflect the quantitative literacy and reasoning of students from institutions outside the country. The majority of these results do not deviate greatly from the results of the present study. For instance, consecutive surveys conducted by Gaze et al. (2014) revealed that students successfully answered barely half of the test items. An even lower result can be observed in the study conducted by the Mathematical Reasoning Assessment Committee (n.d.) of California, as indicated by the students' mean percentage score of only 32.55%. Students' QLR performance is explored more deeply throughout the remainder of this paper.

Among the entries of Table 5, the Mathematical Reasoning Assessment Committee study reported a mean percentage score of 32.55%, while Study 4, Grawe and O'Connel (2018), which used the QLRA as a pre-course diagnostic tool for students' general academic performance at Carleton College (240 students), reported a mean percentage score of 79.10%.
The comparison of the QLR scores between students with and without STEM-specific courses in the SHS reveals that the former marked a significantly higher QLR score. The first obvious reason that may account for this disparity is the difference in curricula. Higher mathematics courses such as Pre-Calculus and Basic Calculus are among the learning opportunities absent for non-STEM SHS students. While the topics in these courses are not direct prerequisites for answering the QLRA items, these courses still integrate the application and mastery of other basic mathematical knowledge and skills such as arithmetic, number sense, basic algebra, geometry (measurement), and proportional reasoning, all of which are essential in answering the QLRA. These courses are sources of knowledge important to understanding real-world phenomena (Manaster, 2001). Moreover, the additional experience of interpreting graphs and applying mathematical concepts to contextualized problems is also a tenet of QLR.
Another factor related to this result is teachers' varying pedagogical approaches. Different approaches applied in different learning settings may explain the observed differences in students' performance. In the context where this study was conducted, students under the SHS STEM strand are grouped into a single section. The QL experiences of students in this group may have enriched their QLR skills more than if they had belonged to the non-STEM groups. A study by Rocconi et al. (2013) supports this possibility: it revealed that students in STEM fields are more engaged in activities related to quantitative literacy than students in non-STEM fields.
Despite the plausibility of these arguments, it must be made clear that this article does not provide empirical evidence for these claims but presents them as hypotheses for further study. Furthermore, curriculum differences and students' varying behaviours due to pedagogical differences may only explain the observed difference in QLR scores; we are far from concluding that learning "more maths" results in better QLR. It is in fact reiterated that both groups marked rather low QLR scores. This observation agrees with the results of a study conducted by Elrod and Park (2020): after comparing the quantitative reasoning skills of STEM and non-STEM students, it was found that although the STEM students had significantly higher QLRA scores, students overall had relatively low QLR skills.
The academic achievement of students in the courses they had in common (General Mathematics, and Statistics and Probability) was examined because one goal of this paper is to explore how these courses may have contributed to the students' QLR. It was shown earlier that, between these courses, achievement in Statistics and Probability appears to be the only significant predictor. One major difference between the two courses that could account for this result is that the SHS Statistics and Probability course is geared towards a pedagogical approach that is more context-laden. In fact, Cobb and Moore (1997) agree that statistics sets itself apart from mathematics and other mathematical sciences because in statistics students work with data, and data are not just numbers but numbers with context. Students likewise regard Statistics as an opportunity to improve one's QLR because it has more practical applications than mathematics (Jordan & Haines, 2006). The characteristic of Statistics and Probability as being entwined with context is also highly observable in the QLRA items, hence the observed relationship. For instance, statistics and probability topics such as graphs, measures of central tendency, variability, or correlation normally come with concrete and authentic examples; sometimes these topics are taught directly through such examples alone. Compare this to a general mathematics topic such as operations and evaluation of functions, which is normally presented with higher abstraction and more mathematical procedures. Such decontextualized mathematics has left students with insufficient quantitative skills (Steen, 2001). Stith (2001) further argues that the absence of context in most mathematics courses is a problem of pedagogy.
He adds that most teachers are rather unaware of the QLR movement (QLR research) and that adding context is time-consuming when there is much content to be covered. Similarly, Hughes-Hallett (2001) reports that mathematics teachers complain about students' inability to apply what they have learned in other contexts. Recently, though, a few educators have started teaching mathematics courses such as Algebra using quantitative reasoning as an approach; contextualized problems proved powerful for students learning algebraic manipulations (Piercey, 2017).
So far, we have shown that the grade in Statistics and Probability is a better predictor than the General Mathematics grade. However, the low coefficient of determination (R² = 10.30%) suggests that while the independent variable (grade) correlates with the dependent variable (QLR scores), it does not explain much of the variability in the latter. This is consistent with the findings of Clinkenbeard (2021), who found that among several variables used as predictors of students' grades in a college QL course, only the high school Grade Point Average (GPA) emerged as a significant predictor, and it accounted for only about 13% of the variability.
This actually makes sense because QLR, as presented earlier, is multi-faceted and requires multi-step processes. The "outstanding" academic achievement of students, as reflected in their reported general averages in General Mathematics, and Statistics and Probability, may only account for one facet (command of mathematical concepts) mentioned by Grawe (2011), or for certain elements such as "confidence with mathematics" and "prerequisite knowledge" provided by Steen (2001). In other words, students may have been equipped with the right mathematical and statistical tools but failed to apply and communicate them in relation to the problems in the QLRA.
We illustrate further by investigating the answer choices in the three QLRA items in which students were least successful. Item descriptions are presented first, followed by the correct answer (CA) and then the most frequent distractor (MFD), that is, the incorrect answer chosen by most of the respondents. A possible student reasoning (PSR) for each option is also illustrated.

Figure 5. Item 16 of the QLRA Instrument, the Correct Answer, and the Most Frequent Item Distractor
The first item concerns the interpretation of the graph presented in Figure 5. The graph shows the home region of 500 students entering a hypothetical university, and respondents are asked to compare the number of students from two regions (Region 3 and International Students). Only about 8% of the respondents answered correctly. The majority of respondents were drawn to the option with a rather simplistic interpretation of the problem. Both the CA and the MFD stem from numerically or arithmetically sound operations (addition and multiplication). The problem asks for a comparison of the "number" of enrollees given the "percentage." Ideally, students should not compare the percentages directly but use them to compute the actual number of students before comparing.
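The percentage-to-count conversion the item demands is a one-step computation. The sketch below uses the 4% and 6% figures mentioned in the discussion of this item; which percentage belongs to which region is an assumption made here for illustration only:

```python
# Converting percentages to head counts before comparing, as Item 16 requires.
# The 4% and 6% shares are taken from the discussion of this item; their
# assignment to the two regions below is an assumption for illustration.
total_students = 500

region_3 = 0.04 * total_students        # 20 students
international = 0.06 * total_students   # 30 students

# Comparing raw percentages (6% - 4% = 2 percentage points) answers a
# different question than comparing actual head counts.
difference_in_students = international - region_3
print(difference_in_students)  # 10.0
```

With 500 students, the 2-percentage-point gap corresponds to 10 students, which is the kind of quantity the item actually asks about.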
Another item, Item 2 of the QLRA, distracted about 62% of the respondents with an erroneous answer; this item on percentage increase is discussed further below. Item 10 of the QLRA is a problem on correctly reading a gauge (Figure 6) showing the power output of a motor. To read the gauge correctly, students need to account for the information given in the problem that the power ranges from zero to one-half of a horsepower, as shown by the semicircular visual representation with the label "1/2 hp" at one end of the gauge. Yet about half of the respondents seemingly failed to account for this information and simply stated that the power output is 3 parts out of 8, or 3/8, since the gauge is divided into 8 equal parts and the pointer stopped on the third division.
We observe that these three items involve fairly basic mathematical concepts, yet the majority of the students were unsuccessful in identifying the correct answer. We note that the most frequent distractors appear convincing to the respondents because they are "numerically sound." It seems, however, that students failed in their reasoning and in recognizing the limitations of the data involved. For example, in Item 16, the most frequent distractor compares two values, 4% and 6%, the latter being 2 percentage points more than the former. This seems sound but does not answer the problem, since the problem requires students to compare the "number of enrollees" and not simply the corresponding "percentages." Similarly, students answered Item 2 by simply following the sequence in which the numerical information was introduced in the problem (2006 record is 75 298 328 → increase is 5.2% → how much is the 2004 record?). A student with good reasoning skills, even after performing this process, would have realized that the chosen answer is incorrect, since adding 5.2% of the supposed 2004 record (71 382 815 kg) to itself does not yield the 2006 record (75 298 328 kg). We recall that these skills, reasoning and identifying the limitations of data, are important QLR skills mentioned by Steen (2001) and Grawe (2011). These authors add that reasoning is a more fragile skill than understanding mathematical algorithms.
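The consistency check described for Item 2 can be carried out directly. The figures below are those quoted in the text; the reconstruction of how the distractor was produced (subtracting 5.2% of the 2006 record from itself) is our inference, not something stated in the instrument:

```python
record_2006 = 75_298_328       # kg, as quoted in the item
increase = 0.052               # the stated 5.2% increase from 2004 to 2006
distractor_2004 = 71_382_815   # kg, the most frequent (incorrect) answer

# Inferred wrong path: subtracting 5.2% of the 2006 record from itself,
# i.e., using the wrong base for the percentage. It reproduces the distractor.
assumed_wrong_path = round(record_2006 * (1 - increase))
print(assumed_wrong_path)      # 71382815 -- matches the distractor

# The check a reasoning student could perform: growing the chosen 2004
# figure by 5.2% should recover the 2006 record, but it does not.
check = round(distractor_2004 * (1 + increase))
print(check)                   # 75094721, not 75298328
```

The failed round-trip is exactly the self-check the text argues a quantitatively literate student would have performed before committing to the distractor.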
The most frequent distractor in Item 10 also demonstrates what other skills might be lacking among students. In this particular item, the students were asked to read a gauge along a power scale of 0 to 1/2 of a horsepower (not 0 to 1 horsepower). Their chosen answer suggests a complete disregard of the information that the scale goes up to only half of a horsepower. Further, this answer may reflect the students' fixation on problems encountered during their previous learning experiences with fractions. This inability to apply ideas and skills learned to new contexts reflects a lack of what Steen (2001) calls "decompartmentalization." Other authors, such as Jordan and Haines (2006), describe it as an inability to "transfer" knowledge to multiple and new contexts.
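The gauge reading itself is a single rescaling step. The fractions below follow the item as described above (pointer at the third of eight divisions on a 0 to 1/2 hp scale):

```python
from fractions import Fraction

full_scale = Fraction(1, 2)        # the gauge runs from 0 to 1/2 hp
pointer_position = Fraction(3, 8)  # pointer at the 3rd of 8 equal divisions

# Correct reading: the pointer's fraction of the scale's full value,
# not of 1 hp.
correct_hp = pointer_position * full_scale
print(correct_hp)     # 3/16

# The distractor simply reports the pointer's fraction of the dial,
# ignoring the 1/2 hp full-scale value.
distractor_hp = pointer_position
print(distractor_hp)  # 3/8
```

The distractor thus overstates the power output by a factor of two, precisely because the "1/2 hp" label is never applied.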

Conclusion
The quantitative literacy and reasoning of students as assessed by the QLRA appears dismal, ranking lowest when compared to the results of other studies that used the same instrument. The significantly higher QLRA scores of SHS STEM students over their non-STEM counterparts partially reveal that more experience in higher or advanced Mathematics courses fosters stronger quantitative skills. However, both groups still lag overall in their QLR.
Moreover, between SHS General Mathematics and SHS Statistics and Probability, students' achievement in the latter is the only significant predictor. This may be because Statistics and Probability is tied to context more closely than the topics included under the General Mathematics course. Yet again, achievement in this course does not completely account for the QLR performance of students. This may be so because QLR is multi-faceted and involves multi-step processes. Further investigation of the respondents' answer choices on the least correctly answered QLRA items led to a realization that knowledge of Mathematics and/or Statistics does not necessarily make one quantitatively literate; the ability to apply and communicate that knowledge in new contexts does.

Recommendations
Taking into account the major results of this paper, educators are encouraged to join the Quantitative Literacy Movement by emphasizing the integration of relevant and multiple contexts into traditional instruction. Educators may also innovate and apply pedagogical strategies designed to develop students' QLR. For instance, by connecting Mathematics and Statistics to real-life applications or to contexts that are sensitive to cultural differences and learning styles, students realize the relevance of Mathematics and Statistics courses to everyday life, to their future careers, and to society in general. Activities emphasizing students' reasoning and critical thinking help produce quantitatively literate citizens. Finally, administrators may consider curriculum reviews and revisions to embed the principles of QLR.
Future studies may also invest in observing STEM and non-STEM settings in senior high school. One may focus on the pedagogical and curricular differences across these settings to further shed light on students' different levels of QLR upon entering college.

Limitations
One limitation of this paper is the absence of actual observation of senior high schools to better support the claims on the differences in QLR scores between STEM and non-STEM SHS graduates. Another is the absence of an in-depth investigation of students' thinking processes, including confirmation of their reasoning on why they chose a particular answer for a QLRA item.