Political skill is a crucial leadership competence by which leaders influence followers and other important stakeholders (Blickle, Meurs, Wihler, Ewen, & Peiseler, 2014; Wang & McChamp, 2019). The relevance of political skill to leadership holds across time and situational differences (Frieder, Ferris, Perrewé, Wihler, & Brooks, 2019): in informal leadership roles (Shaughnessy, Treadway, Breland, & Perrewé, 2016), within formal organisations (Buch, Thompson, & Kuvaas, 2016), and in statesmanship (Kifordu, 2011). It is also a crucial skill in teacher leadership (Brosky, 2011; Hargreaves, 2011). Hillygus (2005) has demonstrated an enduring association between higher education and the political behaviour of key actors in the education system. Indeed, the organisational domain of professional self-efficacy, with reference to teacher-leaders, refers "to the beliefs about one's abilities to influence social and political forces within the organisation," which "is especially important" (Cherniss, 1993, p. 142), suggesting that political skill is a necessary component of teacher-leaders' professional self-efficacy. Political skill is therefore a critical social competence that teacher-leaders require to deal successfully with all stakeholders in a school system (Brosky, 2011; Ewen et al., 2013; Konaklı, 2014; Konaklı, 2016; Nordquist & Grigsby, 2011).
Triolo, Pozehl, and Mahaffey (1997) underscore the foregoing point when they define educational leadership as the capacity to effect marked changes in the behaviours of the relevant interlocutors, a capacity that includes, among other things, political awareness. Thus, teachers who wish to be leaders must be politically sensitive with regard to the subterranean customs, idiosyncrasies, norms, and power structures that significantly determine the success or failure of most educational initiatives. Although teacher-leaders work within the constraints of formal educational institutions, they often rely on informal social levers of influence to get things done. They therefore need to nurture and employ political astuteness in order to take advantage of the tangible and intangible resources others possess when handling the complex challenges associated with teacher leadership (McAllister, Ellen, & Ferris, 2018).
However, the idea of leveraging the strategic potential of teacher-leaders' political skill to drive school improvement is still emergent. The lack of a scale specifically designed to gauge teacher-leaders' political sensitivity reflects the emergent state of the literature on teacher-leadership measures. In fact, only a few researchers (e.g., Brosky, 2011; Konaklı, 2014; Konaklı, 2016; Taliadorou & Pashiardis, 2015) have investigated political skill within the context of education, and these studies rely on the generic Political Skill Inventory (PSI) developed and validated by Ferris and his colleagues (Ferris et al., 2005b). Thus, the need for a domain-anchored measure of political skill in higher education remains unaddressed. This need provides the rationale for this study.
Problem Statement
Three issues inform this study. First, researchers are increasingly calling for construct specificity in management and applied psychology research (Uengoer, Lucke, & Lachnit, 2018). The political skill construct has not been adequately grounded in the specificities peculiar to teachers as leaders; there is, therefore, a need to explicate political skill within the context of teacher leadership. Second, there are different definitions of the sampling domain of the political skill construct (e.g., compare Chen and Lin (2014), Doldor (2017), and Ferris et al. (2005b)), and at present there is a marked absence of a political skill measure contextualised to the teacher-leadership domain. Third, the PSI is not culturally invariant. For example, Lvina et al. (2012) show that the PSI suffers validity problems due to cultural peculiarities, which attenuates its adaptability in other climes, such as Africa's. These three issues justify the need for a new political skill scale tailored to gauging teacher-leaders' political savvy.
Research Questions
This study addresses the foregoing research problems by developing a brief scale for measuring teacher-leaders' political skill in the context of higher technical education in a developing country. Thus, the study seeks to answer the question: Can a short, context-specific scale be developed for measuring the political skill of teacher-leaders operating in Nigerian Polytechnics?
Purpose
Overall, this study seeks to develop and validate a parsimonious political skill scale for measuring the political sensitivity of teacher-leaders from Nigerian Polytechnics.
Teacher-Leader Political Skill
Many definitions of teacher leadership exist. One common thread that runs through most definitions is that teacher-leadership is a shared burden carried in varying degrees of intensity by all teachers throughout the institutional hierarchies. Thus, every teacher anywhere in the institutional hierarchy is potentially a leader (Smulyan, 2016). To this end, teacher leadership could be defined as a series of interconnected and interdependent decisions and actions taken by the teacher, working alone or in teams, directed at changing the mindsets and worldviews of students, colleagues, parents, and other school and community interest groups to improve teaching and learning (York-Barr & Duke, 2004).
Wenner and Campbell (2018) identify two classes of teacher leaders: the thick and the thin. The former are teacher-leaders whose professional identities are deeply rooted in their personalities, while the latter see themselves as teacher-leaders only occasionally. This study is about thick teacher-leaders, whose personality dispositions (such as political sensitivity) play crucial roles as supportive grids for the performance of the relevant teacher-leadership mandates. One such mandate is the teacher-leaders' role as reform champions or change agents (Von Esch, 2018) who work both within the school system (Cooper et al., 2016) and outside it (Jacobs, Beck, & Crowell, 2014) to bring about beneficial improvement in the education system. As reform champions and change agents, teacher-leaders play a multiplicity of roles as mentors (Clarke, Killeavy, & Ferris, 2015), servant leaders (Nichols, 2011), team players (Koeslag-Kreunen, Van der Klink, Van den Bossche, & Gijselaers, 2018), team leaders (Honingh & van Genugten, 2017), curriculum reformers (Zhang & Henderson, 2018), and student character builders (Ningsih & Wijayanti, 2018). Teacher-leaders perform these duties with varying degrees of involvement while relying mainly on their psychological resources (Lee & Nie, 2017; Lyness, Lurie, Ward, Mooney, & Lambert, 2013). Political skill is one psychological resource teacher-leaders rarely fail to employ in doing what they do.
Yukl's (2013) conception of corporate leadership is used here to highlight the distinctiveness of teacher leadership. Corporate leadership is performed in a highly structured and hierarchical environment, while teacher leadership takes place in an amorphous network comprising students, faculty, and community stakeholders. Corporate leaders have subordinates they direct and control; teacher-leaders have, as followers, people who consider them guides irrespective of the power, or lack of it, that the teacher-leaders possess. Interactions between the leader and the led in corporate climes are often formally dyadic, with the leader exercising a dominant and often domineering role; interactions between teacher-leaders and their constituents are random and extemporaneous, with teacher-leaders playing significant but facilitatory and advocacy roles. Formal authority is the primary basis of the powers corporate leaders wield, while teacher-leaders often rely on powers of suasion rather than any formal leverage.
The foregoing differences underscore the divergence in behaviours between teacher-leaders and corporate leaders. Teacher-leaders are expected to exhibit a more collaborative, more engaging, and more personal approach in dealing with stakeholders than corporate leaders (Sinha & Hanuscin, 2017). This behavioural expectation arises because teacher-leaders do not wield formal power and therefore may not issue commands and expect obedience (Timor, 2017). Thus, they invariably rely on relational structures and moral suasion to generate support and cooperation from others (Dal Bó & Dal Bó, 2014; Johnson, Griffith, & Buckley, 2016). While formal positions have been created for and occupied by teacher-leaders (Williams, 2015), their influence does not originate in the offices they occupy but emanates from the abilities they possess, which differ from individual to individual. Since the teacher-leaders' job is boundary-spanning (cutting across classroom and campus into community spaces where formal authority is functionally alien), they logically need more than formal authority to succeed across these indeterminate divides. Political skill is a core skill needed to operate successfully in boundary-spanning change situations (Balogun, Gleadle, Hailey, & Willmott, 2005). Teacher-leaders do what they do by the sheer power of their character. Thus, teacher leadership is exercised not through hierarchy and demands for compliance; instead, teacher-leaders encourage active dialogue and collaboration with all stakeholders (e.g., colleagues, administrators, students, parents, and community leaders), infrequently through formal meetings and most frequently through informal, self-initiated and self-driven interactions. Any measure of teacher-leaders' skill set must therefore take their personal and contextual variables into consideration. However, the benchmark measure of political skill (i.e., the PSI) lacks situational variance. Hence the need to modify it to reflect the contextual peculiarities of specific climes, especially the African milieu.
Methodology
The researchers used a combination of cognitive testing (Koskey, 2016), behaviour coding (Kirchner, Olson, & Smyth, 2017), respondent debriefing (Nichols & Childs, 2009) and expert review (Olson, 2010) methods in pretesting the PSS.
Cognitive Testing: Cognitive testing, comprising verbal probing and thinking aloud, is a method of ascertaining whether a questionnaire is good enough to yield the desired information (Koskey, 2016). In this study, scripted verbal probing was used as a cognitive evaluation method to test respondents' actual engagement with the questionnaire. Independent interviewers conducted the cognitive tests using scripted probes so that every interviewer asked the same set of questions in the same way of every respondent, thereby standardising the cognitive testing procedure.
Behaviour Coding: Behaviour coding involves systematically probing a questionnaire to understand the likely problems its actual administration will occasion, especially with regard to the perceptions and behaviours of the respondents towards the questionnaire (Kirchner, Olson, & Smyth, 2017). In this study, the interviewers captured or coded only relevant respondent behaviours during the first level verbalisation in a cognitive interview.
Respondent Debriefing: According to Nichols and Childs (2009, p. 117), expert respondent debriefing is "the use of expansive probing in the debriefing of actual respondents in a field production survey as a means of assessing the accuracy of an instrument." The debriefings happened at the end of the cognitive evaluation and behaviour coding, when the interviewers subjected the respondents to serendipitous, expansive probes about their held notions of political skill and the frequency and nuances of its usage in their workplaces.
Expert Review: It is an established practice among researchers to form teams of experts familiar with the research domain of interest in order to review a proposed instrument and assess its reliability (LeBreton & Senter, 2008). One objective of this method is to establish consensus among the experts concerning the fidelity of the proposed instrument.
Selection of Respondents, Interviewers, and Expert Reviewers
A sample of 36 respondents was selected through snowballing (Marcus, Weigelt, Hergert, Gurt, & Gelléri, 2017) from the ranks of programme coordinators in the nine polytechnics scattered across the northeast geopolitical zone of Nigeria. The respondents comprised young to middle-aged adults (M = 38.50 years, SD = 5.94), with the male respondents slightly older. The sample mean for tenure (used as a proxy for experience) was 10.06 years, with a standard deviation of 5.41 years. In addition to the respondents, one interviewer (a Dean of School) was selected from each of the nine study polytechnics; each dean then self-selected four programme coordinators from their respective schools. The deans were purposefully selected using the "maximum variation sampling strategy" (Patton, 2002, p. 234) in order to reflect the disciplinary diversity characteristic of polytechnics.
Sample sizes from 20 (Blair & Conrad, 2011) to 30 (Perneger, Courvoisier, Hudelson, & Gayet-Ageron, 2015) are said to be sufficient in questionnaire pretests involving any standard instrument (Blair & Srinath, 2008). The age of the respondents straddles the two groups of young adults and the middle-aged, an age bracket that research (e.g., Priyadarshi & Premchandran, 2019) suggests needs to be politically suave. Besides, the sample has a mean tenure of around ten years, indicating that the respondents were qualified to undergo the complexity of a cognitive test; an individual's political skill matures with age and is shaped by work context (Oerder, Blickle, & Summers, 2014). Finally, a panel of five professors of the social and management sciences, selected from the five universities in the Northeast, was constituted to appraise the 15-item PSS as a measure of the political skill of teacher-leaders, using a categorical 5-point rating scale anchored at 0 = No agreement and 1 = Perfect agreement.
Sources of Initial Item Pool
In this study, six published self-reports served as sources for the initial item-pool used in the development of the study's political skill measure.
Political Skill Inventory (PSI)
The PSI is an 18-item measure with four factors (Ferris, Davidson, & Perrewé, 2005a; Ferris et al., 2007): social astuteness (4 items), interpersonal influence (4 items), networking ability (6 items), and apparent sincerity (4 items). The measure utilised a 7-point Likert-type scale ranging from 1 = Strongly disagree to 7 = Strongly agree. Jacobson and Viswesvaran (2017) recently validated the PSI and confirmed its excellent psychometrics.
Politics Subscale of the Organisational Socialisation Questionnaire (OSQ)
The politics subscale of the OSQ contains six items, which the authors validated over four years (Chao, O'Leary-Kelly, Wolf, Klein, & Gardner, 1994). The subscale is reliable, with Cronbach's alphas of .81, .79, .78, and .80 across the four years (Chao et al., 1994). However, of the six items in this subscale, the researchers selected only three, dropping one item due to a low factor loading (<.60) and two for being reverse-scored. The measure utilised a 5-point Likert scale ranging from 1 = Strongly disagree to 5 = Strongly agree.
Social Skill
Ferris, Witt, and Hochwarter (2001) developed and validated the social skill scale as a 7-item unidimensional political skill measure in a sample of 106 software engineers, testing the interaction of their political skill and mental abilities. The measure yielded an acceptable internal reliability estimate (α = .77) and utilised a 7-point Likert-type scale ranging from 1 = Strongly disagree to 7 = Strongly agree.
The Six-Item Political Skill Inventory
The six items used in this study were taken from Ahearn, Ferris, Hochwarter, Douglas, and Ammeter (2004). The inventory features a 7-point Likert-type rating scale ranging from 1 = Strongly disagree to 7 = Strongly agree. Ahearn et al. (2004) reported an internal consistency reliability estimate of .89 for this six-item PSI.
Flattery and Opinion Conformity
Park, Westphal, and Stern's (2011) flattery and opinion conformity measure, comprising six items, was used to gauge ingratiatory behaviours. It was not an agreement scale, so the researchers employed Chan's (1998) referent-shift consensus model to turn the measure into one. The researchers adopted a 7-point Likert-type scale ranging from 1 = Strongly disagree to 7 = Strongly agree.
Ingratiation Subscale of Bolino and Turnley's Impression Management Scale
One of the six subscales in Bolino and Turnley's (1999) impression management scale is a parsimonious 4-item ingratiation measure, which the authors validated in a sample of undergraduates in the US (α = .76). The measure was subsequently validated in a three-sample study of full-time employees with excellent reliability indices (α = .91, .85, and .91, respectively) (Kacmar, Harris, & Nagy, 2007), and Karam, Sekaja, and Geldenhuys (2016) confirmed the subscale's reliability (α = .84). The scale is a 5-point Likert scale anchored at 1 = Strongly disagree and 5 = Strongly agree.
Procedures
Cognitive tests were conducted using scripted probes in the first interview round. Each interviewer interviewed the four self-selected programme coordinators from their respective schools. The interviews were conducted on-site at various times between February 5 and March 16, 2018, and each interview session averaged 20 minutes. The interviewers used the six types of scripted in-depth probes shown in Table 1, which were developed based on Foddy's (1998) suggestions.
Table 1
Probes | Codes |
---|---|
Does the respondent have difficulty comprehending the survey questions? | P1 |
Does the respondent feel that the response options are inadequate? | P2 |
Does the respondent express uncertainty about the question? | P3 |
Does the respondent require the question be repeated to him/her? | P4 |
Does the respondent feel a number of questions mean the same thing? | P5 |
Do the respondents adopt different perspectives in answering the questions? | P6 |
The first level verbalisation consisted of the interviewers reading out the 38 questionnaire items and the respondents giving their initial responses on a 5-point Likert scale. The interviewers used the coding schema in Table 1 to evaluate a total of 38 problem codes and provided the authors with summaries of the cognitive interviews based on P1–P6. Altogether, 8,208 question-askings (36 respondents × 38 items × 6 probes) were recorded across the 36 interview sessions and coded against P1–P6.
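To make the aggregation of this coding workload concrete, the sketch below shows one way such coded outcomes could be tallied into the per-code problem percentages of the kind later reported in Table 4. This is a minimal illustration only: the data array, the flagging rule, and all names are hypothetical placeholders, not the study's actual records or procedure.

```python
import numpy as np

# Hypothetical coded outcomes: codes[r, i, p] is True if respondent r was
# flagged with problem code P(p+1) on item i. Dimensions mirror the study:
# 36 respondents x 38 items x 6 problem codes = 8,208 coded question-askings.
rng = np.random.default_rng(seed=42)
codes = rng.random((36, 38, 6)) < 0.05  # placeholder data, not the study's

# Treat a respondent as "having" a problem if it was coded on any item.
respondent_flags = codes.any(axis=1)  # boolean matrix, shape (36, 6)

# Percentage of respondents flagged per problem code (cf. Table 4).
percentages = respondent_flags.mean(axis=0) * 100
for p, pct in enumerate(percentages, start=1):
    print(f"P{p}: {pct:.2f}% of respondents")
```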
As each of the 36 respondents engaged with each of the 38 items on the questionnaire, the interviewers also kept keen eyes on their behaviours and recorded significant departures from the scripted probes. The interviewers were requested to utilise the paradata thus garnered to make suggestions for possible item refinements and to suggest other questions that could be added to augment the scale. These behaviour codes shed significant light on P2, P3 and P5 (see Table 1).
Finally, the interviewers conducted debriefing sessions with each respondent immediately after the cognitive tests; these sessions represent the second level verbalisation in the study. The debriefing sessions used scripted in-depth probes to elicit motives and perspectives on the items, as well as the overall purpose of the questionnaire as understood by the respondents. To ensure uniformity among interviewers, scripted probes developed based on the suggestions of Foddy (1998) and Peterson, Peterson, and Powell (2017) were given to the interviewers as a guide (see Table 2).
Table 2
Code | Probe Type | Purpose of Probe | Example Probes |
---|---|---|---|
iP1 | Perspective Probe | To determine the range of perspectives respondents adopted. | Could you tell me more about that? |
iP2 | Paraphrasing/ Comprehension | To determine the level of comprehension. | Can you repeat the question in different words? |
iP3 | Motive probe | To unearth respondents’ driving motives. | “Why do you say that?” (in response to an answer) |
iP4 | Recall probe | To test the respondent's recall abilities. | How do you know that you did … so and so times? |
iP5 | Specific probe | To determine respondents’ interpretation of concepts. | What do you understand by the term political skill? |
iP6 | Response anchor probe | To establish the suitability of response categories. | Do you find the given response options suitable? |
iP7 | Difficulty level | To determine the level of question difficulty. | Do you experience any difficulty in selecting one response choice rather than the others? |
iP8 | Confidence probe | To gauge the confidence respondents have in their answers. | How sure are you about that? (in response to an answer) |
iP9 | Judgement probe | To determine the sensitivity of the questions. | How comfortable did you feel answering this question? |
Note. Sources: Foddy (1998, p.114); Peterson, Peterson, & Powell (2017, p.19).
Results
The researchers used the results of respondents' first and second verbalisation levels on the 38 problem codes to streamline the initial item pool (see Table 3 for the distribution of sources by selected items). The first level verbalisation resulted from cognitive tests and behaviour coding, while the second level verbalisation resulted from the debriefing sessions. Together, these analyses resulted in a parsimonious validated measure (the PSS). The results of the expert review provided reliability estimates for the scale.
Table 3
Sources | Ahearn et al. (2004, p.316) | Ferris et al. (2005a) | Ferris et al. (2001, p.1077) | Chao et al. (1994, p.734) | Bolino & Turnley (1999, p.199) | Park et al. (2011, p.298) | Total |
---|---|---|---|---|---|---|---|
Original Number of Items | 7 | 18 | 7 | 6 | 4 | 6 | 48 |
Number of Items Selected | 5 | 14 | 6 | 3 | 4 | 6 | 38 |
Cognitive Testing, Behaviour Coding and Respondent Debriefing
The researchers started by considering the results of the cognitive interviews and the behaviour coding made during the interview process. The interviewers' summaries showed that most of the respondents did not experience any difficulty in comprehending any of the 38 items. The respondents' excellent grasp of the questions is supported by the results for P1 and P6 (shown in Table 4), which indicate, respectively, near-universal understanding of the questionnaire items and an almost uniform interpretation of them by the respondents.
Table 4
Problem Type | Code | Respondents with Problems (%) |
---|---|---|
Does the respondent have difficulty comprehending the survey questions? | P1 | 16.67 |
Does the respondent feel that the response options are inadequate? | P2 | 86.11 |
Does the respondent express uncertainty about the question? | P3 | 25.00 |
Does the respondent require the question be repeated to him/her? | P4 | 22.22 |
Does the respondent feel a number of questions mean the same thing? | P5 | 88.89 |
Do the respondents adopt different perspectives in answering the questions? | P6 | 8.33 |
The results in Table 4 are not surprising because, as teacher-leaders, the respondents are very familiar with evaluation procedures and are experienced enough to appreciate the purport of questionnaire items at first sight. Indeed, the literature has shown that data garnered from educated and experienced middle-aged respondents are highly consistent (Sauer, Auspurg, Hinz, & Liebig, 2011). Further evidence that the respondents found the PSS highly engaging and unambiguous can be seen in the minimal level of uncertainty (P3) they showed concerning the items and the general purport of the instrument, and in the few instances in which they asked for questions to be repeated (P4) by way of seeking further clarification. However, a significant number of respondents pointed out that some of the items meant the same thing (P5) to them. For example, they understood these two items as assessing the same referent: "In social situations, it is always clear to me exactly what to say and do," and "I can adjust my behaviour and become the type of person dictated by any situation." The researchers therefore merged such items.
However, the 5-point Likert response anchors of the initial 38-item pool posed problems for a significant number of respondents, some of whom showed apparent hesitancy in selecting an option. The same behaviour was noted when the anchors were changed to a 9-point format. However, the respondents reported being comfortable with 7-point scaling when this option was offered. This finding reflects the report of Cai, Lin, and Zhang (2016, p. 6), who used the triple criteria of reliability, consistency, and accuracy to determine the best among 5-, 7-, and 9-point Likert scales, concluding that "the optimal number of rating bars is 7." Thus, the researchers used a 7-point Likert-type agreement scale for the battery of questions included in the final PSS.
The interviewers recorded additional information on respondents' feelings during the debriefing sessions (second level verbalisation) that departed completely from the paradigmatic view of the scale's original items. For instance, a significant number of respondents felt that the words exaggerate and overstate (used in describing how they "give compliments" on the abilities and achievements of colleagues) were morally uncomfortable and socially unacceptable, deeming such expressions indicative of the behaviour of "yes men" (i.e., "servile compliance"). Thus, the affected items were streamlined and retained, sans the seemingly offensive words (iP9; see Table 2). The responses to iP3 and iP4 were mostly non-committal, indicating respondents' reticence. These sharply contrasted with the responses to iP6 to iP8, where the respondents showed verve in articulating their various stances. In response to the word political, the interviewers recorded wide inconsistencies, with some respondents ascribing negativity to anything political and others seeing it as an integral reality of workplaces (iP5). However, these inconsistencies seemed to be reconciled when the term was presented in the context of using social connections to advance workplace issues. This finding lends further credence to the role of contextual and construct specificities in psychological measurement and research (de Vries, 2012; Woo, Jin, & LeBreton, 2015). Finally, responses to iP1 and iP2 (see Table 2) showed that the meanings of most of the original items were self-evident and therefore not susceptible to widespread ambiguity.
The combined outcome of the cognitive interviews, behaviour coding, and respondent debriefings was the emergence of a 15-item measure of teacher-leader political skill, featured in Table 5. The scale was then submitted to the panel of experts for review.
Table 5
S/N | Items | κ |
---|---|---|
1 | I am conscious of how other people perceive me. | 0.60 |
2 | In most situations, I instinctively know what to say and/or do to influence others. | 0.40 |
3 | I easily understand the body language and facial expressions of people. | 0.60 |
4 | I always try to be sincere and authentic in whatever I speak and do. | 0.60 |
5 | I can easily put myself in the position of others and find common ground with them. | 0.60 |
6 | I compliment or praise people to make them like me and to respond positively to me. | 1.00 |
7 | I know how to get the support of influential people whenever the need arises. | 0.60 |
8 | I spend time and energy at building relations with important people. | 0.60 |
9 | I can communicate easily and effectively with people. | 1.00 |
10 | I can adjust my behaviour to most situations when conditions demand it. | 1.00 |
11 | I know the “inner” workings of my workplace. | 0.60 |
12 | I can sense the motivations and ulterior motives of others. | 0.60 |
13 | I show genuine interest in people to make them friendly to me. | 0.60 |
14 | I always try to align my views on issues with those of others I am dealing with. | 1.00 |
15 | I always try to recognise the achievements of the people around me. | 1.00 |
κ statistic for the PSS | 0.78 |
Expert Review
The researchers used the Fleiss' kappa (κ) statistic (Fleiss, 1971) to analyse the experts' evaluations and establish interrater agreement (IRA), as shown in Table 5.
Fleiss' κ is an index commonly employed to quantify multi-rater agreement on a target and is therefore "a measure of consensus" (O'Neill, 2017, p. 1). The κ statistic is a "widely used measure of interrater reliability for the case of quantitative ratings" (Fleiss, Levin, & Paik, 2003, p. 604). The researchers therefore consider the computed κ values as measures of the reliability of the PSS and of each of its 15 items, and of the items' contributions to the scale's reliability score (Zijlmans, van der Ark, Tijmstra, & Sijtsma, 2018). Besides, interrater agreement functions similarly to an evaluation of content accuracy or validity. The PSS is therefore likely to be a reliable instrument that provides valid results on teacher political sensitivity. The κ statistic is computed using the formulation below:
κ = (P̄ − P̄ₑ) / (1 − P̄ₑ), where P̄ = the proportion of all items for which the experts agreed on their fidelity, and P̄ₑ = the estimate of the expected proportion of chance agreements.
Fleiss, Levin, and Paik (2003) state that, for most purposes, values of κ > 0.75 represent excellent agreement beyond chance, values < 0.40 indicate poor agreement beyond chance, and values between 0.40 and 0.75 represent fair to good agreement beyond chance. Table 5 shows the κ values for the PSS and its items. The item-level statistics indicate fair to perfect IRA, and the overall agreement is excellent. With an overall κ = .78, the PSS is therefore likely to be a reliable and valid instrument for measuring teacher-leader political sensitivity.
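For readers who wish to apply the formula above to their own expert-panel data, the following is a minimal Python sketch of the Fleiss' κ calculation. The ratings matrix here is simulated purely for illustration; it does not reproduce the panel's actual evaluations, and the helper function is an assumption of this sketch rather than the study's analysis code.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an items x categories matrix of rating counts.

    counts[i, j] = number of raters who placed item i in category j;
    every row must sum to the same number of raters n.
    """
    N, _ = counts.shape
    n = counts[0].sum()  # raters per item (5 experts in this study)
    # Observed agreement per item, then its mean (P-bar).
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()
    # Expected chance agreement (P-bar-e) from marginal category shares.
    p_j = counts.sum(axis=0) / (N * n)
    P_e = np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# Simulated ratings standing in for the panel's: 15 items, each rated by
# 5 experts on the 5 agreement categories (coded here as 0..4).
rng = np.random.default_rng(seed=1)
ratings = rng.integers(low=2, high=5, size=(15, 5))
counts = np.stack([np.bincount(row, minlength=5) for row in ratings])
print(f"Fleiss' kappa = {fleiss_kappa(counts):.2f}")
```

Against the thresholds of Fleiss, Levin, and Paik (2003), a value above 0.75 from such a computation would, as with the PSS's overall κ = .78, indicate excellent agreement beyond chance.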
Discussion
Teacher-leaders rely on personal competences to perform a wide range of activities that go beyond the ambit of formal job descriptions. Political skill occupies a pivotal cell in the matrix of social skills teacher-leaders need to master in order to perform effectively and professionally. In this study, the researchers have developed a new political skill measure tailored to the peculiarities of teacher leadership in Nigerian polytechnics; the scale may thus be labelled the Teacher-Leader Political Skill Scale (TL-PSS). Specifically, the researchers report on the content validity of a brief Political Skill Scale that reflects the professional domain of teacher leadership, in support of the call by Lane (2012) for construct specificity and theoretical generality. The researchers further tested the content of the scale using a combination of cognitive testing, behaviour coding, respondent debriefing, and expert review. The result is a robust, parsimonious, and potentially unidimensional Political Skill Scale (PSS). However, the psychometrics of the scale need to be determined using adequate sample sizes across various cultural milieus for its potential benefits and established usage to be fully realised.
This study has at least three limitations. First, this is a pretest study that produced a new political skill scale; the new scale still needs to be piloted to establish its reliability. Second, relatively small samples of n = 36, n = 9, and n = 5 were used for respondents, interviewers, and the expert panel, respectively. Moreover, the sample was selected from the Northeast polytechnics only. For these reasons, future research should utilise an adequate sample size in establishing the reliability of the instrument and should investigate its generalisability in settings other than the Northeast. Third, although the nine interviewers used in the study were rigorously selected based on merit and academic standing, the self-selection of the 36 programme coordinators (respondents) through snowballing by the interviewers might have introduced some selection bias and deprived the researchers of close control of the data collection process. Future studies may avoid such adverse possibilities through the researchers' involvement in the actual data collection process.