Technical Proficiency for IS Success
Abstract
The Information System (IS) Success model implies that IS users possess
baseline technical abilities, an assumption that, if not met, may adversely
affect the constructs and relationships proposed by the model. We propose
that the level of users' technical proficiency should be accounted for when
considering deployment of information systems. However, considering the
extant literature, it is unclear precisely what constitutes technical
proficiency in today's business environment. Using a Delphi method
approach, we develop the technical proficiency construct to uncover what
competencies indicate technical proficiency, what business needs such
proficiencies address, and how technical proficiency can be assessed. We
uncover 16 qualities of technical proficiency, 14 common technology
business needs, and 13 methods to assess proficiency. This research lays
the groundwork for future research regarding IS Success and technical
proficiency. Practitioners can use these findings to help better prepare
their workforce for IS deployment.

Keywords: technical proficiency, Delphi method, information system success,
computer self-efficacy, computer-mediated communication

1. INTRODUCTION
The IS Success model provides a general framework from which scholars
and practitioners can better understand the outcomes of IS employment
(DeLone & McLean, 1992, 2003). While useful, the model implies some
assumptions that should be met if such outcomes are to be realized. For
instance, Seddon (1997) points out that the model is contingent on
voluntary use of a system. The model also appears to assume that IS users
have some basic level of information technology (IT) proficiency;
unfortunately, we found no mention of the idea in our review of the
literature. Excluding an IT proficiency construct implies that either all
potential users of IS are technically proficient or that the construct is
irrelevant to the model. We contend that the quality of the system might
have little or no impact on use or intention to use if the user lacks a
basic understanding of how to operate the IS. Deficiencies in the
user's technical proficiency (TP) may affect these constructs and
consequently, the net benefit outcomes suggested in the model. In this
study, we propose that TP is an assumption that should be accounted for in
the IS Success model. We define TP as the skills required to operate and
judge the quality of an information system (i.e., a computer
hardware/software solution), such as one that would be used by a small
business owner or middle manager in a large corporation. Although
straightforward, the definition evokes some important questions, such as:
precisely what task competencies indicate that one is technically
proficient? What business needs do such proficiencies address? How can
one's technical proficiency be assessed? These serve as our study's
research questions.
The purpose of this study is to develop the construct of technical
proficiency, with an emphasis on how TP may be relevant to the IS Success
model. To achieve this purpose, we used the Delphi method to harvest
expert knowledge to develop the TP construct. The desired outcome of this
study is an inventory of core competencies that comprise the proposed TP
construct and specific criteria by which to assess TP. In addition, we
seek to catalog tangible business needs that the uncovered TP
competencies may fulfill. In realizing these outcomes, we hope to
contribute to the broadening literature base regarding how the user
contributes to and interacts with an IS (e.g. Henfridsson & Lindgren, 2010;
Iivari & Iivari, 2011; Iivari, Isomaki, & Pekkola, 2010; Lin &
Bhattacherjee, 2010; Lu, Deng, & Wang, 2010; Millerand & Baker, 2010).
The remainder of this paper is organized as follows. In the next
section, we examine the IS Success model, describe existing measures used
to determine IT expertise, and develop the concept of TP. We then explain
the Delphi method employed in this study. Next, we describe our procedure,
data analysis, and results. Finally, we discuss our findings and provide
suggestions for future research directions and the implications thereof.


2. THEORETICAL DEVELOPMENT

2.1. IS Success model
DeLone and McLean's IS Success model has been widely accepted and used
in MIS research both in its original and its updated form (Cho, Park, &
Michel, 2011; Li, 1997; Mun, Yun, Kim, Hong, & Lee, 2010; Petter & McLean,
2009). Considering both Shannon and Weaver's (1949) three categories of IS
Success and Mason's (1978) five categories, DeLone and McLean (1992)
proposed a total of six constructs that comprise IS Success: (a) system
quality, (b) information quality, (c) use, (d) user satisfaction, (e)
individual impact, and (f) organizational impact. The last two of these,
organizational impact and individual impact, are the model's outcome constructs. While
organizational impact is the effect of the IS on organizational
performance, individual impact is an attempt to measure the effect of the
IS on employees' perception of their own performance (DeLone & McLean,
1992).
When DeLone and McLean first proposed the IS Success model, they
suggested that the model needed further validation and development before
use. To this end, several researchers sought to test and refine the model.
Some studies sought to empirically test the interdependencies of IS
Success constructs. Seddon and Kiew (1994) tested the relationships among
system quality, user satisfaction, and information quality. Baroudi et al.
(1986) found support for the relationship between user satisfaction and
use. Gelderman (1998) found support for the relationship between
satisfaction and individual impact measures. Igbaria and Tan's (1997)
results indicated that user satisfaction has the strongest effect on
individual impact. While many researchers focused on validation of the IS
Success model, Seddon (1997) suggested modifications to the IS Success
model because he believed, among other weaknesses of the model, DeLone and
McLean gave an ambiguous definition of the construct IS use. Based on
DeLone and McLean's 1992 model, Seddon refuted two of the three possible
meanings for IS use and argued that the only valid meaning for use is as a
proxy for benefits from use (Seddon, 1997). In sum, although the
IS Success model has been useful in the IS field, research suggests that
there is some room for modification and improvement.
We posit that research and practice should consider a user's level of
IT proficiency when investigating a system's success. A prevailing measure
in determining IT proficiency in MIS literature has been the Computer Self-
Efficacy Scale (CSE), originally popularized by Compeau and Higgins (1995).
MIS researchers tested and validated the CSE instrument over the years
between DeLone and McLean's two IS Success papers. However, the original
CSE instrument asked questions such as "A monochrome monitor is…" and "…is
the standard size for mainframe computer memory" (Jones & Pearson, 1996,
pp. 11, 12), questions that are outdated when the focus is on individual
users. Since then, a number of studies have attempted to improve upon the
CSE scale. Unfortunately, merely changing the question wording from
mainframe to personal computer does not necessarily mean the questions
retain the same meaning or that they measure the same intended construct
(Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). Technology use has
increased exponentially in recent years. Just as technologies themselves
become obsolete, the questions and constructs included regarding CSE may
have also become outmoded.
Several variations of the original CSE instrument have been created,
many of which are amended, reworded, or customized versions of the
original. A search in the ABI/Inform databases for the topics of "computer
ability" or "computer self-efficacy" yields over 1,000 results for this
topic. Several versions of the CSE scale have been cited in recent IS
literature (Chang, Chang, Ho, Yen, & Chiang, 2011; Chiu & Wang, 2008; Koh,
2011; Scott & Walczak, 2009). Although there are many diverse types of CSE
scales, there is little consensus on one that properly measures IT
proficiency (e.g. Downey, Rainer Jr, & Bartczak, 2008; Torkzadeh, Chang, &
Demirhan, 2006).


2.2. IT proficiency
"In today's competitive environment… it is clear that maximizing IT's
potential presumes not only that the technology be adopted and used, but
that it be used well" (Marcolin, Compeau, Munro, & Huff, 2000, p. 38).
When considering the relationship between the system and end-user, we posit
that additional consideration should be given to one's TP. Although an
abundance of terminologies and instruments exist to measure and explain the
various aspects of an individual's proficiency with technology, consensus
with regard to an overarching measurement of preexisting TP appears to be
absent within the IS literature. In the absence of consensus, confusion
abounds and comparison of the extant literature proves increasingly
difficult. Marcolin et al. (2000) attempted to alleviate this confusion
with the development of the user competence construct, which they defined
as "the user's potential to apply technology to its fullest possible extent
to maximize performance of specific job tasks" (Marcolin et al., 2000, p.
38). While Marcolin et al. sought to measure both perceived self-efficacy
and self-reported knowledge, their instrument had a highly specific focus
on Lotus 1-2-3® and WordPerfect®. Marcolin et al. focused their self-
efficacy instrument solely on application software, limiting other
researchers' abilities to apply this instrument in new contexts.
Interestingly, Marcolin et al. (2000) mention the lack of attention given
to user competence by various models, including the IS Success model.
Despite the implied call for research in the user competence area,
subsequent work has not fully addressed this topic.
The field of communications science provides additional perspective
toward the TP construct. Although focused on communications-based
information technology, the computer-mediated communication (CMC) competence
stream of research warrants discussion because CMC is one subset of information
technology and can serve as a proxy for IS. As Spitzberg (2006, p. 630) defines
it, computer-mediated communication is "any human symbolic text-based
interaction conducted or facilitated through digitally-based technologies."
In his seminal work, Spitzberg developed a theoretical model to categorize
CMC research based on "motivation, knowledge, skills, context and outcomes"
(2006, p. 629). Perhaps the areas of Spitzberg's model most relevant to TP
are its perspectives on knowledge and motivation. Spitzberg
posits that knowledge and motivation toward CMC are directly related and
mutually complementary. In other words, an increase in either knowledge
about or motivation toward CMC will increase the other (Spitzberg, 2006).
Another aspect of Spitzberg's model provides support for the need to add a
TP construct to the IS Success model. In reviewing the literature,
Spitzberg recognized the role that skills play in CMC; although there is
similarity between CMC and face-to-face communication, people tend to
consider CMC a lower quality form of communication and therefore, a higher
level of skill may be required to compensate for the lower quality
(Spitzberg, 2006). Additionally, Bunz (2006) conducted research to assess
students in the business-like environment of virtual teams. She found that
there appeared to be a ceiling on how much the students' CMC competence
could increase. The students in Bunz' study who had a
low level of competence at the beginning of the study had a similarly low
level of competence at the end of the study. However, Bunz used a
questionnaire for the students to self-assess their levels of competence in
using CMC technologies; Bunz suggests that the self-assessment may be a
limitation. We agree and find it important to identify a means to assess a
user's level of TP that has strong validity. Because Bunz' students were
self-assessed, there is a possibility that the differences between pre- and
post-test scores were not accurate assessments of the students' competence,
but merely a reflection of the students' self-efficacy regarding
technology.
In a more recent study, Ku, Chu, and Tseng (2013) found that people
seek different forms of gratification through CMC use. It is not a great
leap to see the relationship between gratification and motivation, and hence,
knowledge acquisition. Assuming people find more gratification in certain
CMC, they are likely to be more motivated to use said CMC. As mentioned,
an increase in motivation is likely to increase knowledge about the CMC.
We believe that although the aforementioned research is focused on the
realm of CMC, its insights likely extend to non-communications-based computer
applications. In any case, the CMC competence literature seems to support
the need for a TP construct. Additionally, identifying a TP construct and
a means to measure the construct are likely to provide practitioners with a
better understanding of their employees' skills and a means by which to
improve the employees' skills.
Existing constructs could be used to determine individual components
of IT proficiency; however, their limited scope prevents most from capturing
the essence of the IT proficiency construct (i.e., both perceived ability
and actual ability). These concepts include computer ability (Kay, 1993),
end-user computing skills (Torkzadeh & Lee, 2003), computer proficiency
(Jones & Pearson, 1996), virtual competency (Wang & Haggerty, 2009),
computer understanding and experience (Potosky & Bobko, 1998), computer
self-efficacy (Compeau & Higgins, 1995), and internet self-efficacy
(Torkzadeh & Van Dyke, 2002; Torkzadeh & Van Dyke, 2001). To gain a
comprehensive understanding of an individual's IT proficiency, we posit
that perceived capabilities should be considered in conjunction with actual
abilities.
As discussed above, a construct frequently used in the IS literature
to define an individual's aptitude regarding technology is computer self-
efficacy (CSE). While CSE is valuable, it may not provide an accurate
assessment of the level of IT proficiency possessed by an individual.
Computer self-efficacy's primary limitation rests in the fact that it is a
perceived capability rather than a combined measure of one's actual and
perceived competence. Bandura (1997, p. 37) presented self-efficacy as an
attempt to explain one's beliefs about one's own abilities under
different sets of conditions. In other words, self-efficacy is an
individual's belief regarding his or her ability to perform a task. As an
extension to Bandura's conception, CSE refers to an individual's belief
regarding his or her ability to use a computer (Compeau & Higgins, 1995).
The concept of CSE began to appear in the IS literature in the mid-1990s
amid the adaptation of self-efficacy measures for the MIS domain (Compeau &
Higgins, 1995). As a result, scholars developed and implemented many CSE
scales, providing insightful contributions to the literature; however,
researchers seldom adhered strictly to the definition of self-efficacy
presented by Bandura, and objective measures of TP were often not included.

Marakas et al. (1998) suggest that many authors have confounded the
definition of self-efficacy by including outcome expectations in addition
to efficacy expectations. Bandura (1997) clearly differentiated between
these expectancies by noting that perceived self-efficacy is a judgment of
one's ability to perform, whereas an outcome expectation is a judgment of
the likely consequence of such performance. The importance of this
distinction is explained through the context in which self-efficacy is
measured. In an environment where performance is determined by outcome
alone, efficacy beliefs will more accurately predict outcomes (i.e.,
efficacy expectations are closely aligned with outcome expectations);
however, when outcomes are impacted by factors other than the quality of
performance, efficacy beliefs alone cannot accurately predict outcomes
(Bandura, 1997). Within the context of the IS Success model, CSE relates
to one's judgment regarding one's level of performance while using a
specific system, while an outcome expectation relates to one's judgment
regarding the net benefits that might result as a consequence of his or her
use of the system. Because the IS Success model does not currently account
for an individual's actual performance, we posit that the introduction of
TP may be beneficial to the IS Success model.
One of the more prevalent issues in the self-efficacy literature of
the last decade stems from researchers' failure to account for the
differences between general and specific self-efficacy during instrument
development. Marakas et al. (1998) presented this distinction as a method
of clarifying the generality dimension of Bandura's conceptualization of
self-efficacy. Marakas et al. defined task-specific computer self-efficacy
as "an individual's perception of efficacy in performing specific computer-
related tasks within the domain of general computing" (Marakas et al., 1998,
p. 128). They defined general computer self-efficacy as "an individual's
judgment of efficacy across multiple computer application domains" (Marakas
et al., 1998, p. 129). Although the task-specific characterization more
closely emulates Bandura's original definition of self-efficacy,
researchers have measured the construct at both the specific computer
application level and the general computing level (Gist, 1987). Although
general self-efficacy can be measured, the use of task-specific self-
efficacy should provide a more accurate prediction of performance.
The key distinction between specific and general computer self-
efficacy resides in the concept that general computer self-efficacy is a
culmination of task-specific experiences. The implication is that general
computer self-efficacy is not susceptible to immediate change or
manipulation in the short term (Marakas et al., 1998). Because many self-
efficacy studies require the measurement of CSE before the introduction of
technology, participants rely on their general CSE beliefs. Agarwal et al.
(2000) suggest that initial general CSE beliefs are able to predict
subsequent task-specific CSE beliefs. Although self-appraisals of efficacy
are reasonably accurate in familiar situations, new undertakings in which
individuals do not have sufficient experience to make accurate self-
appraisals may result in faulty self-efficacy judgments (Agarwal et al.,
2000; Chen, Gully, & Eden, 2001). Thus, one can conclude that it may be
necessary to consider additional variables that impact proficiency.
Using a CSE instrument in addition to a questionnaire designed to
measure knowledge, Marcolin et al. (2000) found that many respondents
overestimate their competencies. However, the study also revealed that
participants generally had more confidence in what they could do with a
software package in the accomplishment of a task than they did in their
current knowledge of all the capabilities of the software package. This
implies that respondents did not believe that comprehensive knowledge of a
particular software package's capabilities is required for
success. Additionally, end users were confident in their ability to learn
as they go, a requirement that is arguably more important than the
existence of a comprehensive knowledge base. A subsequent study by Gravill
et al. (2006) found that users were unable to accurately assess their
knowledge of a specific software package; however, as an individual's
breadth of experience increased, the accuracy of their self-assessment
increased. Therefore, the TP construct should account for both perceived
and actual knowledge.


3. METHOD

3.1. Delphi method overview
"What is the least number of [atom] bombs that will have to be
delivered on target..." (Dalkey & Helmer, 1963, p. 461). With these words
began one of the first questions delivered using the Delphi method.
Developed by the RAND Corporation in the mid-twentieth century to evaluate
the effect of atom bombs on the United States' ability to produce
munitions, Project Delphi was an experiment performed by Dalkey and Helmer
(Dalkey & Helmer, 1963) in which they evaluated the United States'
industrial targets from a Soviet perspective. The name originated from the
Oracle of Delphi, a mythical Greek figure with supposed supernatural
abilities to foretell the future (Thangaratinam & Redman, 2005); the
authors of the RAND project used the term "Delphi" to allude to the
expected result from the panel of experts.
The Delphi method is a structured group decision support, multiple
round technique. A round can be defined as a segment of the study in which
the participants receive a scenario, questions, and a due date, and then
provide answers to the questions. Each round is complete when the due date
passes and the researchers collect the data to analyze and prepare for the
next round. As each round concludes, the researchers compile and
synthesize the collected data, and provide analysis and feedback to
participants. This process highlights, for the panelists, perspectives
different from their own. Along with this feedback, researchers give the
panelists the next round of questions and provide them with an opportunity
to refine their previous statements.
The Delphi approach preserves the anonymity of the panel experts so
they are able to concentrate on each other's ideas instead of concentrating
on each other. When the Delphi experts are unknown to each other, any
dissenting opinions can be discussed without embarrassment of or backlash
to those who may share the opinions (Gupta & Clarke, 1996; Skulmoski,
Hartman, & Krahn, 2007; Tersine & Riggs, 1976). Thus, the Delphi method
may be used in a variety of circumstances, including facilitating group
decision making and exploring creative aspects of problems (1996). By
design, the Delphi process includes an occasion for the panelists to re-
evaluate their opinions. It is desirable, if not encouraged, to share
opposing views. The Delphi research approach also can support many
participants, particularly if the study is administered electronically,
such as via the Internet or e-mail (Adler & Ziglio, 1996).
In this study, we used a web-based Delphi method to develop the TP
construct. Scholars in the IS discipline frequently use the Delphi method
and we believe it to be an appropriate method for this study given the
subject matter and our intent to capture the opinions of experts to address
our research objectives (Cegielski, 2008; Saunders & Benbasat,
2007; Skulmoski et al., 2007). Delphi techniques have been found to be
applicable for communication of experts within a field (Adler & Ziglio,
1996). We chose this method, in part, because it has been shown to bridge
the gap between researchers and practitioners who often have different
goals (Churchman & Schainblatt, 1965); the Delphi method can enable
efficient and effective understanding and communication between researchers
and practitioners. Although data gathering methods such as open-ended
questionnaires or interviews could have been used for this study, we
desired to not only identify TP dimensions, but also to gain consensus
regarding the most salient TP dimensions. Thus, we concluded that the
Delphi method was more appropriate than those alternative
methods (Taylor & Meinhardt, 1985).


3.2. Participants
The selection of a group of experts is an important component of a
Delphi study because the outcomes of the Delphi study depend on the panel
members' opinions (Parente, Anderson, Myers, & O'Brien, 1994). The
selection of participants, however, is not necessarily easy, and there are no
hard and fast criteria for doing so. One of the criticisms of the Delphi
method is that there is limited consensus as to what exactly defines
expertise (Keeney, Hasson, & McKenna, 2001; Williams & Webb, 1994).
Expertise does not depend as much on the position one holds, but rather on
what skills and attributes he or she possesses (Baker, Lovell, & Harris,
2006). It has been argued that, because of the group decision support
aspect of the Delphi method, panel experts need only to have a base level
of relevant knowledge regarding the topic (Baker et al., 2006). Regardless
of the criteria used to determine panel expertise, to increase
methodological rigor, researchers should identify a priori the standards by
which they define expertise (Cegielski, 2008). Thus, we
established the following criteria for an expert participant: the
participant must (1) possess a broad understanding of MIS and (2) have been
employed (currently or in the recent past) in a job that requires use of
IT.

After careful consideration of potential participants, we determined
that IS graduate students provide a sampling frame that meets our stated
criteria. Research suggests that students such as these may be adequate
participants for this type of study (Hughes & Gibson, 1991; Remus, 1986).
These students not only use IT frequently in their academic activities
(Francis & Schreiber, 2008; Sparling, 2002), but many have also held
positions where IT competency was mandatory. As IS graduate students, they also possess, at a
minimum, a baseline understanding of the use of IS in business. Thus, we
posit that they have sufficient experience to judge both system and
information quality, and are able to address issues that may indicate
whether an individual is technically proficient.

In general, there are no hard and fast standards established regarding
the number of participants required for a Delphi study. The panel size
depends on the topic and type of study, the number of potential experts
available, and the panelist makeup (Powell, 2003). Although research
identifies Delphi sample sizes ranging from three to nearly 500 (Landeta,
2006; Loo, 2002; Skulmoski et al., 2007), scholars have suggested that a
number between five and 30 participants is typically ideal for a Delphi
study (Loo, 2002). Previous studies have suggested attrition may occur
because of fatigue from successive rounds, poorly structured first round
questions, inadequate time planning by participants, or unforeseen
circumstances (Cegielski, 2008; Linden et al., 2007; Snyder-Halpern, 2001).
Based on the panelist retention rates of between 41 and 100 percent found in
the literature (Cegielski, 2008; Linden et al., 2007), we carefully selected
36 participants in hopes of retaining a
minimum of ten participants throughout the study. We pre-screened IS
graduate students at a large U.S. university to develop a list of 36
qualified participants. Participants were solicited via e-mail and
consented to participate after reviewing the terms and conditions of the
study. We reviewed demographic information and additional qualifying
questions to ensure that participants had adequate knowledge of our focal
topic. We offered no compensation for participation. Demographic
information regarding the 22 participants retained for this study is
reported in Table 1.
-----------------------------------------

Insert Table 1 here

-----------------------------------------

4. PROCEDURE AND RESULTS
For each of the three rounds of the study, we provided a webpage that
contained the readings and questions for the participants, and we provided
the participants with a due date in the invitation e-mail for each round.
Each of the three planned rounds was different from the others. To
minimize participant attrition, at the beginning of the study, we provided
panelists with a detailed explanation of the purpose, method, and outcome
goals of this study.


4.1. Round one procedure and results
Researchers may identify initial questions from gaps found in
reviewing the literature, from the researcher's previous research
experience, or from a pilot study (Skulmoski et al., 2007). We developed
the questions for the initial round based on the literature. The purpose
of the first round was for the panelists to brainstorm as many criteria,
characteristics, and standards as possible that they believed indicate TP
competencies, methods for assessing TP, and the TP qualities organizations
seek in employees to fulfill business needs. The
intent behind the first round was for panelists to devise a large list of
items for narrowing and confirming in subsequent rounds. Although there is
much flexibility in the administration of a Delphi study, generally, the
first round questions are open-ended and broad (Skulmoski et al., 2007).
By asking three open-ended questions at the outset of the study, we
solicited panelist opinions regarding the challenges in defining IT
proficiency as we discussed in our review of the literature.
For the first round, we provided the participants with the following
scenario:
"For the purpose of this study, technical proficiency is defined as the
skills required to operate and judge the quality of an information system
(i.e., a computer hardware/software solution), such as one that would be
used by a small business owner or middle manager in a large corporation."
Accompanying the scenario, we provided the following questions:
Question 1: "What are the most important technical competencies necessary
for an individual to be effective in the current business environment?"
Question 2: "How would you determine whether or not an individual has
adequate technical proficiency to be a productive member of the current
business community?"
Question 3: "In your opinion, what are the most important technical
proficiency criteria organizations in your field are seeking in their
employees?"
For the initial round of the study, we provided the participants a
three-day window in which to complete the online questionnaire. The
response rate was acceptable; we sent the e-mail invitation to 36 potential
panelists and received a 61% response rate (22 respondents; Table 2).
Surprisingly, although the response rate declined between the first (61%)
and second (39%) rounds, the response rate increased slightly between the
second (39%) and third (42%) rounds.
-----------------------------------------

Insert Table 2 here

-----------------------------------------

Upon the conclusion of the first round of data collection, we examined
the data from the 22 respondents to determine what themes emerged. The
initial result was a list of 90 TP qualities (question one) that our
participants suggested as being essential competencies representing the TP
construct. Because the Delphi method is a simultaneous input, group-based
method, we received duplicate and similarly worded responses. For example,
in response to our request for TP qualities, one respondent offered
"Microsoft Office" as a response. Another respondent stated that a
technically proficient individual "…need[s] to know how to use e-mail,
internet, word-processing, and spreadsheet software at a minimum." In our
judgment, both of these responses address the TP quality of using an office
productivity suite of applications and, therefore, we equated the two
responses to one unique response. Thus, from the 90 qualities reported, we
extracted 21 unique TP qualities (Table 3a). As with the technical
qualities, we combined duplicate original responses regarding TP assessment
(question two) into single unique responses. As an example, one response
we received was "ask them to use the technology in question," while another
was "skills based assessment where they actually use the technology." The
second response subsumes the first, so we considered them one unique item.
Our refinement of the 43 total assessment methods suggested by the
participants yielded 13 unique methods by which to assess technical
proficiency (Table 3b). Similarly, when we received the responses
regarding business needs (question three), we reduced the original 45 items
into 17 unique business needs (Table 3c).
-------------------------------------

Insert Tables 3a, 3b and 3c here

-------------------------------------

4.2. Round two procedure and results
The purpose for the second round was to allow participants to refine
the three lists based on the aggregate, synthesized responses and to
suggest retaining, removing, or changing previous responses. In this
second round, the participants were given the opportunity to provide
further ideas and confirm (or refute) their previous input. Typically,
participants generate issues or important topics during the first round of
the Delphi, whereas the second round is used to concentrate the study or
remove unimportant issues (Skulmoski et al., 2007). This is the approach
we followed in this study.
Upon refining the results from the first round, we directed the
participants to a second webpage. The period between the end of the first
round due date and when we sent the participants the e-mail invitation to
the second round was 11 days. As with the first round, we provided a three-
day response window for the participants. To help respondents reestablish
their frames of reference, we restated the TP definition from the initial
round. In addressing the unique items that resulted from the round
one analysis, we asked participants to decide whether each of the items
adequately answered the question, "What are the most important technical
proficiency competencies necessary for an individual to be effective in the
current business environment?" Participants could select "RETAIN" if they
felt the item was an appropriate answer to the question, "DELETE" if the
item was not an appropriate answer to the question, or "CHANGE" if they
wished to change the item. We provided participants who chose "CHANGE"
with a free-text space to record their changes. As with the technical
proficiency qualities, we provided the participants an opportunity to
address the unique round one TP assessment methods. We asked them to
decide whether each item adequately answered the question from the previous
round, "How would you determine whether an individual has adequate
technical proficiency to be a productive member of the current business
community?" As with the TP qualities, participants could delete, retain,
or change the TP assessment method items. Similarly, we asked the
participants to address the relevance of the synthesized round one TP
business needs qualities against the first round question, "In your
opinion, what are the most important technical proficiency criteria
organizations in your field are seeking in their employees?"


Deletions
For round two, we received 14 responses (Table 2). As a means to
discern the participants' aggregate responses, we eliminated any TP item
for which the most frequently selected choice was to delete the item. We
modified items based on the participants' suggestions and we retained the
remainder of the items (i.e., those marked "RETAIN") in their original
form. Based on these criteria, we removed five of the TP qualities (items
2, 9, 11, 15, and 18; Table 3a), none of the TP assessment methods, and
three of the business needs (items 6, 8, and 9; Table 3c).
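
To make the aggregation rule concrete, the following short Python sketch
illustrates how such a modal-vote screen could be implemented. It is an
illustration only, not the analysis procedure used in the study; the item
labels and vote counts are hypothetical.

    from collections import Counter

    def apply_deletion_rule(votes_by_item):
        """Keep an item unless 'DELETE' is its most frequently selected choice."""
        retained = []
        for item, votes in votes_by_item.items():
            modal_choice, _ = Counter(votes).most_common(1)[0]
            if modal_choice != "DELETE":
                retained.append(item)
        return retained

    # Hypothetical round-two votes from 14 panelists for two items.
    example_votes = {
        "item A": ["RETAIN"] * 11 + ["CHANGE"] * 2 + ["DELETE"],
        "item B": ["DELETE"] * 8 + ["RETAIN"] * 6,
    }
    print(apply_deletion_rule(example_votes))  # ['item A']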


Modifications
Based on participant recommendations, we modified two TP quality
items; we modified TP item 17 from its original "develop/implement 'work-
arounds' of system limitations," to "develop and implement alternate
methods to overcome information systems limitations." We modified item 21
from its original "set up and operate a teleconference phone system," to
"operate communications technologies (e.g., teleconference phone system)."
As we did with the TP quality items, we made changes to two TP assessment
method items based on participant information. For assessment item three,
we replaced "take a general placement exam/quiz" with "take an information
systems task specific placement exam/quiz." For assessment item 10, we
substituted "verbally explain limitations of current corporate information
systems" with "demonstrate an ability to explain limitations of current
corporate information systems." Similarly, we made modifications to one
business needs item. For business needs item two, we replaced "data
collection and analysis skills" with "data analysis skills". As a final
step before the third and final round, after we processed the items, we
alphabetized each of the lists of items to remove the appearance of
researcher preference.


4.3. Round three procedure and results
By the start of the third round, the panel members should be familiar
with the issue of TP and should have a firm understanding of both the
procedure and the concepts addressed in the study (Adler & Ziglio, 1996).
Questions presented in this round are usually specific and are used to
refine the results garnered in previous rounds (Skulmoski et al., 2007).
Panelists may not necessarily reach complete consensus by the end of round
three. However, by this round, panelists should have a fair idea of what
criteria define IT proficiency. We provided the panelists with a final
compilation of the items uncovered in the previous rounds so that they
could rank order each item in terms of its importance to IT proficiency.
We e-mailed participants the link to the third round website and asked
them to complete the final round within five days. The period between the
end of the second round due date and when we sent the participants the e-
mail invitation to the third round was four days. The purpose of the third
round of the study was for the participants to rank order the items based
on their perception of importance of the items. As indicated in Table 2,
15 experts participated in the final round of the study by the deadline.
After the deadline passed, we computed mean rankings, as indicated in
Tables 4a-c.
-----------------------------------------

Insert Tables 4a, 4b, and 4c here

-----------------------------------------

Using the mean rankings, we rank-ordered the TP qualities, assessment
methods, and business needs; lower mean values indicate higher rankings.
For TP qualities (Table 4a), "use office productivity suite applications
(i.e., word-processing, spreadsheet, presentation graphics, database)"
ranked highest (M = 4.2667, SD = 2.7894) and "operate a document scanner"
ranked lowest (M = 12.667, SD = 4.2167). For TP assessment methods (Table
4b), "demonstrate skills on a variety of tasks aligned with the
competencies desired" ranked highest (M = 4.2857, SD = 3.1727) and "prepare
and deliver a presentation using presentation graphics software" ranked
lowest (M = 9.6429, SD = 4.1064). For TP business needs, "comprehensive
knowledge of an office productivity suite such as Microsoft Office (e.g.,
word processing, spreadsheet, presentation graphics, database)" ranked
highest (M = 3.3846, SD = 2.0631) and "skill in operating database
programs (e.g. Microsoft Access, MySQL, etc.)" ranked lowest (M = 11.0000,
SD = 2.6771). As we did with round two, we provided participants a field
in which they could place additional comments. However, none of the
participants shared supplementary comments with us during round three.
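
As an illustration of the round-three analysis described above, the
following Python sketch computes per-item mean ranks and standard deviations
from panelists' rank orderings and sorts the items so that lower means
indicate higher rankings. The data shown are hypothetical; this is not the
study's analysis code.

    from statistics import mean, stdev

    def rank_order(rankings_by_panelist):
        """Each element maps item -> rank assigned by one panelist (1 = most important).
        Returns (item, mean rank, SD) tuples sorted by ascending mean rank."""
        items = rankings_by_panelist[0].keys()
        summary = [
            (item,
             mean([r[item] for r in rankings_by_panelist]),
             stdev([r[item] for r in rankings_by_panelist]))
            for item in items
        ]
        return sorted(summary, key=lambda row: row[1])

    # Hypothetical rankings from three panelists over three items.
    panel = [
        {"item A": 1, "item B": 2, "item C": 3},
        {"item A": 2, "item B": 1, "item C": 3},
        {"item A": 1, "item B": 3, "item C": 2},
    ]
    for item, m, sd in rank_order(panel):
        print(f"{item}: M = {m:.4f}, SD = {sd:.4f}")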


4.4. External validity procedure
Given the limitations of our sample, we conducted an additional
external validation test to enhance confidence in the generalizability of
our results. We followed procedures similar to those used in other
information systems research studies that have employed the Delphi method
together with an external validation test (e.g., Cegielski, Bourrie, &
Hazen, 2013). Our target population for this round of data collection
consisted of managers with hiring authority who work in an office
setting. We sampled mid- to senior-level managers employed by the U.S.
government to obtain data regarding the perceived importance of the TP
qualities, business needs, and assessment methods uncovered in our Delphi
study. We crafted a web-based questionnaire, in which we listed the 16 TP
qualities, 14 business needs, and 13 assessment methods. We sent an e-mail
with a link to the questionnaire to 140 potential participants; we asked
participants to rate the importance of each quality, business need, and
method on a 7-point, Likert-type scale ranging from "Not important" to
"Most important." We received 101 of 140
completed surveys, which yielded a 72.1% response rate. On average,
participants have between 16 and 20 years of industry experience.
Participants averaged 42 years of age (SD = 4.8 years), and 58% of
participants were male. We also asked participants to indicate their degree
of familiarity with information systems and technologies used in the
workplace, using a 7-point, Likert-type scale (1 = not familiar at all; 7 =
very familiar). The mean familiarity score was 5.8, which indicates a high
level of familiarity.

The results of the external validation survey generally reflect the
rankings indicated by the Delphi participants. The results of this
validation survey are reported in the right-hand columns of Tables 4a, 4b,
and 4c. All TP qualities, business needs, and assessment methods were
rated above 5 on our 7-point, Likert-type scale, which indicates that each
quality, business need, and assessment method uncovered in the Delphi study
was seen as at least moderately important in the eyes of the participants
of our external validation survey. Standard deviations in the external
validation results were also relatively low, which further indicates
agreement in regard to the relative importance of each item. In addition,
of the 43 qualities, business needs, and assessment methods ranked by our
Delphi panel, 25 of the items shared the exact same ranking across studies
and no items were beyond +/- two rankings between studies. Finally, the
top three items in each category were identically ranked between studies.
In aggregate, the results of our external validation procedure lend a
degree of generalizability to our Delphi study results, in regard to both
the overall importance of each TP quality, business need, and assessment
method and the relative importance of each issue uncovered.
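
The following Python sketch, with hypothetical items and ranks, illustrates
the kind of agreement check summarized above: counting identically ranked
items, verifying that no item shifts by more than two rank positions, and
comparing the top three items across the two studies.

    def rank_agreement(delphi_ranks, validation_ranks):
        """Compare two item -> rank mappings for the same category of items."""
        identical = sum(1 for i in delphi_ranks
                        if delphi_ranks[i] == validation_ranks[i])
        max_shift = max(abs(delphi_ranks[i] - validation_ranks[i])
                        for i in delphi_ranks)

        def top_three(ranks):
            return [i for i, _ in sorted(ranks.items(), key=lambda kv: kv[1])[:3]]

        same_top_three = top_three(delphi_ranks) == top_three(validation_ranks)
        return identical, max_shift, same_top_three

    # Hypothetical five-item category ranked in both studies.
    delphi = {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}
    survey = {"a": 1, "b": 2, "c": 3, "d": 5, "e": 4}
    print(rank_agreement(delphi, survey))  # (3, 1, True)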



5. DISCUSSION
Using the Delphi procedure, we harvested the knowledge of 22
participants to investigate the technical proficiency construct. This
procedure yielded ranked listings of 16 TP competencies, 13 TP assessment
methods, and 14 TP business needs. In this section, we consider these
listings to derive potential dimensions of the TP construct.

5.1. Technical proficiency qualities
Considering the literature that suggests a need to measure
proficiencies at both the specific application level and the general
knowledge level (Gist, 1987), we identified two primary themes running
through the TP qualities: one of applied qualities and one of conceptual qualities.
Applied TP qualities are those that appear to require actual use of
technical knowledge. Conceptual TP qualities are those that appear to
require the ability to gestate, ponder, or visualize mentally. Of the 16
TP qualities derived, eleven are based in applied TP (Table 4a; items 1, 2,
3, 5, 6, 10, 12, 13, 14, 15, 16), four of the qualities are based in
conceptual TP (Table 4a; items 4, 8, 9, 11), and one of the TP qualities
appears to combine applied and conceptual TP
(Table 4a; item 7). We consider this an indication that the TP qualities
may form a bi-dimensional construct. This is consistent with the literature
on computer-mediated communication competence, which suggests that
competence is multi-dimensional, comprising factors that indicate one's
motivation, knowledge, and skills (Bunz, 2003; Spitzberg,
2006). However, further study of these items is warranted. Specifically,
quantitative methods may be useful in determining the underlying factor
structure of the TP construct.
While the majority of applied TP qualities are generally performed
using personal computers, four of them use other technologies: handheld
devices, communications devices, copy machines, and document scanners.
Interestingly, participants ranked these four TP qualities as the lowest of
all qualities, suggesting that participants may believe TP is more highly
related to computer use than it is to other information technologies. This
corroborates extant research suggesting both the importance and complexity
of such computer-based communication mediums (Bunz, 2004). The three
highest ranked qualities include applied aspects of the TP construct. On
the other hand, item four, "explain how information systems enhance or
degrade the organization's business processes," is a conceptual aspect.
The participants consider the focus on corporate business processes
important. The other conceptual items (Table 4a; items 8, 9, 11) also
imply a certain level of managerial perspective from an individual with
high levels of TP quality.


5.2. Technical proficiency assessment methods
As we did with the TP qualities and in consideration of the literature
suggesting several dimensions of technological competency (e.g., Bunz,
2003; Gist, 1987), we evaluated the participants' list of assessment
methods and determined three themes: applied, conceptual, and evidence of
experience. While the definitions of the applied and conceptual themes are
the same as those for TP qualities, the evidence-of-experience theme
applies to those items that appear to be focused on providing
substantiation of previous technical knowledge, understanding, or
familiarity. Similar to TP qualities, aspects of the TP assessment items
consist of both applied and conceptual dimensions. Because the TP
assessment method question immediately followed the TP qualities question,
the participants' answers to the TP qualities question may have had a
priming effect on the TP assessment method responses, leading to a high
relationship between the responses from the two questions. In some cases,
the participants provided assessment methods that are evaluation measures
of the TP quality items in the previous question. For example, the TP
assessment method of "prepare and deliver a presentation using presentation
graphics software," is integral to the "use office productivity suite
applications" TP quality item. Of the 13 TP assessment items derived,
three are based in applied TP (Table 4b; items 1, 2, 5), three are based in
conceptual TP (Table 4b; items 3, 4, 8), and two appear to combine applied
and conceptual TP (Table 4b; items 12, 13). The
remaining five assessment items appear to reflect the evidence of
experience theme (Table 4b; items 6, 7, 9, 10, 11). Considering the
differences noted between one's perceived versus actual competencies (Bunz,
Curry, & Voon, 2007), it follows that such a variety of assessment methods
are warranted.


5.3. Technical proficiency business needs
Our investigation of TP business needs for themes yielded interesting
results. We found three themes, applied, conceptual, and managerial,
recurring during our analysis. Items found to have a managerial theme were
those items that appeared to imply a level of leadership or administrative
ability. As with TP assessment method items, it appears there may have
been a priming effect from the TP qualities. Of the 14 resultant TP
business needs items, five are based in applied TP (Table 4c; items 1, 3,
6, 11, 14). Nine of the business needs items are based in conceptual TP (Table
4c; items 2, 4, 5, 7, 8, 9, 10, 12, 13). Interestingly, some of the
conceptual TP items (six) also appear to have a managerial theme (Table 4c;
items 2, 4, 5, 7, 8, 9, 10, 12, 13), whereas none of the applied items have
a managerial theme.


5.4. Limitations and future research
The results of our study provide insight regarding the idea of
technical proficiency. We posit that technical proficiency, if considered
in the IS Success model, will yield a stronger understanding of the value
that IS may provide. However, there is much knowledge to be gained about
TP as applied to the IS Success model and future research is necessary to
realize said knowledge. For example, does TP serve only as an underlying
assumption of the IS Success model? Or, does TP moderate or mediate
relationships within the model? What is the nature and magnitude of the
relationships between TP and existing IS Success constructs? Can the TP
qualities, assessment methods, and business needs identified in this study
be used to create a measure of TP? Can TP be a useful replacement for CSE?
Our study served to identify and define the TP construct; future research
is needed to investigate the implications of TP.
This study may be limited by the sample frame used. Although we
contend that our chosen sampling frame was appropriate for this study and
we validated our results with an external sample frame, participants from
an alternate sampling frame may have yielded slightly different results.
Future research may wish to validate our findings across different business
settings. On the other hand, it is similarly important to recognize that
this study, like any other, is not one of absolute validation; it is
groundwork for future research. By identifying technical proficiency
qualities, assessment methods, and business needs, the study yielded content
that can potentially serve as the basis for a measure of actual technical
proficiency. In addition, the results of our study can
assist practitioners in preparing for IS deployment in their organizations.
Using the criteria the participants developed through the Delphi process,
practitioners will be better able to assess the level of their employees'
TP, which may help them to better prepare for IS deployment.


6. CONCLUSION
The IS Success model is the foundation for many scholarly works,
providing a framework from which others can investigate the success of IS
implementations. The results of this study further strengthen the general
underpinnings of the model and suggest the inclusion of a technical
proficiency construct. Technology improves at a dramatic rate; thus, TP is
a concept that is continuously evolving. With that rate of change,
measures and constructs that rely on interaction with specific technologies
also require continuous updating to remain relevant. In this study, we
asked: what does it mean to be technically proficient in regard to
information systems? In today's business setting, what skills are required
and how may one's level of technical proficiency be assessed?
This study revealed pertinent TP competencies upon which scholars can build
in future studies.



References

Adler, Michael, & Ziglio, Erio. (1996). Gazing into the Oracle: The Delphi
Method and its Application to Social Policy and Public Health. London:
Jessica Kingsley Publishers.
Agarwal, R, Sambamurthy, V, & Stair, RM. (2000). Research Report: The
Evolving Relationship Between General and Specific Computer Self-Efficacy-
-An Empirical Assessment. Information Systems Research, 11(4), 418.
Baker, John, Lovell, Karina, & Harris, Neil. (2006). How expert are the
experts? An exploration of the concept of 'expert' within Delphi panel
techniques. Nurse Researcher, 14(1), 59-70.
Bandura, A. (1997). Self-efficacy: The exercise of control: W. H. Freeman
and Company.
Baroudi, Jack J. , Olson, Margrethe H. , & Ives, Blake. (1986). An
empirical study of the impact of user involvement on system usage and
information satisfaction. Communications of the ACM, 29(3), 232-238. doi:
http://doi.acm.org/10.1145/5666.5669
Bunz, Ulla. (2003). Growing from computer literacy towards computer-
mediated communication competence: Evolution of a field and evaluation
of a new measurement instrument. Information Technology, Education and
Society, 4(2), 53-84.
Bunz, Ulla. (2004). The computer-email-web (CEW) fluency scale -
Development and validation. International Journal of Human-Computer
Interaction, 17(4), 477-504.
Bunz, Ulla. (2006). Saturation of CMC competency: Good or bad news for
instructors. Journal of Technology and Literacy, 6(1).
Bunz, Ulla, Curry, C., & Voon, W. (2007). Perceived versus actual computer-
email-web fluency. Computers in Human Behavior, 23, 2321-2344.
Cegielski, C. G., Bourrie, D. M., & Hazen, Benjamin T. (2013). Evaluating
adoption of emerging information technologies for corporate information
technology strategy. Information Systems Management, 30, 235-249.
Cegielski, Casey G. (2008). Toward the Development of an Interdisciplinary
Information Assurance Curriculum: Knowledge Domains and Skill Sets
Required of Information Assurance Professionals. Decision Sciences
Journal of Innovative Education, 6(1), 29-49.
Chang, Li-Min, Chang, She-I, Ho, Chin-Tsang, Yen, David C., & Chiang, Mei-
Chen. (2011). Effects of IS characteristics on e-business success factors
of small- and medium-sized enterprises. Computers in Human Behavior,
27(6), 2129-2140.
Chen, G, Gully, SM, & Eden, D. (2001). Validation of a new general self-
efficacy scale. Organizational Research Methods, 4(1), 62.
Chiu, Chao-Min, & Wang, Eric T. G. (2008). Understanding web-based learning
continuance intention: the role of subjective task value. Information and
Management, 45(3), 194-201.
Cho, Jeewon, Park, Insu, & Michel, John W. (2011). How does leadership
affect information systems success? The role of transformational
leadership. Information and Management (in press).
Churchman, C. W., & Schainblatt, A. H. (1965). The researcher and the
manager: A dialectic of implementation. Management Science, 11(4), B69-
B87.
Compeau, Deborah R., & Higgins, Christopher A. (1995). Computer Self-
Efficacy: Development of a Measure and Initial Test. MIS Quarterly,
19(2), 189-211.
Dalkey, N, & Helmer, O. (1963). An experimental application of the Delphi
method to the use of experts. Management Science, 9(3), 458-467.
DeLone, WH, & McLean, ER. (1992). Information Systems Success: The Quest
for the Dependent Variable. Information Systems Research, 3(1), 60-95.
DeLone, WH, & McLean, ER. (2003). The DeLone and McLean model of
information systems success: a ten-year update. Journal of Management
Information Systems, 19(4), 9-30.
Downey, James P., Rainer Jr, R. Kelly, & Bartczak, Summer E. (2008).
Explicating Computer Self-Efficacy Relationships: Generality and the
Overstated Case of Specificity Matching. Journal of Organizational and
End User Computing, 20(3), 22-40.
Francis, Vernon E., & Schreiber, Nancy. (2008). What, No Quiz Today? An
Innovative Framework for Increasing Student Preparation and
Participation. Decision Sciences Journal of Innovative Education, 6(1),
179-186.
Gelderman, Maarten. (1998). The relation between user satisfaction, usage
of information systems and performance. Information and Management,
34(1), 11.
Gist, Marilyn E. (1987). Self-Efficacy: Implications for Organizational
Behavior and Human Resource Management. The Academy of Management Review,
12(3), 472-485.
Gravill, JI, Compeau, DR, & Marcolin, BL. (2006). Experience effects on the
accuracy of self-assessed user competence. Information and Management,
43(3), 378-394.
Gupta, UG, & Clarke, RE. (1996). Theory and applications of the Delphi
technique: a bibliography (1975-1994). Technological Forecasting and
Social Change, 53(2), 185-211.
Henfridsson, Ola, & Lindgren, Rikard. (2010). User involvement in
developing mobile and temporarily interconnected systems. Information
Systems Journal, 20, 119-135.
Hughes, CT, & Gibson, ML. (1991). Students as surrogates for managers in a
decision-making environment: an experimental study. Journal of Management
Information Systems, 8(2), 166.
Igbaria, M., & Tan, M. (1997). The consequences of information technology
acceptance on subsequent individual performance. Information and
Management, 32(3), 113.
Iivari, J., & Iivari, N. (2011). Varieties of user-centredness: an analysis
of four systems development methods. Information Systems Journal, 21, 125-
153.
Iivari, J., Isomaki, H., & Pekkola, S. (2010). Editorial. The user - the
great unknown of systems development: reasons, forms, challenges,
experiences and intellectual contributions of user involvement.
Information Systems Journal, 20, 109-117.
Jones, Mary C., & Pearson, Rodney A. (1996). Developing an instrument to
measure computer literacy. Journal of Research on Computing in Education,
29(1), 17.
Kay, Robin H. (1993). A practical research tool for assessing ability to
use computers: The Computer Ability Survey (CAS). Journal of Research on
Computing in Education, 26(1), 16.
Keeney, S., Hasson, F., & McKenna, H. (2001). A critical review of the
Delphi panel technique as a research methodology for nursing.
International Journal of Nursing Studies, 38, 195-200.
Koh, Joyce Hwee Ling. (2011). Computer skills instruction for pre-service
teachers: a comparison of three instructional approaches. Computers in
Human Behavior, 27(6), 2392-2400.
Ku, Yi-Cheng, Chu, Tsai-Hsin, & Tseng, Chen-Hsiang. (2013). Gratifications
for using CMC technologies: A comparison among SNS, IM, and e-mail.
Computers in human behavior, 29(1), 226-234.
Landeta, J. (2006). Current validity of the Delphi method in social
sciences. Technological Forecasting & Social Change, 73(5), 467-482.
Li, Eldon Y. (1997). Perceived importance of information system success
factors: a meta analysis of group differences. Information and
Management, 32(1), 15-28.
Lin, Chieh-Peng, & Bhattacherjee, Anol. (2010). Extending technology usage
models to interactive hedonic technologies: a theoretical model and
empirical test. Information Systems Journal, 20, 163-181.
Linden, A, Biuso, TJ, Allada, G, Barker, AF, Cigarroa, J, Haranath, SP, . .
. Stajduhar, K. (2007). Consensus Development and Application of ICD-9-CM
Codes for Defining Chronic Illnesses and their Complications. Disease
Management and Health Outcomes, 15(5), 315.
Loo, R. (2002). The Delphi method: a powerful tool for strategic
management. Policing: An International Journal of Police Strategies &
Management, 25(4), 762-769.
Lu, Yaobin, Deng, Zhaohua, & Wang, Bin. (2010). Exploring factors affecting
Chinese consumers' usage of short message service for personal
communication. Information Systems Journal, 20, 183-208.
Marakas, GM, Yi, MY, & Johnson, RD. (1998). The multilevel and multifaceted
character of computer self-efficacy: Toward clarification of the
construct and an integrative framework for research. Information Systems
Research, 9(2), 126.
Marcolin, BL, Compeau, DR, Munro, MC, & Huff, SL. (2000). Assessing user
competence: Conceptualization and measurement. Information Systems
Research, 11(1), 37-60.
Mason, Richard O. (1978). Measuring information output: A communication
systems approach. Information and Management, 1(4), 219-234.
Millerand, Florence, & Baker, Karen S. (2010). Who are the users? Who are
the developers? Webs of users and developers in the development process
of a technical standard. Information Systems Journal, 20, 137-161.
Mun, Hee, Yun, Haejung, Kim, Eun, Hong, Jin, & Lee, Choong. (2010).
Research on factors influencing intention to use DMB using extended IS
success model. Information Technology and Management, 11(3), 143-155.
Parente, F.J., Anderson, J.K., Myers, P., & O'Brien, T. (1994). An
examination of factors contributing to Delphi accuracy. Journal of
Forecasting, 3(1), 173-183.
Petter, Stacie, & McLean, Ephraim R. (2009). A meta-analytic assessment of
the DeLone and McLean IS success model: an examination of IS success at
the individual level. Information and Management, 46(3), 159-166.
Podsakoff, Philip M., MacKenzie, Scott B., Lee, Jeong-Yeon, & Podsakoff,
Nathan P. (2003). Common Method Biases in Behavioral Research: A Critical
Review of the Literature and Recommended Remedies. Journal of Applied
Psychology, 88(5), 879.
Potosky, Denise, & Bobko, Philip. (1998). The Computer Understanding and
Experience Scale: a self-report measure of computer experience. Computers
in Human Behavior, 14(2), 337-348.
Powell, C. (2003). The Delphi technique: myths and realities. Journal of
Advanced Nursing, 41(4), 376-382.
Remus, W. (1986). Graduate students as surrogates for managers in
experiments on business decision making. Journal of Business Research,
14(1), 19-25.
Saunders, Carol, & Benbasat, Izak. (2007). A Camel Going Through the Eye of
a Needle. MIS Quarterly, 31(3), iv-xviii.
Scott, Judy E., & Walczak, Steven. (2009). Cognitive engagement with a
multimedia ERP training tool: assessing computer self-efficacy and
technology acceptance. Information and Management, 46(4), 221-232.
Seddon, Peter B, & Kiew, Min-Yen. (1994). A Partial Test and Development of
Delone and Mclean's Model of IS Success. Paper presented at the
Proceedings of the International Conference on Information Systems,
Atlanta, GA.
Seddon, Peter B. (1997). A Respecification and Extension of the DeLone and
McLean Model of IS Success. Information Systems Research, 8(3), 240.
Shannon, Claude E. , & Weaver, Warren. (1949). The mathematical theory of
communication. Urbana, IL: University of Illinois Press.
Skulmoski, Gregory J., Hartman, Francis T., & Krahn, Jennifer. (2007). The
Delphi Method for Graduate Research. Journal of Information Technology
Education, 6, 1.
Snyder-Halpern, R. (2001). Indicators of organizational readiness for
clinical information technology/systems innovation: a Delphi study.
International Journal of Medical Informatics, 63(3), 179-204.
Sparling, D. (2002). Simulations and supply chains: strategies for teaching
supply chain management. Supply Chain Management: An International
Journal, 7(5), 334-342.
Spitzberg, B. H. (2006). Preliminary development of a model and measure of
computer-mediated-communication (CMC) competence. Journal of Computer-
Mediated Communication, 11, 629-666.
Taylor, Raymond E., & Meinhardt, David J. (1985). Defining computer
information needs for small business: A Delphi method. Journal of Small
Business Management, 23(2), 3-9.
Tersine, Richard J., & Riggs, Walter E. (1976). The Delphi Technique: A
Long-Range Planning Tool. Business Horizons, 19(2), 51.
Thangaratinam, Shakila, & Redman, Charles W. E. (2005). The Delphi
technique. The Obstetrician and Gynaecologist, 7(2), 120-125. doi:
10.1576/toag.7.2.120.27071
Torkzadeh, G, & Lee, J. (2003). Measures of perceived end-user computing
skills. Information and Management, 40(7), 607-615.
Torkzadeh, G, & Van Dyke, TP. (2002). Effects of training on Internet self-
efficacy and computer user attitudes. Computers in Human Behavior, 18(5),
479-494.
Torkzadeh, Gholamreza, Chang, Jerry Cha-Jan, & Demirhan, Didem. (2006). A
contingency model of computer and Internet self-efficacy. Information and
Management, 43(4), 541-550.
Torkzadeh, Gholamreza, & Van Dyke, Thomas P. (2001). Development and
validation of an Internet self-efficacy scale. Behaviour and Information
Technology, 20(4), 275 - 280.
Wang, Yinglei, & Haggerty, Nicole. (2009). Knowledge transfer in virtual
settings: the role of individual virtual competency. Information Systems
Journal, 19, 571-593.
Williams, P., & Webb, C. (1994). The Delphi technique: a methodological
discussion. Journal of Advanced Nursing, 19(1), 180-186.






Table 1: Participant Demographics






Table 2: Response Rate by Round






Table 3a: Technical Proficiency Qualities from Round One






Table 3b: Technical Proficiency Assessment Methods from Round One






Table 3c: Technical Proficiency Business Needs from Round One






Table 4a: Final Ranking of Technical Proficiency Qualities

"Delphi Rank "



Table 4b: Final Ranking of Technical Proficiency Assessment Methods

"Delphi Rank "



Table 4c: Final Ranking of Technical Proficiency Business Needs

"Delphi Rank "
Lihat lebih banyak...

Comentarios

Copyright © 2017 DATOSPDF Inc.