Information Systems Journals: Knowledge Castles or Knowledge Gardens? Brian Whitworth





II. CURRENT SITUATION


Are mainstream IS academic journals becoming out-dated, over-rigorous and irrelevant?

BLEEDING EDGE THEORY


IS research seems to have strong theories on minor topics (like keystroke or mouse-click models) and limited theories on major topics (like contingency theory) (Gutek, 1990), but only a few strong theories on major topics. Two such theories are the Technology Acceptance Model (TAM) (Davis, 1989) and Media Richness Theory (MRT) (Daft, Lengel, & Trevino, 1987). TAM suggests users assess technology by usefulness and ease of use, while MRT links “rich” media to rich interactions. Both not only say something important, but say it about something important. However, such IS theories currently have two unfortunate properties:

  1. They are 10-20 years old, and

  2. They don’t work so well today.

For example, MRT’s “richness” dimension seems to over-simplify human communication. That people have proposed marriage to people they got to know via plain-text “lean” email suggests that either email is “multi-media” rich, or MRT has omitted something. TAM’s idea of technology acceptance as ease of use plus usefulness was good once, but in today’s Internet, where security and privacy are pervasive issues, it seems inadequate (Mahinda & Whitworth, 2005). One can argue that security and privacy are aspects of TAM’s usefulness, but the same logic would subsume ease of use under usefulness as well. We are not criticizing these “old but good” theories. They did what good theories should do: ask important questions that open up new fields of research. MRT asked if human richness was as important as factual data and was part of the multi-media advance, while TAM added usability to usefulness and helped grow the field of human-computer interaction (HCI).

The argument is not that these theories are wrong, or that IS theory made errors, but that knowledge is not static. Socio-technical systems are complex – why expect to predict them? That mainstream IS theories struggle is no surprise, but that they have changed so little over so long a time is. Theory and practice should connect, so given dramatic changes in IS practice, for IS academics to be essentially still using decades-old theories is an issue. Certainly there are many papers that “extend” TAM in different directions (Brown, 2002; Heijden, 2003; Moon & Kim, 2001; Ong, Lai, & Wang, 2004; Shih, 2004; Taylor & Todd, 1995; Venkatesh & Morris, 2000; Yu, Ha, M., & Rho, 2005), but many minor changes to a major model cancel out. If IS authors feel the only way to present a new idea in IS is to graft it onto an old one, then we imagine our discipline to be more mature than it is. The Unified Theory of Acceptance and Use of Technology (UTAUT) was recently proposed to replace TAM (Venkatesh & Morris, 2003). However UTAUT merely tweaks TAM’s core constructs, renaming usefulness as performance expectancy and ease of use as effort expectancy. It then combines this face-lifted TAM with other equally old theories from other disciplines like sociology to create a “new” model. While outwardly IS theory seems to change, really it remains much the same.

Is it hard to publish new theories in IS? Quite frankly, yes, if “new” means not an old theory tweak, and “theory” means a predictive framework not speculative conjecture. Reviewing the core IS theory literature, innovation is not a term that comes to mind, while IS practice suggests the opposite. That progress is coming from practice and not theory suggests the latter has its priorities wrong.

LEADING EDGE PRACTICE


It is easy to forget that inventions like the cell-phone were not predicted by theory (Smith, Kulatilaka, & Venkatramen, 2002). Breakthroughs like chat rooms, blogs, text-messaging, wikis and reputation systems, are neither multi-media nor media rich. Yet these simple text products were highly successful. While IS practice advanced over the last decade, IS theory was essentially looking the other way. It was Google with its simple white screen and one entry box, not Yahoo with its media rich colors and pictures that scooped the search engine field. Investors who expected an Internet and multi-media bandwidth boom lost money. Those who invested in virtual reality games, where players donned helmets to play, missed the development of online social gaming.

Usability theories plus 25,000 hours of user testing predicted that Mr Clippy, Office ’97’s friendly graphical help assistant, would be a huge success (Horvitz, 2004). Yet Mr Clippy and Microsoft Bob, built on the same concept, were voted the third and first (respectively) biggest software flops of 2001 (PCMagazine, 2001). Mr Clippy’s removal was even a Windows XP sales pitch (Levitt, 2001). Microsoft seems only dimly aware of the problem (Pratley, 2004): that Mr Clippy is impolite (Whitworth, 2005). Asked why plain-text products succeed when multi-media, user-friendly ones do not, mainstream IS theory is strangely silent.

The pattern that practice leads while theory bleeds has a long history in computing. Twenty-five years ago pundits proclaimed paper use dead, to be replaced by an electronic “paperless office” (Toffler, 1980). Yet today paper is used more than ever before. James Martin predicted program generators would make programmers obsolete, yet today programming is alive and well. A “leisure society” was supposed to arise as machines took over human work, but workers became busier and less leisured (Schor, 1991), and studies extend the 40+ hour week trend into the 1990s (Golden & Figart, 2000). Email was supposed to be only for routine tasks, the Internet was supposed to collapse without central control, video was supposed to become the Internet norm (given bandwidth), and people were supposed to tele-conference rather than travel. Each case had some truth, but overall prediction was poor. Does getting it wrong so often mean we have learned, or are we still confusing theory and hype? And if theory is so often wrong, should practice ignore it?

PRACTICE WITHOUT THEORY?


There are two valid ways to progress:

  1. Pragmatic: Find what works by intuitive trial and error, then explain it later. Here theory is like the icing on a cake: it is put on after the cake is made.

  2. Theoretical: Use theory to predict practice, then create it. Theory here is like a recipe: it is used before the cake is baked.

Neither is “better”, but both approaches link theory and practice. The first uses theory to retrospectively explain existing progress, while the latter uses theory to predict and thus create progress, e.g. theory predicted space travel, then rockets were built. In IS however the theory/practice relationship seems broken, as if rocket builders found that the less they knew of rocket theory, the better their rockets flew. Practitioners pragmatically build a new web site, interface, tool, button or function, then “accessorize” a theory later only in order to publish.

Cutting-edge pragmatism, with its credo of “all power to the IT artifact”, means IS theory meets a “show me, don’t tell me” response. Physicists with the same approach would have demanded Einstein build a particle accelerator to get his voice heard in physics. The theory-practice disconnect arises when practitioners rightly ask: “If theory does not explain practice, what use is it?” In the IS marriage of theory and practice, the partners are barely speaking to each other.

Yet pragmatics has limits. If knowledge is a tree, first pickings come easily from the lower branches, but soon running around the tree gives only the odd windfall. One then needs the ladder of theory. The black box approach falters when the system under consideration has many more ways to go wrong than right, i.e. becomes complex. Imagine a space shuttle or nuclear program without theory! Trial and error does not work well here. Yet IS today is creating a system as complex as any space program, namely the architecture of an online global society. Can such a system be created by pragmatic trial and error alone? IS practice needs theory, as without it, it is working blind.

THE RIGOR PROBLEM


Rigor can be defined as the probability of avoiding scientific error, and value as the probability of useful progress. The practitioner name for knowledge value is relevance, which includes timeliness as an aspect. Academic quality, we propose, involves both rigor and value. The logic of experimental science gives lack of rigor and lack of value the general names of Type I errors (of commission) and Type II errors (of omission), and further notes that as one error type is reduced, the other increases. Type I and Type II errors are inextricably entwined, so to do nothing is to make no mistakes but also to miss all opportunities, i.e. reducing errors of commission to zero increases errors of omission to 100%. The latter are beneficial things one could have done, but didn’t, like buying a winning lottery ticket. Such “intangible” opportunity costs are a known cause of business failure (Bowman, 2005), e.g. VisiCalc and WordPerfect no longer dominate spreadsheets and word processing respectively not from errors made, but from opportunities missed. In the lottery of life you must buy a ticket, i.e. risk error.
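The Type I/Type II tradeoff above can be sketched with a toy simulation (an illustrative model of our own, not drawn from any cited study): a reviewer sees only a noisy signal of each paper’s true quality and accepts papers scoring above a threshold. Raising the threshold (more rigor) cuts Type I errors (flawed papers accepted) but inflates Type II errors (sound papers rejected).

```python
import random

random.seed(42)

def review_outcomes(threshold, n=10000, noise=1.0):
    """Toy model: each paper has a true quality (0 = flawed, 1 = sound);
    a reviewer observes quality plus Gaussian noise and accepts any paper
    whose score exceeds `threshold`. Returns (type_i_rate, type_ii_rate):
    the fraction of flawed papers accepted and of sound papers rejected."""
    type_i = type_ii = flawed = sound = 0
    for _ in range(n):
        is_sound = random.random() < 0.5
        true_quality = 1.0 if is_sound else 0.0
        score = true_quality + random.gauss(0, noise)
        accepted = score > threshold
        if is_sound:
            sound += 1
            if not accepted:
                type_ii += 1  # useful paper rejected (error of omission)
        else:
            flawed += 1
            if accepted:
                type_i += 1   # faulty paper accepted (error of commission)
    return type_i / flawed, type_ii / sound

# Raising the bar reduces errors of commission but raises errors of omission.
for t in (0.0, 0.5, 1.0, 1.5):
    t1, t2 = review_outcomes(t)
    print(f"threshold={t:.1f}  Type I rate={t1:.2f}  Type II rate={t2:.2f}")
```

Because the reviewer’s signal is noisy, no threshold eliminates both error types at once: the only way to drive Type I errors to zero is to reject nearly everything, which is the “crisis of indifference” in numerical form.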

Are IS journals becoming more rigorous but less relevant? More rigor is good, but trading Type I errors (accepting faulty papers) for less obvious Type II errors (rejecting useful papers) is not a gain overall. Most journal submissions offer value opportunities as well as error risks. To reject a paper with nine good ideas and one bad one is to miss nine opportunities in order to avoid one error. For example, when Berners-Lee presented his World Wide Web idea to the academic hypertext community, they rejected it on its faults (Berners-Lee, 2000) and did not see its enormous future potential. Assessing rigor without also assessing potential value reduces innovation, as good ideas are thrown out with bad. Authors face the same dilemma: those who write to be “bullet-proof” also tend to say very little.

Increasing journal rigor without an innovation counter-balance gives a bias toward the old. More rigor means new theories face an increasing burden of proof to publish. While the faults of old IS theories are simply noted, the rising rigor standard means similar faults in new theories prevent publication entirely. If anything, the bias should run the other way, as new views rarely rise like Venus from the sea, complete and perfect. They usually begin imperfect and develop only with help from others. That new theories should respect old ones is reasonable, but requiring them to answer critiques that existing theories cannot answer either is not. There must be a balance, lest it seem that those who have climbed the tree of knowledge have pulled the ladder up behind them.

Publishing in top journals is now the primary screening mechanism for tenure, promotions and appointments. Yet top IS journals accept in single-digit percentages, making submission failure the norm. A university course with a 90% failure rate would be unacceptable, yet our own promotion system is set up this way. The expected lesson of failure is conformity, yet today IS academia needs innovators as well as perpetuators. Consider how we grow our people: PhD students spend 3-6 years as apprentices under senior direction, then 3-6 years trying to get tenure. At both stages, criticizing established theory is unwise. Why expect innovation after nearly a decade of conformity training? Contrast this with theoretical physics, where 25-year-olds are expected to make break-throughs, and some joke one is “over-the-hill” at 30. Once PhD students graduated and then published, but now they must publish to graduate and get a job (Larsen, 1998). Putting them so early on the publication treadmill breeds conformity, as who questions established views with their career on the line? Senior IS researchers say to young IS faculty: “So for now, unfortunately, I would not recommend PhD students or junior faculty to aim for ‘IS research that really matters.’ My recommendation … would be to stick to their career paths. … not too much research that really matters seems publishable.” (Desouza, El-Sawy, Galliers, Loebbecke, & Watson, 2006). That this view is widespread affects our discipline as well as new faculty.

Let us not kill our discipline in the name of rigor, as too much rigor causes rigor mortis. Academic journals should set high standards, but not so high that new ideas can’t get in. Rigor is only one part of academic quality, as research must add value as well as avoid mistakes. The future of knowledge publishing lies in a combination of rigor and relevance, not either alone (Figures 1 & 2). The question “Could this paper change the way we think?” must be asked more seriously. IS journals today face a challenge of relevancy rather than rigor, a crisis of indifference rather than a crisis of quality.