Some thoughts on information systems

Joseph Koivisto's in-class blog for LSC555

The Hope of Mobile Computing and Its Risks to Underserved User Communities

The shift of libraries toward the provision of digital and online services has been long in coming and well documented. According to Sennyey, Ross, & Mills (2009), the trend has been to focus increasingly on the development of ubiquitous digital collections, available from any number of internet-capable platforms, while decreasing the physical demands of the conventional library collection (253-254). Considering the broader social trend toward migrating traditional materials and services to digital platforms, we should not be surprised by this transformation. As a microcosm of society at large, the library is subject to all the changes of the world around it.

As the library changes along with the world around it, an important question is how it ensures access to materials for underserved patron populations, including communities of color, lower economic classes, and those for whom English is not a primary language. Two elements figure heavily in the provision of materials to these underserved populations: the accessibility of materials via trustworthy platforms and the usability of materials in a device-agnostic manner.

Many libraries have adopted some form of ebook service as a means of providing digital materials through online platforms. Here in DC – as with many other systems – the District of Columbia Public Library System (DCPL) subscribes to the OverDrive service to provide users with access to the latest releases in popular fiction and nonfiction as well as numerous public access works in an unlimited-licenses format. While services like OverDrive are extremely beneficial, the most ambitious endeavor to date has been the Google Book Search initiative, which aims to bring a Google search interface to the world of print media and provide downloadable copies of public domain books to anyone with an internet connection (Grimmelmann, 2009, 11). In terms of providing services to underserved communities, the Google Book Search platform stands head-and-shoulders above many other services because it not only provides access to a vast number of materials in a non-physical format, but it does so in a way that integrates neatly with established search behaviors, lowering the behavior changes needed to engage with both born-digital and print materials. Additionally, the availability of the Google interface from any internet connection diminishes the need to be part of a library system, or to use a dedicated internet connection, in order to access library-specific collections.

With regard to the advent of mobile technology, Hanson (2011) points out that adoption of mobile technologies such as smartphones and tablets has soared and that mobile internet accessibility underwent a nearly 5000% increase in the years leading up to 2010 (8). While this rapid change is stoking the flames of the institutional change sweeping the library field, what is more striking is the impact that mobile computing has for minority and impoverished users: the Pew Internet and American Life Project notes that Black and Latino respondents were more likely to own internet-accessible mobile devices and were 8-13% more likely to use them to access online information (9). Considering the lower price point of a smartphone – when compared to a $700 laptop – this information is unsurprising. However, it does change the calculus for libraries, as we must acknowledge the reality that the users who most need our services may not access information in the same way that we do. We must embrace mobile computing as a viable platform for information access and ensure that our services are developed accordingly.

While the road before us does appear to be paved with golden iPads and smartphones, there is a dark side to the increasing prevalence of tech-enabled materials and services: the risk of user surveillance and the commodification of user information. High-profile occurrences such as the adoption and extension of the USA PATRIOT Act highlight the risks inherent in digital services and data (Malinconico, 2011, 160). While the PATRIOT Act represents a sanctioned – albeit lamentable – incursion into privacy, revelations such as the Heartbleed bug – a security flaw that allows dangerous access to server-side information such as usage patterns and user data (Hesseldahl, 2014) – remind us of the instability of some digital information and the very real risk that it can be used against us. Meanwhile, legitimate concerns over the sale of user information remain at the forefront of developer consciousness (Malinconico, 2011, 161), to such a degree that the proposed Google Book Search settlement has been criticized by legal counsel for not explicitly declaring the sanctity of user privacy (Grimmelmann, 2009, 16).

Clearly, these concerns are grave and are particularly important to the library profession, which holds user privacy sacred. They present an especially difficult issue with regard to the needs and protections of minority and financially depressed users. The threat of governmental surveillance is a serious issue for all users, but those with the social or economic means to subvert surveillance enjoy the privilege of work-arounds and other ways of avoiding the watchful eye of the state or of corporate wranglers. For those who lack these means, the only recourse is either to acquiesce to a violation of their privacy or to go without information or access. Additionally, systemic elements of oppression in our culture place minority or poor users in an even tighter spot, as the threat of surveillance or commodification may serve to further disenfranchise this user base. This in turn may push these users to abandon digital services and materials entirely, perceiving them as yet another orchestrated means of keeping them from true agency or autonomy.

In light of these sensitive issues, what can we do to ensure that we do not alienate the very users we hope to reach with our new approach to services and content? A few recommendations include:

  • Institutional mandates that all development proposals, RFPs, and deliverables must ensure the protection of user privacy through the masking of sensitive data and redundant security measures (a minimal sketch of what such masking might look like follows this list)
  • Increased advocacy in the face of social pressures to give in to governmental incursion into citizen privacy
  • Educational outreach to ensure users remain informed about the information risks that certain user behaviors can invite
  • Positive institutional mandates to protect user privacy at all levels
  • Careful consideration to weigh the impact of safety measures against the user experience of access and use
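
To make the first recommendation a bit more concrete, here is a minimal sketch of what "masking of sensitive data" might look like in practice: pseudonymizing patron identifiers before usage logs are stored, so that aggregate analysis remains possible without retaining personally identifiable information. The field names, the environment-variable salt, and the keyed-hash approach are all illustrative assumptions, not a prescription for any particular ILS.

    # Illustrative sketch: pseudonymize patron identifiers in a usage-log record
    # before it is written to an analytics store. The field names are hypothetical,
    # and the salt is assumed to live outside the codebase (e.g., an environment variable).
    import hashlib
    import hmac
    import os

    SECRET_SALT = os.environ.get("LOG_PSEUDONYM_KEY", "change-me")  # keep out of source control

    def pseudonymize(patron_id: str) -> str:
        """Return a stable, non-reversible token for a patron ID."""
        digest = hmac.new(SECRET_SALT.encode("utf-8"),
                          patron_id.encode("utf-8"),
                          hashlib.sha256)
        return digest.hexdigest()[:16]

    def mask_record(record: dict) -> dict:
        """Drop direct identifiers and replace the patron ID with a token."""
        masked = {k: v for k, v in record.items()
                  if k not in ("name", "email", "address")}
        masked["patron_token"] = pseudonymize(masked.pop("patron_id"))
        return masked

    if __name__ == "__main__":
        raw = {"patron_id": "P001234", "name": "Jane Doe", "email": "jane@example.org",
               "item": "ebook-5521", "action": "checkout"}
        print(mask_record(raw))  # no name, email, or raw ID in the stored record

Because the same salt always yields the same token, usage patterns can still be studied in aggregate while the stored log contains no direct identifiers.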

While none of these represents a comprehensive solution to the complex issue of privacy and the protection of marginalized user groups, they are steps in the right direction. Digital technology and services have opened a door into a brave new world of access for users who have traditionally gone underserved by many segments of public service. Along with these new opportunities, risks have appeared. And while we may have our own personal and professional ideas about how best to deal with these issues, we must make sure that we remain conscious of those who stand to gain the most from these new modes of library materials and services. Even though they have so much to gain, they still have so much to lose.


Articles Cited

Grimmelmann, J. (2009). How to fix the Google Book Search settlement. Journal of Internet Law, 12(10), 10-20.

Hanson, C. (2011). Why worry about mobile? Library Technology Reports (February/March), 5-10.

Hesseldahl, A. (2014). How Heartbleed’s worst-case scenario was proven possible. Re/code. Retrieved from http://recode.net/2014/04/27/how-heartbleeds-worst-case-scenario-was-proven-possible/

Malinconico, S. (2011). Librarians and user privacy in the digital age. Scientific Research and Information Technology, 1(1), 159-172.

Sennyey, P., Ross, L., & Mills, C. (2009). Exploring the future of academic libraries: A definitional approach. The Journal of Academic Librarianship, 35(3), 252-259.

Accessibility Awareness and the Role of LIS Schools

Accessibility of web-based materials is an extremely important element of information access for disabled users of all sorts. It is so important, in fact, that the Web Content Accessibility Guidelines (W3C, 2012), a standard technical approach and set of assessment criteria, have been adopted by the ISO in order to standardize their application and thereby promote the highest possible level of accessibility. However, despite the widely available standard and its adoption by the appropriate organizations, accessibility guidelines are not as widely embraced as they should be. Brophy & Craven (2007) note several cases that evince a general lack of accessibility considerations, such as:

  • a Brophy & Craven study that found only 49 of a sample of 134 UK homepages were deemed “Bobby Approved” via the Bobby accessibility testing software (964)
  • a generally negative review of websites from European Union member nations that concluded public service sites had “a long way to go” in terms of achieving accessibility (965)
  • a 2005 SupportEAM project that found only 35% of respondents actively tested the usability of their websites despite an 80% positive response in terms of accounting for accessibility during design stages (967)

While these numbers are rather lamentable, there is some good news to be gained from the progress made in one specific area: library and information science schools. Comeaux & Schmetzke (2007) completed a follow-up to Schmetzke’s 2003 study on the accessibility of LIS department pages and library pages (461). In their follow-up, they sought to identify any changes over time within the same sampled institutions. Even though they found some troubling trends – such as the seemingly random distribution of positive and negative change among the surveyed institutions (472) – they found overall that the sampled sites improved in terms of the percentage of Bobby-approved pages (i.e., pages that adhere to the WCAG) and the average number of barriers per page (467). Based on these findings, we see a higher level of compliance with the WCAG and a greater level of consideration for users of adaptive technologies such as those reviewed by Guder (2012), including screen readers and literacy software (15-16).

What does this mean for the world of accessible web content writ large? Well, it could mean several things, many of which must be heavily qualified. So, here are the qualifiers:

  • First, there is no silver bullet of accessibility. One-hundred percent compliance is not likely to ever be achieved. We should not delude ourselves into thinking that pie-in-the-sky goals should be our mission; this takes away from the hard work of incremental improvement.
  • Second, this is a recommendation of approach only. Obviously implementation will be locally flavored and will influence the success of the initiative.

Having gotten that out of the way, here are my recommendations.

Considering the overall positive assessment of LIS schools in terms of compliance with the WCAG framework and the deplorable state of other web content, I envision a partnership that parlays the positive behaviors of library and information science professionals and scholars into the accessibility initiatives of the broader web community. As we already have clearly documented evidence that LIS professionals (and their associated web content) are savvier about accessibility, we may turn to them as reasonable intermediaries between the W3C and individual content producers when it comes to understanding accessibility guidelines and implementing them properly. What form could this take? Here are some examples:

  • Education and Consciousness-Raising Initiatives: the role of LIS departments – in addition to their research activities – is educative. However, as a matter of social welfare, they should think about extending their educational work beyond the classroom through such initiatives as public teaching days and free seminars. I’m sure that for many content producers, the failure to produce accessible content stems solely from a lack of awareness. Through introductory seminars and “day-in-the-life” activities – experiential learning exercises where participants can see just how inaccessible their data is when accessed through a screen reader application – participants can gain a firsthand understanding of what it means to be accessible and what the dangers of remaining inaccessible are.
  • On-Demand training materials: Perhaps those who search for information on creating accessible content are turned off by the complexity of available materials (such as the WCAG technical specs, which, I’ll admit, are a bit daunting). LIS professionals and department members can bring their skill sets as seasoned educators to the arena and create easy-to-use online tutorials that content providers can access at their leisure. These materials may in turn make the prospect of realigning their content less scary.
  • An open assessment environment: As educators, LIS department members should be familiar with the ins and outs of open-door policies. Why not extend that to the broader world of content providers? By advertising assessment services, LIS departments can communicate to the broader world that they know what accessible content looks like and will let you know if you have it. Content providers can then collaborate with LIS staff to assess their content via manual testing or automated applications (like Bobby) in order to determine how compliant their information is and what can be done to improve their standing (a minimal sketch of such an automated first pass follows this list).
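
Since Bobby itself has long since been retired, the sketch below is only a stand-in for the kind of automated first pass such an assessment service might run: it fetches a page and flags a few of the most common WCAG failures (images without alternative text, a missing document language, links with no discernible name). The URL is a placeholder and the library choices (requests, BeautifulSoup) are assumptions; a real audit would pair something like this with manual testing.

    # Minimal sketch of an automated first-pass accessibility check.
    # It flags a few common WCAG issues; it is NOT a full conformance test.
    # Assumes the requests and beautifulsoup4 packages are installed.
    import requests
    from bs4 import BeautifulSoup

    def quick_audit(url: str) -> list[str]:
        issues = []
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")

        # WCAG 1.1.1: non-text content needs a text alternative
        for img in soup.find_all("img"):
            if not img.get("alt"):
                issues.append(f"img missing alt text: {img.get('src', '(no src)')}")

        # WCAG 3.1.1: the page language should be declared
        root = soup.find("html")
        if root is None or not root.get("lang"):
            issues.append("html element has no lang attribute")

        # WCAG 2.4.4: links need a discernible name
        for a in soup.find_all("a"):
            if not a.get_text(strip=True) and not a.get("aria-label"):
                issues.append(f"link with no text: {a.get('href', '(no href)')}")

        return issues

    if __name__ == "__main__":
        for problem in quick_audit("https://www.example.edu/"):  # hypothetical URL
            print(problem)

Even a crude report like this gives a content provider a concrete starting point for a conversation with LIS staff about what to fix first.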

Granted, these are just recommendations, but let us consider the alternative. If LIS professionals and other content providers continue to develop their materials in separate silos, the LIS world may continue to move along at an acceptable rate while other content continues to grow exponentially and with a questionable rate of compliance with accessibility guidelines. Rather than allowing this status quo to continue, we can take proactive steps toward altering the trajectory of content production. By providing a helping hand to our non-academy-affiliated, content-providing brethren, we can all work towards a better, more accessible future of web content.


 

Works Cited

 

  • Brophy, P. & Craven, J. (2007). Web accessibility. Library Trends, 55(4), 950-972.
  • Comeaux, D. & Schmetzke, A. (2007). Web accessibility trends in university libraries and library schools. Library Hi Tech, 25(4), 457-477.
  • Guder, C. (2012). Making the right decisions about assistive technology in your library. Library Technology Reports, 48(7), 14-21.
  • W3C. (2012). Web content accessibility guidelines (WCAG) overview. Retrieved from http://www.w3.org/WAI/intro/wcag.php

HCI, Cognitive Psychology, and Integrating the Spectrum of Disabilities

The field of Human-Computer Interaction (HCI) is the study of the techniques and methodologies related to the ways in which human users interact with computer systems via software, hardware, displays, and the metaphors employed in programming and interface design.

Something like this, only a little better.

According to Ebert, Gershon, & van der Veer (2012), HCI directly engages with elements of cognitive psychology insomuch as the HCI elements evinced in system development facilitate the user’s ability to internalize information, make decisions, interact effectively with the computer, and experience the wide range of affective responses that information systems elicit. By properly applying psychological concepts to HCI activities, system developers can help support the encoding of information in the user’s mind. Similarly, Ferreira and Pithan (2005) document how the application of HCI concepts directly influences users’ emotional engagement with their information-seeking practices. The use of information systems can positively or negatively influence affective experience, a major part of Kuhlthau’s Information Search Process. As Kuhlthau has described in her theoretical model, affective experience can disincline users to continue seeking information (or using information systems), depending on the positive or negative emotional reinforcement that occurs during the use of a given system.

Considering the psychological ramifications of system development and the emotional experiences that occur during information system usage, practitioners of HCI-centric system design (also known as human-centered design) must take a holistic view of users when developing systems: the user as a thinking, feeling, experiencing individual whose responses are informed by thoughts, past experiences, and emotive states.

Gupta (2012) writes about the variety of approaches to HCI and the future developments that will impact the information system landscape. One of the more interesting concepts presented in his article is the development of multi-modal HCI (MMHCI), an interaction approach that employs a variety of interaction methods, such as visual-, audio-, and sensor-based system interactions. As computers develop and new avenues of system interaction become technically and economically feasible, a greater level of accessibility will be afforded to computer users who have traditionally been shut out from use. In particular, increased MMHCI will open new doors of computer engagement to users whose disabilities have not yet been accounted for by conventional accessibility platforms.

What is extremely interesting about marrying these three perspectives (i.e., the cognitive psychological impacts of HCI design, the affective experience of users, and the MMHCI implications for users with disabilities) is that all three will need new approaches when facing issues of disabled users and their cognitive and emotional experiences of computer usage. MMHCI stands to facilitate computer usage among new segments of users with disabilities. Ranging from physical to cognitive disabilities, these user populations – while not new to computer usage – will experience new levels of system accessibility that directly address their specific needs and HCI requirements. Audio recognition software allows for real-time captioning of motion pictures; better voice recognition applications will alleviate the need for traditional input devices for those unable to use a keyboard or mouse; and more.

However, HCI does not begin and end with the translation of user need into computer action. As was mentioned in Ebert, Gershon, & van der Veer (2012) and Ferreira & Pithan (2005), emotive and psychological experiences are deeply engaged with HCI concepts. This means that as we move to adopt HCI concepts for users with disabilities, we must remain conscious of their psychological and emotive needs. This may take many forms and will require a great deal of engagement with user groups to ensure that systems are being tailored to the real needs of users and not merely those that we perceive. For example, do the voice style, gender, and speed of a speech system have an impact on the cognitive experience of blind computer users? Another concern may be the emotional experience of using voice recognition software as an input device: does the system’s sensitivity and accuracy generate emotional responses in users, and if so, what can be done to promote a positive emotional experience, one free from the stress and anxiety associated with system failure or unresponsiveness?
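
As a small illustration of the kind of parameters such user testing would probe, the sketch below exposes voice, speaking rate, and volume as user-adjustable settings rather than hard-coded defaults. The use of the pyttsx3 text-to-speech library is an arbitrary assumption (any speech engine would do), and the default values are placeholders to be tuned with real users, not recommendations.

    # Sketch: exposing speech-output parameters (voice, rate, volume) as user
    # preferences so their cognitive and affective impact can be studied with
    # real users. pyttsx3 is one example offline TTS library; values are placeholders.
    import pyttsx3

    def speak(text: str, voice_index: int = 0, rate: int = 170, volume: float = 0.9) -> None:
        engine = pyttsx3.init()
        voices = engine.getProperty("voices")          # voices installed on the system
        if 0 <= voice_index < len(voices):
            engine.setProperty("voice", voices[voice_index].id)
        engine.setProperty("rate", rate)               # words per minute
        engine.setProperty("volume", volume)           # 0.0 to 1.0
        engine.say(text)
        engine.runAndWait()

    if __name__ == "__main__":
        # In a usability study, these settings would come from a participant's profile.
        speak("Your requested item is ready for pickup.", voice_index=0, rate=150)

In a study setting, each participant’s preferred combination could be recorded alongside their affective responses, giving developers concrete data rather than guesses.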

Additional disabilities will require an even greater degree of consideration when applying HCI concepts. For users with intellectual or learning disabilities, how can HCI improve the usage experience? For users with a wide spectrum of disorders – including, but not limited to, dyslexia, Down syndrome, autism-spectrum disorders, and more – specialized HCI tailoring can now be accomplished to a greater degree than has ever before been possible. In this regard, system development can pay particular attention to the individual cognitive requirements and affective needs of users with disorders that have traditionally been underserved by the computer industry. While this may seem like a daunting task at this point, the future development of computer technology and cognitive psychological research will better enable system developers to meet these needs. However, we must remain ever conscious of the human-centered design models that will enable system development to meet these needs, lest we become too mired in a techno-centric design approach.

The future of HCI research holds a great deal of promise for developing systems that will benefit those who have been traditionally underserved by the computer industry, information systems, and information professionals. As we move forward into a new era of system capability, we must all do our part to learn more about how to better serve disabled patrons by applying HCI methodology and practice in ways that have not been explored before. Opportunities for greater interdisciplinary participation with psychology professionals and researchers will arise, and we should do our best to seize on these opportunities whenever possible. By actively engaging with the future of HCI, we can help to hasten its arrival.

Articles Cited

Ebert, A., Gershon, D., & van der Veer, G. (2012). Human-computer interaction: Introduction and overview. Künstliche Intelligenz, 26(2), 121-126.

Ferreira, S. & Pithan, D. (2005). Usability of digital libraries: A study based on the areas of information science and human-computer-interaction. OCLC Systems & Services, 21(4), 311-323.

Gupta, R. (2012). Human computer interaction – A modern overview. International Journal of Computer Technology & Applications, 3(5), 1736-1740.

Choosy Libraries Choose… What?

As both a cost-saving technique and a means of promoting scholarly collaboration and sharing, academic libraries have begun to take a shared approach to ILS selection, development, and implementation. As can be seen in Vaughan and Costello’s 2011 article, shared-systems approaches are common and take the form of shared software purchases, hardware upgrades, support, and ILS planning/development (64; 66; 67; 69). This is certainly a positive development within the academic environment and – considering the ever-shrinking funding profile of scholarly libraries – highlights the benefit to all participants when a collaborative approach is taken.

[Image: “sharing is caring” graphic]

Sharing is not only good for your institutional funding requirements. It’s good for your library’s soul.
Retrieved from http://www.brandignity.com/2011/03/online-sharing-importance/

That being said, there may be innate difficulties in this approach that do not readily come to light upon first review.

As is seen in the Wang and Dawes article (2012), most enterprise resource management systems (ERMs) – a potential commercial replacement for aging ILS platforms – are developed neither to fit current library acquisition models nor to follow existing library workflows (79). While this may seem an acceptable hurdle for a single institution to overcome, the challenge of modifying a partial-fit system to suit two, three, or possibly a dozen differing institutions raises the stakes dramatically. For one institution to make organizational adjustments to meet the requirements of a new system is one thing; to foist this change on unsuspecting third parties is another thing altogether. This sort of adoption could lead to several negative outcomes (i.e., unilateral imposition of an unfavorable system; nonparallel workflows or usage patterns; refusal to accept the new shared system, thereby defeating the purpose of shared system resources).

Another issue that comes into play is the consolidated vendor landscape observed by Breeding in the Kinner and Rigda article (2009), a product of the changing market forces that shaped company acquisitions and available products through the 90s and beyond (404-5). While this may not seem like an issue at first, it matters in particular for shared-system situations. I hypothesize that with shared institutional system implementation and use, decisions on systems acquisition are made with an eye to a larger user base, a larger constituent collection, and (potentially) a larger funding pool from which to draw. Given all this, vendor solutions may be seen as a safer investment because 1) vendors have the knowledge, experience, and specialized skill set to accommodate a large distributed client base, something that may not be readily available in an open-source system (OSS) implementation, 2) ongoing system support does not rest with any one part of the consortial body, and 3) institutions get the heebie-jeebies over trusting an in-house developed system to serve a whole mess of users and objects.

So, back to the original point. Considering the consolidated system vendor landscape that Kinner and Rigda present, a shared system implementation may influence institutions to stick with vendor solutions. And with a reduced number of market options available, it is entirely feasible that the selected option will not be the best possible choice. While this is not a situation of entirely inelastic demand (i.e., we’ll take what we can get wherever we can get it), it does present some limiting factors. To try a metaphor, it’s like the difference between shopping for yourself and shopping for you and twelve of your college roommates. When you buy your own peanut butter, you can buy pretty much whatever brand you’d like, notwithstanding artisanal gold-leaf infused legume paste. But when you buy for a larger group of people, you’re less inclined to buy what you want, instead looking for options that offer volume and value. So, instead of buying the chunky Skippy you know and love, you may just decide on the three-gallon tub of Costco-brand peanut butter.
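
To make that hypothesis a little more tangible, here is a toy weighted decision matrix of the sort a consortium might walk through when scoring a vendor solution against an open-source one. Every criterion, weight, and score below is invented for illustration; the only point is that a large, shared user base tends to shift weight toward distributed support capacity and away from local customizability.

    # Toy weighted decision matrix for a shared-system acquisition decision.
    # Criteria, weights, and scores are illustrative placeholders only.

    criteria = {                                      # relative importance (sums to 1.0)
        "distributed support capacity": 0.35,
        "fit with existing workflows": 0.25,
        "long-term cost": 0.25,
        "local customizability": 0.15,
    }

    options = {                                       # scores on a 1-5 scale
        "commercial vendor": {"distributed support capacity": 5,
                              "fit with existing workflows": 3,
                              "long-term cost": 2,
                              "local customizability": 2},
        "open-source system": {"distributed support capacity": 2,
                               "fit with existing workflows": 3,
                               "long-term cost": 4,
                               "local customizability": 5},
    }

    for name, scores in options.items():
        total = sum(weight * scores[criterion] for criterion, weight in criteria.items())
        print(f"{name}: {total:.2f}")

Shift the weights to reflect a single institution’s priorities (say, heavier weight on customizability and cost) and the ranking can easily flip, which is exactly the limiting factor described above.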

I don’t suggest that Vaughan and Costello are ignorant of these issues. Clearly, neither are their surveyed institutions: many of them have memoranda of understanding that articulate the issues discussed here. Additionally, not all institutions stayed the course for every shared-system decision, such as contributing funding for unwanted system add-ons (66) or subscribing to enrichment services (67).

I don’t want to lament the prospects of shared ILS implementations. One need only look at the Washington Research Library Consortium for a positive example of what can happen when these initiatives go well.

[Screenshot]

My heart skips a beat every time I have to provide my credentials.

But what can be done to really bring shared system implementations to a successful conclusion? A few thoughts:

  • Do a proper institutional analysis of each participating organization before even thinking about undertaking this type of initiative. This way you can spot potential issues early on and make an informed go/no-go decision before too much time and money is expended.
  • Consider the necessity of workflow alignments. Different institutions, no matter how similarly focused or structured, will do things differently. These variations will be thrown into stark contrast when a system implementation is embarked upon. For the good of the project and to capitalize on the shared resource approach, some – or probably all – institutions may have to fundamentally alter how they perform their regular duties such as acquisition, cataloging, circulation, and more.
  • Properly invest in OSS options. If, in the end, the agreed-upon approach is to implement an open-source system, don’t do so in half measures. Appoint full-time employees to work on the system. Earnestly involve all IT representatives in the development process. Seriously consider whether a vendor or consultant would positively contribute to your open-source approach.

The shared resources model presents some very serious challenges to any participating institution. But it also can afford some great benefits such as cost consolidation, enriched resources, and easier institutional collaboration. Before these initiatives begin, however, very serious discussions need to occur. Minds will be racked, patience will be tried. But, at the end of it all, a better system will emerge.

 

Articles cited:

Vaughan, J. & Costello, K. (2011). Management and support of shared integrated library systems. Information Technology and Libraries, 30(2), 62-70.

Kinner, L. & Rigda, C. (2009). The integrated library system: From daring to dinosaur? Journal of Library Administration, 49, 401-417.

Wang, Y. & Dawes, T. (2012). The next generation of integrated library system: A promise fulfilled? Information Technology and Libraries, 31(3), 76-84.

 

The Never-Ending Pursuit of HCI Perfection

Zhang et al., in their article “Integrating human-computer interaction development into the systems development life cycle: A methodology” (2005), make a very compelling argument for the importance of establishing a System Development Life Cycle (SDLC) that keeps the human element – i.e., the user – at its core. This, in a way, makes perfect sense: systems that are not understandable, ergonomically sound, or usable (within a certain tolerance of user skills and technical proficiency) will not be used.

[Image: a chair made of cactus]

Speaking of ergonomics…
Retrieved from http://www.suramya.com/blog

And while they provide a useful framework for implementing a human-centered project plan – HCI analysis during each SDLC stage, HCI matrices that serve as useful checklists, and iterative usability testing throughout the SDLC – I cannot help but think that there is an inherent flaw in their thinking. Namely, their model aims at too high an ideal, one that cannot possibly be attained during the development of your average (or even above-average) information system.

Let us first look to Cervone’s 2007 article, “The system development life cycle and digital library development.” In it, Cervone – in a throw-away closing paragraph – declares, “Eventually, though, the new system becomes the old system and due to changing requirements and technical obsolescence, it reaches a point where it is no longer cost effective or technologically possible to continue maintaining the system in an effective manner. It is at this point that we put a new team together and start the system development lifecycle anew to define a successor digital library project” (351-2). What strikes me about this statement is the never-ending, ongoing evolution of system development that goes beyond the individual project level. Regardless of how much time, energy, and dedication is funneled into a single project, it will ultimately be replaced by some future solution that does what the initial system did, only better, faster, and, not to mention, better-looking.

So, what impact does this line of thinking have on Zhang et al? First, it is important to understand that their article comes as a reaction to a perceived dearth of literature and critical discourse on the topic of human-centered design in the context of SDLC materials and texts.

[Table from Zhang et al. (2005, 517)]

I think we can all agree that it is a pretty bleak topography to survey (Zhang et al., 2005, 517).

I would not disagree with them that there needs to be a larger conversation about HCI with regard to its importance and influence on the SDLC and the individual development tasks that make up the larger project. But I would ask: given the ongoing nature of system development and the necessary constraints of development platforms and technical capabilities, what should the ultimate goal of the human-centered SDLC be?

A few thoughts to consider:

  • One can only develop a system to the extent that the available development software will allow. In terms of HCI, a system can be made only as usable as the technical possibilities permit. Things like system metaphors, color depth, visibility, and ease of use are constrained by the technical context within which they are developed. Regardless of how human-centered your development model is, there are imposed constraints on how usable and ergonomic a system you can develop. For example, look at early operating systems (e.g., MS-DOS, Mac OS System 1). The degree of usability will never reach the ideal you set out to achieve.
  • Zhang et al. acknowledge that total usability is impossible. In their evaluation model (528), they set the tolerance level at 85%. Within the remaining 15% may exist several potential users who cannot (or will not) be accommodated by the system. Therein may hide some unfortunate biases such as ableism, ageism, or ethnocentric perspectives on usage. Regardless, this acknowledgment in their own work should be incorporated into the human-centered model as a supplementary maxim: no system is 100% usable to all users (a toy illustration of such a threshold check follows this list).
  • Over time and through multiple stages of development iteration, usability will get better. We need not look too far to see that usability, providing that it moves along with trends in technical feasibility and development platforms, will improve. I present as an example the iPod system interface:
    [Image: timeline of iPod interface designs over the years]

    How time flies!
    Retrieved from http://hitech-repairs.com/?page_id=335

    We see that increased technical capabilities over time allowed for more human-centered designs: moving from low-resolution black-on-grey text to full high-contrast color; better, more understandable organizational structures with visual aids (album covers); and more expansive means of interaction (buttons, rotation pads, touch screens). The point I’m trying to make is that your system, although not bad in its own right, will be made better through the proper implementation of future technologies and platforms. And while we would all hope to be like the innovative gods at Apple, we can only acknowledge that we – and our system development skills – are merely mortal.
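
To illustrate the kind of acceptance criterion the second point above refers to, here is a toy check of usability-test results against an 85% task-success threshold. The task names and outcomes are invented, and treating “85% of tasks succeed” as the tolerance level is my own simplification of the figure reported from Zhang et al.; a real evaluation would define success far more carefully.

    # Toy illustration of an acceptance threshold like the 85% tolerance level
    # Zhang et al. describe. Task names and results below are invented.

    TOLERANCE = 0.85   # proportion of test tasks that must succeed

    def meets_tolerance(results: dict[str, bool], tolerance: float = TOLERANCE) -> bool:
        """Return True if the share of successful tasks meets the tolerance level."""
        success_rate = sum(results.values()) / len(results)
        print(f"success rate: {success_rate:.0%} (tolerance: {tolerance:.0%})")
        return success_rate >= tolerance

    if __name__ == "__main__":
        usability_test = {
            "find a known item": True,
            "place a hold": True,
            "renew a loan": True,
            "complete checkout with a screen reader only": False,
            "change the interface language": True,
        }
        print("acceptable" if meets_tolerance(usability_test) else "iterate again")

Note which task fails in this made-up run: a pass/fail threshold can look healthy on paper while quietly leaving out exactly the users the earlier posts worry about.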

So, I suppose that what I am saying is not that Zhang et al. are wrong; in fact, far from it. Rather, I think that they need to temper their idealism with some of Cervone’s realism. Yes, strive to make the best, most human-usable system that you possibly can. Yes, repeatedly ask the tough questions about ergonomics, readability, system metaphors, and understandability. Yes, tirelessly test your systems with live users whose opinions will guide you to a better product. But, ultimately, realize your own constraints. Acknowledge that they are not flaws, but merely contextual constraints that you cannot change. And, please, understand that your system will eventually be old and no longer deemed “user friendly.” But as you set out onto the next development venture, bring along what you’ve learned. It will help to build a better foundation from which to begin.

Articles Cited:

Cervone, H. (2007). The system development life cycle and digital library development. OCLC Systems & Services: International digital library perspectives, 23(4), 348-352.

Zhang, P., Carey, J., Te’eni, D., & Tremaine, M. (2005). Integrating human-computer interaction development into the systems development life cycle model: A methodology. Communications of the Association for Information Systems, 15, 512-543.