Unit 05 Management - Essay:
Digital Rights Management and Libraries
ILS 501: Introduction to Information Science and Technology
Dr. Yan Liu
Southern Connecticut State University
The Internet has been called, facetiously, “a global collection of copying machines” (Godwin, 1998, p. 166). The emergence and dominance of a technology whose basic function is to share and propagate information is understandably alarming to copyright holders and intellectual property owners, and they have devised a number of strategies to protect their rights. This paper discusses the copyright-protection technologies collectively called Digital Rights Management (DRM), defining some of the most common forms. In addition, it considers the impact of DRM, data mining, and other access technologies on libraries.
Digital Rights Management (DRM) is a controversial and complex issue for stakeholders in many worlds, including entertainment, education, business, publishing, computer software and hardware manufacturing, and libraries. In 2003, one study estimated that the music, film, book, and software publishing industries lost $2.022 billion worldwide as a result of copyright piracy (Jahnke & Seitz, 2005). DRM is a collection of technological schemes that strives to battle this problem.
But DRM is not easily defined. The Online Dictionary for Library and Information Science (ODLIS) defines DRM in terms of its broad scope:
A system of information technology components (hardware and software) and services designed to distribute and control the rights to intellectual property created or reproduced in digital form for distribution online or via other digital media, in conjunction with corresponding law, policy and business models.
Diehl (2008) explores DRM’s impact on many fields, concluding: “There is no unique universal definition. There are many legal, economic, functional and technical definitions” (p. 19). May (2003) emphasizes that the management of digital rights relies on the law to distinguish the rights of the creator from those of the user or to balance the “private rights to reward and the social benefit of information/knowledge diffusion.” On the other hand, the digital management of rights calls on technologies to “control the distribution of content” (May, 2003). Godwin (2008) points out that DRM often goes beyond the law, calling it “a collective name for technologies that prevent you from using a copyrighted digital work beyond the degree to which the copyright owner wishes to allow you to use it.” The focus of this paper will be on how DRM technologies are used to control digital content and protect digital rights, and some of the ways that this affects library operations.
DRM and the Law
The purpose of this bibliographic essay is not to analyze the legal implications of DRM. However, it is important to point out that the question of DRM is indeed a legal one, and that a wrong step can land one in court. A brief history of copyright law may be helpful to understanding rights management in the digital age.
Belcredi (2001) traces copyright law in the United States from its origins through the 21st century. James Madison authored the copyright and patent clause in the Constitution, which states: “The Congress shall have power to… promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries” (U.S. Constitution, article 1, section 8, clause 8). Historically, the goal of copyright has been to maintain the balance between the interests of copyright holders and the public good (p. 8). The Constitutional fathers understood that authors and inventors required economic incentives to continue to create, and that Congress needed to limit the creators’ temporary monopoly over their work if the public was to benefit from a free exchange of ideas and information (pp. 6-9).
Furthermore, Belcredi (2001) establishes that The Copyright Act of 1976, like the Constitution, was written “to avoid the application of copyright law in a way that would create an imbalance of rights” (p. 26). Specifically, the “fair use” clause of Section 107 states:
Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright (qtd. in Belcredi, 2001, p. 26).
Not surprisingly, new technologies (from the printing press to the CD-burner) stress copyright law and cause it to evolve (p. 10).
Belcredi (2001) follows the evolution of copyright law to recent times, addressing the difficulty of applying both the copyright and patent clause and the “fair use” clause to cyberspace. The 2001 Napster case provided an important test. The music industry sued Napster, created by 19-year-old college student Shawn Fanning, on the grounds that its peer-to-peer (P2P) technology violated the copyright holder’s right of reproduction protected in section 106 of the Copyright Act of 1976 (p. 12). Napster permitted “individual computer users to open their hard drives directly to one another, allowing others to search for and swap files between computers without recourse to more traditional Web databases and servers” (p. 11).
In conclusion, Belcredi (2001) argues that the United States Ninth Circuit Court of Appeals, which ruled in favor of the music industry by holding Napster liable for the use of its technology, disregarded the balance of rights by favoring the copyright holder over the public. The issue with Napster was settled via access technology that filters copyrighted files from the system and blocks and tracks users based on their Internet Protocol addresses (p. 98). Belcredi (2001) contends, nevertheless, that the problem of balancing creator and public rights remains unsolved: “On the one hand, the technology has developed to meet the needs for Napster. On the other hand, new technology will be developed to evade tracking and filtering software” (p. 98). Technology is both the problem and the solution to copyright in the digital age.
The Digital Millennium Copyright Act of 1998 (DMCA) modifies copyright law in light of rapidly changing technology. Among other things, DMCA makes it illegal (1) to circumvent any “technological measure that effectively controls access to a work,” (DMCA, Sec. 1201, para. A) and (2) to “manufacture, import, offer to the public, provide or otherwise traffic in any technology, product, service, device, component…that is primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work” (DMCA, Sec. 1201, para. E). In other words, under the DMCA, it is against the law to circumvent anti-piracy measures or to sell or distribute any devices designed to circumvent such measures.
The DMCA represents a further shift in copyright law favoring the copyright owner, leading to even greater imbalance. Under previous copyright law it was not illegal to photocopy a page from a magazine and post it in a public place; or to lend a book to a friend; or to take a record to a neighbor’s house and play it on his record player. Sharing copyrighted works in these ways was protected under the concept of “fair use.” But if DRM technology is put in place to prevent users from sharing in these ways, the DMCA makes it illegal to circumvent it.
The anticircumvention provisions in the DMCA anger civil rights activists like Doctorow, who argues,
Anticircumvention lets rightsholders invent new and exciting copyrights for themselves -- to write private laws without accountability or deliberation -- that expropriate your interest in your physical property to their favor…So when your French DVD won't play in America, that's not because it'd be illegal to do so: it's because the studios have invented a business-model and then invented a copyright law to prop it up. The DVD is your property and so is the DVD player, but if you break the region-coding on your disc, you're going to run afoul of anticircumvention. (Doctorow, 2004)
Access Control Technologies: An Overview
Access control technologies, as evidenced in the Napster case cited earlier, may be used to balance the rights of the copyright holder and the public. However, access control technologies are also necessary to balance the need for security and the need for convenience, while posing privacy issues (Williams & Sawyer, 2010, p. 470).
Williams and Sawyer (2010) define security as “a system of safeguards for protecting information technology against disasters, system failures, and unauthorized access that can result in damage or loss” (p. 470). The five components of security are “deterrents to computer crime, identification and access, encryption, protection of software and data and disaster-recovery plans” (p. 470). Deterrents to computer crime include “enforcing laws, CERT: the Computer Emergency Response Team, tools for fighting fraudulent and unauthorized online uses” (p. 470). The Software Publishers Association helps enforce laws by reporting software piracy, a felony that can result in a prison term and fines of up to $250,000 (p. 470). CERT, an agency created by the U.S. Defense Department, deters computer crime by monitoring and reporting suspicious Internet activity (p. 470). The third deterrent to online crime is a variety of software: rule-based-detection software, predictive-statistical-model software, employee internet management (EIM) software, Internet filtering software, and electronic surveillance (pp. 470-471).
Williams and Sawyer (2010) discuss three methods to address the second component of security, computer identification and access: (1) “what you have - cards, keys, signatures, and badges,” (2) “what you know - pins and passwords,” and (3) “who you are - physical traits” (pp. 472-473). The third component of security, encryption, involves translating readable or simple text into a secret code that can only be read with a key (p. 472). The fourth component, software protection, may be managed through “control of access, audit control, and people controls” (p. 473). Of the various methods used to control access to data, those that will be further addressed in regard to DRM are encryption, marking, and filtering technologies.
How DRM Works
Because DRM is a collective term for a broad range of strategies (and new strategies are being developed all the time), it’s difficult to pinpoint exactly what it is. Godwin breaks down existing strategies into two broad categories: encryption and marking (Godwin, 2008).
Encryption is the use of a computer process that encodes or scrambles information in such a way that only those who have the right key to the code can obtain access to the information. As there are hundreds of different encryption schemes for different kinds of media, we will discuss only a few examples.
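The general mechanism can be illustrated with a toy symmetric cipher, a minimal sketch in Python and not any real DRM scheme: the content is scrambled with a keystream derived from a secret key, and only a holder of the same key can recover the original.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key by chained hashing."""
    out = b""
    block = key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypt (or decrypt -- XOR is symmetric) data with the keystream."""
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

plaintext = b"licensed media content"
key = b"subscriber-key"
ciphertext = xor_crypt(plaintext, key)
assert ciphertext != plaintext                   # scrambled without the key
assert xor_crypt(ciphertext, key) == plaintext   # readable only with the key
```

Without the key, the ciphertext is noise; this is exactly the position of a non-subscriber receiving a scrambled satellite signal.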
Satellite television content is scrambled so that only authorized viewers – those who have paid for satellite TV service – can watch it. The content may be readily available to everyone, but only those with the decryption key can access it (Godwin, 2008, p. 7). Encryption can also control how content is used. Adobe Acrobat Reader is free software whose encryption allows users to read .pdf formatted documents, but not to alter them (Coyle, 2003, p. 4).
Sometimes the decryption key is installed in a playing device. For instance, if a reader buys an e-book from Amazon.com’s Kindle store, that e-book can only be read on a device sold by Amazon – the Kindle – and no device other than the Kindle will read Kindle files (Kindle Frequently Asked Questions, para. 6). The encryption of the e-books functions to limit the right to access a certain file to those who have purchased the hardware. Kindle’s DRM also prevents users from downloading a certain file more than a limited number of times – so that if users continue to upgrade their hardware, they may be forced to purchase an e-book again.
The DVD industry employs an encryption scheme called CSS, or Content Scrambling System. Decryption keys are installed in DVD players; the manufacturers of the players must purchase licenses to the keys. Only “authorized viewers” – those with the licensed DVD players – can watch the DVDs. One of the aspects of the licensed keys is called region-coding. Region-coding makes it so that if you buy a DVD in Europe, it can only be played on DVD players in Europe (Doctorow, 2004).
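The gatekeeping logic of region-coding can be sketched in a few lines; the region numbers and function names below are hypothetical, chosen only to illustrate the check a licensed player performs.

```python
# Hypothetical region codes (real DVD regions are numbered 1-8).
REGION_US = 1
REGION_EUROPE = 2

def can_play(player_region: int, disc_region: int) -> bool:
    """A licensed player applies its decryption key only to discs
    coded for its own region; any other disc is simply refused."""
    return player_region == disc_region

# A European disc plays in Europe but is refused by a US player.
assert can_play(REGION_EUROPE, REGION_EUROPE)
assert not can_play(REGION_US, REGION_EUROPE)
```

The disc itself is identical everywhere; only the player's licensed key decides what it will decrypt, which is precisely Doctorow's complaint.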
Encryption DRM is probably most famous in the music industry. The Recording Industry Association of America (RIAA), a professional group which represents recording companies, has been highly aggressive in prosecuting file-sharing services (like Napster) and individuals who download songs. When Apple was designing the iPod and the iTunes music store in the early 2000s, they came into conflict with the RIAA in their quest to sell music legally. Apple CEO Steve Jobs was opposed to DRM, but in order to acquire licenses to sell songs, he made a deal with music executives, agreeing to encrypt all music sold on iTunes with a scheme called FairPlay (Knopper, 2009, p. 173). Like Amazon’s Kindle encryption, FairPlay both tethered files to a certain device (the iPod) and limited how many times files could be downloaded.
FairPlay was astonishingly profitable for Apple. Although the iTunes store only made $0.99 per song (of which $0.67 went to the record labels), FairPlay pushed millions of consumers to buy iPods (Knopper, 2009, p. 178). It wasn’t long before Apple controlled an enormous majority of the legal download market. By 2005, music and technology executives were complaining that Apple’s dominance amounted to a monopoly; Jobs blamed the RIAA, which had forced him to protect songs with FairPlay (Knopper, 2009, p. 180). Since that time, the downloadable music industry has opened up, with Apple dropping FairPlay on songs (though preserving it on videos and software for the iPhone and iPod), and other online vendors like Amazon getting in on the action.
Encryption has had decidedly mixed results. One inescapable feature of encryption DRM is that it can be circumvented (or “hacked”). CSS, the encryption used on DVDs, was hacked as early as 1999, although it is still in use (Godwin, 2008, p. 8). Before FairPlay was removed from iTunes, it was an open secret among consumers that one could burn encrypted songs to a CD and then rip the CD, DRM-free, back into iTunes. A quick search on Google reveals numerous strategies for hacking DRM-encrypted files.
For those uncomfortable with hacking, the same search will yield results for illegally downloading DRM-free files. One of the most powerful arguments against encryption DRM systems is that they actually promote piracy; people will steal DRM-free files rather than pay for DRM.
When the much-anticipated videogame Spore was released in 2008, its encryption-based DRM was so invasive and burdensome that gamers revolted. According to one account,
The problem was that the game had to be activated online before it could be played. The title could only be activated a limited number of times before the game shut down, which rankled customers with multiple computers. (Imagine buying a CD that you could only play on a few stereos, and one starts to understand the anger.) Even worse, the game installed a program called SecuRom that had the potential to change the behavior of other programs on the gamer's system, and there was no disclosure of what the program was, what it did, or how to remove it. (Kuchera, 2009, para. 2)
The result? Within one week of its release, Spore was illegally downloaded over 500,000 times (Schoenfeld, 2008).
A second DRM method, sometimes used in conjunction with encryption, is called marking. A mark is a label of some kind, for instance a notification of copyright, that is sent along with downloaded content (Godwin, 2008, p. 12). When the content is copied, the mark is copied, too.
Marks are usually used, rather than encryption, when the goal is to track unauthorized copying rather than to prevent it. Godwin explains,
Putting a mark on a piece of digital music, for example, allows one to create a search engine that can find a marked clip on the Internet, which the searcher might then assume is an unauthorized clip ... Moreover, if the mark is sufficiently sophisticated, it may carry information that can be used to determine where the unauthorized content originated.
Furthermore, if a hacker has decided to remove encryption from a digital file by converting it into the clear and then re-digitizing it (for instance, by burning a song to a CD and then ripping it to a hard drive again; or by downloading a movie file, playing the movie while filming it with a camera, and then uploading the newly-recorded copy), the mark would remain with the file through this process. Marking, then, is generally not a copy-prevention technology so much as a copy-detection one.
Godwin (2008) identifies three types of marking schemes: simple marks, fingerprints, and watermarks. An example of a simple mark might be the logo of a cable television station, which is visible on the screen while a television program plays. Simple marks of this kind are often easy to remove: someone skilled with data imaging software may be able to remove a station logo from a pirated television show.
A second type of mark is called a fingerprint. Unlike a simple mark, a fingerprint is not an element added to the content; fingerprints are derived from the content itself. For instance, music fingerprinting software might analyze the tempo, length, and sound quality of a piece of music and identify it by comparing those characteristics against a database of known recordings. The fingerprint can be used to identify a file, but it cannot be used to authenticate the legality of the file.
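Fingerprint matching can be sketched as a nearest-neighbor lookup; the toy feature vector below (tempo, length, loudness) and the track names are hypothetical, and real systems derive far richer acoustic features.

```python
import math

# Toy fingerprint database: (tempo in BPM, length in seconds, loudness).
KNOWN_TRACKS = {
    "Track A": (120.0, 215.0, 0.8),
    "Track B": (96.0, 180.0, 0.6),
}

def identify(fingerprint, tolerance=5.0):
    """Return the closest known track, or None if nothing is near enough.
    The tolerance absorbs small differences caused by re-encoding."""
    best, best_dist = None, tolerance
    for name, known in KNOWN_TRACKS.items():
        d = math.dist(fingerprint, known)
        if d < best_dist:
            best, best_dist = name, d
    return best

# A slightly re-encoded copy still matches; an unknown recording does not.
assert identify((119.5, 214.0, 0.8)) == "Track A"
assert identify((60.0, 400.0, 0.1)) is None
```

Note that the match only says *which* work a file contains; as the text observes, it says nothing about whether the copy is legal.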
iTunes’ fingerprinting technology was instrumental in discovering a musical fraud in the world of classical piano. The recordings of highly-respected pianist Joyce Hatto, when uploaded into iTunes, were identified by the fingerprint analysis software as recordings of other pianists. The pianist’s husband admitted that he had reengineered and released other artists’ work under her name. The fraud was not discovered until after Hatto’s death, and the extent of her knowledge of it is unknown (Singer, 2007).
The third type of mark, and the one with the most promise as a form of DRM, is the watermark. This is a type of mark that is undetectable to humans, but not to computers. Traditionally, a watermark is a translucent design on paper, visible only when the paper is held up against the light, designed to distinguish an authentic document from a forgery (“Watermark,” ODLIS). A digital watermark serves a similar purpose, but is defined as “a sequence of bits skillfully embedded in a data file, such as an audio CD or motion picture on DVD, to help identify the source of copies manufactured or distributed in violation of copyright” (“Watermark,” ODLIS). An example of a watermark might be a tone or sequence of tones embedded in a music file, unnoticeable to human listeners but serving as a type of decryption key for devices. An authorized player will detect the watermark and play the file.
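One classic textbook illustration of an imperceptible watermark is least-significant-bit embedding. The sketch below is a toy, not a scheme any vendor actually uses: the watermark bits replace the lowest bit of each audio sample, so each value changes by at most one, far below the threshold of hearing.

```python
def embed_watermark(samples, bits):
    """Hide watermark bits in the least-significant bit of each sample --
    inaudible to listeners, but readable by a detector that knows where to look."""
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(samples, n_bits):
    """Read the hidden bits back out of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

audio = [1000, 1001, 998, 1003, 997, 1002]   # toy sample values
mark = [1, 0, 1, 1]
marked = embed_watermark(audio, mark)
assert extract_watermark(marked, 4) == mark
assert all(abs(a, ) if False else abs(a - b) <= 1 for a, b in zip(audio, marked))
```

A sketch this naive also demonstrates Godwin's objection below: anyone who knows where the bits live can zero them out just as easily as the detector reads them.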
Memon and Wong describe watermarks of visual media as “visually very similar but not necessarily identical to the original unmarked image;” they may only be identified by “watermark extraction” or “detection algorithms” (Memon and Wong, 2007).
Al-Haj (2007) sees watermarking technology as one of the most promising means to protect digital media copyright and discourage unauthorized reproduction or alteration. The study defines the criteria necessary for digital watermarking to be successful: “it should be imperceptible and robust to common image manipulations like compression, filtering, rotation, scaling, cropping, collusion attacks among many other digital signal process operations” (Al-Haj, 2007). Al-Haj (2007), in contrast to Jahnke & Seitz (2005), finds that discrete frequency-domain watermarking techniques such as the Discrete Wavelet Transform (DWT) are quite effective due to “excellent spatial localization and multi-resolution characteristics, which are similar to the theoretical models of the human visual system” (Al-Haj, 2007).
On the other hand, Godwin doubts that a truly successful watermarking scheme is even possible. The three characteristics of effective watermarking – invisible to humans, visible to computers, impossible to remove – are mutually incompatible, argues Godwin. If a manufacturer’s device can detect a watermark, then a hacker’s device can find and presumably remove it (Godwin, 2008).
Watermarking is a very new idea, one that, as the disagreement of these authors indicates, is still under development. One recent patent points to both its promise and its dangers. On October 27, 2009, Amazon, maker of the Kindle e-book reader, was awarded a patent for a method of “programmatically substituting synonyms into distributed text content.” According to the patent,
The modification to an excerpt performed by the synonym substitution mechanism may not significantly alter the meaning of the excerpt to a human reader. By replacing one or more selected words in an excerpt with synonyms for the words, illicit copies of the excerpt may be recognized by comparing a copy of the excerpt to the original.
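The patent's idea can be sketched as follows; the synonym table and bit encoding here are hypothetical, intended only to show how a substitution pattern can preserve meaning while identifying a copy.

```python
# Hypothetical synonym table; each occurrence of a key word can encode one bit.
SYNONYMS = {"big": "large", "quick": "fast", "begin": "start"}

def mark_excerpt(text, bits):
    """Substitute a synonym wherever the corresponding bit is 1,
    leaving the meaning of the excerpt essentially unchanged."""
    bit_iter = iter(bits)
    out = []
    for word in text.split():
        if word in SYNONYMS and next(bit_iter, 0):
            out.append(SYNONYMS[word])
        else:
            out.append(word)
    return " ".join(out)

def read_mark(text):
    """Recover the bit pattern by checking which variant of each pair appears."""
    variants = set(SYNONYMS.values())
    return [1 if w in variants else 0 for w in text.split()
            if w in SYNONYMS or w in variants]

original = "the quick reader will begin the big book"
marked = mark_excerpt(original, [1, 0, 1])
assert marked == "the fast reader will begin the large book"
assert read_mark(marked) == [1, 0, 1]
```

Each customer's copy could carry a different bit pattern, so a leaked excerpt identifies its source; the cost, as Scalzi notes below, is that the author's actual words are silently rewritten.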
Whether or not Amazon will be able to use this watermarking scheme depends much upon the cooperation of the authors and publishers who license Amazon to sell their works. Science fiction author John Scalzi, for one, was both amused and indignant:
I certainly won’t be using it, in any event. Hard as it may be for Amazon to believe, I actually use the words I intend to use when I write. If I had wanted to use a different word for something, I already would have (Scalzi, 2009).
DRM: What it Means for Libraries
Libraries are in the business of providing, disseminating and propagating information, and frequently encounter DRM and other access-control technologies in attempting to fulfill their missions. These encounters are often controversial. Sometimes DRM is used for the benefit of libraries; often, though, the library’s goals and the goals of those who impose DRM come into conflict.
Bertot (2009) describes the challenges libraries face in managing public access technology (PAT), including access control technologies. By 2007, PAT research conducted on 35 public libraries in diverse geographic locations indicated that 100% offered access to the internet, 87.7% offered access to licensed databases, 62.5% offered access to digital reference services, and 51.8% offered access to e-books (Bertot, 2009). Specific hardware offered included: “public-access computers, public-access computing registration (i.e., reservation) systems, self-checkout stations, printers, faxes, laptops” and devices for those with disabilities (Bertot, 2009). Software included: “operating systems software (e.g., Microsoft Windows, MacOS, and Linux) device application software (e.g. Microsoft Office, OpenOffice, graphics software, audio software, e-book readers, assistive software, and others), and function software (e.g. Web browsers, online databases, and digital reference)” (Bertot, 2009). Offering this technology to the public poses multiple challenges for libraries, among which is that “online databases, e-books, audiobooks, etc. are extensions of the library’s holdings but are not physical items under a library’s control and thus subject to a vendor’s information and business models,” as well as copyright (Bertot, 2009). Compris Technology’s Smart Access Manager and Userful’s DiscoverStations control access by managing “print cost recovery, filtering, and security” (Bertot, 2009). Limitations of these programs include fixed time limits, which can be frustrating to users, and limited “ability to access gaming and social-networking sites” (Bertot, 2009). Managing public access technology requires a skilled staff, adequate funding, and constant upgrading of technologies; these are demands a library may not be able to meet alone, and it may need to consider a community-based access strategy (Bertot, 2009).
Houghton (2007) points out that DRM has created many problems in the library, particularly barriers to downloading, installing, and accessing content that arise even when these practices are legal, and that these barriers impede good customer service.
Russell (2003) describes the case of a visually-impaired library patron who checked out an e-book only to find that his “text-to-voice software cannot ‘read the product.’” In 2007, librarians at the Massachusetts Institute of Technology discontinued an online product called the Society for Automotive Engineers (SAE) Digital Library due to issues with the product’s increasing use of DRM. SAE dropped its DRM plans in 2009 (Albanese, 2009). For years OverDrive, the leading provider of downloadable audiobooks and other digital files for libraries, used DRM that limited the types of devices that could play these files. Notably, audiobooks would not play on any Apple product, preventing all iPod users from using this library service. OverDrive began offering iPod-compatible MP3s only in 2008 ("District of Columbia Public Library," 2008).
Libraries have long operated under the assumption that some copying of intellectual property is “fair use.” DRM can and does infringe upon lawful copying and is changing the meaning of fair use (Godwin, 2008, p. 24). DRM can interfere with the digitization of collections by libraries. The Library of Congress has called for changes in the Digital Millennium Copyright Act, which grants no exceptions to its DRM clauses for libraries that want to scan and archive materials (Section 108 Study Group Issues Report, 2008).
Burke (2006) addresses the impact of DRM systems on the contemporary library. Burke (2006) argues that once DRM and copyright issues are worked through, “e-books and other electronic resources would become more widely available” in libraries (p. 206). Furthermore, once these issues are resolved, “the current fluid nature of full-text periodical and reference sources could be controlled,” removing the librarian’s constant fear of publishers removing titles from databases (p. 206). Meanwhile, libraries must deal with the software many companies require to view their e-books online, as well as on various handheld devices such as PDAs, cell phones, and MP3 players (p. 109).
Filtering, which has the capacity to block users from accessing digital files, is another way of managing digital content. Numerous libraries use Internet filtering software to limit patrons’ access to illegal or offensive materials. Many people are concerned about the explosion of pornography made readily available by the Internet; according to one study, between the years 2000 and 2004 the number of pornographic websites increased from 88,000 to 1.6 million (Websense Inc. 2004 & Nielsen/Net Ratings, 2004, as quoted in Williams and Sawyer, 2010, p. 477).
As a result of the potential danger of offensive content, particularly to children, the Children’s Internet Protection Act (CIPA) was enacted in 2001. The Federal Communications Commission (FCC) website highlights the main points of the legislation. Specifically, the FCC explains that funding from the federal E-rate program, an initiative that provides financial support for communications technology, is denied to schools and libraries that do not take action to filter web access to materials that are: “(a) obscene, (b) child pornography, or (c) harmful to minors” (“Children’s Internet Protection Act,” FCC).
The ALA filed a lawsuit to overturn CIPA, arguing that “the law fails to protect children while limiting access to legal, useful information for all library users” (ALA, 2003). The Supreme Court upheld the law, stating, “if a librarian will unblock filtered material or disable the Internet software filter without significant delay on an adult user's request, there is little to this case” (U.S. v. American Library Association, 2003).
Filtering software is a mechanism that restricts access to Internet content. Websites are usually judged for filtering based upon a database of offensive sites, usually compiled by a third party; by scanning content for certain words or phrases; or based upon the source of the information. Many libraries do use it, not just because it makes them eligible to receive federal funding, but also out of a genuine desire to protect children from offensive content. The Seattle Public Library, for instance, has installed filters on the computers that have been designated for children’s use. When a patron clicks a blocked site, this message appears: “This web site cannot be accessed. You can use other SPL resources, use an SPL terminal that allows open access, or ask a staff member for assistance” (K., 2009).
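The two judging mechanisms just described, a third-party blocklist and a content keyword scan, can be sketched together; the site names, flagged terms, and block message below are placeholders, not any real filter's data.

```python
# Hypothetical third-party blocklist and flagged-term list.
BLOCKED_SITES = {"example-blocked-site.com"}
FLAGGED_TERMS = {"flagged-term-1", "flagged-term-2"}

BLOCK_MESSAGE = ("This web site cannot be accessed. "
                 "Ask a staff member for assistance.")

def check_request(host, page_text):
    """Block by source (the host is on the blocklist) or by content
    (the page contains a flagged term); otherwise serve the page."""
    if host in BLOCKED_SITES:
        return BLOCK_MESSAGE
    if any(term in page_text.lower() for term in FLAGGED_TERMS):
        return BLOCK_MESSAGE
    return page_text

assert check_request("example-blocked-site.com", "anything") == BLOCK_MESSAGE
assert check_request("spl.org", "library catalog search") == "library catalog search"
```

The keyword branch also illustrates the overblocking problem raised in the next paragraph: an innocuous page that happens to contain a flagged string is blocked all the same.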
However, there are concerns with the practice. As Burke (2006) states, many libraries have not chosen to install software filters, because filtering does not always work and frequently blocks sites that have no offensive qualities (p. 129). Some libraries, like the Seattle Public Library, get around this by designating some computers as filtered (for the use of children), others as unfiltered. Gorman (2001) challenges this practice, citing issues of privacy. He queries: why should someone’s privacy be violated by forcing them to “use certain marked terminals in order to gain access to the electronic resources they want or need?”
Privacy and Data Mining
TechEncyclopedia defines data mining as “exploring and analyzing detailed business transactions. It implies ‘digging through tons of data’ to uncover patterns and relationships contained within the business activity and history.” Businesses have been using data mining for some time to identify their customers’ needs and better market products and services. The incorporation of data mining into the world of information/library science is more recent, gaining momentum as “bibliomining.”
Data mining technology holds great promise for accessing, managing, and controlling information in libraries. Papatheodorou, Kapidakis, Sfakakis, and Vassiliou (2003) examine the ways that data mining may be useful to digital library communities in terms of service optimization, decision support, and personalization. Data mining helps digital library administrators (1) organize “content, authorities, and user interfaces” to meet the needs of diverse groups of users; (2) design “effective query expansion”; and (3) recommend subjects to users in the areas of their interest (Papatheodorou, Kapidakis, Sfakakis, & Vassiliou, 2003).
Cullen (2005) also addresses ways that libraries can use data mining to better serve their users by understanding their needs and preferences. Although most data mining in libraries focuses on the use of libraries’ licensed resources, data mining may also be used for tracking price increases in library resources over time, identifying the kinds of users who prefer certain resources such as e-books, establishing cost effectiveness in library management, and gathering information about web clients (Cullen, 2005). To achieve this, a library must establish a data warehouse: “a large store of data with a structure optimized for analysis and pattern finding,” using software such as WebReporter, Director’s Station, and Normative Data Project, all developed by SirsiDynix, or relatively new free software such as MySQL (My Structured Query Language), which acts as a server for databases (Cullen, 2005). Libraries run into difficulty, however, when they attempt to use data mining the way businesses do, to create “pictures of the buying habits of specific consumers” (Cullen, 2005).
Williams and Sawyer (2010) explain that the technology that allows "the ease of pulling together information from databases and disseminating it over the internet has put privacy under extreme pressure" (p. 443). The American Library Association (ALA) Policy 52.4, Confidentiality of Library Records, protects the right to privacy of library users, stating specifically that "records held in libraries which connect specific individuals with specific resources, programs or services, are confidential and not to be used for purposes other than routine record keeping" (ALA, 1998, p. 155). In addition, ALA policy indicates that libraries whose record keeping identifies users are in "violation of the confidentiality of library record laws adopted in many states" (p. 155). ALA furthermore acknowledges in its interpretation of the Library Bill of Rights that "users have the right of confidentiality and the right of privacy" and that the library will do everything necessary to establish "policy, procedure, and practice" to protect these rights (p. 163). However, ALA cautions, "because security is technically difficult to achieve, electronic files could become public" (p. 163).
Nicholson (2003), however, outlines a procedure for warehouse creation under which no violation of privacy rights would occur:
This extraction and cleaning process is the key to protecting patron privacy during data warehousing. As the records are drawn from internal and external systems, matches are made to connect data, and then the personally identifiable information is discarded. This personal information should never be put into the data warehouse, so it will not be backed up, saved, or otherwise archived. After the data warehouse is created, the original data can then be deleted, in accordance with current advice to protect the privacy of patrons. The goal of data warehousing is to create a data source that contains decision-making information that cannot be used to recreate the original transactional records (Nicholson, 2003).
Nicholson sees technology as a key player in protecting both privacy and the right to access information, freedoms provided for in the Bill of Rights.
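The extraction-and-cleaning step Nicholson describes, matching records across systems and then discarding personally identifiable information before anything enters the warehouse, can be sketched in a few lines. All record layouts, field names, and values here are hypothetical.

```python
# Hypothetical source records from two library systems.
circulation = [  # from the circulation system
    {"patron_id": "P001", "item": "Intro to SQL", "date": "2009-10-01"},
    {"patron_id": "P002", "item": "1984", "date": "2009-10-02"},
]
patrons = {  # from the patron database
    "P001": {"name": "A. Reader", "category": "undergraduate"},
    "P002": {"name": "B. Browser", "category": "faculty"},
}

def clean(record):
    """Join the demographic category, then drop all identifying fields."""
    category = patrons[record["patron_id"]]["category"]
    return {
        "item": record["item"],
        "date": record["date"],
        "user_category": category,  # coarse group only; no name, no id
    }

# Only cleaned rows are loaded; patron_id and name never reach the
# warehouse, so the original transactions cannot be reconstructed.
warehouse = [clean(r) for r in circulation]
```

As Nicholson notes, once the warehouse is built from such cleaned rows, the original transactional records can be deleted without losing the decision-making information.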
J. McCann, Reference Librarian at Albertus Magnus College in New Haven, observes: "Probably there is nothing wrong with data mining per se, but who is to determine how the data is obtained and to what uses it is put?" (personal interview, October 2, 2009).
Gorman (2001) is uncertain, however, that data mining technology will ever be free of moral issues in regard to privacy. Gorman (2001) argues that library records need to be confidential in order to protect everyone's freedom to access information, read, view images, speak, and think. If not, we are destined to become citizens of the world of 1984, where "the most private aspects of lives are laid bare to be condemned and sniggered over, and the right to your own thoughts, your own relationships, and your own beliefs is trampled on by zealots and bigots" (p. 5). Citing the already widespread marketing practice of Amazon.com, which attempts to sell additional books to customers by emailing them titles similar to the ones they have purchased, Gorman (2001) implies that Big Brother is already watching us. Gorman (2001) calls for us to "restrain the effect of technology" lest "hard won legal rights" regarding privacy and confidentiality be "vitiated by forces that cannot be controlled by law" (p. 3).
Conclusion

Digital Rights Management (DRM) has positives and negatives. Digital content owners are not unreasonable to fear that intellectual works will be freely broadcast on the Internet, depriving them of their rights and of their ability to generate revenue. Libraries have used data mining to manage their day-to-day operations more effectively and to identify their users' needs, and many libraries continue to use content-filtering software on their public computer terminals in order to restrict access to pornography and other offensive materials.
And yet there is a growing number of cases in which the library's mission to provide free access to information is blocked by DRM. The implications for privacy and for libraries' ability to serve their patrons are troubling. It seems clear that, as copyright holders create new DRM schemes, these will continue to come into conflict with users and with libraries.
References

Al-Haj, A. (2007). Combined DWT-DCT digital image watermarking. Journal of Computer Science, 3(9), 740-746. Retrieved September 16, 2009, from Academic OneFile via Gale: http://find.galegroup.com/gtx/start.do?prodId=AONE
Albanese, A. (2008). MIT, SAE end standoff over DRM. Library Journal, 133(7), 18. Retrieved November 8, 2009, from General OneFile via Gale: http://find.galegroup.com/gtx/start.do?prodId=ITOF
American Library Association. (1998). Information Power. Chicago: American Library Association.
Belcredi, C. (2001). The evolution of copyright: Napster and the challengers of the digital age. The University of British Columbia.
Bertot, J. C. (2009). Public access technologies in public libraries: Effects and implications. Information Technology and Libraries, 28(2), 81-92. Retrieved November 6, 2009, from Academic OneFile via Gale: http://find.galegroup.com/gtx/start.do?prodId=AONE
Burke, J. J. (2006). Library Technology Companion. New York: Neal-Schuman.
Calhoun, T. (2005). DRM: The challenge of the decade. Campus Technology, 18(7), 18-19.
Children's Internet Protection Act. Federal Communications Commission. Retrieved from http://www.fcc.gov/cgb/consumerfacts/cipa.html
Children's Internet Protection Act (CIPA). (2003, December 1). American Library Association. Retrieved November 8, 2009, from the American Library Association website.