University of Illinois Chancellor Phyllis Wise will open the 2012 Research Showcase, which will be held on Friday, March 30. Faculty and Ph.D. students from the Graduate School of Library and Information Science present short talks and posters highlighting their scholarly work.
The Research Showcase is an annual event open to campus and the general public.
Friday, March 30, 2012
Graduate School of Library and Information Science
501 E. Daniel, Champaign
East Foyer and Rms. 126 and 131
Presentations (1:30-3:30 pm in Room 126)
The Historian, the Librarian, and their Cabinet of Curiosity
Bonnie Mak and Julia Pollack
How Different Methods for Relation Extraction Impact Network Analysis Results
Gender, Race, and the Hidden Labor of the Digital Media Industries
Data Sharing Practices: Implications for Curation and Re-use
Carole L. Palmer and Tiffany Chao
Seducing the Innocent: Fredric Wertham and the Falsifications that Helped Condemn Comics
Large-Scale Music Audio Analyses Using High Performance Computing Technologies: Creating New Tools, Posing New Questions
J. Stephen Downie
Room to Breathe -- The Library Profession, Architectural Modernism, and the Welfare State in Britain in the Long 1960s
Introduction to demo: eBlack Illinois: A Project to Map the Black Experience in Cyberspace
Abdul Alkalimat and Brian Zelip
Posters (3:30-4:30 pm in Room 131)
Community Libraries as Public Computing Centers--Two Examples from China
Cao Haixia (曹海霞), Gao Jin (高巾), Xiao Chan (肖婵), Xu Zhenzhen (徐珍珍), Yuan Xu (袁旭), Li Tingting (李婷婷), and Zhou Wenjie (周文杰)
Units of Evidence for Analyzing Subdisciplinary Difference in Data Practice Studies
Tiffany C. Chao, Carole L. Palmer, and Melissa H. Cragin
"To acquire and diffuse among the people": Information Service Functions at the U.S. Department of Agriculture in the Late 19th Century
Semi-automated Collection Evaluation for Large-Scale Aggregations
Katrina Fenlon, Peter Organisciak, Jacob Jett, and Miles Efron
Information Use and Innovation in Science Teaching
Unearthing Hidden Treasure
Kathryn La Barre and Carol Tilley
Disaster Response Workshops for Our Campus and Community
Editing Race: May Massee, Rebecca Caudill, and their letters about the N Word, 1947-1953
Kate McDowell and Alaine Martaus
The Transformation of the System of Information Provision in the 1930s United States and Technology, Access, and Policy
One Thing is Missing or Two Things are Confused: An Analysis of OAIS Representation Information
Simone Sacchi, Karen M. Wickett, Allen H. Renear, and David Dubin
Significant Properties of Complex Digital artifacts: Open Issues from a Video Game Case Study
Simone Sacchi and Jerome P. McDonough
Education for Data Professionals: A Study of Current Courses and Programs
Virgil Varvel, Jr., Elin Bammerlin, and Carole Palmer
Towards a Logical Form for Descriptive Metadata
Karen Wickett, Richard Urban, and Allen Renear
UC2B Anchor Social Institutions: Baseline Data on Technology Use
Kate Williams, Abdul Alkalimat, and Abigail Sackmann
What is Community Informatics? A Global and Empirical Answer
Kate Williams, Shameem Ahmed, Noah Lenstra, and Qiyuan Liu
Mapping Public Computing in Beijing
Kang Zhao 赵康, Kate Williams, Abdul Alkalimat, Shenglong Han 韩圣龙, and Hui Yan 闫慧
A Cabinet of Curiosity: The Library's Dead Time
Bonnie Mak and Julia Pollack
eBlack Illinois: A Project to Map the Black Experience in Cyberspace
Abdul Alkalimat and Brian Zelip
Music Information Retrieval
J. Stephen Downie
For over two millennia, librarians have been critical to the production and transmission of knowledge. They have helped to collect, catalogue, and curate a vast range of materials that constitute much of our cultural heritage, from epic poetry on papyrus scrolls to PDFs of scholarly articles. We seek to examine and interrogate these practices by building and populating a librarian’s cabinet of curiosity. As Paula Findlen has observed, the cabinet of curiosity was a space for collectors to experiment with different ways of understanding the world in the early modern period. By re-emphasizing this personal engagement with the ordering of information, we bring to light the influential hand of the librarian in the configuration of knowledge. Our cabinet is an embodiment of acts of collecting, cataloguing, and curating, and therefore invites a consideration of how such acts shape the library, its holdings, and its patrons. The cabinet contains explicit examples of the librarian’s curation of information, such as a book dissection, which investigates how materiality and the history of textual transmission both contribute to the process of meaning-making, whether in the codex or its digital counterpart. We thus deploy the cabinet to make evident the activities that support and surround the production, preservation, transmission, and circulation of knowledge. A guided tour of our cabinet and its contents will demonstrate how the librarian is key in crafting the paths by which people may come upon information. Our discussion will furthermore emphasize the increasing significance of the relationship between the librarian and the public, given the proliferation of commercially mediated data on the Internet. By foregrounding and problematizing the everyday practices of the librarian, we can begin to see the ways in which information is carefully formulated and prepared for consumption not only in the library, but also elsewhere.
Several methods are available for extracting relational data from natural language text. Although prior research has applied these methods across corpora and domains, the differences in network structure and properties that result from employing them are not well understood. I report on a comparison of relational data constructed by applying commonly used relation extraction methods to a corpus of news data and a corpus of funded research proposals. The following methods are considered. First, thesaurus-based text coding: the key component in this process is a thesaurus, which maps text terms to nodes; creating and adapting thesauri requires substantial human effort. Second, I use a prediction model trained via supervised machine learning to automatically generate a thesaurus, and repeat the first method using the auto-generated thesaurus. Third, network data is built from structured metadata; this process disregards the content of the text documents. Fourth, we collaborated with subject matter experts in the domain of the news wire corpus to build social network data; this process results in validated ground truth data. The comparison of the resulting relational data in terms of structural properties and key entities shows little overlap between the data produced by each method. Ground truth data are partially resembled by analyzing the content of text bodies, but not at all by using metadata only. I conclude by summarizing how the considered methods can be combined in order to capture different perspectives on a network.
In 1975, the largest private employer of American Indians in the U.S. was the Fairchild Corporation, a microprocessor manufacturer whose factory on the Navajo reservation at Shiprock, New Mexico, employed hundreds of workers. At the same time that Indians were working in the early digital industries, the pro-computing counterculture was appropriating their images as symbols of DIY independence from mass media.
Images of Native Americans are prominent within the rhetoric of the media production cultures created by the ‘70s counterculture movements. Stewart Brand, producer of the Whole Earth Catalog (1968-present), and Michael Shamberg, author of Guerrilla Television (1971), were prominent media entrepreneurs who advocated a Do It Yourself (DIY) approach to media. Brand, like Shamberg, was an early advocate of personal computing as a technology for empowering citizens to challenge mainstream media. They believed that Indians exemplified an ideal political and cultural structure of radical de-centralization and an oral culture of free information and free culture, a stance which has come to define contemporary digital utopianism. Both strongly extolled the virtues of home-made media as well as the technologies that enabled their production.
The Indian-inspired buckskin, fringes, travois trailers, moccasins, beads, and braids sported by these and other members of the ‘70s counterculture were far more than a stylistic tic. The Whole Earth Catalog instructed users how to both consume and produce Indianness for themselves, and when accompanied by homemade personal computers and portable video cameras they signaled a commitment to anti-establishment values and ideas that presented a challenge to mass forms. Indians were deployed as part of a potent rhetorical and visual strategy to claim authenticity and moral authority for the pro-computing counterculture and were essential to the ethos of peer media production.
Cyberinfrastructure is principally about data: how to get it, how to share it, how to store it, and how to leverage it for scientific discovery and learning (Edwards et al. 2007, p. 31). Therefore, advancing cyberinfrastructure depends on our understanding of how to support the data practices and needs of researchers. Through our work on the Data Curation Profiles Project and the Data Conservancy, we have conducted comparative analysis of data practices across scientific domains ranging from agronomy to geobiology and civil engineering. This presentation will provide an overview of results on key differences across fields and the implications for data curation services.
While we have identified important differences at the level of the subdiscipline in what data can be shared and when, forms and types of data are also key factors in sharing and potential for re-use. Moreover, scientists have significant concerns about possible data misuse, through misinterpretation, misappropriation, and disregard of good faith practices. However, curation may be most complicated by the need to capture various dimensions of the research methods used to produce the data, to fully support re-use by researchers outside the originating domain. Our results suggest that while supporting cross-disciplinary re-use will require a greater investment in curation resources, there is high potential for data sharing across some research communities. At the same time, we are far from achieving the kind of common data culture and data curation services required to advance the promise of integrative research driving the movement toward national and global cyberinfrastructure (Hey et al., 2009; National Science Board, 2005; National Science & Technology Council, 2009).
Psychiatrist Fredric Wertham and his 1954 book Seduction of the Innocent serve as historical and cultural touchstones of the anti-comics movement in the United States during the 1940s and 1950s. Although there have been persistent concerns about the clinical evidence Wertham used as the basis for Seduction, his sources were made widely available only in 2010. This presentation will give specific examples from materials only recently made available to the public of how Wertham manipulated, overstated, and fabricated evidence. In particular, Wertham misrepresented evidence he attributed to personal clinical research with young people for rhetorical gain.
The adoption of high performance computing (HPC) technologies in the digital arts and digital humanities research domains represents a small but emergent area of scholarly activity. The recent awarding of eight multinational “Digging Into Data (DID) Challenge” project grants reflects the growing recognition that the application of HPC technologies can empower cultural scholars to ask and answer new kinds of research questions. This presentation will introduce one of the DID awardees: the Structural Analysis of Large Amounts of Music Information (SALAMI) project. The SALAMI project represents a true multinational (i.e., Canada, United Kingdom, and USA) and multidisciplinary (i.e., music theory, library science, and computer science) digital humanities research collaboration. Utilizing 250,000 hours of compute time donated by the National Center for Supercomputing Applications (NCSA) at the University of Illinois, the SALAMI project is conducting the structural analysis of some 20,000 hours (i.e., roughly 2.3 years) of music audio. This is formal music analysis at a scale that no human scholar could ever hope to undertake. Our talk will also contextualize the SALAMI project within the broader frameworks of the ongoing Networked Environment for Music Analysis (NEMA) and the Music Information Retrieval Evaluation eXchange (MIREX) projects. The motivations, goals, and developments of these three interrelated projects are presented to help illustrate the kinds of questions being explored by music informatics scholars and the roles that HPC, semantic web, and Linked Data tools can play in answering those questions.
Although it witnessed a revolution in popular culture, the 1960s in Britain has been widely labeled a failed decade. This project challenges that assessment by examining the contribution that new, modernist public library buildings, as well as the library profession, made to the period’s modernizing zeitgeist. In the Long 1960s (between the mid-1950s and mid-1970s) great energy was expended in attempting to build, literally, a better post-war Britain. This unprecedented burst of building activity included the planning and construction of hundreds of public library buildings in modernist styles--designs that one commentator described as providing library environments in which there was at last, in contrast to the historical designs of the past, room to breathe.
Described at the time as a national health service for reading, public libraries assumed a prominent position in the post-war welfare state. The architectural modernism of new public library buildings symbolized not only the purposeful pursuit of post-war social and economic modernization, in this instance through cultural provision, but also a growing desire in the library profession to modernize its services and relationship to the public. This contribution to the Showcase will highlight work already undertaken on this project (published in Library Trends, Summer 2011) as well as a program of further research on the subject currently being considered for funding by the National Endowment for the Humanities (Collaborative Program). The proposed research will be undertaken in collaboration with Professor Simon Pepper, School of Architecture, Liverpool University. Through analysis of extant buildings and primary source documents, the research will examine what modernist library design meant to librarians, architects, local politicians and planners, and the public. The study represents the second phase of the collaborators’ research on the design of public libraries in Britain. Like the first phase, which examined library design in the period 1850-1939, this second phase will result in a substantial co-authored illustrated book.
This project has developed a database of websites that contain information about the African-American experience in Illinois. The database can be searched by topic, county of location, and name. There is a particular focus on culture in Chicago, especially the musical forms of Gospel, Blues, Jazz, Hip Hop, and music of the African Diaspora, including Afro-Caribbean music. In the state of Illinois there is a direct correlation between population size, percent African American, and the number of websites, with the one exception of sites that reflect unique historical experiences.
In Chicago, with regard to African-American music sites, an opposite pattern emerges. Music sites tend to be outside of the majority African-American community areas, except for Gospel, the genre most organizationally linked to a social institution, the church. The commodification of African-American music reflects the location of musical spaces in commercial areas. Our study has begun to geo-code these spaces so that we can build a more detailed demographic profile in Chicago and on a statewide level.
After becoming successful in the USA and other Western countries, online social networking (OSN) sites such as Facebook, MySpace, Orkut, Twitter, and LinkedIn are now spreading their networks to the rest of the world (e.g., Asia, Africa, and Latin America). India, Pakistan, and Bangladesh are three prominent South Asian countries where OSNs are experiencing tremendous growth in the number of users. These three countries account for 21.9% of the world's total population, making them lucrative potential markets for OSNs. Four stakeholders (OSNs, advertisers, governments, and users) are playing major roles in this growth, influencing the social, cultural, and economic conditions of these countries. Unfortunately, no comprehensive research has yet analyzed the roles of these four stakeholders and their influence in the context of these three countries. In this poster, I take a first step toward understanding what roles the stakeholders are playing and why, and what the potential consequences of OSN growth are for the social, cultural, and economic situation of these three countries. I also provide a detailed analysis of the profile of each stakeholder. For data collection, I relied on both scholarly articles and business periodicals. My findings are based on 21 peer-reviewed research articles from leading journals and conferences and 84 business articles from renowned business and trade publications such as the Wall Street Journal, Computer World, Business Week, AFP, Reuters, the New York Times, the Times of India, CNN, The Guardian, the Financial Times, and the Financial Express.
In the information age, more and more libraries use computers and other technologies to help people acquire information, and public computing plays an important role in the information society. Kate Williams (2011) examines the informatics moment in people’s everyday lives as they seek help at the community library; understanding the informatics moment could accelerate people’s (and society’s) anxious transition to an inclusive digital age. Two examples of community libraries in China are the An-Zhen community library and the Civilian Mobile Library, both in Beijing, and both serving as public computing centers for their communities. Some problems exist in these government-supported and private libraries, such as a lack of computer skills training and the absence of an IT person.
Building the infrastructure to support collections of scientific data raises questions concerning data selection, policy development, collaboration, and outreach efforts, and how best to align these with local, institutional initiatives, data-intensive research, and data stewardship. To facilitate data acquisition and purposeful user services, increased understanding of data-practice-curation service arrangements across small science research is required. This poster presents a flexible methodological approach crafted to generate units of evidence to analyze these relationships and facilitate cross-disciplinary comparisons. The qualitative approach integrates targeted semi-structured interviews, data inventorying, artifact analysis, and purposive sampling. These are resonant in the five components of the data collection process -- the Pre-interview worksheet, Research interview, Follow-up interview, Data interview, and Lab visit -- which have been continually tested and refined. The sequence of data collection, targeted participants, and multiple data sources work together and are all essential in producing dense, high-quality units of evidence. Current implementation of this approach is seen in work with the Data Conservancy and the investigation of research data practices across small science disciplines, an area of particular relevance due to the prevalence of this mode of research in the academy and the anticipated magnitude of data production. At present, observations of the method indicate the crucial role of the Pre-interview worksheet and its sequencing in data collection, the emergence of an analytical construct, and the importance of maintaining frequent contact with participants. The robust units of analysis and evidence applied in this methodological approach offer a tested strategy for understanding the relationship between real-world work practices and the curation activities supporting data preservation and re-use.
Prior to the establishment of the U.S. Department of Agriculture and concurrent with the agency's development, American farmers had myriad ways of sharing and communicating agricultural information. Much of it was anecdotal and based on years of experience, passed along informally and through a lively and far-reaching agricultural press. Farmers both needed and used the information they created, circulated, and consumed.
The Department of Agriculture altered the kind and amount of information farmers had access to and effectively redefined who the "experts" were. Established in 1862 during the Civil War, it was the first executive agency created in a period in which the federal government began to assert a more expansive role. It was, perhaps more than any other federal agency, a place where we see evidence of the emergence of a modern state and the exercise of a central state authority. In the late 19th century, the Department was a prodigious collector, producer and distributor of information that served complicated and complex purposes. Research and new knowledge from it focused on applied science and were intended to bolster an emerging market economy by offering farmers information on new practices, new seeds and plants, and tools that promised increased crop yields and improved the efficiency of farm production. The accuracy of that information served to assert and reinforce the authority of the federal government and its role as a source of information critical to agricultural production in a growing market economy.
Farmers regularly submitted information about crop yields, market prices, and soil conditions. They also wrote to request seeds and advice on new and better practices emerging from research at the Department. The Department's annual reports were richly illustrated with lithographs, photographs, maps, tables, and graphs to complement the text. With print runs that reached more than 400,000 volumes and distribution by Congressional frank, the reports suggest that no organization in the 19th century reached more individuals in every state.
How did the department "acquire and diffuse" that information? Where was it located (the agricultural press, libraries, extension offices)? How did farmers access and use it? What did it mean to farmers? My dissertation critically examines the annual reports of the Department in order to situate and illuminate the role of government information in this transformation of the practice of agriculture in the United States.
Library and museum digital collections are increasingly aggregated at various levels. Large-scale aggregations, often characterized by heterogeneous or messy metadata, pose unique and growing challenges to aggregation administrators not only in facilitating end-user discovery and access, but in performing basic administrative and curatorial tasks in a scalable way, such as finding messy data and determining the overall topical landscape of the aggregation. This poster describes early findings on using statistical text analysis techniques to improve the scalability of an aggregation development workflow for a large-scale aggregation. These techniques hold great promise for automating historically labor-intensive evaluative aspects of aggregation development and form the basis for the development of an aggregator’s dashboard. The aggregator’s dashboard is planned as a statistical text-analysis-driven tool for supporting large-scale aggregation development and maintenance, through multifaceted, automatic visualization of an aggregation’s metadata quality and topical coverage. The aggregator’s dashboard will support principled yet scalable aggregation development.
With growing emphasis on improving the quality of K-12 science education, there has been a great deal of effort on reforming science teaching by expediting the diffusion of instructional innovations. Current innovation studies in education tend to focus on a top-down approach to innovation, in which teachers are asked to implement new curriculum materials or adopt new teaching practices introduced by school or district administrators. Yet such diffusion efforts face many organizational and practical challenges. In contrast to innovation efforts orchestrated by school and district administrations or external agencies, many science teachers are taking their own initiative in enacting innovative teaching practices and have achieved positive outcomes. Yet very little is known about the process by which individual science teachers seek and utilize information from myriad sources to implement new teaching practices. This study seeks to fill this gap in understanding by focusing on science teachers’ information use and innovation in science teaching. Key questions include how science teachers navigate the immense information space they face and act on new information to innovate teaching in and beyond the classroom. The study will use a grounded theory approach to capture the various ways in which science teachers find and apply new information in the enactment of innovative practices.
In a world of seemingly ubiquitous and searchable digital full text resources, folklore researchers often find the promise of untold riches replaced by a growing sense of frustration. This poster presents findings from a series of interviews with 15 storytellers, folktale scholars, and librarians, and from content analysis of library records for folklore materials. The research suggests a series of features that may augment full text searching in more powerful ways by providing better support for the goals and information seeking tasks of folklorists, archivists, and scholars, thereby revealing the treasures that are hidden in full text repositories.
Disaster management in cultural institutions is an important area of management that is often overlooked, but vital to the continued operations of an institution when an emergency strikes. In the summer of 2011 I conducted research at the Smithsonian Institution Archives on using FEMA's Incident Command System to respond to collections disasters, and the need for collaboration among libraries, museums, and archives to develop integrated disaster plans in the event of large scale emergencies. Although the University of Illinois has done an excellent job of promoting disaster preparedness for collections, there are a few areas on campus and in the community that could benefit from disaster preparedness and response training. For my CAS project, I have identified, in conjunction with collections care groups and professionals on campus, four groups that are currently lacking in collections care and disaster response training. These groups are the GSLIS students, Greek Organizations, local public library employees, and members of the community with personal collections and heirlooms who may not be able to afford expensive conservation quality products to preserve their objects. My project will involve speaking with these groups to identify their specific needs and the types of materials they care for, and then developing unique workshops in disaster response for their group. Targeted marketing will then be done for each group to promote interest in the workshops. Workshops will be taped and all workshop material will be made available on IDEALS to help to evaluate the effectiveness of these workshops and to encourage others in collections care to use these materials to develop their own disaster training. This poster will cover the development and marketing of workshops up through the end of March.
Recent literature has suggested that computational analysis of large text archives can yield novel insights into the functioning of society, including predicting future economic events. Analysis of an archive of 100 million global news articles spanning a quarter-century, yielding a 2.4-petabyte network of 10 billion people, places, and things and 100 trillion relationships, led to the discovery that the emotional dimensions of the news contain hidden cues that forecasted the revolutions in Tunisia, Egypt, and Libya, including the removal of Egyptian President Mubarak; predicted the stability of Saudi Arabia (at least through May 2011); estimated Osama Bin Laden’s likely hiding place as a 200-kilometer radius in Northern Pakistan that includes Abbottabad; and offers a new look at the world's cultural affiliations. Along the way, common assertions about the news, such as claims that news is becoming more negative and that American news portrays a U.S.-centric view of the world, are found to have merit. This research has been covered in over 100 countries, including two articles each by Nature and the BBC, and was just selected by The Economist as one of just five science discoveries considered to be the most important developments of 2011.
The history of print culture in the United States has been influenced by ongoing battles over the social construction of race. This poster draws upon letters written from 1947 to 1953, when Rebecca Caudill was at the height of her career as a children’s author, writing historical fiction about the 1850s set in the South and mailing regular drafts to her editor May Massee in New York City. Their lengthy exchange regarding Caudill’s book Susan Cornish, a work of historical fiction about a young white schoolteacher who reformed the prejudices of a community in Kentucky in the 1850s, was fraught with tensions. Caudill used the word nigger in early versions of the manuscript, and replied to Massee’s objections that the word was inherent to the vernacular of the time and place. Massee voiced concerns over children misusing the word and suggested that Caudill edit depictions of African Americans out of the manuscript where possible to avoid the issue. Understanding gaps in the representation of race in children’s literature and print culture is a difficult task. This case study offers one fresh approach for moving beyond unfortunate absences and moving toward understanding some of the complex reasons why representations of African Americans were edited out of children’s print culture in the United States in the mid-twentieth century. Understanding how tensions lead to silences and gaps may help contemporary authors, editors, and librarians to understand why it is vital to engage in resisting racism in a multiracial society.
Examination of developments in technology, access, and policy reveals that American librarianship and the wider system of information provision underwent profound and far-reaching changes during the 1930s. With regard to technology, the 1930s saw the widespread adoption of microfilm, heralded by its advocates as a revolutionary tool that would transform information preservation and dissemination. The number of outlets for library services increased markedly as information was brought to more people, often in creative ways, and on an enlarged scale. Packhorse libraries expanded services to rural users, “open-air” libraries in public parks enhanced access for urban users, and more librarians looked to improve access for traditionally underserved groups, notably African Americans. Finally, policymaking for libraries, and information provision more broadly, assumed greater prominence. New federal agencies were established, new statistical series offered, and existing information programs were expanded. The long-planned National Archives and Records Administration became a reality in the 1930s, state appropriations steadily increased during the Great Depression, and the American Library Association initiated its campaign for permanent federal support. Some historians have already flagged the 1930s as an important decade with regard to activities around information provision. Although scholars have explored some of these changes in a piecemeal way, much more can be said about the depth and breadth of these initiatives. My proposed research situates technology, access, and policy within the history of the 1930s United States, and serves to augment the work already done in the history of library and information science by examining some little-known but vital and formative activities of a neglected period.
We describe two alternative interpretations of OAIS Representation Information (CCSDS, 2002), and show that both are flawed. The first is insufficient to formalize a model of preservation, and the second leads to category mistakes in conceptualizing the nature of digital artifacts. OAIS claims that Data Objects interpreted using their Representation Information yield Information. However, we argue that Representation Information supports only the encoding and decoding of particular expressions, such as a table of numerals or a still image, into and from digital objects, such as an Excel file or a TIFF file, respectively. Preservation actions, such as migrations, can only be assessed if preservation models reflect a correct, complete, and sufficiently fine-grained representation of digital artifacts. Restructuring the OAIS account of Representation Information along these lines will bring us closer to this goal.
In this poster we present the preliminary output of a study analyzing the applicability of the InSPECT assessment framework to a particular kind of complex digital artifact: video games. This study was part of a set of investigations being conducted into the preservation of video games within the Preserving Virtual Worlds II (PVW2) project. Oregon Trail II, an educational simulation game, has been chosen as the object of analysis. Our analysis provides insight into the framework’s applicability, and points to possible improvements in its workflow. We believe the results to be significant in the discussion on digital preservation, particularly with respect to advancing our understanding of significant properties as a central concept in the development of preservation strategies for complex digital artifacts.
In response to the current data-intensive research environment, iSchools are beginning to build new programs and enhance existing programs to meet workforce demands in data curation, data management, and data science. To understand the state of education in the field, we studied current programs and courses offered at iSchools and other schools of Library and Information Science. Here we present an overview of the methods and results. Courses are divided into four categories: data centric, data inclusive, digital, and traditional LIS. The analysis reveals trends in LIS education for data professionals and identifies particular areas of expertise and gaps in LIS education for data professionals.
Open linked data and semantic technologies promise support for information integration and inferencing. But taking advantage of this support often requires that the information carried by ordinary “colloquial” metadata records be made explicit and computationally available. Given the structured nature of most metadata records, this looks easy to do, and conversion from metadata records to computer-processable knowledge representation languages such as RDF is now commonplace. Nevertheless, a precise formal characterization of the semantics of common colloquial metadata records is more involved than appearances would suggest. We explore two approaches to formalization and discuss some issues related to the nature of identifier elements in colloquial metadata records and the use of individual constants in knowledge representation.
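The gap between a structured record and a formal representation can be illustrated with a small sketch (the record, field names, and triple encoding below are hypothetical, not drawn from the paper): even after a record is mechanically converted to triples, someone must still decide whether each element's value is a character-string literal or an individual constant denoting something in the world.

```python
# Hypothetical "colloquial" metadata record: structured, but with
# implicit semantics. All names and values here are illustrative.
record = {
    "title":   "Susan Cornish",
    "creator": "Caudill, Rebecca",
}

# Naive conversion: every value becomes a string literal, which erases
# the question of what the value denotes.
def naive_triples(subject, rec):
    return [(subject, f"dc:{field}", ("literal", value))
            for field, value in rec.items()]

# A more explicit conversion must commit, per element, to whether the
# value is a literal or an individual constant standing for a thing in
# the world (here: a person). That choice is interpretive, not mechanical.
INDIVIDUAL_FIELDS = {"creator"}

def explicit_triples(subject, rec):
    return [(subject,
             f"dc:{field}",
             ("individual" if field in INDIVIDUAL_FIELDS else "literal", value))
            for field, value in rec.items()]

for triple in explicit_triples("ex:record1", record):
    print(triple)
```

The point of the sketch is that the two conversions produce different formal commitments from the same record; which one is correct depends on a semantics the colloquial record does not itself make explicit.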
As federally funded broadband projects are being implemented across the nation, Champaign-Urbana is laying hundreds of miles of fiber optic cable through the UC2B project, facilitating access first to households in underserved communities and to what the federal agency calls Anchor Social Institutions across town. Urbana-Champaign is unique in the nation for the number and variety of these institutions it will connect to broadband, expanding the standard definition to include local churches, senior housing, and other small non-profits in addition to hospitals, city agencies, public schools, and libraries. This poster presents some preliminary findings of a study on these institutions’ use of information technology pre-UC2B. By mobilizing GSLIS researchers and a total of 35 students in GSLIS Community Informatics and Digital Divide courses to document current realities, we are building a comprehensive, accessible portrait of how the non-profit and public sectors are already innovating with technology. This collective portrait will mean that future work for UC2B and beyond can be more comprehensive, sustainable, and visible to all, building on the community informatics tradition begun at UIUC with PLATO (1960) and Prairienet (1994). Of 121 known anchors, many with multiple sites, we expect to construct detailed structured profiles of 100, all informed and approved by the leaders of these organizations. Findings presented in the poster will focus on current internet speeds, technology inventories, best practices, and common challenges and opportunities. It is our hope that this research will contribute to a lasting and open dialog about what difference broadband can and will make in our town.
Various definitions of community informatics have been advanced, each relatively prescriptive but all centered on the interaction between local, historical community and either information or information technology. The IT revolution continues to unfold and intersect in new ways with local communities, leaving the field in a state of flux. We are carrying out a systematic collection and analysis of the literature. Our aims are to get a picture of community informatics as it actually exists and sum up findings thus far, in this way helping the field grow from a quasi-social movement or a community service endeavor into a respected research field; to find out to what extent the English-language literature is in fact global, and identify any gaps; to investigate the contributions from library and information science, which played an important early role, and from information and communication technology for development (ICT4D), which has emerged more recently; to identify the place of cultural heritage and its organizations, including libraries, in this work; and to do co-occurrence and citation analysis to reveal the potential structure of CI and the relationships among research topics, high-output authors, and core institutions.
As we continue the analysis of 524 articles from 25 journals, findings include:
- With regard to intellectual space, a research community is forming (articles concentrated in 4 journals)
- With regard to political space, the literature is not fully global (English-speaking countries overrepresented; Eurocentric)
- With regard to human space, also not fully global (urban overrepresented relative to rural)
- With regard to social space, local institutions are very much included, especially non-profit sector but also government, higher education, and businesses.
- Key disciplines are in fact helping us achieve a more global picture of informatization (ITD, Social informatics/Community Informatics, LIS, and Museums/Public History).
This paper addresses the existence of the digital divide and the effect of computer-use programs in narrowing it through an investigation of public computing sites. As the digital divide and computer use have become global issues, this exploratory research focuses on public computing sites in China. The research was designed and implemented during the first Community Informatics Summer School at Peking University in 2011. Our aim is to test concepts of Community Informatics that were developed outside China, and to understand how computer use affects Chinese communities. Case studies used questionnaire surveys, field work, and in-depth interviews to collect data. Nine public computing sites were divided into four types: public library, commercial site, national library, and university library. The main findings include: (1) restriction policies mean that local communities do not have the right to use the computers at every public computing site, which represents one form of digital divide; (2) public computing sites play an important role in narrowing the digital divide when free computer services, along with related training, are provided; (3) the national library and public libraries serve well as public computing sites, but government support is an important factor for their further development; and (4) university libraries have strong potential capacity to serve local communities in terms of computer use, but their public access policies would need to change. The authors wish to acknowledge the nine teams of students in the Summer School at PKU for their data collection and basic analysis.