Tuesday, November 16, 2010
Unit 12
Tuesday, November 9, 2010
Unit 11
Perhaps it is because it is the freshest memory, or maybe because I have had more experience evaluating websites in this field, but I was impressed with the Omeka site. In part I felt it was nicely geared for the beginner, with both screencast and text-based documentation. There was a logical set-up to the site and I felt the general overview was complete. One negative was that the FAQ page had been deleted and not replaced for over a year. I particularly liked the Use Cases forum, in which developers explained how they used Omeka in different real-life institutions. This gave me insight into how to use Omeka as well as a more general sense of what other types of people, beyond libraries, archives and museums, want to develop digital collections.
Drupal is aesthetically a little fussy for me, but the documentation was significantly more detailed on the technical side than Omeka's. It is also clearly much more popular and has been around longer, which means its background information and forums have tackled more problems and offer more solutions. The Drupal and DSpace home sites have a lot in common in terms of numerous links, depth of documentation and a general sense of being overly full. An example from DSpace is the feature of linking ‘child pages’ to each main section. There are reasons why this would be helpful for following a topic throughout the website, but it adds numerous links to the page that are unnecessary or confusing for the general user.
JHOVE strikes a balance between the bustling atmosphere of Drupal and DSpace and the cleaner Omeka. It is clearly aimed at IT staff. A concern is that some of the information is a little old; for example, the news page has only two links, both from 2008. Has nothing happened in two years, or is no one managing the website? Neither possibility inspires confidence.
The OAI-PMH main site is deep and shows that the project has history and is a major international endeavor. It is a little impersonal and expects a certain amount of previous knowledge from its users on things like acronyms. My experience with the install process was positive, but the website itself is a little intimidating.
One of the key features of all these systems is that they are open source. To me this means the sense of community, the communication and forum options, the documentation and the currency of news would be very important considerations in the choosing process. I like Omeka, but the appeal of DSpace or Drupal is the large and active community of users that could provide support for free. It is a balancing act, and I would give a lot of consideration to future support before making commitments.
Sunday, November 7, 2010
Unit 10
It is an interesting question how to determine how successful a service provider of harvested metadata is, as we are entirely dependent on their efforts for our results. Without the service providers, the information does not get found easily. That makes it curious to me that the service providers I was able to examine were rather a mix of strange bedfellows.
Ex. 1. http://www.perseus.tufts.edu/hopper/search?redirect=true
The collections that provide the sources are disparate, to say the least. The dominant collection is on the Art and Artifacts of Greek and Roman Materials, while the second-largest contributor is a collection of 19th-century American history, including the digital archive of the Richmond Times-Dispatch newspaper. Subject, time period, place, etc. are not held in common, and when searching, the two faces of the collection are very apparent. My assumption is that the host, Tufts University, finds this combination convenient for its own reasons, but it is not a logical combination for the general user.
Ex. 2 http://re.cs.uct.ac.za/
This was a different style of searching from the norm. What the site does (from its frankly hideous-looking interface) is allow you to set metadata parameters that you can then apply to the OAI-compliant providers listed. There is no way to search by keywords, and it requires knowledge of how OAI harvesting works to make sense of it. Again, the collection of providers comes from all sorts of institutions around the world with little obviously in common. It is kind of an inside-out search tool. Another confusing point: the Open Archives list shows Virginia Tech as the host, but the site itself is from the University of Cape Town. Bit of a difference between the two!
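Since the tool only makes sense if you already know what an OAI harvest looks like, here is a minimal sketch in Python of what a harvester actually sends and gets back. The base URL is a made-up placeholder; the verb, metadataPrefix and date arguments are the standard OAI-PMH protocol parameters, which are essentially the "metadata parameters" that site exposes.

```python
# A minimal sketch of an OAI-PMH harvest request.
# The base URL below is a placeholder; any OAI-compliant repository
# advertises its own base URL.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "http://example.org/oai"   # hypothetical repository endpoint

# The "metadata parameters" a harvester can set are just query arguments:
# a verb, a metadata format, and optional date-range or set restrictions.
params = {
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",       # simple Dublin Core, required of all providers
    "from": "2010-01-01",             # optional selective-harvesting limits
    "until": "2010-11-01",
}

with urlopen(BASE_URL + "?" + urlencode(params)) as response:
    tree = ET.parse(response)

# Each harvested record carries a header (identifier, datestamp) plus
# whatever Dublin Core the provider exposes -- no keyword search involved.
ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
for record in tree.findall(".//oai:record", ns):
    identifier = record.findtext(".//oai:identifier", namespaces=ns)
    title = record.findtext(".//dc:title", namespaces=ns)
    print(identifier, title)
```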
Ex. 3 http://hispana.mcu.es/es/estaticos/contenido.cmd?pagina=estaticos/presentacion
By far the most successful of the examples, in terms of relevancy of search results, was the Hispana site. In large part this is because it is a dual project: one side is a directory of digital projects from Spain and the other is a harvester for those same projects. This two-in-one approach meant that searches for common terms gave relevant results. The limitation is that it is all related to Spain. However, I would rather go to more than one service provider and get relevant results, if they are all like the Hispana site, than go to the Perseus site and get an unusable mix.
Tuesday, October 26, 2010
Week 9
Ah, the metadata dilemma. Too much is too expensive, while too little makes the whole project pointless since no one will find it. For digital libraries in general, I think this is an area that is both art and science. For my digital collection in particular, the potential audience and known creators are who I have in mind when I am experimenting with cataloging. What I mean is that I envision my collection of webcomics as being of popular-culture interest and not for a specialized field or academia. For general users who are comfortable online, traditional subject terms are not adequate, as they can be old-fashioned or non-intuitive. Because of that I have been playing mainly with keywords and tags. Neither is perfect. Keywords have potential, being natural-language based and a well-known search method; I think it is the system most users will be comfortable with. I personally like tagging as a method of description, but without high user involvement and/or collection density it doesn't necessarily work well. In both cases, consistency depends on me (the administrator) paying attention and keeping track of what I have chosen in previous cases. The only way around this problem I can think of is to include decision-making for terminology in the planning stages, and then hope one is prescient enough to cast the net wide enough to give full coverage.
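To make the consistency problem concrete, here is a rough sketch of the kind of bookkeeping that falls on me as administrator: normalizing each new tag against the terms already chosen so that variants don't pile up. All the terms and the synonym table are invented examples, not my actual vocabulary.

```python
# A rough sketch of the consistency problem: map new tags onto terms
# already used in the collection, so variants don't creep in.
# The synonym table below is a hypothetical example.
PREFERRED_TERMS = {
    "web comics": "webcomics",
    "web-comic": "webcomics",
    "humour": "humor",
}

used_tags = set()   # tags already applied somewhere in the collection

def normalize_tag(raw):
    """Lower-case, trim, and map known variants to a single preferred form."""
    tag = raw.strip().lower()
    tag = PREFERRED_TERMS.get(tag, tag)
    used_tags.add(tag)
    return tag

# Three cataloging sessions that would otherwise produce three variants:
print(normalize_tag("Web Comics"))   # -> webcomics
print(normalize_tag("web-comic"))    # -> webcomics
print(normalize_tag("Humour"))       # -> humor
```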
Tuesday, October 19, 2010
Week 8
Thursday, October 14, 2010
Week 7
I have had a week of discouragements in regards to the DSpace installation, as my late posts can attest. My beloved laptop has had sudden battery problems. The phone company has done a lot of repair work in my neighborhood after a big storm, which has meant random internet blackout periods. And I am missing something about DSpace. Intellectually I understand the hierarchical nature of the DSpace setup, but it is not a natural fit. When I try to apply those principles of organization to my digital collection, it does not work well. I struggled to pick a collection at the beginning of the semester and still feel that it is not well formed. My conceptualization is not very firm, as I don't have any practical experience to draw from. In previous posts I have been quite critical of institutions choosing dull or not very relevant collections for digital projects, calling it a cop-out. I still think that, but I have a new appreciation for the difficulties inherent in the process.
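To make the hierarchy more concrete for myself, here is a rough sketch of how the DSpace levels (community, collection, item, bitstream) might map onto my webcomics idea. The series and file names are pure invention, not a settled plan.

```python
# One possible way to force my webcomics idea into DSpace's hierarchy
# (community -> collection -> item -> bitstreams); the names are just
# guesses at how I might carve up the material.
repository = {
    "Webcomics Archive": {                     # community: the broad subject area
        "Completed Series": {                  # collection: comics that have ended
            "Example Strip, 2005-2009": [      # item: one series (metadata attaches here)
                "strip-001.png",               # bitstreams: the actual files
                "strip-002.png",
            ],
        },
        "Ongoing Series": {                    # collection: comics still updating
            "Another Strip, 2008-": [
                "page-2010-11-01.png",
            ],
        },
    },
}

# Walk the tree the way DSpace would display it.
for community, collections in repository.items():
    print(community)
    for collection, items in collections.items():
        print("  " + collection)
        for item, bitstreams in items.items():
            print("    " + item, "-", len(bitstreams), "file(s)")
```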
On more positive notes, the readings for Unit 7 were both very interesting. I appreciate the perspective the Stanford authors laid out as to the successes and failures of a major digital repository. It gave a new sense of the speed of change the digital preservation community is experiencing. The Johns reading about the context of repository software design gave insight into the root causes of differences between systems. The Greenstone open-source system is one I find particularly interesting because of its focus on multilingualism. The New Zealand Digital Library Project at the University of Waikato developed the software, and a partnership with UNESCO has helped make it an international community. A phrase from the website has particular resonance, regarding increasing the “awareness of the social implications of information technology”. This is brought home by the use of Greenstone for bilingual digital collections for minority languages at risk of extinction; the New Zealand project has included Maori, and there are others in Welsh, Kazakh, Hawaiian and more. It pleases me to see a digital library have two preservation roles to play. The Greenstone project also focuses on its interoperability with OAI-PMH and METS, and it is able to import and export collections with DSpace. How that works is something I will be interested in exploring further.
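Out of curiosity I sketched what "exploring further" might look like: a METS package is ultimately just XML, so either system should be able to read its file inventory. This is a minimal sketch assuming an exported package saved as mets.xml (the filename is a placeholder); the METS and XLink namespaces are the standard ones.

```python
# Curiosity sketch: list the files a (hypothetical) exported METS
# package says it contains, using only the standard library.
import xml.etree.ElementTree as ET

ns = {
    "mets": "http://www.loc.gov/METS/",
    "xlink": "http://www.w3.org/1999/xlink",
}

tree = ET.parse("mets.xml")            # placeholder filename for an exported package

# The fileSec section inventories every bitstream in the package.
for file_el in tree.findall(".//mets:file", ns):
    locat = file_el.find("mets:FLocat", ns)
    if locat is not None:
        print(file_el.get("ID"), locat.get("{http://www.w3.org/1999/xlink}href"))
```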
Tuesday, October 5, 2010
Week 6
Wednesday, September 29, 2010
Week 5
I found myself browsing through the list of module downloads for Drupal for a long time. It is kind of like Wikipedia, where one item leads to another to another to another. Many modules offered tools I didn't know about or understand. Many were named in ways that didn't make their purpose clear; personal favorites include Bad Judgment, Bespin (the planet home of Lando Calrissian's Cloud City in The Empire Strikes Back) and Awesome Relationships. One way I thought would help me find a useful module was to think in terms of the collection content, specifically the webcomic format. Two modules looked promising, Webcomic and Comic, but neither was supported in the current Drupal version, so they could be downloaded but not enabled. I was able to find a compatible module that supports a type of animation process called page-flipping, but I don't have a collection example to test it on, so it is only theoretically helpful at this point. Another approach was to think of features I use consistently in my own life, which led me to the print-friendly module. The install was successful, even if the purpose is a little dull; still, it is an option I use often online. Another idea I thought would be helpful was a module for Site Documentation. Ironically, the documentation for that module was incomplete, so it seemed suspect. One tool I looked for but could not find was a random button, a common feature on most webcomic sites. I went about it as logically as I could, but I did not find anything by name or keyword. It could certainly be that I misunderstood the description of the correct module (a possibility being Random Viewer), but I honestly don't think so. That made the inability to search the thousands of modules disappointing. However, given the active support community, I feel confident that someone will point me in the right direction. All in all, Drupal was a success.
Week 4
Monday, September 20, 2010
Week 3
Tuesday, September 7, 2010
Week 2 - Library Hi Tech article summary & review
The article I chose to review from Library Hi Tech was "CMS/CMS: content management system/change management strategies" by Susan Goodwin, Nancy Burford, Martha Bedard, Esther Carrigan and Gale C. Hannigan.
In summary:
The Problem – the web presences of the five libraries that serve the 45,000 users at Texas A&M were inconsistent, difficult to navigate, decentralized and underutilized.
The Action – A Web Integration Team (WIT) was formed and given the responsibility to develop the University Libraries’ web presence. In an effort to be inclusive and promote staff involvement, team members were chosen who were in a position to be ‘agents of change.’
The Goal – to choose and implement a content management system that would provide a consistent, integrated and user-friendly web presence.
The Results – the WIT offers lessons learned and recommendations for the process of choosing a CMS.
Lesson 1.
Systems people should not be the only ones tasked with content management. Recommendation: Content management goes beyond technical requirements and demands the attention of upper management, subject specialists, collection developers and other library departments. Most important is to have a way for the decisions of the group to be implemented by people in positions of authority.
Lesson 2.
Developing an integrated approach to web presence revealed a lack of organizational unity. Recommendation: Use this time as an opportunity to examine old hierarchies and concepts that may be holding back knowledge sharing.
Lesson 3.
Creating an integrated web emphasized the need for a new work culture of collaboration between libraries. Recommendation: Purposefully promote knowledge sharing and a ‘big picture’ perspective amongst staff and other stakeholders.
Lesson 4.
Details can derail the process. Recommendation: It is better to view development as an iterative process of continuous improvements. Make the logical decision for the current environment and plan to revisit in the future.
Lesson 5.
Focusing on electronic resources highlights the paradigm shift for libraries from an internal organizational focus to a user-based focus. Recommendation: Acknowledge that internal staff are users also. Do not downplay the impact of this change.
Lesson 6.
Roles and responsibilities will change. Recommendation: Rethink job descriptions, redesign hiring requirements, reassign and train people to meet the new needs of the library.
Lesson 7.
Developing a website creates change that must be managed. Recommendation: Acknowledge that it is a process that requires advanced management and leadership skills.
Lesson 8.
Communicate. Recommendation: Establish regular methods of communication at all levels of staff and solicit feedback for ‘reality checks’.
Lesson 9.
The workload will not decrease over time. Recommendation: Without long-term commitment, content management system software is wasted.
In recapping the article, I found that the lessons offer valuable insights. I think the authors are correct in calling it a paradigm shift that will create opportunities and obstacles from a management perspective. It is also key to acknowledge that staff involvement will have its ups and downs. There is a noticeable lack of faculty involvement described, which could lead to major problems for the library if outside stakeholders were not included in the process. I found the changes to the organizational chart, as well as a sample of the content management deadlines, helpful as indicators of the level of change and the amount of work the process entails. The authors' final analysis was that while choosing the CMS software was a major accomplishment, more importantly, the development of a significant web presence heralded major changes for the library culture and its staff that required delicate and affirmative management. All in all, I appreciated the common-sense reflections and resolutions the authors present from their experiences.
Tuesday, August 31, 2010
675 week 1
Monday, August 2, 2010
Week 11
Wednesday, July 28, 2010
Week 10 - maybe a little whiny
While working on the database sections, I was reminded of a special project I was part of about two years back at my public library. The cataloging department had a massive backlog of items from the transition to outsourcing the cataloging and processing of incoming items. It was not a smooth transition, and for almost a year there were problems with a sizable percentage of incoming items. Most of these were failed MARC records that, due to one problem or another, weren't working in the Millennium system. I was hired for about ten months to find, update or replace the records as appropriate. If you ever really want to learn cataloging rules, look at several hundred bibliographic records every day and try to find what is wrong with them. Why I mention this is that we needed to spend the least amount of time on each record that we possibly could while still allowing the public to use them and allowing us to track them carefully for reimbursement. For our internal needs, we wanted to be able to track every last one of them, and each failed record had to include several data points that could be used for various retrieval queries: date, format, error type(s), record ID #1 and failed record ID #2. By looking at what was wanted at the end, we worked out the minimum of what we needed to do to each record. It generally worked, but it was never more than adequate and I have never really understood why.
Having had no experience with MySQL at the time, I am finding it interesting to see the parts of what we did that matched up with the logic behind MySQL and to contrast that with the areas where we didn't match. By focusing too much on the most immediate need for the changed records (which was to prove we should get money returned), we didn't do a very good job of organizing the failed records into other useful tables that could have helped us later on. Getting the concepts behind MySQL and its query levels has given me some ideas as to where the problems might have been.
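With hindsight, here is a rough sketch of how the failed-record tracking might have been split into related tables instead of one flat list. The column names are my guesses at what we tracked, and I am using Python's built-in sqlite3 so it runs without a server, but the table design and the grouping query are the kind of thing I now have in mind for MySQL.

```python
# A hindsight sketch: failed records in one table, errors in another,
# so multi-error records and recurring-error reports both work.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE failed_record (
    failed_id    TEXT PRIMARY KEY,   -- the broken record's ID
    new_id       TEXT,               -- replacement record's ID, once fixed
    date_found   TEXT,
    format       TEXT
);
CREATE TABLE error (
    failed_id    TEXT REFERENCES failed_record(failed_id),
    error_type   TEXT                -- one row per problem
);
""")

cur.execute("INSERT INTO failed_record VALUES ('b1001', 'b2001', '2008-09-15', 'book')")
cur.execute("INSERT INTO error VALUES ('b1001', 'bad 245 field')")
cur.execute("INSERT INTO error VALUES ('b1001', 'missing 008')")

# The reimbursement question we actually asked (failures per format) and
# the one we never set up well (which error types kept recurring):
for row in cur.execute("""
    SELECT f.format, e.error_type, COUNT(*)
    FROM failed_record f JOIN error e ON e.failed_id = f.failed_id
    GROUP BY f.format, e.error_type
"""):
    print(row)
```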
I used Wikipedia as a MySQL database example in this week's discussion post and do not feel I did a very good job on the assignment. The words are just out of reach. I read too fast on a subject I do not easily understand, and I will have to slow down and think this through again when I can devote a real block of time to it. Databases feel like the section that will never end.
Wednesday, July 21, 2010
Week 8 - again
Tuesday, July 20, 2010
Week 9 - it was hard!
Tuesday, July 13, 2010
two kinds of tech plans
As I was digesting the readings for the week, it occurred to me that it might make the most sense to have a kind of two-faced technology plan, meaning embedding a useful plan inside a generic one so as to meet all the necessary goals. On the one hand, a technology plan can be a useful guide for laying out an organization's goals and technology philosophy, as well as a way to inspire and invest staff, stakeholders and the public. On the other hand, a technology plan needs to inspire and involve the very disparate groups of staff, stakeholders and the public, and making all three happy can be a tricky business. As Michael Schuyler writes, most people don't know, nor care to know, the intricacies of how technology is managed; they just want it to work when they turn it on. So for them you give a generic technology plan that satisfies on the surface level. But for the people who really do need and want to know what to do, who should do it and how to pay for it, a realistic technology plan is a necessity, one that involves elements I believe are important to the health and vitality of a library, like the concepts behind environmental scanning. Some kind of doublespeak might emerge where, behind the generic topics, useful information is coded for those who are truly listening. Also, I can't decide if the idea of peer reviewing technology plans, as Schuyler suggests, would be brilliant, or if it is like comparing apples with oranges and the potentially wide range of differences between libraries makes it essentially useless.
Wednesday, July 7, 2010
How I learned xml and grew to love youtube
Wednesday, June 30, 2010
a little knowledge
Tuesday, June 22, 2010
Teaching
Learning style & Myers Briggs
Wednesday, June 16, 2010
trials and tribulations
This week was a reminder to stay focused and humble. Previous assignments in Ubuntu, like the downloads, have gone well for me, without significant problems. This time, because of personal plans over the weekend, I have been running late and have had problems with both Ubuntu and the sandbox. Help from the activity discussion board has been great, and I remain deeply impressed with how timely Professor Fulton has been (really, not in a smarmy way but in a thank-goodness-he-is-around-and-patient-or-I-would-freak way). And boy do I love the snapshot feature! Once in, however, I felt that this week's topic of user and group creation and permissions made more intuitive sense than previous topics have, and I felt fairly confident moving around in the CLI. I also made a big mistake with the sandbox by powering it down when I shouldn't have. Which I will never do again, and which has permanently impressed upon me what the differences are between the two. Which is a good thing.
ETA: I have just tried to re-do something from the assignments and it is not working again. Which is not a good thing.
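On a steadier note, to cement the permissions part of the week, here is a small sketch (the directory name is invented) showing the same owner/group/other bits that chmod and ls -l deal with, just poked at from Python instead of the CLI.

```python
# A small sketch of Unix permission bits: create a directory, set it to
# the equivalent of chmod 750, and read the mode back.
import os
import stat

path = "shared_project"            # hypothetical directory for a group of users
os.makedirs(path, exist_ok=True)

# rwx for the owner, r-x for the group, nothing for everyone else (chmod 750)
os.chmod(path, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)

mode = os.stat(path).st_mode
print(oct(mode & 0o777))           # -> 0o750
print(stat.filemode(mode))         # -> drwxr-x---, the same string ls -l shows
```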